A key element in the success of any business intelligence (BI) program lies in the team you create to deliver solutions and the clout it has within the organization to secure resources and get things done. Although there is no right or wrong way to organize a BI team, there are some key principles that are worth knowing. The following seven guidelines can help you create a high-performance BI team that delivers outstanding value to your organization.
1. Recruit the best people. Jim Collins, in his best-selling book “Good to Great,” says great companies first get the right people “on the bus” (and the wrong people off it) and then figure out where to drive it. Collins says that the right people will help an organization figure out and execute a strategy, which is a much more effective approach than recruiting people for the skills and experience to support a strategy that might become obsolete in short order.
From a BI perspective, this means we shouldn’t hire people just because they know a specific tool or programming language or have previous experience managing a specific task, such as quality assurance. If you need specialists like that, it’s better to outsource such positions to a low-cost provider on a short-term contractual basis. What you really want are people who are ambitious, adaptable, and eager to learn new skills. Although you should demand a certain level of technical competency and know-how, you ultimately want people who fundamentally believe that BI can have a transformative effect on the business and possess the business acumen and technical capabilities to make that happen.
Collins adds that the right people “don’t need to be tightly managed or fired up; they will be self-motivated by the inner drive to produce the best results and be part of something great.” In a BI setting, these people won’t just do a job; they’ll figure out the issues and work proactively to get things done. Sure, you’ll have to pay them a lot and provide upward career mobility, but a handful of the “right people” will produce more than a dozen “so-so” people.
2. Create multi-disciplinary teams. In most early stage BI teams, a handful of highly motivated people play a myriad of roles: the BI project manager serves as the BI architect and requirements analyst; the BI report developer also builds ETL code and performs quality assurance checks; the technical writer provides training and support. With enough talent, resources, and chutzpah, these small, cross-disciplinary teams deliver fantastic results.
And success breeds bigger budgets, more staff, and greater specialization. In most cases, linear development by groups of specialists replaces small, multi-disciplinary teams. The specialist subgroups (e.g., requirements, modeling, ETL, data warehousing, report development, support) operate in isolation, throwing their output “over the wall” to the next group in the BI assembly line. This industrial-era approach inevitably becomes mired in its own processes, and project backlogs grow bigger. The once nimble, multi-disciplinary BI team loses its penchant for speed and loses its luster in the eyes of the business. (See “Revolutionary BI: When Agile Isn’t Fast Enough.”)
The key to remaining nimble and agile as your BI team grows is to recreate the small, multi-disciplinary teams from your early stage BI initiative. Assign three to five people responsibility for delivering an entire BI solution from source to report. Train them in agile development techniques so they work iteratively with the business to deliver solutions quickly. With multiple, multi-disciplinary teams, you may need to reset the architecture once in a while to align what teams are building on the ground, but this tradeoff is worth it.
Multi-disciplinary teams work collaboratively and quickly to find optimal solutions to critical problems, such as whether to code rules in a report, the data model, or the ETL layer. They also provide staff more leadership opportunities and avenues for learning new skills and development techniques. By providing more opportunities for learning and growth, you will increase your staff’s job satisfaction and loyalty. The most productive BI teams that I’ve seen have worked together for 10 to 15 years.
3. Establish BI Governance. Once a BI team has achieved some quick wins, it needs to recruit the business to run the BI program while the team assumes a supporting role. The key indicator of the health of a BI program is the degree to which the business assumes responsibility for its long-term success. Such commitment is expressed in a formal BI governance program.
Most BI governance programs consist of two steering committees that meet regularly to manage the BI initiative. An executive steering committee, comprised of BI sponsors from multiple departments, meets quarterly to review the BI roadmap, prioritize projects, and secure funding. A working committee, comprised of business analysts (i.e., subject matter experts who are intensive consumers of data), meets weekly or monthly to define the BI roadmap, hash out DW definitions and subject areas, suggest enhancements, and select products.
The job of the BI team is to support the two BI governance committees in a reciprocal, trusting relationship. The BI team builds and maintains what the committees want to deploy while the BI team educates the sponsors about the potential of BI to transform their processes and what solutions are feasible at what cost. The BI team also provides continuous education about emerging BI trends and technologies that have potential to reinvent the business.
4. Find Purple People. The key to making BI governance programs work is recruiting people who can straddle the worlds of business and information technology (IT). These people are neither blue (i.e., business) nor red (i.e., IT) but a combination of both. These so-called purple people can speak both the language of business and data, making them perfect intermediaries between the two groups.
Astute BI directors are always looking for potential purple people to recruit to their teams. Purple people often hail from the business side where they’ve served as a business analyst or a lieutenant to a BI sponsor. Their personal ties to the people in a department, such as finance, coupled with deep knowledge of business processes and data, give them instant credibility with the business. They spend most of their time in the functional area, talking with business leaders and sitting on advisory boards where they share insights about how the business can best leverage BI to accomplish its goals.
5. Aspire to Becoming a Solutions Provider. The best BI teams aren’t content simply to provision data. BI directors know that if the business is to reap the full value of the BI resource, their teams have to get involved in delivering BI solutions. (See “Evolving Your Team from a Data Provider to a Solutions Provider.”) Although some departments may want to build their own reports and applications, few can sustain the expertise to fully exploit DW and BI capabilities and leverage information as a critical resource.
High-performance BI teams work with each department to build a set of standard interactive reports or dashboards that meet 60% to 80% of the needs of casual users in the department. They then train and support each department’s “Super Users” (tech-savvy business users or business analysts) to use self-service BI tools to create ad hoc reports on behalf of the casual users in the department, meeting the remaining 20% to 40% of their information requirements. The Super Users, in effect, become extensions of the BI team in each department and provide an additional set of “eyes and ears” to keep track of what’s going on from a BI perspective.
6. Give Your BI Team a Name. A name is a powerful thing that communicates meaning and influences perception. Most business people don’t know what business intelligence or analytics is (or may have faulty notions that don’t conform to the mission of your team). So spend time considering appropriate names that clearly communicate what your group does and why it’s important to the business.
For example, a group that created predictive models in a large financial services firm called itself the “Marketing Analytics and Business Insights” group. But it discovered that people didn’t know what the word “analytics” meant or how it differed from “business insights.” Some department heads were resentful of its million-dollar budget, and the group felt it continually had to defend itself. So it came up with the tagline: “We analyze information to provide usable insights to the organization.” This branding significantly improved the perceived value of the group, and it became a “go to” source for information inside the company.
7. Position BI within an Information Management Department. Finally, the BI team should be organized within a larger information management (IM) department that is separate from IT and reports directly to the CIO or COO. The IM department is responsible for all information-driven applications that support the business. These may include: data warehousing, business intelligence, performance management, advanced analytics, spatial analytics, customer management, and master data management.
The IM department maintains a close alliance with the IT department, whose responsibilities include managing the core computing and networking infrastructure that the IM group uses. For instance, the IM group is responsible for designing and managing the data warehouse, while the IT group is responsible for tuning and operating the physical databases that run the data warehouse and the data center in which all data processing occurs.
Separating IM from IT provides a clear signal to the business that these are two separate domains that require different skill sets and career paths. People in the IM group are much more business- and information-driven than those in the IT department, who are more technology focused.
By following these seven steps, you can transform your BI team into a high-performance organization that earns the respect of the business and enjoys sustained success.
Posted on March 27, 2010 | 2 comments
Analytic sandboxes are proving to be a key tactic in liberating business analysts to explore data while preventing the proliferation of spreadmarts and renegade data marts. Many BI teams already provide sandboxes of some sort, but few recognize that there are three tiers of sandboxes that can be deployed individually or in concert to meet the unique needs of every organization.
Analytic sandboxes adhere to the maxim, “If you can’t beat them, join them.” They provide a “safe haven” for business analysts to explore enterprise data, combine it with local and external data, and then massage and package the resulting data sets without jeopardizing an organization’s proverbial “single version of truth” or adversely affecting performance for general DW users.
By definition, analytic sandboxes are designed for exploratory analysis, not production reporting or generalized distribution. Ideally, sandboxes come with an expiration date (e.g. 90 days), reinforcing the notion that they are designed for ad hoc analyses, not application development. If analysts want to convert what they’ve created into a scheduled report or application, they need to turn it over to the BI team to “productionize” it.
Unfortunately, analytic sandboxes can’t enforce information policies. Analysts can still export data sets to their desktop machines, email results to colleagues, and create unauthorized production applications. Ultimately, organizations that deploy sandboxes must establish policies and procedures for managing information in a consistent manner and provide sufficient education about proper ways to produce and distribute information. Nonetheless, many BI teams are employing analytic sandboxes with reasonable success.
Tiers of Sandboxes
1. DW-Centric Sandboxes. The traditional analytic sandbox carves out a partition within the data warehouse database, upwards of 100GB in size, in which business analysts can create their own data sets by combining DW data with data they upload from their desktops or import from external sources. These DW-centric sandboxes preserve a single instance of enterprise data (i.e., they don’t replicate DW data), make it easier for database and DW administrators to observe what analysts are doing, and help analysts become more comfortable working in a corporate data environment. It’s also easier for the BI team to convert analyses into production applications since the analytic output is already housed in the DW.
However, a DW-centric sandbox can be difficult to manage from a systems perspective. Database administrators must create and maintain partitions and access rights and tune workload management utilities to ensure adequate performance for both general DW users and business analysts. An organization that has dozens or hundreds of analysts, each of whom wants to create large data sets and run complex queries, may bog down performance even with workload management rules in place. Inevitably, the BI team may need to upgrade the DW platform at considerable expense to support the additional workload.
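To make the mechanics concrete, here is a minimal sketch of the DW-centric pattern using Python's sqlite3 module as a stand-in for a warehouse database. The table names, the sandbox registry, and the 90-day expiration policy are illustrative assumptions, not any vendor's actual feature:

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# "Warehouse" table: shared, read-only enterprise data (a single instance).
cur.execute("CREATE TABLE dw_sales (region TEXT, amount REAL)")
cur.executemany("INSERT INTO dw_sales VALUES (?, ?)",
                [("East", 100.0), ("West", 250.0)])

# Registry of analyst-owned sandbox tables, each with an expiration date.
cur.execute("CREATE TABLE sandbox_registry (table_name TEXT, expires TEXT)")

def create_sandbox(cur, table_name, ttl_days=90):
    """Create a sandbox table and record when it expires.
    (Real code would validate the table name, not trust an f-string.)"""
    cur.execute(f"CREATE TABLE {table_name} (region TEXT, forecast REAL)")
    expires = (date.today() + timedelta(days=ttl_days)).isoformat()
    cur.execute("INSERT INTO sandbox_registry VALUES (?, ?)",
                (table_name, expires))

def sweep_expired(cur, today=None):
    """Drop any sandbox table past its expiration date."""
    today = (today or date.today()).isoformat()
    cur.execute("SELECT table_name FROM sandbox_registry WHERE expires < ?",
                (today,))
    for (name,) in cur.fetchall():
        cur.execute(f"DROP TABLE {name}")
        cur.execute("DELETE FROM sandbox_registry WHERE table_name = ?",
                    (name,))

create_sandbox(cur, "sb_alice_forecast")
cur.execute("INSERT INTO sb_alice_forecast VALUES ('East', 120.0)")

# Analysts join uploaded data against warehouse data without copying the DW.
cur.execute("""SELECT d.region, d.amount, s.forecast
               FROM dw_sales d
               JOIN sb_alice_forecast s ON d.region = s.region""")
rows = cur.fetchall()
print(rows)  # [('East', 100.0, 120.0)]
```

In a production warehouse the same idea would be expressed with schemas, grants, storage quotas, and workload-management rules rather than a hand-rolled registry, but the lifecycle (create, join against shared data, expire) is the same.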
2. Replicated Sandboxes. One way to avoid performance problems and systems management complexities is to replicate the DW to a separate platform designed exclusively for analysts. Many companies have begun to physically separate the production DW from ad hoc analytical activity by purchasing specialized DW appliances.
This approach offloads complex, ad hoc queries issued by a handful of people to a separate machine, leaving the production DW to support standardized report delivery, among other things. DW performance improves significantly without a costly upgrade, and analysts get free rein of a box designed exclusively for their use.
Of course, the downside to this approach is cost and duplication of data. Organizations must purchase, install, and maintain a separate database platform, which may or may not run the same database software and server hardware as the DW. Executives may question why they need a separate machine to handle tasks they thought the DW was going to handle.
In addition, the BI team must establish and maintain a utility to replicate the data to the sandbox, which may take considerable expertise to create and maintain. The replication can be done at the source systems, the ETL layer, the DW layer (via mirrored backup), or the DW storage system. Also, with multiple copies of data, it’s easy for the two systems to get out of sync and for analysts to work with outdated information.
3. Managed Excel Sandboxes. The third tier of analytic sandbox runs on the desktop. New Excel-based analytical tools, such as Microsoft’s PowerPivot and Lyzasoft’s Lyza Workstation, contain in-memory columnar databases that run on desktop machines, giving analysts unprecedented power to access, massage, and analyze large volumes of data in a manner that conforms to the way they’ve traditionally done such work (i.e., using Excel rather than SQL).
Although these spreadsheets-on-steroids seem like a BI manager’s worst nightmare, there is a silver lining: analysts who want to share their results have to publish through a server managed by corporate IT. This is why I call this type of sandbox a “managed Excel” environment.
For example, with Microsoft PowerPivot, analysts publish their results to Microsoft SharePoint, which makes the results available to other users via Excel Services, a browser-based version of Excel. Excel Services prevents users from changing or downloading the report, preventing unauthorized distribution. In the same way, Lyzasoft lets analysts publish data to the Lyza Commons, where others can view and comment on the output via browser-based collaborative tools.
Of course, where there is a will there is a way, and business analysts can and will find ways to circumvent the publishing and distribution features built into PowerPivot and Lyza workbooks and other managed Excel environments. But the collaborative features of their server-based environments are so powerful and compelling that I suspect most business analysts will take the path of least resistance and share information in this controlled manner.
Combining Sandboxes. A managed Excel sandbox might work well in conjunction with the other two sandboxes, especially if the corporate sandboxes have performance or size constraints. For example, analysts could download a subset of data from a centralized sandbox to their managed Excel application, combine it with local data on their desktops, and conduct their analyses using Excel. If they liked what they discovered, they could then run the analysis against the entire DW within the confines of a centralized sandbox.
Our industry is in the early stages of learning how to make most effective use of analytic sandboxes to liberate power users without undermining information consistency. With three (and perhaps more) types of analytic sandboxes, BI teams can tailor the sandbox experience to meet the unique needs of their organization.
Posted on March 22, 2010 | 0 comments
In her presentation on “BI Roadmaps” at TDWI’s BI Executive Summit last month, Jill Dyche explained that BI teams can either serve as “data providers” or “solutions providers.” Data providers focus on delivering data in the form of data warehouses, data marts, cubes, and semantic layers that can be used by BI developers in the business units to create reports and analytic applications. Solutions providers, on the other hand, go one step further, by working hand-in-hand with the divisions to develop BI solutions.
I firmly believe that BI teams must evolve into the role of solutions provider if they want to succeed long term. They must interface directly with the business, serving as a strategic partner that advises the business on how to leverage data and BI capabilities to solve business problems and capitalize on business opportunities. Otherwise, they will become isolated and viewed as an IT cost-center whose mission will always be questioned and whose budget will always be on the chopping block.
Data Provisioning by Default. Historically, many BI teams become data providers by default because business units already have reporting and analysis capabilities, which they’ve developed over the years in the absence of corporate support. These business units are loath to turn over responsibility for BI development to a nascent corporate BI group that doesn’t know its business and wants to impose corporate standards for architecture, semantics, and data processing. Given this environment, most corporate BI teams take what they can get and focus on data provisioning, leaving the business units to weave gold out of the data hay they deliver.
Mired Down by Specialization
However, over time, this separation of powers fails to deliver value. The business units lose skilled report developers, and they don’t follow systematic procedures for gathering requirements, managing projects, and developing software solutions. They end up deploying multiple tools, embedding logic into reports, and spawning multiple, inconsistent views of information. Most of all, they don’t recognize the data resources available to them, and they lack the knowledge and skills to translate data into robust solutions using new and emerging BI technologies and techniques, such as OLAP cubes, in-memory visualization, agile methods, dashboards, scorecards, and predictive analytics.
On the flip side, the corporate BI team gets mired down with a project backlog that it can’t seem to shake. Adopting an industrialized assembly-line mindset, it hires specialists to handle every phase of the information factory (e.g., requirements, ETL, cube building, semantic modeling) to improve efficiency, yet it can’t accelerate development easily. Its processes have become too rigid and sequential. When divisions get restless waiting for the BI team to deliver, CFOs and CIOs begin to question their investments and put the BI budget on the chopping block.
Evolving into Solutions Providers
Rethink Everything. To overcome these obstacles, a corporate BI team needs to rethink its mission and the way it’s organized. It needs to actively engage with the business and take some direct responsibility for delivering business solutions. In some cases, it may serve as an advisor to a business unit that has some BI expertise, while in others it may build the entire solution from scratch where no BI expertise exists. By transforming itself from a back-office data provider to a front-office solutions developer, a corporate BI team will add value to the organization and have more fun in the process.
It will also figure out new ways to organize itself to serve the business efficiently. To provide solutions assistance without adding budget, it will break down intra-organizational walls and cross-train specialists to serve on cross-functional project teams that deliver an entire solution from A to Z. Such cross-fertilization will invigorate many developers who will seize the chance to expand their skill sets (although some will quit when forced out of their comfort zones). Most importantly, they will become more productive and before long eliminate the project backlog.
A High Performance BI Team
For example, Blue Cross/Blue Shield of Tennessee has evolved into a BI solutions provider over the course of many years. BI is now housed in an Information Management (IM) organization that reports to the CIO and is separate from the IT organization. The IM group consists of three subgroups: 1) the Data Management group, 2) the Information Delivery group, and 3) the IM Architecture group.
- The Data Management group is comprised of 1) a data integration team that handles ETL work and data warehouse administration and 2) a database administration team that designs, tunes, and manages IM databases.
- The Information Delivery group consists of 1) a BI and Performance Management team, which purchases, installs, and manages BI and PM tools and solutions and provides training, and 2) two customer-facing solutions delivery teams that work with business units to build applications. The first is the IM Health Informatics team, which builds clinical analytic applications using reporting, OLAP, and predictive analytics capabilities; the second is the IM Business Informatics team, which builds analytic applications for other internal departments (i.e., finance, sales, marketing).
- The IM Architecture group builds and maintains the IM architecture, which consists of the enterprise data warehouse, data marts, and data governance programs, as well as closed loop processing and the integration of structured and unstructured data.
Collaborative Project Teams. Frank Brooks, director of data management and information delivery at BCBS of Tennessee, says that the IM group dynamically allocates resources from each IM team to support business-driven projects. Individuals from the Informatics teams serve as project managers, interfacing directly with the customers. (While Informatics members report to the IM group, many spend most of their time in the departments they serve.) One or more members from each of the other IM teams (data integration, database administration, and BI/PM) are assigned to the project team, and they work collaboratively to build a comprehensive solution for the customer.
In short, the BI team of BCBS of Tennessee has organized itself as a BI solutions provider, consolidating all the functions needed to deliver comprehensive solutions in one group, reporting to one individual who can ensure the various teams collaborate efficiently and effectively to meet and exceed customer requirements. BCBS of Tennessee has won many awards for its BI solutions and will be speaking at this summer’s TDWI BI Executive Summit in San Diego (August 16-18).
The message is clear: if you want to deliver value to your organization and assure yourself a long-term, fulfilling career at your company, then don’t be satisfied with being just a data provider. Make sure you evolve into a solutions provider that is viewed as a strategic partner to the business.
Posted on March 16, 2010 | 1 comment
With “analytics” a hot buzzword and the newest competitive differentiator in Corporate America, SAS is sitting in the catbird’s seat. With $3.1 billion in sales last year, SAS is by far the leader in analytics software and is prepared to leverage its position to compete head on with the big boys.
It was clear after spending a day with SAS executives in Steamboat Springs, Colorado, that SAS is thinking as big as the towering Rocky Mountains that surrounded our resort hotel. Specifically, SAS is focused on competing with IBM. That’s partly because IBM just acquired its nearest analytics competitor, SPSS, and is now heavily marketing its “Smart Analytics System” initiative, which enables customers to purchase a single integrated analytical solution consisting of hardware, software, tools, consulting services, and support via a single SKU and have it delivered in two weeks. (See “IBM Rediscovers Itself,” August 2009.)
To bulk up against Big Blue and its 4,000 consultants, SAS has teamed up with Accenture, which is also feeling the heat from IBM, albeit in consulting services. Together, SAS and Accenture will spend $50 million initially to develop new analytics products for six industries. For its part, Accenture sees analytics as a big growth area, and partnering with the top analytics software vendor makes sense. And to beef up its corporate image, SAS is building a new 280,000-square-foot executive briefing center with 690 offices, two auditoriums, and a full café to impress analytics prospects.
SAS has never been known for its partnering ability, but the stakes are high and both SAS and Accenture are motivated by a common, fierce competitor. In the past two years, SAS has honed its partnering skills by investing serious time and money with Teradata, the fruits of which are starting to pay dividends in six-figure sales deals. And now, SAS is aggressively partnering with a slew of upstart database vendors to move analytics into the database, even IBM DB2. This is an interesting surround strategy for SAS: wherever IBM goes, it will find SAS already embedded in the customer’s database of choice. Not bad!
Other highlights from the SAS Analyst Day:
- SAS executives believe social media analysis is a big potential growth area for the company, leveraging its strength in analytics as well as its 2008 acquisition of Teragram, a leader in natural language processing and advanced linguistic technology. This is a natural market for SAS to dominate, although it faces a slew of nimble startups and established players.
- SAS’ data integration products, which are now housed under the DataFlux subsidiary, were the fastest growing parts of its portfolio, with more than 50 percent growth in some markets. While most of the products are not new, SAS executives say the growth comes from the fact that DataFlux provides vendor-neutral data integration products and so is not tied to the SAS installed base. Plus, companies are mining ever-growing volumes of data and are recognizing the need for clean, integrated data. “More companies view fact-based decision making as a key to success and are coming to recognize that clean, integrated data is critical for delivering insights,” says Jim Davis, senior vice president and chief marketing officer.
- SAS is no longer overlooking the small- and medium-sized business (SMB) market, ramping up hosted offerings for customer intelligence and other applications that smaller companies can access. In 2009, it acquired 228 new SMB customers.
- SAS has an active hosting business for analytics customers who don’t want to provision servers in their own environment. SAS’ on-demand revenues grew 30% and its number of customers increased 43% from 2008 to 2009. SAS is building a huge new hosting center, so expect more on-demand offerings to come from SAS in the months ahead.
Posted on March 7, 2010 | 0 comments
One of my takeaways from last week’s BI Executive Summit in Las Vegas is that veteran BI directors are worried about the pace of change at the departmental level. More specifically, they are worried about how to support the business’ desire for new tools and data stores without undermining the data warehousing architecture and single version of truth they have worked so hard to deliver.
At the same time, many have recognized that the corporate BI team they manage has become a bottleneck. They know that if they don’t deliver solutions faster and reduce their project backlog, departments will circumvent them and develop renegade BI solutions that undermine the architectural integrity of the data warehousing environment.
The Wisdom of Letting Go
In terms of TDWI’s BI Maturity Model, these DW veterans have achieved adulthood (i.e. centralized development and EDW) and are on the cusp of landing in the Sage stage. However, to achieve true BI wisdom (i.e. Sage stage), they must do something that is both counterintuitive and terrifying: they must let go. They must empower departments and business units to build their own DW and BI solutions.
Entrusting departments to do the right thing is a terrifying prospect for most BI veterans. They fear that the departments will create islands of analytical information and undermine the data consistency they have worked so hard to achieve. The thought of empowering departments makes them grip the proverbial BI steering wheel tighter. But asserting control at this stage usually backfires. The only option is to adopt a Zenlike attitude and let go.
Trust in Standards
I'm reminded of the advice that Yoda gives his Jedi warriors-in-training in the movie “Star Wars”: “Let go and trust the force.” But, in this case, DW veterans need to trust their standards. That is, the BI standards that they’ve developed in the BI Competency Center, including definitions for business objects (i.e., business entities and metrics), processes for managing BI projects, techniques for developing BI software, and processes and procedures for managing ETL jobs and handling errors, among other things.
Some DW veterans who have gone down this path add the caveat: “trust but verify.” Although educating and training departmental IT personnel about proper BI development is critical, it’s also important to create validation routines where possible to ensure business units conform to standards.
Engage Departmental Analysts
The cagiest veterans also recognize that the key to making distributed BI development work is to recruit key analysts in each department to serve on a BI Working Committee. The Working Committee defines DW and BI standards, technologies, and architectures and essentially drives the BI effort, reporting their recommendations to the BI Steering Committee comprised of business sponsors for approval. Engaging analysts who are most apt to create renegade BI systems ensures the DW serves their needs and helps ensure buy-in and support.
By adopting a Zenlike approach to BI, veteran DW managers can eliminate project backlogs, ensure a high level of customer satisfaction, and achieve BI nirvana.
Posted on February 28, 2010 | 0 comments
More than 150 BI Directors and BI Sponsors from small, medium, and large companies plus a dozen or so sponsors attended TDWI’s BI Executive Summit last week, a record turnout.
Here are a few of the things I learned:
- Veteran BI directors are worried about the pace of change at the departmental level. More specifically, they are worried about how to support the business’ desire for new tools and data stores without undermining the data warehousing architecture and single version of truth they have worked so hard to deliver. (See "Zen BI: The Wisdom of Letting Go.")
- Jill Dyche explained that corporate BI teams can either be Data Providers or Solutions Providers. Framing this as a choice was a new concept for me. However, after some thought, I believe that unless BI teams help deliver solutions, the data they provision will be underutilized. Unless the BI team helps solve business problems by delivering business solutions, it can never be viewed as a strategic partner.
- Most BI teams have project backlogs and don’t have a great way to get in front of them. Self-service BI can help eliminate many of the one-off requests for custom reports. BI portfolios and roadmaps can help prioritize deliverables, but executives always override their own priorities. Many veteran BI managers are looking to push more development back into the departments as a way to accelerate projects.
- There is a lot of interest in predictive analytics, dashboards, and the cloud. Those were the top three vote getters to the question, “Which technologies will have the most impact on your BI program in three years?”
- Most of the case studies at the Summit described real-time data delivery environments, often coupled with analytics. GE Rails applied statistical models to real-time data to help customer service agents determine the optimal repair facility to which to send railroad cars for repair; Linkshare captures and displays Web activity and commissions for external customers (publishers and advertisers); and Seattle Teachers’ Credit Union delivers real-time recommendations to customer service agents.
- There was a lot of interest in how to launch an analytics practice and Aldo Mancini provided some great tips from his experiences at Discover Financial Services. To get SAS analysts to start using the data warehouse as a way to accelerate model development, he had them help design the subject areas and variables that should go into it. Then, he taught them how to use SQL so they could transform data into their desired format.
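The SQL-based transformation work Mancini taught his analysts can be sketched in miniature. This is a hypothetical example (the table, columns, and categories are invented, and Python's built-in sqlite3 stands in for the warehouse): it pivots raw transactions into the kind of wide, one-row-per-customer table of modeling variables analysts typically want.

```python
import sqlite3

# Hypothetical warehouse table of raw transactions (invented schema).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE transactions (customer_id INT, category TEXT, amount REAL);
INSERT INTO transactions VALUES
  (1, 'travel', 120.0), (1, 'retail', 45.0), (1, 'travel', 300.0),
  (2, 'retail', 80.0),  (2, 'dining', 25.0);
""")

# The kind of SQL an analyst might write to flatten transactions into
# one row per customer, with aggregates as modeling variables.
rows = con.execute("""
SELECT customer_id,
       COUNT(*)    AS txn_count,
       SUM(amount) AS total_spend,
       SUM(CASE WHEN category = 'travel' THEN amount ELSE 0 END) AS travel_spend,
       SUM(CASE WHEN category = 'retail' THEN amount ELSE 0 END) AS retail_spend
FROM transactions
GROUP BY customer_id
ORDER BY customer_id
""").fetchall()

for r in rows:
    print(r)
```

Each conditional SUM becomes one column of the wide table; in practice analysts repeat the pattern for hundreds of attributes.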
We’re already gearing up for our next Summit which will be held August 16-18 in San Diego. Hope to see you there!
Posted on February 28, 2010
Everyone wants to move beyond reporting to deliver value-added insights through analytics. The problem is that few organizations know where to begin. Here is a ten-step guide for launching a vibrant analytics practice.
Launching the Practice
Step 1: Find an Analyst. You can’t do analytics without an analyst! Most companies have one or more analysts burrowed inside a department. Look for someone who is bright, curious, and understands key business processes inside and out. The analyst should like to work with numbers, have strong Excel, SQL, OLAP, and database skills, and ideally understand some statistics and data mining tools.
Step 2: Find a Business Person. The quickest way to kill an analytics practice is to talk about predictive models, optimization, or statistics with a business person. Instead, find one or more executives who are receptive to testing key assumptions about how the business works. For example, a retail executive might want to know, “Why do some customers stop buying our product?” A social service agency might want to know, “Which spouses are most likely not to pay alimony?” Ask them to dream up as many hypotheses as possible in answer to their questions and then use those as inputs for your analysis.
Step 3: Gain Sponsorship. If step two piqued an executive’s interest, then you have a sponsor. Tell the sponsor what resources you need, if any, to conduct the test. Perhaps you need permission to free up an analyst for a week or two or to hire a consultant to conduct the analysis. Ideally, you should be able to make do with the people and tools you have in-house. A good analyst can work miracles with Excel and SQL, and there are many open source data mining packages on the market today, as well as low-cost statistical add-ins for Excel and BI tools.
Step 4: Don’t Get Uppity. “You never want to come across as smarter than the executive you are supporting,” says Matthew Schwartz, a former director of business analytics at Corporate Express. Don’t ever portray the model results as “the truth”; executives don’t trust models unless they make intuitive sense or prove their value in dollars and cents. For example, Schwartz was able to get his director of marketing to buy in to the results of a market basket analysis for Web site recommendations because the director recognized the model’s cross-selling logic: “Ah! It knows that people are buying office kits for new employees.”
Step 5: Make It Actionable. A model is worthless if people can’t act on it. This often means embedding the model in an operational application, such as a Web site or customer-facing application, or distributing the results in reports to salespeople or customer service representatives. In either case, you need to strip out the mathematics and decompose the model so it’s understandable and usable by people in the field. For example, a sales report might say, “These five customers are likely to stop purchasing office products from us because they haven’t bought toner in four weeks.”
Step 6: Make It Proactive. The kiss of death for an analytical model is to tell people something they already know. Rather than tell salespeople that customers who are purchasing fewer products than in the prior period are likely to churn (like the example in step five above), tell them about customers who will buy fewer products in the future because they have fallen below a critical statistical threshold and are vulnerable to competitive offers. Or, rather than forecast the number of loans that will go into default, identify the characteristics of good loans and bake those criteria into the loan origination process. If you deliver results that enable people to work proactively, you’ll become an overnight hero.
Sustaining the Analytics Practice
Let’s assume your initial modeling efforts worked their magic and garnered you strong executive sponsorship. How do you build and sustain an analytics practice? What organizational and technical strategies do you employ to ensure that your analysts are as productive as possible? The following four steps will solidify your analytics practice.
Step 7: Centralize and Standardize the Data. The thing that slows down analysts the most is having to collect data spread across multiple systems and then clean, harmonize, and integrate it. Only then can they start to analyze the data. Obviously, this is what a data warehouse is designed to do, not an analyst. But a data warehouse only helps if it contains all or most of the data analysts need in a format they can readily use so they don’t have to hunt and reconcile data on their own. Typically, analytical modelers need wide, flat tables with hundreds of attributes to create models.
Step 8: Provide Open Access to Data. Data warehouse administrators need to give analysts access to the data warehouse without making them file a request and wait weeks for an answer. Rather than broker access to the data warehouse, administrators should create analytical sandboxes, using partitions and workload management, that let analysts upload their own data and commingle it with data in the warehouse. This creates an analytical playground for analysts and keeps them from creating renegade data marts under their desks.
Step 9: Centralize Analysts. Contrary to current practice, it’s best to centralize analysts in an Analytical Center of Excellence under the supervision of a Director of Analytics. This creates a greater sense of community and camaraderie among analysts and gives them more opportunities for advancement within the organization. It also minimizes the chance that they’ll be lured away by recruiters. Although they may be part of a shared services group, analysts should be physically embedded within the departments they support and have dotted-line responsibility to those department heads.
Step 10: Offload Reporting. The quickest way to undermine the productivity of your top analysts is to force them to field requests for ad hoc reports from business users. To eliminate the reporting backlog, the BI team and analysts need to work together to create a self-service BI architecture that empowers business users to generate their own reports and views. When designed properly, these interactive reports and dashboards will meet 60% to 80% of users’ information needs, freeing up business analysts and BI report developers to focus on more value-added activities.
So there you have it, ten steps to analytical nirvana. Easy to write, hard to do! Keep me informed about your analytics journey and the lessons you learn along the way! I’d love to hear your stories. You can reach me at email@example.com.
Posted on February 26, 2010
There is an emerging type of dashboard product that enables power users to craft ad hoc dashboards for themselves and peers by piecing together elements from existing reports and external Web pages. I’m calling these “Mashboards” because they “mash” together existing charts and tables within a dashboard framework. Other potential terms are “Report Portal,” “Metrics Portal,” and “Dashmart.”
I see Mashboards as the dashboard equivalent of the ad hoc report, which has spearheaded the self-service BI movement in recent years. Vendors began delivering ad hoc reporting tools to ease the report backlog that afflicts most BI deployments and dampens sales of BI tools. Ad hoc reports rely on a semantic layer that enables power users to drag and drop predefined business objects onto a WYSIWYG reporting canvas to create a simple report.
Likewise, Mashboards enable power users to select from predefined report “parts” (e.g., charts, tables, selectors) and drag and drop them onto a WYSIWYG dashboard canvas. Before power users can create a Mashboard, IT developers need to create reports using the vendor’s standard report authoring environment. The “report parts” are often self-contained pieces of XML code, or gadgets, that are wired to display predefined sets of data or can be easily associated with data from a semantic layer. Power users can apply filters and selectors to the gadgets without coding.
Mashboards are a great way for organizations to augment enterprise or executive dashboards that are designed to deliver 60% to 80% of casual users’ information needs. The Mashboards can be used to address the other 20% to 40% of those needs on an ad hoc basis or to deliver a highly personalized dashboard for an executive or manager. (I should note that enterprise dashboards should be personalizable as well.)
Dashmarts? However, there is a danger that Mashboards will end up becoming just another analytical silo. Their flexibility lends itself to creating visual spreadmarts, which is why I’m tempted to call them Dashmarts. That said, Mashboards that require power users to source all data elements from existing reports and parts should minimize this risk to some degree.
All in all, Mashboards are a great addition to a BI portfolio. They provide a new type of ad hoc report that is more visual and easily consumed by casual users. And they are a clever way for vendors to extend the value of their existing reporting and analysis tools.
Posted on February 16, 2010
It’s odd that our industry has established a best practice for creating a layer of abstraction between business users and the data warehouse (i.e., a semantic layer or business objects), but we have not done the same thing on the back end.
Today, when a database administrator adds, changes, or deletes fields in a source system, it breaks the feeds to the data warehouse. Usually, source systems owners don’t notify the data warehousing team of the changes, forcing us to scramble to track down the source of the errors, rerun ETL routines, and patch any residual problems before business users awake in the morning and demand to see their error-free reports.
It’s time we get some sleep at night and create a layer of abstraction that insulates our ETL routines from the vicissitudes of source systems changes. This sounds great, but how?
Eric Colson, whose novel approaches to BI appeared two weeks ago in my blog “Revolutionary BI: When Agile is Not Fast Enough,” has found a simple way to abstract source systems at Netflix. Rather than pulling data directly from source systems, Colson’s BI team pulls data from a file that source systems teams publish to. It’s kind of an old-fashioned publish-and-subscribe messaging system that insulates both sides from changes in the other.
“This has worked wonderfully with the [source systems] teams that are using it so far,” says Colson, who believes this layer of abstraction is critical when source systems change at breakneck speed, like they do at Netflix. “The benefit for the source systems team is that they get to go as fast as they want and don’t have to communicate changes to us. One team migrated a system to the cloud and never even told us! The move was totally transparent.”
On the flip side, the publish-and-subscribe system relieves Colson’s team from having to 1) access source systems, 2) run queries on those systems, 3) know the names and logic governing tables and columns in those systems, and 4) keep up with changes in those systems. They also get much better quality data from source systems this way.
However, Colson admits that he might get push back from some source systems teams. “We are asking them to do more work and take responsibility for the quality of data they publish into the file,” says Colson. “But this gives them a lot more flexibility to make changes without having to coordinate with us.” If the source team wants to add a column, it simply appends it to the end of the file.
This approach is a big mindset change from the way most data warehousing teams interface with source systems teams. The mentality is: “We will fix whatever you give us.” Colson’s technique, on the other hand, forces the source systems teams to design their databases and implement changes with downstream analysis in mind. For example, says Colson, “they will inevitably avoid adding proprietary logic and other weird stuff that would be hard to encapsulate in the file.”
Time to Deploy
Call me a BI rube, but I’ve always assumed that BI teams by default create such an insulating layer between their ETL tools and source systems. Perhaps for companies that don’t operate at the speed of Netflix, ETL tools offer enough abstraction. But, it seems to me that Colson’s solution is a simple, low-cost way to improve the adaptability and quality of data warehousing environments that everyone can and should implement.
Let me know what you think!
Posted on February 8, 2010
Special Note: This analysis was written by Philip Russom, Wayne's colleague in TDWI's research department who covers MDM and data integration.
I don’t know about you, but when I read two, similar announcements from competing software vendors, delivered at pretty much the same time, I can’t help but compare them. So that’s what I’m thinking about today (February 3, 2010), after hearing that IBM intends to acquire Initiate Systems. This bears strong resemblance to Informatica’s announcement a few days ago (on January 28, 2010) about their completion of the acquisition of Siperian.
If you work in data management or a related field, these are noteworthy announcements coming from noteworthy vendors, and, therefore, worth understanding. So let me help you by making some useful comparisons – and some differentiations that you may not be aware of. I’ll organize my thoughts around some questions I’ve just seen in the blogosphere and my Twitter deck.
1 – Are both acquisitions about MDM?
Well, sort of. Siperian is well-known for its master data management (MDM) solution. It has attained one of the holy grails among MDM solutions, in that it works with multiple data domains (that’s data about customers, products, financials, and other domains).
Initiate, on the other hand, is well-known for its identity resolution hub. Initiate’s identity resolution capabilities are commonly embedded within applications for MDM and customer data integration (CDI). The way that I think of it, Initiate’s hub isn’t for MDM per se, but it can improve MDM when embedded within it.
At this point, I need to cycle back to Siperian and point out that it, too, provides identity resolution capabilities. And I forgot to mention that Initiate also has some MDM capabilities. You could say that Siperian is mostly MDM, but with identity resolution and other capabilities, whereas Initiate is mostly about identity resolution, but with MDM and other capabilities.
2 – So, the two acquisitions are about identity resolution?
Yes, but to varying degrees. For example, IBMers were very clear that their primary interest in Initiate is its ability to very accurately match data references to patients in healthcare and citizens in government. IBM’s campaign for a Smarter Planet has strong subsets focused on healthcare and government, two industries where Initiate has reference clients doing sophisticated things with identity resolution. My impression is that IBMers are hoping Initiate’s identity resolution functionality will help them sell more products and services into these industries.
Returning to Informatica and Siperian, let’s recall that for years now the Siperian hub has been integrated with Informatica PowerCenter (similar to pre-existing integration among IBM and Initiate products). Among other things, this integration enables Siperian’s identity resolution functions to be embedded within the PowerCenter platform under the name Informatica Identity Resolution. Hence, identity resolution was one of the key capabilities paving the path to this acquisition.
3 – What do these acquisitions mean for IBM and Informatica?
As noted, IBM is counting on the Initiate product to help their campaigns for Smarter Healthcare and Smarter Government. Informatica has now filled the largest hole in its otherwise comprehensive product line, filling it with one of the better tools available via acquisition. Both IBM and Informatica are aggressively building out their portfolios of diverse data management tools, driven both by user demand and competitive pressures. Since both have customers with growing demands for more diverse data management tool types, both will have no trouble cross-selling the new tools to their existing customer bases, as well as selling their older products to the newly acquired customer bases.
4 – What do these acquisitions mean for technical users?
In my experience, Informatica and IBM both have rather faithful customers, in that they tend to get most of their data management tools from a primary supplier. Technical users from both customer bases now have more functionality available from a preferred technology supplier.
Posted on February 3, 2010