
TDWI Blog

Business Intelligence Blog Posts



The Three Core Activities of MDM (part 2)

Blog by Philip Russom
Research Director for Data Management, TDWI

I’ve just completed a TDWI Best Practices Report titled Next Generation Master Data Management. The goal is to help user organizations understand MDM lifecycle stages so they can better plan and manage them. TDWI will publish the 40-page report in a PDF file on April 2, 2012, and anyone will be able to download it from www.tdwi.org. In the meantime, I’ll provide some “sneak peeks” by blogging excerpts from the report. Here’s the second in a series of three excerpts. If you haven’t already, you should read the first excerpt before continuing.

Collaborative Processes for MDM
By definition, MDM is a collaborative discipline that requires a lot of communication and coordination among several types of people. This is especially true of entity definitions, because there is rarely one person who knows all the details that would go into a standard definition of a customer or other entity. The situation is compounded when multiple definitions of an entity are required to make reference data “fit for purpose” across multiple IT systems, lines of business, and geographies. For example, sales, customer service, and finance all interact with customers, but have different priorities that should be reflected in a comprehensive entity model. Likewise, technical exigencies of the multiple IT systems sharing data may need addressing in the model. And many entities are complex hierarchies or have dependencies that take several people to sort out, as in a bill of material (for products) or a chart of accounts (for financials).
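To make the hierarchy point concrete, here is a purely illustrative Python sketch (not drawn from any MDM product) of a bill-of-material style entity. The part numbers and costs are invented; the point is that the tree structure is exactly what forces engineering, procurement, and finance to collaborate on a single definition.

```python
from dataclasses import dataclass, field
from typing import List

# Purely illustrative: a bill-of-material entity is a hierarchy, which is
# why no single person usually knows every level of its definition.
@dataclass
class Part:
    part_id: str
    description: str
    unit_cost: float = 0.0
    components: List["Part"] = field(default_factory=list)

    def rolled_up_cost(self) -> float:
        """Cost of this part plus everything beneath it in the hierarchy."""
        return self.unit_cost + sum(c.rolled_up_cost() for c in self.components)

# Different groups contribute different pieces of the hierarchy.
wheel = Part("P-100", "Wheel assembly", 12.50,
             [Part("P-101", "Rim", 7.00), Part("P-102", "Tire", 4.25)])
bike = Part("P-001", "Bicycle", 40.00, [wheel, wheel])

print(bike.rolled_up_cost())  # 87.5
```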

Once a definition is created from a business viewpoint, further collaboration is needed to gain review and approval before applying the definition to IT systems. At some point, business and technical people come together to decide how best to translate the definition into the technical media through which a definition is expressed. Furthermore, technical people working on disparate systems must collaborate to develop the data standards needed for the exchange and synchronization of reference data across systems. Since applying MDM definitions often requires that changes be made to IT systems, managing those changes demands even more collaboration.

That’s a lot of collaboration! To organize the collaboration, many firms put together an organizational structure where all interested parties can come together and communicate according to a well-defined business process. For this purpose, data governance committees or boards have become popular, although stewardship programs and competency centers may also provide a collaborative process for MDM and other data management disciplines (especially data quality).

================================
ANNOUNCEMENTS
Keep an eye out for part 3 in this MDM blog series, coming March 2. I’ll tweet so you know when that blog is posted.

David Loshin and I will moderate the TDWI Solution Summit on Master Data, Quality, and Governance, coming up March 4-6, 2012 in Savannah, Georgia.

Please attend the TDWI Webinar where I will present the findings of my TDWI report Next Generation MDM on April 10, 2012, at noon ET. Register online for the Webinar.

Posted by Philip Russom, Ph.D. on February 17, 2012


Big Data, Big Mobile, and a Big New Year

Happy New Year to everyone in the TDWI community! I wish you an enjoyable and prosperous year. Squinting down the path ahead, it is indeed going to be a busy year at TDWI as we roll out our World Conferences, Summits, Forums, Seminars, Webinars, Best Practices Reports, Checklists, and more. The next World Conference is coming up February 12-17, in Las Vegas. This event is always one of the major gatherings of the year in business intelligence and data warehousing, and I am looking forward to being there and interacting with attendees, exhibitors, TDWI faculty, and a few croupiers here and there.

In Las Vegas I will be helping out my colleague, Philip Russom, who is chairing the BI Executive Summit, February 13-15. This conference has a theme of “Executing a Data Strategy for Your Enterprise” and will feature a great selection of case studies, expert speakers, and panel sessions. Check out the program to see if this event is important for you to attend.

In Vegas and throughout many of our conferences this year, you will have the chance to learn about big data analytics, which is a big topic for TDWI. Big data is getting increasing airplay in the mainstream media, as evidenced by this recent New York Times column by Thomas Friedman (read down a bit, to the fifth paragraph, past the political commentary). Friedman points out that big data could be the “raw material for new inventions in health care, education, manufacturing, and retailing.” We could not agree more, and are focused on enabling organizations to develop the right technology and data strategies to achieve their goals and ambitions with big data in 2012.

Coming up for me on January 11 is a Webinar, “Mobile Business Intelligence and Analytics: Extending Insight to a Mobile Workforce.” This is coordinated with the just-published Best Practices Report of the same name that I authored. The impact of mobile devices, particularly tablets, on BI and analytics made nearly everyone’s list of key trends in 2012, and with good reason. The potential of mobile devices is exciting for furthering the “right data, right users, right time” goals of many BI implementations. Executives, managers, and frontline employees in operations such as customer sales, service, and support have clear needs for BI alerts, dashboard reports, and capabilities for drill-down analysis while on the go. There are many challenges from a data management perspective, so organizations need to examine carefully how, where, and when to enable mobile BI and analytics. I hope the report provides food for thought and perspectives that are helpful in making decisions about mobile.

I expect that this will be an exciting year in our industry and look forward to blogging about it as we go forward into 2012.



Posted by David Stodder on January 5, 2012


Big Data Analytics: The News from Teradata

Blog by Philip Russom
Research Director for Data Management, TDWI

Just moments ago, Teradata Corporation issued three announcements describing new capabilities, products, and releases. Instead of repeating the details of Teradata’s new stuff -- which you can read on www.teradata.com, etc. -- I’d rather be self-indulgent and use each announcement as a springboard for my own thoughts about the bigger trends in Big Data Analytics to which these relate.

Announcement Number One: Teradata Columnar

A few years ago, I was at the Teradata Partners Conference. Instead of attending speaking sessions, I was in a series of meetings for industry analysts and industry influencers. When the topic of columnar databases came up -- and it was my turn to pontificate -- I said something like: “Columnar storage engines will soon be available as just another feature of database management systems from larger, more established vendors.” The room fell quiet, and a cricket chirped in the background. Then, two experts mocked me, while Teradata people were noticeably mum. ;)

Does that make me a prescient visionary? No, not at all. I’ve just been paying attention for the last three decades, as one technology after the next is developed and proved by a small startup, then bought or built by one or more of the leading DBMS vendors. We’ve seen this trend played out with features for everything from security to parallel processing to OLAP to federation to in-memory databases. We’re now seeing the same trend with columnar data stores and other technologies for Big Data Analytics.

Newish vendors like ParAccel and Vertica -- and Sybase long before them -- have proved the usefulness and commercial potential of a columnar approach. Open source DBMSs MySQL and Infobright made similar contributions. In full compliance with the trend I’m describing, IBM and Oracle have released columnar storage engines they built, and now it’s Teradata’s turn. Teradata Columnar is a new capability of Teradata Database 14. What’s new here is that Teradata has integrated both columnar AND row-based tables, thereby making hybrid applications more feasible. All the above is goodness, regardless of vendor, because columnar data stores have compelling advantages for query speed, data compression, bla, bla, bla, and the usual miraculous benefits.
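For readers who want a concrete picture of the columnar advantage, here is a toy Python sketch (mine, not any vendor’s engine) of the same table stored row-wise and column-wise. A single-column aggregate only has to touch one contiguous array in the columnar layout, which is also why the values compress so well.

```python
# Toy illustration (not any vendor's storage engine): the same table stored
# row-wise and column-wise, queried for the sum of one column.
rows = [
    {"order_id": 1, "region": "West", "revenue": 100.0, "notes": "..."},
    {"order_id": 2, "region": "East", "revenue": 250.0, "notes": "..."},
    {"order_id": 3, "region": "West", "revenue": 175.0, "notes": "..."},
]

# Row store: every query walks whole rows, even when it needs one column.
total_row_store = sum(r["revenue"] for r in rows)

# Column store: each column lives in its own contiguous array, so a
# single-column aggregate reads only that array (and similar values
# sitting together is what makes the compression so effective).
columns = {
    "order_id": [1, 2, 3],
    "region": ["West", "East", "West"],
    "revenue": [100.0, 250.0, 175.0],
    "notes": ["...", "...", "..."],
}
total_column_store = sum(columns["revenue"])

assert total_row_store == total_column_store == 525.0
```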

This recurring trend raises the question: What’s the next new innovation that’s on the path to DBMS assimilation? It’s obvious to me that Hadoop and MapReduce are already well down that path. And that brings us to the next Teradata announcement.

Announcement Number Two: Teradata Aster MapReduce Platform

On the upside, MapReduce is the secret sauce that brings advanced analytic capability to a big data repository, whether it’s Hadoop’s file system or a relational database management system (RDBMS). On the downside, MapReduce from most sources is mired in hand-coding and devoid of SQL (to which we’re handcuffed in BI). Hence, MapReduce shows great promise for the world of BI, but only if it can evolve to suit the technical requirements of BI and DW professionals.
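To illustrate the hand-coding complaint, here is a toy, framework-free Python sketch of a MapReduce-style aggregation (no Hadoop or vendor API involved, and the sales data is invented). Everything below is what a single line of SQL -- SELECT region, SUM(revenue) FROM sales GROUP BY region -- expresses declaratively.

```python
from collections import defaultdict

# Toy MapReduce-style aggregation in plain Python.
sales = [("West", 100.0), ("East", 250.0), ("West", 175.0)]

# Map phase: emit (key, value) pairs.
mapped = [(region, revenue) for region, revenue in sales]

# Shuffle phase: group values by key.
grouped = defaultdict(list)
for key, value in mapped:
    grouped[key].append(value)

# Reduce phase: aggregate each group.
reduced = {key: sum(values) for key, values in grouped.items()}

print(reduced)  # {'West': 275.0, 'East': 250.0}
```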

Evolving MapReduce is what the small vendor Aster Data Systems has always been about, and the evolution continues now that Teradata has acquired Aster. First, Aster showed that MapReduce could be effective with an RDBMS – at least, with its own nCluster database, now called Aster Database 5.0. Aster then showed that MapReduce and SQL can be reconciled, and they received a patent for their innovation in this realm.

Let’s shift gears and look at data warehouse appliances. Despite the term “data warehouse” in the name, these are really “big data analytics appliances.” I say this based on the fact that at least 90% of DW appliance owners use them for multi-terabyte analytics, not data warehousing. Aster is now showing that a MapReduce-based RDBMS can be suited to an appliance, as in the new Aster MapReduce Appliance based on Teradata hardware.

I’ll say more about the evolution of MapReduce in a TDWI Webinar on October 27. Please register online and attend.

Announcement Number Three: Teradata Database 14

Most of the new functionality of Teradata Database 14 seems focused on making the system even more manageable and better performing, especially in the context of multiple, diverse, concurrent data warehouse workloads.

The multiple workload problem is a thorny one. From the DW professional’s viewpoint, it’s not easy to optimize a data warehouse for several workloads, so most EDWs are optimized for a short list of workloads. Since the primary deliverables of the average DW are reports (whether standard or dashboards) and OLAP, most EDW designers consciously decide to optimize for these. But that makes it difficult to add new workloads to a centralized enterprise data warehouse, so new workloads are often distributed to marts, operational data stores, and data staging areas outside the warehouse proper. Examples of “new workloads” include those for real time, detailed source data, non-structured data, and discovery or exploratory analytics (not OLAP).

How DW professionals and vendors are responding to the challenge of multiple workloads constitutes a trend. That’s because the responses affect data warehouse architecture, logical modeling, optimization, performance, platform selection, tool selection, selection of analytic methods, management strategies for big data, and so on.

Note that the multiple workload challenge is both a user design issue and a vendor platform capability issue. Yet, I think the former can win out over the latter. A good design on a weak platform can succeed, though you’ll probably end up with a heavily distributed DW architecture. Conversely a bad design on a strong platform can fail, especially if you expect the platform to be the design. Technology and design issues aside, I must also point out that the placement of a DW workload can be influenced by organizational issues, like sponsorship, funding, and compliance.

So, what do you think? Let me know!

===============================
Want to learn more about Big Data Analytics? Attend the TDWI Forum on Big Data Analytics for Business Insight. There's more information online.

Posted by Philip Russom, Ph.D. on September 22, 2011


It's All in the Memory: New Battleground for BI and Analytics

Where is the biggest battleground today in the business intelligence and analytics software market? On the technology front, one of the main battles is in the addressable memory space of systems that feature 64-bit computing and operating system platforms. The “in-memory” revolution is upon us, and no BI or analytics vendor wants to be left out. Large memory platforms will be critical to users working with tools for big data analytics, data discovery, data visualization, and more.

While the development of large-memory computing is not really new, it took a while for the software industry to adapt to 64-bit hardware processing and operating system platforms. Throw in the difficult learning curve for creating software to work with parallel processing, and it’s easy to see why the move from older systems has taken time. When large memory and parallel processing platforms were exotic, the slow pace of adaptation might have been acceptable. Now, with mainstream systems offering up to a terabyte of addressable memory, organizations can’t wait to try them out for BI and analytics.

Traditionally, designers of these systems have had to adjust to the limits of the I/O bottleneck. The preprocessing and design work for indexing and aggregating data has been necessary because of the performance constraints involved in getting data from disk through the I/O bottleneck. If large memory systems can ease or eliminate that constraint for the majority of users’ analysis needs, then the boundaries for analytics applications can be pushed out.
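Here is a rough, hypothetical illustration of that trade-off in Python (the fact rows and totals are invented): the disk-era approach pre-aggregates the answers you expect to need, while an in-memory copy of the detail lets you group by whatever dimension the next question calls for.

```python
from collections import defaultdict

# Toy example: detailed fact rows held entirely in memory.
facts = [
    {"region": "West", "product": "Widget", "quarter": "Q4", "revenue": 100.0},
    {"region": "West", "product": "Gadget", "quarter": "Q4", "revenue": 175.0},
    {"region": "East", "product": "Widget", "quarter": "Q3", "revenue": 250.0},
]

# Disk-era approach: precompute the aggregates you think you'll need,
# because re-reading the detail from disk is too slow for ad hoc questions.
precomputed_by_region = {"West": 275.0, "East": 250.0}

# In-memory approach: aggregate on the fly by whatever dimension the next
# question happens to need, with no pre-built index or summary table.
def group_total(rows, dimension):
    totals = defaultdict(float)
    for row in rows:
        totals[row[dimension]] += row["revenue"]
    return dict(totals)

print(group_total(facts, "product"))  # {'Widget': 350.0, 'Gadget': 175.0}
print(group_total(facts, "quarter"))  # {'Q4': 275.0, 'Q3': 250.0}
```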

Users can perform “data discovery,” asking questions that lead to more questions, without as much concern for what this iterative, ad hoc style of investigation might mean to overall performance. Unlike with BI reports that simply update standard views of data, users can engage in exploratory data inquiries without knowing exactly where they will end up. Large-memory systems can offer volumes of detailed data on systems deployed closer to users. With the right tools, line-of-business (LOB) decision makers can dive into the data to test predictive models and perform fine-grained analysis on their own rather than wait for IT’s specialized business analysts and statisticians to do it for them.

Data discovery vendors such as QlikTech, Tableau, and TIBCO Spotfire have prospered by jumping first to seize market opportunities. However, the biggest coming battle may be between SAP and Oracle. Earlier this year, SAP introduced HANA, which competes with Oracle’s Exadata by offering in-memory analytics along with traditional disk-based storage in an appliance. Oracle has been readying a response, which will most likely come at Oracle Open World in early October and be aimed at taking in-memory capabilities for BI and analytics further. In the coming year, Oracle and SAP will battle to show which vendor is better at using analytics to increase the business value of ERP investments. In-memory capabilities will make it easier for these and other vendors to deploy rich analytics for ERP that are tailored to vertical industry and LOB requirements.

Large memory is not the whole story when it comes to the future of BI and analytics. However, it is a technology trend that users will notice firsthand through deeper, more visual, and more timely data analysis.

 

Posted by David Stodder on September 15, 2011


Going Mobile with BI and Analytics

On airplanes, at coffee bars, at ballgames, and even while waiting out an oil change, I am, like many of you, encountering people intensely focused on their mobile smartphones and tablets. I can’t say that I’ve been nosy enough to check out whether those I’ve seen are using the devices for business intelligence, but some – at least the fellow at the oil change shop – do seem to be working with spreadsheets and charts, not just enjoying social media or entertainment. As technology and software options evolve, there’s less and less standing in the way of people using the devices for BI. The revolution is coming.

Mobile is on my mind in part because I am working on an upcoming TDWI Best Practices report, “Mobile BI and Analytics: Extending Intelligence to a Mobile Workforce.” If you would still like to participate in the research, we would be glad to have your input. The survey is still open.

Also, I recently had a chance to talk about mobile BI on a CIO Talk Radio program dedicated to this subject. The Internet-based show is aired through Voice America Business Radio and is hosted by Sanjog Aul, vice president of Programs for the Chicago Chapter of the Society for Information Management (SIM). Also appearing on the program was Howard Dresner, chief research officer of Dresner Advisory Services and well known for his many years as the lead analyst for BI at Gartner. Howard, of course, had a lot of interesting things to say, and I enjoyed our discussion very much. If you would like to hear the program, follow this link.

In my initial analysis of the TDWI survey results, I am seeing that senior executives currently dominate as users of mobile BI. This is expected; senior executives often are the first to try “the new toys” for data access and analysis. However, the survey shows that the #1 benefit organizations seek to achieve from implementing mobile BI and analytics is the improvement of sales, service, and support. This indicates a strong desire to put mobile BI in the hands of frontline managers and other personnel who are in daily touch with customers.

If you have experiences with mobile BI and analytics or thoughts about how you see this technology evolving, please drop me a line at [email protected].



Posted by David Stodder on September 9, 2011


Advanced Analytics versus Online Analytic Processing (OLAP)

Blog by Philip Russom
Research Director for Data Management, TDWI

The current hype and hubbub around big data analytics has shifted our focus to what’s usually called “advanced analytics.” That’s an umbrella term for analytic techniques and tool types based on data mining, statistical analysis, or complex SQL – sometimes natural language processing and artificial intelligence, as well.

The term has been around since the late 1990s, so you’d think I’d get used to it. But I have to admit that the term “advanced analytics” rubs me the wrong way for two reasons:

First, it’s not a good description of what users are doing or what the technology does. Instead of “advanced analytics,” a better term would be “discovery analytics,” because that’s what users are doing. Or we could call it “exploratory analytics.” In other words, the user is typically a business analyst who is exploring data broadly to discover new business facts that no one in the enterprise knew before. These facts can then be turned into an analytic model or some equivalent for tracking over time.

Second, the thing that chafes me most is that the way the term “advanced analytics” has been applied for fifteen years excludes online analytic processing (OLAP). Huh!? Does that mean that OLAP is “primitive analytics”? Is OLAP somehow incapable of being advanced?

I personally don’t think so. In fact, depending on how you design and implement it, OLAP can be quite advanced. For example, OLAP is very much about dimensions. In the 90s, eight dimensions was considered an advanced implementation. Nowadays I regularly talk with people who have twenty or more. I realize there’s a difference between advanced and mature. But I have to say that I’ve seen lots of mature OLAP implementations that support hundreds of cubes, hundreds of OLAP reports, and thousands of users. Over the years, different approaches to OLAP (multidimensional, relational, desktop, etc.) have consolidated into a hybrid OLAP, such that most vendor products today are quite mature, feature rich, and flexible.

Here’s another, related issue. While researching a new TDWI report on big data analytics, I ran across a few people (users, consultants, and vendors) who think that “advanced analytics” (or whatever you want to call it) will render OLAP obsolete. Therefore, user organizations should expunge OLAP from their BI portfolios. Uh, no. I don’t see that happening.

In defense of OLAP, it’s by far the most common form of analytics in BI today, and for good reasons. Once you get used to multidimensional thinking, OLAP is very natural, because most business questions are themselves multidimensional. For example, “What are western region sales revenues in Q4 2010?” intersects dimensions for geography, function, money, and time. Discoveries made in OLAP are easily “institutionalized” or “operationalized” (much more so than advanced analytics), so OLAP analyses are repeated over time with consistency. Since dimensions are easily expressed as parameters, an OLAP-based report can be as easy to use as a parameterized report, thereby putting OLAP-based analytics within the comprehension of a vast range of possible end-users.
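The western-region example can be sketched in a few lines of Python (purely illustrative, not any OLAP engine, with invented figures): the business question is simply an intersection of dimension values summed over a measure.

```python
# Toy sketch of the multidimensional question in the text:
# "What are western region sales revenues in Q4 2010?"
# Each fact row carries its dimensions; the question is an intersection.
facts = [
    {"geography": "West", "measure": "revenue", "quarter": "Q4-2010", "amount": 120.0},
    {"geography": "West", "measure": "revenue", "quarter": "Q3-2010", "amount": 90.0},
    {"geography": "East", "measure": "revenue", "quarter": "Q4-2010", "amount": 200.0},
]

def slice_total(rows, **dims):
    """Sum the measure across the intersection of the given dimension values."""
    return sum(r["amount"] for r in rows
               if all(r[d] == v for d, v in dims.items()))

print(slice_total(facts, geography="West", quarter="Q4-2010"))  # 120.0
```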

The scope of discovery of an analytic method seems to be an important concern right now, as seen in the current fascination with big data analytics. In that context, a possible limitation of OLAP is that most implementations are tightly coupled to datasets called cubes. If the information someone hopes to discover is not in a cube, then that can be a problem. Even so, so-called relational OLAP can be a solution, and OLAP tools are so friendly nowadays that just about anyone can create a cube. Depending on how an OLAP implementation is designed and which vendor tools are used, a cube can limit the scope of discovery, just as any analytic dataset can – even if it’s multi-terabyte big data.

In my mind, advanced analytics is very much about open-ended exploration and discovery in large volumes of fairly raw source data. But OLAP is about a more controlled discovery of combinations of carefully prepared dimensional datasets. The way I see it: a cube is a closed system that enables combinatorial analytics. Given the richness of cubes users are designing nowadays, there’s a gargantuan number of combinations for a wide range of users to explore.

So, OLAP’s not going away. Users would be nuts to abandon their large investments in such a handy technology. And it’s like most situations in IT: few things go away. Organizations just keep adding more tool types and best practices to their portfolios. Therefore, user organizations should expect to maintain their useful investments in OLAP, while also digging deeper into other forms of exploratory and discovery analytics.

So, what do you think, folks? Let me know. Thanks!

Posted by Philip Russom, Ph.D. on August 5, 2011


Big Data Analytics: Avoid the Analytic Cul-De-Sac

Blog by Philip Russom
Research Director for Data Management, TDWI

Do you know what a cul-de-sac is? In French, it literally means “bottom of the bag.” But figuratively it means what most Americans would call a “dead-end street.” In residential real estate, a cul-de-sac is a desirable place to live. In analytics, a cul-de-sac is where the epiphanies of advanced analytics never get off a dead-end street to be fully leveraged elsewhere in the enterprise.

The current hype around big data analytics has most discussions of analytics focused on “discovery” analytics. That’s where a business analyst or similar user employs an advanced analytics tool (based on data mining, statistics, natural language processing, complex SQL, etc.) to discover facts never known before. For example, the analyst may discover the root cause for a new form of customer churn, a new partner behavior that’s potentially fraudulent, or the hidden costs that erode otherwise profitable customers.

While researching a new TDWI report on big data analytics, I’ve run across a number of business analysts who revel in the chase around the cul-de-sac but can’t be bothered with operationalizing their epiphanies. “That’s someone else’s job,” one guy told me. Here’s what I mean.

Too often analysts drive through a figurative big data “bottom of the bag,” until just the right dataset yields an epiphany. Then they share their findings with managers and move on to the next analytic project.

This is an analytic cul-de-sac: the analyst does not also take the findings off the dead-end street and “operationalize” them. In other words, once you discover the new form of churn, analytic models, metrics, reports, warehouse data, and so on need to be updated, so the appropriate managers can easily spot the churn and do something about it quickly if it returns. Likewise, hidden costs, once revealed, should be operationalized in analytics (and possibly reports and warehouses), so managers can better track and study costs over time, to keep them down.
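As a hypothetical sketch of the simplest case of operationalizing, the snippet below (invented rule, threshold, and customer data, not from any real project) shows a one-off churn discovery being recast as a named metric that a scheduled report can recompute every week.

```python
# Hypothetical sketch: turning a one-off discovery into a repeatable check.
# The rule, threshold, and data are invented for illustration.

def newly_discovered_churn_rule(customer):
    """The analyst's (hypothetical) finding: customers with 2+ support
    escalations and no login in the last 30 days tend to churn."""
    return customer["escalations"] >= 2 and customer["days_since_login"] > 30

def churn_risk_metric(customers):
    """Operationalized version: a metric any scheduled report can track."""
    flagged = [c for c in customers if newly_discovered_churn_rule(c)]
    return len(flagged) / len(customers) if customers else 0.0

customers = [
    {"id": 1, "escalations": 3, "days_since_login": 45},
    {"id": 2, "escalations": 0, "days_since_login": 2},
]
print(f"Churn-risk share this week: {churn_risk_metric(customers):.0%}")  # 50%
```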

I think that most analysts and similar users avoid analytic cul-de-sacs by making sure that discovered epiphanies are operationalized by someone (whether the actual analyst or another team member). I’m just saying that the product of analytics isn’t necessarily being leveraged to the hilt in every organization.

To avoid analytic cul-de-sacs and similar squanderings of insight, you might want to review some of the processes around your use of advanced analytics. In particular, be sure the process extends beyond discovery into operationalizing the epiphanies of analytics.

So, what do you think, folks? Let me know. Thanks!

Posted by Philip Russom, Ph.D. on July 21, 2011


Agile BI and DW: Dynamic, Continuous, and Never Done

Delivering value sooner and being adaptable to business change are two of the most important objectives today in business intelligence (BI) and data warehouse development. They are also two of the most difficult objectives to achieve. “Agility,” the theme of the upcoming TDWI World Conference and BI Executive Summit, to be held together the week of August 7 in San Diego, is about implementing methodologies and tools that will shorten the distance to business value and make it easier to keep adding value throughout development and maintenance cycles.

We’re very excited about the programs for these two educational events. Earlier this week, I had the pleasure of moderating a Webinar aimed at giving attendees a preview of how the agility theme will play out during the week’s keynotes and sessions. The Webinar featured Paul Kautza, TDWI Director of Education, and two Agile experts who will be speaking and leading seminars at the conference: Ken Collier and Ralph Hughes.

Agile methodology has become a mainstream trend in software development circles, but it is much less mature in BI and DW. A Webinar attendee asked whether any Agile-trained expert could do Agile BI. “No,” answered Ken Collier. “Agile BI/DW training requires both Agile expertise as well as BI/DW expertise due to the nuances of commercial off-the-shelf (COTS) system integration, disparate skill sets and technologies, and large data volumes.” Ralph Hughes agreed, adding that “generic Agile folks can do crazy things and run their teams right into the ground.” Ralph then offered several innovations that he sees as necessary, including planning work against the warehouse’s reference architecture and pipelining work functions so everyone has a full sprint to work their specialty. He also advocated small, mandated test data sets for functional demos and full-volume data sets for loading and re-demo-ing after the iteration.

If you are just getting interested in Agile or are in the thick of implementing Agile for BI and DW projects, I would recommend listening to the Webinar, during which Ken and Ralph offered many wise bits of advice that they will explain in greater depth at the conference. The BI Executive Summit will feature management-oriented sessions on Agile, including a session by Ralph, but will also take a broader view of how innovations in BI and DW are enabling these systems to better support business requirements for greater agility, flexibility, and adaptability. These innovations include mobile, self-service, and cloud-based BI.

As working with information becomes integral to more lines of business and operations, patience with long development and deployment cycles will get increasingly thin. The time is ripe for organizations to explore what Agile methodologies as well as recent technology innovations can do to deliver business value sooner and continuously, in a virtuous cycle that does not end. In Ken Collier’s words, “The most effective Agile teams view the life of a BI/DW system as a dynamic system that is never done.”

Posted by David Stodder on July 14, 2011


IBM Cognos 10: Upward and Outward

IBM Cognos this week released Cognos 10, a major new release of its business intelligence software that contains lots of new goodies that are sure to bring smiles to its installed base and tempt some SAP BusinessObjects customers to jump ship.

I spent two days in Ottawa in September getting the IBM and Cognos 10 pitch. Here are highlights:

Company

- Strategy. Analytics is key to IBM’s future growth, which means Cognos is the apple of IBM's eye. IBM has spent $14 billion acquiring 25 companies since 2006 (including about $5 billion for Cognos), and there is no sign its buying spree will end soon.

- Services. IBM’s newly created Business Analytics Optimization services arm earned $9.4 billion last year, and IBM is hiring consultants like crazy to keep up with forecasted demand. Its target is 8,000 consultants worldwide.

- Licenses. Cognos 8 had double-digit license growth over the past five quarters, while the newly launched mid-market product, Cognos Express, which runs on the in-memory database TM1, has hundreds of customers in 25 countries. And TM1 is hot, generating double-digit growth on its own during the past year.

Cognos 10

- Visual Integration. Whereas Cognos 8 integrated the underlying architecture of Cognos’ once distinct products (reports, query, OLAP, dashboards, planning), Cognos 10 integrates the user experience. The new BI Workspace blurs the visual boundaries between these capabilities so users can seamlessly traverse from reporting to analysis to dashboards to planning and back again.

- Mashboards. Cognos 10 lets users create their own workspaces from widgets consisting of predefined report components. In other words, a mashboard. Report developers (using Report Studio Professional) simply “widgetize” charts, metrics, and even entire reports and they are automatically added to a library that users can access when creating their own personal workspaces.

- Progressive Interaction. Cognos 10 will expose additional functionality as users need it. For example, Business Insight users can click a button that says “do more” to expose functionality from Business Insight Advanced (replaces Report Studio Professional) to add objects and dimensions not available in the Business Insight library.

- Annotation. Users can annotate at the workspace, widget, or cell level (i.e., within a grid).

- Personal data. Users can add personal data from Excel and Cognos 10 will track, audit, and secure the data. I haven’t seen this demo’d yet, so I’m eager to see how it works.

- Active Reports. Cognos 10 lets report developers create interactive, Web-based reports and burst tailored versions to thousands of users, who can interact with them offline. Information Builders coined the term Active Reports for much the same technology, so IBM Cognos should be careful about infringing trademarks here.

- Improved Query Performance. Cognos 10 has improved query performance by automatically generating optimized SQL or MDX depending on the source, and by implementing a dimensionally aware, secure shared cache that can cache queries, metadata, members, and tuples (a toy sketch of the shared-cache idea follows this list).

- Lifecycle Manager. This lets administrators compare versions of reports visually to validate the efficacy of software upgrades and migrations from development to test to production. This is a very useful feature.
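As promised above, here is a toy Python sketch of the shared-cache idea. It is emphatically not how IBM Cognos implements it; it is only a picture of why keying cached results by the dimension members a query touches lets repeated slices skip the database.

```python
# Toy sketch only -- not IBM Cognos's implementation -- of a shared,
# dimension-aware query cache: results are keyed by the dimension members
# a query touches, so repeated slices never hit the database again.
database_hits = 0
query_cache = {}

def expensive_database_query(members):
    """Stand-in for running the generated SQL/MDX against the source."""
    global database_hits
    database_hits += 1
    return 275.0  # invented aggregate value

def run_query(members):
    key = tuple(sorted(members.items()))   # the dimensional slice is the cache key
    if key not in query_cache:             # only a cache miss reaches the database
        query_cache[key] = expensive_database_query(members)
    return query_cache[key]

run_query({"region": "West", "year": 2010})
run_query({"region": "West", "year": 2010})  # served from the shared cache
print(database_hits)  # 1
```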

There’s a lot to like in Cognos 10 and it should give IBM Cognos a long-lasting stream of new and upgraded license revenue.

Posted on October 28, 2010


The Spanner: The Next Generation BI Developer

To succeed with business intelligence (BI), sometimes you have to buck tradition, especially if you work at a fast-paced company in a volatile industry.

And that’s what Eric Colson did when he took the helm of Netflix’s BI team last year. He quickly discovered that his team of BI specialists moved too slowly to successfully meet business needs. “Coordination costs [among our BI specialists] were killing us,” says Colson.

Subsequently, Colson introduced the notion of a “spanner”—a BI developer who builds an entire BI solution singlehandedly. The person “spans” all BI domains, from gathering requirements to sourcing, profiling, and modeling data to ETL and report development to metadata management and QA testing.

Colson claims that one spanner works much faster and more effectively than a team of specialists. They work faster because they don’t have to wait for other people or teams to complete tasks or spend time in meetings coordinating development. They work more effectively because they are not biased to any one layer of the BI stack and thus embed rules where most appropriate. “A traditional BI team often makes changes in the wrong layer because no one sees the big picture,” Colson says.

Also, since spanners aren’t bound by a written contract (i.e., requirements document) created by someone else, they are free to make course corrections as they go along and “discover” the optimal solution as it unfolds. This degree of autonomy also means that spanners have higher job satisfaction and are more dedicated and accountable. One final benefit: there’s no finger-pointing if something fails.

Not For Everyone

Of course, there are downsides to spanning. First, not every developer is capable of spanning. Some don’t have the skills, and others don’t have the interest. “We have lost some people,” admits Colson. Finding the right people isn’t easy, and you must pay a premium in salary to attract and retain them. Plus, software license costs increase because each spanner needs a full license to each BI tool in your stack.

Second, not every company is well suited to spanners. Many companies won’t allocate enough money to attract and retain spanners. And mature companies in regulated or risk-averse industries may work better with a traditional BI organization and development approach.

Simplicity

Nonetheless, experience shows that the simplest solution is often the best one. In that regard, spanners could be the wave of the future.

Colson says that using spanners eliminates much of the complexity of running BI programs and development projects. The only things you need are a unifying data model, a BI platform, and a set of common principles, such as “avoid putting logic in code” or “account ID is a fundamental unifier.” The rest falls into the hands of the spanners, who rely on their skills, experience, and judgment to create robust local applications within an enterprise architecture. Thus, with spanners, you no longer need business requirement analysts or requirements documents, a BI methodology, project managers, or a QA team, says Colson.

This is certainly pretty radical stuff, but Colson has proven that thinking and acting outside the box works, at least at Netflix. Perhaps it’s time you consider following suit!

Posted on October 21, 2010