Q&A: BI and the Data Dilemma

Growing data volumes are challenging BI practitioners and IT staff alike. We explore new approaches to dealing with the problem.

As business intelligence expands within an enterprise, new technologies (such as search) become ever more important in giving users what they need, especially when they are dealing with larger amounts of data.

We spoke with Miriam G. Tuerk, president and CEO of Infobright, about the challenges facing corporations today, the new approaches being employed to deal with increasing data volumes, and what will make user queries more efficient. Ms. Tuerk's 20 years of experience include work in the consulting and telecommunications sectors covering the Canadian, U.S., European, and Asian markets.

BI This Week: What challenges do corporations face in managing data today compared to a decade ago?

Miriam G. Tuerk: In the "old" days -- which in this ever-changing tech world means the 1990s -- companies had a lot less data, fewer and less-diverse users, and business needs were typically satisfied by running canned reports. The enterprise database was often used to meet this need, or a data warehouse was used as a silo for business intelligence.

Today we are facing a very different environment, with rapidly growing volumes of data, many more users with different needs, and a diverse workload. Plus, many more types of companies depend on business intelligence than ever before.

Here are examples of two very different requirements for today's data warehouses:

  • The first requirement targets users who run repetitive queries. Consider a data warehouse that supports a call center for a cell phone company. Each time a customer calls, the system calls up his or her profile. This is a repetitive, OLTP-like query, and it requires a system specifically designed and engineered to optimize its performance.
  • A second requirement for data warehousing is analytics. Here, marketing, finance, sales, compliance, risk management, and operations groups are performing ad hoc, changing, and unknown queries such as: "How did a particular 2007 Christmas sales campaign perform compared to our 2006 campaign?" or "Let's analyze why there are more mortgage defaults in this area over the last 12 months versus the last five years." (The sketch after this list contrasts these two query shapes.)
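
To make the contrast concrete, here is a small illustration in Python using SQLite. The schema, table names, and data are hypothetical, invented only for this sketch; the point is the difference in query shape: a repetitive point lookup that an index serves well versus an ad hoc aggregate that scans broadly.

    # Illustrative only: a toy schema contrasting the two workload types.
    # Table names, columns, and data are hypothetical, not from Infobright.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE customers (phone TEXT PRIMARY KEY, name TEXT, plan TEXT)")
    cur.execute("CREATE TABLE sales (sale_date TEXT, campaign TEXT, amount REAL)")
    cur.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                    [("555-0101", "Ada", "unlimited"), ("555-0102", "Ben", "prepaid")])
    cur.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                    [("2006-12-10", "xmas2006", 120.0), ("2007-12-12", "xmas2007", 180.0)])

    # 1) Repetitive, OLTP-like lookup: the same shape on every call; an index
    #    (here, the PRIMARY KEY on phone) answers it almost instantly.
    cur.execute("SELECT name, plan FROM customers WHERE phone = ?", ("555-0101",))
    print(cur.fetchone())              # ('Ada', 'unlimited')

    # 2) Ad hoc analytic query: its shape isn't known in advance, it touches
    #    many rows, and no purpose-built index can be prepared for it.
    cur.execute("""
        SELECT strftime('%Y', sale_date) AS yr, SUM(amount)
        FROM sales WHERE campaign LIKE 'xmas%' GROUP BY yr""")
    print(cur.fetchall())              # [('2006', 120.0), ('2007', 180.0)]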

IT invests a great deal of time and resources to support all of this.

The problem is that there are far more requirements than resources and money to meet them. Infobright has taken up this challenge and developed technology that provides a solution with far fewer resources, at far lower cost, and with much faster implementation.

Indexing data is a tried-and-true approach to managing data for ready access, but you believe this approach isn't scalable. Can you explain why?

Indexing and doing a lot of physical data modeling make sense in the first example I cited -- a high volume of anticipated, repetitive queries. They don't work well at all for the example of unplanned, complex queries. It just takes too much time and effort, and no one can accurately predict what kind of information they'll need in the future. The other issue is that as you index more data, you add to the size of the database, and that often negatively impacts performance.
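
A rough back-of-envelope sketch illustrates the storage side of this argument. Every figure below is an assumption chosen only for illustration, not a measurement from any particular system:

    # Back-of-envelope: how secondary indexes can inflate storage.
    # All figures are illustrative assumptions, not vendor benchmarks.
    rows = 1_000_000_000           # one billion fact rows
    row_bytes = 100                # assumed average row size
    index_entry_bytes = 16         # assumed key + row pointer per entry

    base_gb = rows * row_bytes / 1e9
    per_index_gb = rows * index_entry_bytes / 1e9

    for n in (0, 5, 10):
        total = base_gb + n * per_index_gb
        print(f"{n:2d} indexes: {total:5.0f} GB "
              f"(+{n * per_index_gb / base_gb:.0%} over the base table)")
    #  0 indexes:   100 GB (+0% over the base table)
    #  5 indexes:   180 GB (+80% over the base table)
    # 10 indexes:   260 GB (+160% over the base table)

Under these assumptions, ten secondary indexes more than double the footprint of the base table, before any of them has answered a single unplanned query.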

Our approach uses knowledge about the data (derived on data load) to create an optimized system. There are three keys to our architecture -- data packs, the Knowledge Grid, and the Optimizer.

When data is loaded into Infobright, it is tightly compressed and stored in "data packs" using our column-oriented data store. Our Knowledge Grid automatically creates a highly compact set of metadata, which stores information about the relationships between packs and statistical information about their contents. Our Optimizer uses the Knowledge Grid to determine the minimum number of data packs that need to be decompressed to satisfy a query.

When a query is initiated, Infobright searches the grid to find which data packs, if any, are required to resolve the query. Only the relevant data packs are opened, which speeds performance for all queries.
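
As a minimal sketch of the general idea, the following Python code prunes at the level of per-pack metadata before touching any compressed data. The pack size, the zlib codec, and the min/max statistics are simplifying assumptions made for this sketch; they stand in for, and do not reproduce, Infobright's actual design:

    # A toy column store: compressed "data packs" plus per-pack statistics,
    # in the spirit of the architecture described above.
    import struct
    import zlib

    PACK_SIZE = 65_536  # rows per pack (an arbitrary choice for illustration)

    def load_column(values):
        """Split a column of ints into packs; compress each, record stats."""
        packs = []
        for i in range(0, len(values), PACK_SIZE):
            chunk = values[i:i + PACK_SIZE]
            raw = struct.pack(f"{len(chunk)}q", *chunk)   # 64-bit integers
            packs.append({
                "data": zlib.compress(raw),    # the compressed "data pack"
                "rows": len(chunk),            # per-pack metadata ("grid"):
                "min": min(chunk),             #   min/max let the optimizer
                "max": max(chunk),             #   rule packs in or out
            })
        return packs

    def count_greater_than(packs, threshold):
        """Count values > threshold, decompressing only packs that need it."""
        total = 0
        for pack in packs:
            if pack["max"] <= threshold:
                continue                       # irrelevant: skipped untouched
            if pack["min"] > threshold:
                total += pack["rows"]          # all rows match: stats suffice
                continue
            raw = zlib.decompress(pack["data"])            # "suspect" pack
            chunk = struct.unpack(f"{pack['rows']}q", raw)
            total += sum(1 for v in chunk if v > threshold)
        return total

    col = list(range(1_000_000))               # a toy sorted column
    packs = load_column(col)
    print(count_greater_than(packs, 900_000))  # 99999; only 1 pack decompressed

The point of the sketch is the skip logic: packs whose statistics prove them irrelevant are never decompressed, and packs whose statistics prove that every row matches are answered from metadata alone, so only the ambiguous packs incur any decompression work.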

Many of today's BI tools were designed to give the business user an easy way to query the company's data, yet many of these so-called advanced solutions are failing to provide users the data they truly want. Why?

The fault usually lies with the underlying data and data warehouse, not the BI tool itself. Oftentimes, IT can't provide access to all the data users need because it is either stored in multiple databases and is hard to extract and consolidate, or it is too expensive to replicate all of it in a data warehouse. For unplanned queries, especially those that require access to large amounts of data, the traditional indexing approach to data warehousing results in unacceptably slow performance.

The challenge we're meeting is in providing the right answers quickly to any query. We do this by providing quick access to the data across all parts of a company's business -- the finance system, the marketing and sales system, and the transactional system. Infobright takes data from all of those systems and simplifies and consolidates it, making it searchable on a deeper and more meaningful level.

If users often run queries that go nowhere, what's the solution for squeezing the most relevant and accurate data from huge datasets?

The Infobright approach is all about working smarter, not harder, with these huge datasets. We completely eliminated the need to forecast what queries you want to run or what questions you want to ask, so our customers can run ad hoc, complex queries on huge data volumes. No other vendor has managed to do this; instead, they design and program the software around the expected questions. This, in our view, is a backwards approach.

Which markets and users are adopting this approach today?

The online marketing, advertising, and financial services sectors are early adopters of any technology that handles high volumes of unpredictable data. They need this because the inability to respond at lightning speed in their industries can be the kiss of death. However, many other markets are finding they have the same need, so our solution is providing great value in other industries as well.

What is the future of this technology as companies tackle an ever-increasing amount of data?

The future is in providing a simple, Google-like experience for all sorts of queries, from the simplest to the most complex. More importantly, users need to be able to run their queries against the data warehouse without any manual intervention from the IT department. The big, hidden story in BI today is that the growing amount of corporate data is causing IT to drown under the workload required to support it.
