
TDWI Blog

Data Analysis and Design Blog Posts

See the most recent Data Analysis and Design-related items below.


The Sociology of KPIs

While teaching a course on performance dashboards recently, I had a minor epiphany. The reason key performance indicators (KPIs) are so difficult to create is that you need a degree in sociology to predict the impact they will have on human and organizational behavior. My advice to the class was: Don’t try to design perfect KPIs on the first try; rather, put them in play, see what behaviors they drive—both good and bad—and then adjust quickly.

Interpreting KPIs. The first challenge with KPIs is that without adequate training and socialization, people will interpret results differently. For example, if a KPI’s status is red but its trend is positive, what should a user think? Perhaps someone already spotted the problem and applied a remedy that is now working. Conversely, what if a KPI’s status is green but its trend is downward? A green light indicates performance is above average, but should you take action now, before the green light turns yellow? Tools that let users annotate KPIs can help them take the right action and teach novices how the company works.
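
To make this concrete, here is a minimal sketch of how a dashboard might pair a KPI’s status with its trend to suggest a reading rather than display a color alone. The status values, trend labels, and suggested wordings are invented for illustration, not taken from any particular product:

```python
# Minimal, illustrative sketch: pair a KPI's status with its trend
# to suggest a reading instead of showing color alone.
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    status: str   # "red", "yellow", or "green" relative to target
    trend: str    # "up" or "down" over recent periods

def suggested_reading(kpi: KPI) -> str:
    """Combine status and trend into a hint, covering the 'red but improving'
    and 'green but slipping' cases described above."""
    if kpi.status == "red" and kpi.trend == "up":
        return "Below target but improving: a remedy may already be working; verify before acting."
    if kpi.status == "green" and kpi.trend == "down":
        return "On target but slipping: investigate now, before the light turns yellow."
    if kpi.status == "red":
        return "Below target and not improving: act now."
    return "On target and holding: no action needed."

print(suggested_reading(KPI("On-time shipments", status="red", trend="up")))
```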

Driving Behavior. The second challenge with KPIs is using them to drive the right behaviors. If we apply incentives to KPIs—such as attaching merit pay to performance measured by the KPIs—we are in effect conducting a giant sociology experiment. Since humans are irascible creatures embedded in complex (dare I say dysfunctional) organizational systems, the true impact of KPIs is impossible to predict.

The mistake most KPI teams make is focusing on one behavior at the expense of another that is outside their purview. For example, call center executives who want to boost productivity may create a metric that rewards agents for the number of calls taken per hour. Incented by this metric, agents will be tempted to terminate calls or transfer them to other departments without resolving them. This will have a dramatic effect on customer satisfaction, which may be the responsibility of another team. Higher-level executives need to intervene and make sure a counterbalancing metric is introduced that rewards agents for both productivity and first-time call resolution.
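
As a rough sketch of what a counterbalancing metric might look like, the example below blends calls per hour with first-call resolution into a single agent score. The target of 10 calls per hour and the 50/50 weighting are purely illustrative assumptions:

```python
# Illustrative sketch: blend a productivity metric with a counterbalancing
# quality metric so neither can be gamed in isolation. Weights are assumptions.

def agent_score(calls_per_hour: float, first_call_resolution: float,
                target_cph: float = 10.0, weight_quality: float = 0.5) -> float:
    """Score an agent on productivity (calls per hour vs. a target) and
    first-call resolution (0.0-1.0), each capped at 1.0 before weighting."""
    productivity = min(calls_per_hour / target_cph, 1.0)
    quality = min(max(first_call_resolution, 0.0), 1.0)
    return (1 - weight_quality) * productivity + weight_quality * quality

# An agent who dumps calls scores high on volume but low overall:
print(agent_score(calls_per_hour=15, first_call_resolution=0.40))  # 0.70
# An agent who resolves calls the first time fares better:
print(agent_score(calls_per_hour=9, first_call_resolution=0.85))   # ~0.88
```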

KPI Ecosystems. KPI design teams need to think in terms of KPI ecosystems. Although two metrics may conflict with each other by driving contradictory behavior, this is OK. In a strange way, this conflict empowers workers to make judicious decisions. Humans have a great capacity to live in and reconcile the tension between two opposites (although not without some anxiety). With strong leadership and proper training, employees can effectively balance countervailing metrics.

Buy-In. It’s also imperative that you get workers’ input before you implement incentive-based metrics. That’s because workers understand the nuances of the processes being measured and can identify whether the metrics are realistic and attainable, as well as any potential loopholes that unscrupulous workers might exploit. Putting metrics out for broad-based review helps ensure the buy-in of the people whose behavior you are trying to measure and change.

Defining KPIs is a sociology experiment and your workers are the test subjects. Treat them with respect, and your experiment has a better chance of success. But remember, it is an experiment, and if it fails, that’s part of the process. Refine the metrics and try again until you get it right.

For more information on designing effective metrics, see Wayne’s report titled “Performance Management Strategies: How to Create and Deploy Effective Metrics.”


Posted by Wayne Eckerson on July 2, 2009


Less is More: Designing Performance Metrics

America’s biggest problem today is glut. The recession notwithstanding, we are saturated with stuff. I was heartened to read in the Wall Street Journal recently that supermarkets and discount retailers are cutting back on the number of items per category and brand that they carry. For example, Walgreen Co. is cutting back the number of superglues it carries from 25 to 11. Of course, 11 items is still an overabundance, but at least it’s a start.

We are also drowning in data. We’ve established personal coping mechanisms (or not) to deal with a never-ceasing stream of email, direct mail, and voice mail messages. But we are still vulnerable to the glut of metrics that our companies spit at us through an endless variety of reports. To cope, some people largely ignore the data, making decisions based on gut instinct, while others pluck numbers from various reports and insert them into a personalized spreadsheet to do their analysis. Dashboards put a pretty face on metrics but often don’t do enough to slice through the tangle.

Strategy rolls down, and metrics roll up.

To deal with the glut of metrics, we need to take a step back and understand what we are trying to accomplish. Executives need to identify a handful of strategic objectives and devise metrics to measure progress against them. Each of these high-level metrics then cascades into additional metrics at each successive level of the organization. Each metric supports processes at an increasingly granular level and is tailored to a small number of employees who are accountable for its results. Activity at each level is then aggregated to deliver an enterprise view. In this way, strategy rolls down and metrics roll up.
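
As a toy illustration of metrics rolling up, lower-level metrics can aggregate into the enterprise-level metric they support. The metric names and regional breakdown below are invented for the example:

```python
# Toy sketch of metrics rolling up: each leaf metric reports its own value,
# and a parent metric aggregates its children into an enterprise view.
from statistics import mean

metric_tree = {
    "on_time_delivery": {  # strategic, enterprise-level metric
        "children": ["on_time_delivery_east", "on_time_delivery_west"],
    },
    "on_time_delivery_east": {"value": 0.92},
    "on_time_delivery_west": {"value": 0.88},
}

def roll_up(metric: str, tree: dict) -> float:
    """Average child metrics into their parent; leaf metrics return their own value."""
    node = tree[metric]
    children = node.get("children")
    if not children:
        return node["value"]
    return mean(roll_up(child, tree) for child in children)

print(roll_up("on_time_delivery", metric_tree))  # ~0.90 for the enterprise view
```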

The Power of One. So what is the right number of strategic metrics? One targeted metric may be all that is needed. British Airways reportedly turned itself around in the 1980s by focusing on a single metric: the timely arrival and departure of airplanes. The CEO personally called any airport manager when a British Airways plane was delayed over a certain time to discover the reason for the holdup. The metric and the threat of a call from the CEO triggered a chain reaction of process improvements throughout the organization to ensure the event did not repeat itself.

Consequently, the airline reaped sizable benefits: it reduced the costs involved in reaccommodating passengers who would have missed connecting flights and avoided alienating those customers; it improved the morale of employees, who no longer had to deal with angry or upset passengers; and it improved the performance of suppliers and partners, who no longer had to rejigger schedules.

Ideally, each employee tracks about 3 to 7 metrics, each of which supports one or more high-level metrics. This is a reasonable number to manage and about the maximum number of things an individual can focus on effectively. More than that and the metrics lose their punch. Collectively, the organization may still have thousands of metrics it needs to track, but all emanate from one (or, more realistically, three to five) strategic objectives, which translate into 10 to 20 high-level metrics.

Thus, an effective dashboard strategy starts with defining strategic objectives and the metrics that support them. Dashboard designers should remember that less is more when creating performance metrics.

For more information on designing effective metrics, see Wayne’s report titled “Performance Management Strategies: How to Create and Deploy Effective Metrics.”


Posted by Wayne Eckerson on July 1, 2009


Managing System Change

BI environments are like personal computers: after a year or two, performance starts to degrade and you are never quite sure why. The best explanation is that these systems start accumulating a lot of “gunk” that is hard to identify and difficult to eliminate.

Personal computers, for example, become infected with viruses, spyware, and other malware that wreak havoc on performance. But we cause many problems ourselves by installing lots of poorly designed software, adding too many memory-resident programs, accidentally deleting key systems files, changing configuration settings, and failing to perform routine maintenance. And when the system finally freezes up, we execute unscheduled (i.e. three-finger) shutdowns, which usually compound performance issues. Many of us quickly get to the point where it’s easier and cheaper to replace our personal computers rather than try to fix them.

Unfortunately, BI environments are much harder and more expensive to return to a pristine state. Over time, many queries become suboptimized because of changes we make to logical models, physical schemas, or indexes, or because we create incompatibilities when we upgrade or replace drivers and other software. Each time we touch any part of the BI environment, we create a ripple effect of problems that makes IT averse to making any changes at all, even to fix known problems! One data architect recently confessed to me, “I’ve been trying 10 years to get permission to get rid of one table in our data warehousing schema that is adversely affecting performance, but I haven’t succeeded.”

But when IT is slow to make changes and maintenance efforts begin to dwarf development initiatives, the business revolts and refuses to work with IT or fund its projects.

The above architect said the solution is “better regression testing.” The idea is that if we perform continuous regression testing, IT will be less hesitant to change things because it will quickly see whether the impact is deleterious. However, this is like using a hammer and chisel to chop down a tree: it will work, but it’s not very efficient.

The better approach is to implement end-to-end metadata so you can see what impact a change in one part of the BI environment will have on every other part. Of course, a metadata management system has been an elusive goal for many years. But we are starting to see new classes of tools emerge that begin to support impact analysis and data lineage. ETL vendors, such as Informatica and IBM, have long offered metadata management tools for the parts of the BI environment they touch. And a new class of tools that I call data warehouse automation tools, which automatically generate star schemas and semantic layers for reporting, also provides a glimmer of hope for easier change management and reporting. These tools include Kalido, BI Ready, Wherescape, and Composite Software with its new BI Accelerator product. You’ll hear more about these tools from me in future blogs.
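
As a toy sketch of the kind of impact analysis that end-to-end metadata enables, a lineage graph can be walked to find everything downstream of a proposed change. The object names below are invented, and real metadata tools maintain this graph for you:

```python
# Toy sketch: a lineage graph mapping each object to its downstream
# dependents, walked breadth-first before making a change.
from collections import deque

lineage = {
    "orders_table":   ["orders_staging"],
    "orders_staging": ["sales_fact"],
    "sales_fact":     ["revenue_dashboard", "regional_sales_report"],
}

def downstream_impact(changed_object: str) -> set:
    """Return every object downstream of a proposed change."""
    impacted, queue = set(), deque([changed_object])
    while queue:
        for dependent in lineage.get(queue.popleft(), []):
            if dependent not in impacted:
                impacted.add(dependent)
                queue.append(dependent)
    return impacted

# Before altering the staging table, see what would be affected:
print(sorted(downstream_impact("orders_staging")))
# ['regional_sales_report', 'revenue_dashboard', 'sales_fact']
```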

Posted by Wayne Eckerson on May 6, 2009