
Q&A: The Seven Metrics of Highly Successful EDW Programs

How can you measure the success of your enterprise data warehouse initiative? Ralph Hughes has devised a concise set of seven factors to evaluate.

[Editor's note: Ralph Hughes is conducting a session at TDWI Las Vegas (February 22-27, 2015) entitled The Seven Metrics of Highly Successful EDW Programs, an overview course for managers and business stakeholders that will focus on a core set of metrics that are easy to collect and that clearly illuminate project team performance.]

Ralph Hughes is the chief systems architect for Ceregenics, an agile DW/BI consulting firm. He is also the creator of the Agile Data Warehousing method and author of a book by the same name.

BI This Week: You're leading a half-day course for the "leadership and management" track at this year's conference. What need among DW/BI managers and team leaders are you hoping to address in your session?

Ralph Hughes: I lead a group of solutions architects who help companies transition their DW/BI development teams to agile delivery methods in order to get three- to four-fold increases in delivery speeds and application quality. Over the fifteen years we've provided this service, we've learned that there's a big difference between "doing agile" and "doing agile right."

Generic agile methods stipulate that you just rely on "self-organized teams," but that's pretty naïve, especially when building applications as complex as data warehousing. Agile teams are vulnerable to many of the same dysfunctions as any other group of humans. Anyone who needs to know that a team is truly performing well must have a window on team performance from day one, so they can catch problematic behaviors and delivery gaps while there's still time and money to correct the project's trajectory. This course provides a quick set of metrics that will visualize the most important aspects of team performance for project sponsors, IT management, and team leaders.

As complicated as software development is, wouldn't a manager need hundreds of metrics to know how a team is performing? How did you settle upon just seven to include in this class?

Let me answer that question from two directions.

First, when you lead a lot of teams, you naturally encounter performance indicators that only turn positive if a team is doing a lot of little things right. These are the "high-powered" metrics that tell you a lot in one glance. To create this course, my consultants and I have taken the best of the high-powered metrics we've used in the past and selected a set that covers the full arc of the software development process.

Second, sometimes it's good to restrict yourself to a small number of metrics. When you can create only a few windows into a complex system, you have to take a moment to really think through the process you're trying to manage and identify its fundamentals. You don't want a lot of little indicators to distract you from what really matters. So, of course there are many metrics to choose from, but we want to provide a lean starter set that not only puts managers in an information-rich position quickly, but also structurally encourages them to be more mentally engaged with the teams they're leading, rather than relying on a one-size-fits-all dashboard that could eventually overlook something important.

What framework did you choose to employ when you decided upon just seven metrics for this baseline?

Our experience with big, agile DW/BI projects has shown us that high-performance teams don't just "do agile." Instead, they master ways to perform the complete software engineering process in repeatable increments. We have pictured that engineering process as a series of six major steps, and provided a metric for each one: project planning, tactical planning, programming, testing, benefits realization, and adapting your process. Of course, there are agile versions of each of these steps, but if you think about it, even traditionally managed teams should be performing these project phases effectively, so our starter set of metrics provides a solid view of team performance no matter what management style you've decided to employ.

You just outlined six software engineering steps, but there are seven metrics. What's missing?

Good catch. We decided to "double team" the all-important notion of benefits realization. Software development exists to deliver value to the customer. Agile methods in particular are all about constantly delivering value. How do you know for sure that a team is delivering value? You could ask the team if they've given the customers what they asked for. Certainly that's important from a contract point of view, but what if the customer says "Too bad, this application is still not what we need"?

We learned through hard experience that you must measure both of these delivery concepts. First, define the value expected from each component when you slate it for development, and document that you delivered those components to the business. Second, measure whether the business is actually using that software consistently, which is a good indicator that they like what they got. With two measures for benefits realization, the total set of metrics adds up to seven.
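As an illustration of that second measure, here is a minimal Python sketch of counting distinct active users per week from a BI audit log, a simple proxy for consistent business usage. The log events, user names, and dates below are invented for illustration:

```python
from collections import defaultdict
from datetime import date

# Hypothetical BI audit-log events: (query date, user id).
events = [
    (date(2015, 2, 2), "alice"), (date(2015, 2, 3), "bob"),
    (date(2015, 2, 4), "alice"), (date(2015, 2, 10), "carol"),
    (date(2015, 2, 11), "alice"), (date(2015, 2, 12), "bob"),
]

def weekly_active_users(log):
    """Count distinct users per ISO week -- a rough proxy for
    whether the business is actually using what was delivered."""
    weeks = defaultdict(set)
    for day, user in log:
        weeks[day.isocalendar()[:2]].add(user)  # key: (ISO year, ISO week)
    return {week: len(users) for week, users in sorted(weeks.items())}

print(weekly_active_users(events))  # {(2015, 6): 2, (2015, 7): 3}
```

A flat or falling count in the weeks after delivery is the early warning that the business got what it asked for but not what it needed.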

The performance of every development team seems to hinge upon a lot of human factors, which are hard to quantify well enough to measure. How does your starter set for DW/BI managers address this challenge?

Our course does introduce attendees to quantifying the "soft aspects" of application development. The key is to substitute consistently subjective units of measure (such as story points for level-of-effort estimates) for inconsistently objective measures (such as labor-hour estimates). Consistently subjective measures provide a reliable depiction of events that can be presented as a true trend in team performance, despite the fact that they are subjective.
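To make that concrete, here is a minimal sketch (the per-sprint point totals are invented) of how consistently applied story points yield a trend line; a simple moving average smooths the sprint-to-sprint noise:

```python
# Hypothetical completed story points per sprint for one team.
velocity = [18, 21, 20, 24, 23, 27]

def rolling_mean(xs, window=3):
    """Simple moving average to smooth sprint-to-sprint noise."""
    return [sum(xs[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(xs))]

print(rolling_mean(velocity))  # approx. [19.67, 21.67, 22.33, 24.67] -- rising
```

The absolute numbers mean nothing outside the team, but because the team applies its own yardstick consistently, the direction of the line is trustworthy.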

We also adapt a classic marketing metric, the Net Promoter Score, to quantify the subjective impressions that teammates have about themselves as a team. On most teams, there are folks who know a problem exists and even have a good idea of how to fix it, but that information does not bubble to the surface. Managers, sponsors, and other team leaders need to learn of the problem and then see whether the corrective actions taken have improved the situation. The Net Promoter Score is a fantastic way to uncover hidden knowledge among the developers and quantify the impact of new policies over time. The fact that the notion you're measuring is soft or subjective does not matter here. You're not creating a metric for a court of law. You just need a way for the team to express what they already collectively know, a way that can be used later to show them that their new work habits have fixed the problem.
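For reference, the arithmetic behind the Net Promoter Score is simple: respondents answer a single question on a 0-to-10 scale, scores of 9-10 count as promoters, 0-6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch, using an invented team retrospective survey:

```python
def net_promoter_score(scores):
    """Compute NPS from 0-10 survey responses:
    % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Hypothetical retrospective question -- "Would you recommend this
# team's way of working to a colleague?" -- answered by nine teammates.
responses = [9, 10, 8, 7, 6, 9, 10, 4, 8]
print(f"Team NPS: {net_promoter_score(responses):+.0f}")  # Team NPS: +22
```

Asked anonymously sprint after sprint, the same one-question survey turns the team's collective gut feel into a number you can track against each process change.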
