
When Implementations Fail: Lessons and 10 Best Practices for BI Professionals

Problems with the Affordable Care Act website rollout offer lessons that BI professionals can apply to their own project implementations. We offer 10 best practices to help you avoid such problems in your own environment.

Most of us have at one time or another been involved in a transactional or analytic system that was placed into production prematurely with missing or less-than-fully-tested functionality. In many situations, this was due to intense pressure (if not outright dictates) to "meet the deadline, no matter what it takes" from user organizations or from the executive to whom the IT organization reports.

The October 2013 rollout of the Affordable Care Act (aka Obamacare) website, Healthcare.gov, provides a clear example of the problems and negative effects that can result when a system is released prematurely and either fails completely or does not meet expectations. Among the website's many problems were an inability to handle large numbers of concurrent users, system freezes, loss of user input data, and integration problems with other systems, including those of the insurance companies users had selected.

There were multiple causes for this fiasco. These included late scope changes, the need to integrate many databases, insufficient code reviews, seemingly poor project management, multiple vendors and subsequent "finger pointing," and the requirement that users create an account before they could begin to shop for coverage. These problems were exacerbated by the intense sponsor pressure to "go live" without adequate system and stress testing.

Unfortunately, when a system fails during initial implementation and the causes are not quickly remedied, it can be difficult to regain credibility with the user community. Consider a data warehouse that meets or exceeds expectations for user access, response time, ease of use, source content, and other metrics, yet whose initial queries yield incorrect results. This is likely due to data quality issues, perhaps caused by inadequate data cleansing resulting from the need to "cut corners" to meet project deadlines. The overall credibility of the warehouse will suffer, and even when the problems are ultimately fixed, users may not fully trust the results they obtain for a long time.
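To make this concrete, here is a minimal sketch in Python, using SQLite and a hypothetical customer_dim table and customer_id column, of the kind of automated data quality gates that can run at the end of each load so that bad data is caught before users ever query it:

    import sqlite3

    def run_quality_gates(conn, source_row_count):
        # Collect data quality failures for the loaded dimension table.
        failures = []

        # Gate 1: row-count reconciliation against the source extract.
        loaded = conn.execute("SELECT COUNT(*) FROM customer_dim").fetchone()[0]
        if loaded != source_row_count:
            failures.append(f"row count mismatch: loaded {loaded}, expected {source_row_count}")

        # Gate 2: no null business keys.
        nulls = conn.execute(
            "SELECT COUNT(*) FROM customer_dim WHERE customer_id IS NULL").fetchone()[0]
        if nulls:
            failures.append(f"{nulls} rows with a null customer_id")

        # Gate 3: no duplicate business keys.
        dupes = conn.execute(
            "SELECT COUNT(*) FROM (SELECT customer_id FROM customer_dim "
            "GROUP BY customer_id HAVING COUNT(*) > 1)").fetchone()[0]
        if dupes:
            failures.append(f"{dupes} duplicated customer_id values")

        return failures

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE customer_dim (customer_id INTEGER, name TEXT)")
        conn.executemany("INSERT INTO customer_dim VALUES (?, ?)",
                         [(1, "Acme"), (1, "Acme"), (None, "Unknown")])
        for problem in run_quality_gates(conn, source_row_count=4):
            print("QUALITY GATE FAILED:", problem)

If any gate fails, the load can be rolled back or the publish step held until the problem is investigated, rather than letting users discover incorrect results on their own.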

If a premature rollout can cause such problems, why do such rollouts still occur, especially when the IT organization is aware of the potential problems? Over the years, I have observed several implementation failures, and one of the most common reasons is that individuals' metrics and incentives did not closely align with the best interests of the organization. These incentives range from the promise of a bonus for meeting a deadline to the potential loss of employment for missing one. I have observed both, and in one instance, an organization had terminated a contract with an application service provider only to have its own internally developed replacement fail on rollout! I have also personally delayed an implementation when it became apparent to me that user training and procedures were inadequate, even though the system itself was production ready.

What lessons can we learn from these situations, and what can we do to prevent such mistakes? Here are 10 best practices:

  • Closely monitor implementation schedules and issue warnings when problems arise that might (should?) delay the Go-Live day.

  • Be aware that a sponsor's metrics may not coincide with the good of your organization. If the sponsor's bonus is dependent on meeting an implementation deadline, help the sponsor see the negative career impact of a failed implementation.

  • Make sure written status reports reflect problems; avoid painting a rosy picture when a serious thorn is uncovered. Yes, this may be considered a CYA response, but the risk of being blamed for a major system failure, especially in a highly political organization, may make it necessary. Furthermore, for a mission-critical system, a failed implementation may have a serious, negative organizational impact.

  • Develop suitable test plans; even if every unit test succeeds, integration tests may not (a minimal sketch of this gap appears after this list). User acceptance testing is important as well to ensure that the system delivers what the users expect.

  • Ensure that training and operational procedures are part of your project plan, not an afterthought. This should be verified as part of user acceptance testing.

  • Avoid shortcuts that may compromise system credibility. Data quality should be addressed as part of the project, not after the fact when a problem is discovered.

  • When issues are identified (and there will almost always be a few), prioritize them so those that would prevent Go-Live are addressed first and "nice-to-haves" are postponed if necessary. Make sure that user training and procedures are modified to reflect what is actually being implemented, not what was initially promised but is now being delayed.

  • If something does go wrong, encourage cooperation to fix the problem; avoid finger pointing and confrontation that will likely only make the situation worse.

  • If at all possible, a "the buck stops here" person should bear ultimate responsibility for the implementation; giving a committee (rather than a single person) ultimate responsibility is likely to lead to conflicts among committee members when there is a problem.

  • Recognize that, unless it involves a compliance issue, an implementation date is a target, not an absolute necessity. If a critical problem exists, don't assume that it will be resolved at the last minute. Hope is not a strategy.
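As a companion to the test-plan practice above, here is a minimal sketch (with hypothetical function and field names) of why passing unit tests do not guarantee a passing integration test: each component below behaves exactly as its own test expects, but the integration test surfaces a date-format mismatch between the extract and the load.

    import datetime
    import unittest

    def extract_order_date(record):
        # The source extract returns dates as MM/DD/YYYY strings (an assumption for this sketch).
        return record["order_date"]

    def load_order_date(date_string):
        # The warehouse load expects ISO YYYY-MM-DD strings.
        return datetime.datetime.strptime(date_string, "%Y-%m-%d").date()

    class UnitTests(unittest.TestCase):
        def test_extract(self):
            self.assertEqual(extract_order_date({"order_date": "10/01/2013"}), "10/01/2013")

        def test_load(self):
            self.assertEqual(load_order_date("2013-10-01"), datetime.date(2013, 10, 1))

    class IntegrationTest(unittest.TestCase):
        def test_extract_then_load(self):
            # This test does not pass: the two components never agreed on a date format,
            # which is exactly the gap that unit tests alone cannot catch.
            raw = extract_order_date({"order_date": "10/01/2013"})
            self.assertEqual(load_order_date(raw), datetime.date(2013, 10, 1))

    if __name__ == "__main__":
        unittest.main()

User acceptance testing then goes one step further, confirming that what the integrated system delivers is what users actually expected.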
