Enterprise Data Strategy: It’s Not Magic
A data strategy that does not anticipate business and technology change will likely be doomed to irrelevance.
- By David Stodder
- January 22, 2013
In Harry Potter movies, science fiction stories, or old TV shows such as “Bewitched,” magical characters make it look easy to move forward and backward in time, or even, with a twitch and a blink, stop time altogether as they rearrange things to produce a more suitable outcome. Enterprise data strategists, alas, enjoy no such luxury; they must develop comprehensive plans for building and protecting data assets even as technology and business components continuously change.
There may be data strategists who do possess magical powers, but those I have met live in a real world of here-and-now constraints and a fog of unanticipated events that can make it hard to predict the future. Yet, they know that a data strategy that does not anticipate business and technology change will likely be doomed to irrelevance.
“Data Strategy for Your Enterprise” is the theme of TDWI’s first World Conference of 2013, to be held February 17-22 in Las Vegas, Nevada. Co-located with the World Conference will be the TDWI BI Executive Summit, which will feature case studies, expert sessions, and panel discussions that are devoted to the complementary theme of “Enterprise Data Strategies for Analytics and BI.”
Both programs will address critical technology trends and practices and how they can be deployed to realize higher business value. Topics will include data quality, data governance, master data management, operational BI, self-service BI, Hadoop, big data, and more. Educational sessions at the conferences will help you develop better strategies for aligning business and IT, which is essential for sustaining a successful enterprise data strategy.
Data Supply Chains and Workloads
The World Conference’s two keynotes will address themes that I see as vital to the task of developing adaptable enterprise data strategies. Evan Levy notes in the description of his Monday (February 18) keynote presentation that “it is no longer sufficient to manage and track where data is created and consumed -- we must also know how it moves and migrates.” To address this challenge, Levy will describe a “data supply chain” approach to help organizations gain an integrated view of their data resources, more so than has been possible with limited and static data management perspectives. Organizations in heavily regulated industries (such as financial services and health care) especially need to govern data movement and migration effectively to adhere to policy requirements.
On Thursday, February 21, William McKnight will deliver a keynote focused on how to evaluate “current, projected, and envisioned” workloads so that they may be supported by the right storage selection, which, incidentally, may not be the enterprise data warehouse. With big data technologies such as Hadoop available, as well as cloud computing and other online data services, data strategists have an array of options to choose from for particular workloads or types of data. McKnight’s workload focus can help data strategists align technology choices with business users’ actual workload demands, enabling organizations to avoid over- or under-allocating resources. (Note: You can read my colleague James Powell’s recent interview with William McKnight here.)
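To make the workload-matching idea concrete, here is a minimal sketch of rule-based platform selection. The workload traits and the rules themselves are illustrative assumptions on my part, not McKnight's actual evaluation framework:

```python
# Hypothetical sketch: matching workload characteristics to a storage platform.
# The trait names and decision rules are illustrative assumptions, not a
# published methodology.

def choose_platform(workload):
    """Pick a storage platform for a workload described by simple traits."""
    # Structured data with interactive query demands fits the warehouse.
    if workload.get("structured") and workload.get("low_latency_queries"):
        return "data warehouse"
    # Very large or unstructured data may be better served by Hadoop.
    if workload.get("volume_tb", 0) > 100 or not workload.get("structured"):
        return "Hadoop"
    # Spiky, elastic demand can point toward a cloud data service.
    if workload.get("elastic_demand"):
        return "cloud data service"
    return "data warehouse"

workloads = [
    {"name": "finance reporting", "structured": True, "low_latency_queries": True},
    {"name": "clickstream archive", "structured": False, "volume_tb": 500},
    {"name": "seasonal campaign analytics", "structured": True, "elastic_demand": True},
]

for w in workloads:
    print(w["name"], "->", choose_platform(w))
```

The point of such a sketch is only that the mapping from workload to platform can be made explicit and reviewable, rather than defaulting every workload to the enterprise data warehouse.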
Big Data and the Warehouse
Of course, one of the biggest debates among data strategists is how to deploy big data technologies such as Hadoop and integrate them more effectively with existing data warehousing and BI environments. This will surely be a hot topic at both TDWI conferences in Las Vegas. Demand for data that is not currently in the warehouse is growing; in the research for the just-published TDWI Best Practices Report, Achieving Greater Agility with Business Intelligence, we found that it is increasingly common for users to request changes to BI reporting requirements so that they can tap new data sources such as Hadoop files.
One practice growing in popularity is to bring Web logs and other unstructured data into Hadoop files for users to evaluate first, before the data is put through the rigorous transformation and quality processes involved in loading it into the data warehouse. If users and data analysts deem the data valuable and worthy of further exploration, it can be brought into the warehouse and governed appropriately. If not, the organization will have avoided the wasted effort of bringing in unnecessary data.
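The evaluate-first, promote-later pattern described above can be sketched in a few lines. This is a toy illustration under my own assumptions, with in-memory lists standing in for Hadoop files and the warehouse, and made-up field names; real implementations would use actual Hadoop and ETL tooling:

```python
# Hypothetical sketch of the "stage raw, evaluate, then promote" pattern.
# Lists stand in for Hadoop staging and the warehouse; field names are
# illustrative assumptions.

def stage_raw(records, staging):
    """Land raw records in staging with no transformation (Hadoop-style)."""
    staging.extend(records)

def cleanse(record):
    """The rigorous transformation applied only on promotion to the warehouse."""
    return {"url": record["url"].lower().rstrip("/") or "/",
            "status": int(record["status"])}

def promote(staging, warehouse, is_valuable):
    """Move only records that analysts judged valuable into the warehouse."""
    for rec in staging:
        if is_valuable(rec):
            warehouse.append(cleanse(rec))

staging, warehouse = [], []
stage_raw([{"url": "/Home/", "status": "200"},
           {"url": "/favicon.ico", "status": "404"}], staging)

# Analysts decide only successful page views are worth governing.
promote(staging, warehouse, lambda r: r["status"] == "200")
print(warehouse)  # [{'url': '/home', 'status': 200}]
```

The design choice the pattern captures is deferral: the expensive transformation and governance work is spent only on data that has already proven its value in the staging area.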
Integrating and expanding data strategies to fit evolving user needs and to incorporate new types of data will be a big part of the exciting journey in this field throughout 2013. I hope you can begin your journey in Las Vegas!