The Data Lake Is a Method that Cures Hadoop Madness
For years, Hadoop users have been managing data in ad hoc and arbitrary ways. It's time for more method and less madness.
- By Philip Russom
- August 4, 2017
About the time Hadoop turned 10 years old, long-time users realized that the range of use cases Hadoop clusters could viably support was limited by the lack of data management. Mature IT professionals continue to be appalled by the governance-free data dumping and lack of audit trail that's all too common with Hadoop. Business users are frustrated by the low business value and trust they get from Hadoop data.
To cure the madness, they're all turning to the data lake.
Maddening Problems with Hadoop Data Management
For those of us who believe in the established best practices of data management, most production implementations of Hadoop (especially those in Internet firms) seem like the Wild West.
Data dumping. Hadoop's low-cost storage, linear scalability, and analytic processing power tempt users to dump any and all data into a cluster. That temptation becomes a problem when data is not documented as it is loaded, which leads to the absence of metadata management typical of Hadoop implementations.
As Hadoop matures into multitenant use cases, users from different functions do not coordinate loads, resulting in data copies that skew analytics outcomes. Data dumping makes data redundant, hard to retrieve, impossible to audit, and generally not trustworthy for traditional users.
No apparent organization. When designing a Hadoop cluster, too many users simply decide how many nodes to start with, then wing it from there. There is little or no design effort put into how data will be organized within Hadoop, especially when users assume a "dump now, process later" method. This works fine with algorithmic, discovery-oriented analytics, but not with the set-based, query-oriented data exploration that enterprise users hope to do with Hadoop data.
Hadoop is usually siloed. This makes sense when we consider that early adopters of Hadoop implemented it to support one use case at a time, usually analytics with Web logs in Internet firms. As Hadoop moves into mainstream industries, it is increasingly integrated with traditional enterprise systems. However, Hadoop silos persist.
Data Lake Methods that Cure Hadoop Madness
Let's be honest. Data lake best practices are quite minimal compared to the rigors of operational databases and data warehouses. Designing and using a data lake involves just a few rules of thumb.
Rules for data ingestion. By definition, a Hadoop-based data lake is optimized for the quick ingestion of raw, detailed source data, with little or no improvement up front. Instead, lake data is improved as it is read, processed, and analyzed by users, tools, and applications.
To keep fast-paced data ingestion from deteriorating into data dumping -- turning a data lake into a data swamp! -- rules must control who can load what data and when. For example, users who need "sandboxes" (e.g., data analysts, data scientists, and some business users) should be allowed limited dumping, but stricter rules apply to everyone else.
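The "who can load what and when" rule can be sketched as a simple policy check. This is a minimal illustration, not a real access-control system; the zone names ("raw," "sandbox," "curated") and role names are assumptions chosen for the example.

```python
# Hypothetical data lake ingestion policy: which roles may load data
# into which zone. Zone and role names are illustrative assumptions.
INGEST_POLICY = {
    "raw":     {"data_engineer"},                   # controlled landing zone
    "sandbox": {"data_analyst", "data_scientist"},  # limited "dumping" allowed here
    "curated": {"data_steward"},                    # improved, governed data only
}

def may_ingest(role: str, zone: str) -> bool:
    """Return True if the given role is allowed to load data into the zone."""
    return role in INGEST_POLICY.get(zone, set())

# Analysts get their sandbox, but cannot dump into the curated zone.
print(may_ingest("data_analyst", "sandbox"))  # True
print(may_ingest("data_analyst", "curated"))  # False
```

In practice, rules like these are enforced with the cluster's own permission and authorization mechanisms rather than application code, but the policy itself should be this explicit.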
Just enough organization. As Hadoop evolves from single-tenant clusters to multitenant ones, users need better organized data. In Hadoop, there are ways to create the equivalent of data volumes (as in all database management systems), which means that data lake designers can organize subject areas. TDWI has already found data lakes with areas for marketing, sales, supply chain, and healthcare data.
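Organizing by subject area usually comes down to a consistent path convention in the cluster's file system. The sketch below shows one hypothetical convention — subject area, then data set, then load date — where all names are assumptions for illustration; partitioning by load date also gives later audits something to work with.

```python
from datetime import date

# Hypothetical path convention for organizing a lake by subject area.
# The root, subject, and data set names are illustrative assumptions.
def lake_path(subject: str, dataset: str, load_date: date) -> str:
    """Build a subject-area path, partitioned by load date for auditability."""
    return f"/lake/{subject}/{dataset}/ingest_date={load_date.isoformat()}"

print(lake_path("marketing", "campaign_clicks", date(2017, 8, 4)))
# /lake/marketing/campaign_clicks/ingest_date=2017-08-04
```

The point is not this particular layout but that some layout is agreed on before multitenant loading begins, so different functions land data predictably instead of winging it.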
In a related trend, as Hadoop users work with their data, they eventually realize that they need HBase or Hive tables for certain subject areas. These are not full-blown relational databases, but their simple data stores provide just enough structure for query-based practices such as data exploration, single customer view, and reporting. Hence, designers of Hadoop-based data lakes can turn to HBase or Hive if they, too, need just enough organization for their data. Furthermore, a few vendors now offer hub products that add even more structure to Hadoop for data lake and similar use cases.
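A Hive external table is the typical way to add this "just enough" structure: it layers a queryable schema over files already sitting in the lake without moving or converting them. The sketch below generates such a DDL statement; the database, table, column, and location names are assumptions made up for the example.

```python
# Hypothetical sketch: generate Hive DDL that adds query-friendly structure
# over files already in the lake. All names here are illustrative assumptions.
def hive_external_table_ddl(db: str, table: str, columns: dict, location: str) -> str:
    """Build a CREATE EXTERNAL TABLE statement for data already in the lake."""
    cols = ",\n  ".join(f"{name} {ctype}" for name, ctype in columns.items())
    return (
        f"CREATE EXTERNAL TABLE IF NOT EXISTS {db}.{table} (\n"
        f"  {cols}\n"
        f")\n"
        f"STORED AS PARQUET\n"
        f"LOCATION '{location}'"
    )

ddl = hive_external_table_ddl(
    "marketing",
    "campaign_clicks",
    {"customer_id": "STRING", "clicked_at": "TIMESTAMP"},
    "/lake/marketing/campaign_clicks",
)
print(ddl)
```

Because the table is EXTERNAL, dropping it removes only the schema, not the underlying files — the raw data stays in the lake for other users and tools.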
Integration with other data environments. Hadoop is already established in modern multiplatform data warehouse environments; likewise, the data lake will soon become common in warehousing, as it improves Hadoop. TDWI has also found Hadoop-based data lakes in other enterprise data environments, especially those for omnichannel marketing, the digital supply chain, and multimodule ERP.
On the one hand, Hadoop is very liberating, especially for data analysts who need to ride the prairie unfettered by barbed wire. On the other hand, we need better controls and structure so we can more effectively find, query, use, analyze, audit, and trust Hadoop data. The trick is to apply just enough method to Hadoop madness without limiting its liberating properties. That's exactly what the data lake offers for Hadoop.
For more details about data lake methods and madness, read TDWI Best Practices Report: Data Lakes: Purposes, Practices, Patterns, and Platforms.
Philip Russom is director of TDWI Research for data management and oversees many of TDWI’s research-oriented publications, services, and events. He is a well-known figure in data warehousing and business intelligence, having published over 600 research reports, magazine articles, opinion columns, speeches, Webinars, and more. Before joining TDWI in 2005, Russom was an industry analyst covering BI at Forrester Research and Giga Information Group. He also ran his own business as an independent industry analyst and BI consultant and was a contributing editor with leading IT magazines. Before that, Russom worked in technical and marketing positions for various database vendors. You can reach him at [email protected], @prussom on Twitter, and on LinkedIn at linkedin.com/in/philiprussom.