Executive Summary | Data Lakes: Purposes, Practices, Patterns, and Platforms

When designed well, a data lake is an effective data-driven design pattern for capturing a wide range of data types, both old and new, at large scale. By definition, a data lake is optimized for the quick ingestion of raw, detailed source data plus on-the-fly processing of such data for exploration, analytics, and operations. Even so, traditional, higher-latency data practices are possible, too.
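
To make that pattern concrete, the short Python sketch below (not drawn from the report; the file paths and field names are hypothetical) illustrates the schema-on-read approach a data lake relies on: raw records are landed exactly as received, and structure is applied only when the data is read for exploration or analytics.

import json
from pathlib import Path

LANDING_ZONE = Path("lake/raw/clickstream")   # hypothetical raw landing zone

def ingest(record: dict) -> None:
    # Persist the source record exactly as received: no cleansing, no schema.
    LANDING_ZONE.mkdir(parents=True, exist_ok=True)
    out = LANDING_ZONE / f"{record.get('event_id', 'unknown')}.json"
    out.write_text(json.dumps(record))

def read_with_schema(fields: list[str]) -> list[dict]:
    # Apply structure on the fly: project only the fields an analysis needs.
    rows = []
    for path in LANDING_ZONE.glob("*.json"):
        raw = json.loads(path.read_text())
        rows.append({f: raw.get(f) for f in fields})
    return rows

if __name__ == "__main__":
    ingest({"event_id": "e1", "user": "u42", "page": "/home", "referrer": None})
    print(read_with_schema(["user", "page"]))   # the query, not the load, decides the schema

Because ingestion performs no modeling or cleansing, new sources can be captured quickly; the cost of interpreting the data is deferred to read time, when analysts decide which structure they need.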

Organizations are adopting the data lake design pattern (whether on Hadoop or a relational database) because lakes provision the kind of raw data that users need for data exploration and discovery-oriented forms of advanced analytics. A data lake can also be a consolidation point for both new and traditional data, thereby enabling analytics correlations across all data. With the right end-user tools, a data lake can enable the self-service data practices that both technical and business users need. These practices wring business value from big data, other new data sources, and burgeoning enterprise data; these assets are not mere cost centers. Furthermore, a data lake can modernize and extend programs for data warehousing, analytics, data integration, and other data-driven solutions.

The chief benefits of data lakes, as identified by this report’s survey, are analytics, new self-service data practices, value from big data, and warehouse modernization. However, lakes also face barriers, namely immature governance, integration, user skills, and security for Hadoop.

The data lake is top of mind for half of data management professionals but not yet a pressing requirement for the rest. A quarter of organizations surveyed already have at least one data lake in production, typically as a data warehouse extension. Another quarter will enter production within a year. At this rate, the data lake is already established, and it will soon be common.

Most users (82%) are beset by evolving data types, structures, sources, and volumes; they are considering a data lake to cope with data’s exploding diversity and scale. Most of them (68%) find it increasingly difficult to cope via relational databases, so they are considering Hadoop as their data lake platform. Seventy-nine percent of users who already have a lake say that most of its data is raw source data with some areas for structured data, and those areas will grow as they understand the lake better.

Data lakes are owned by data warehouse teams, central IT, and lines of business, in that order. Data lake workers include an array of data engineers, data architects, data analysts, data developers, and data scientists. One-third of those are consultants. Most full-time employees are mature data management professionals cross-trained in big data, Hadoop, and advanced analytics.

Most data lakes focus on analytics, but others fall into categories based on their owners or use cases, such as data lakes for marketing, sales, healthcare, and fraud detection. Most use cases for data lakes demand business metadata, self-service functions, SQL, multiple data ingestion methods, and multilayered security. Hadoop is weak in these areas, so users are filling Hadoop’s gaps with multiple tools from vendor and open-source communities.

There are two broad types of data lakes, based on which data platform is used: Hadoop-based data lakes and relational data lakes. Today, Hadoop is far more common than relational databases as a lake platform. However, a quarter of survey respondents with data lake experience say that their lake spans both. Those platforms may be on premises, in the cloud, or both. Hence, some data lakes are multiplatform and hybrid, as are most data warehouses today.
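
As an illustration of such a hybrid lake (a sketch under assumed names, not an architecture prescribed by the report), the Python code below correlates raw JSON events held in a file-based zone with curated reference data held in a relational database; SQLite stands in for the relational platform, and the table and column names are hypothetical.

import json
import sqlite3
from pathlib import Path

RAW_ZONE = Path("lake/raw/orders")            # file-based side of the lake (hypothetical)
warehouse = sqlite3.connect("warehouse.db")   # relational side, with a customers table (hypothetical)

def raw_order_totals() -> dict[str, float]:
    # Scan raw JSON order events and total the amount per customer.
    totals: dict[str, float] = {}
    for path in RAW_ZONE.glob("*.json"):
        event = json.loads(path.read_text())
        totals[event["customer_id"]] = totals.get(event["customer_id"], 0.0) + event["amount"]
    return totals

def customer_names() -> dict[str, str]:
    # Pull curated reference data from the relational platform.
    rows = warehouse.execute("SELECT customer_id, name FROM customers")
    return {cid: name for cid, name in rows}

def hybrid_report() -> list[tuple[str, float]]:
    # Correlate raw, file-based events with relational reference data at query time.
    names = customer_names()
    return [(names.get(cid, cid), total) for cid, total in raw_order_totals().items()]

In production the file-based side would typically be HDFS or cloud object storage and the relational side a warehouse database, but the principle is the same: the lake’s consumers see one correlated result regardless of which platform holds each piece of data.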

To help users prepare, this report defines data lake types, then discusses their emerging best practices, enabling technologies, and real-world use cases. The report’s survey quantifies users’ trends and readiness for data lakes, and the report’s user stories document real-world activities.

Diyotta, HPE Security – Data Security, IBM, SAS, and Talend sponsored the research and writing of this report.

About the Author

Philip Russom, Ph.D., is senior director of TDWI Research for data management and is a well-known figure in data warehousing, integration, and quality, having published over 600 research reports, magazine articles, opinion columns, and speeches over a 20-year period. Before joining TDWI in 2005, Russom was an industry analyst covering data management at Forrester Research and Giga Information Group. He also ran his own business as an independent industry analyst and consultant, was a contributing editor with leading IT magazines, and was a product manager at database vendors. His Ph.D. is from Yale. You can reach him by email ([email protected]), on Twitter (twitter.com/prussom), and on LinkedIn (linkedin.com/in/philiprussom).

