

Data Virtualization Offers a Fresh Approach to Data Lakes

To overcome the silo and governance issues that plague data lakes, enterprises are turning to a more progressive data integration approach: data virtualization.

Physical data lakes promised to take in all of an organization's data, but that process has proven difficult. Not all data could be stored in the same data lake, spawning additional, siloed data lakes that were tough to integrate. Context and associations were lost, and governance was lacking.


As a result, organizations started to adopt a logical data lake architecture -- a mixed approach centered on a physical data lake but incorporating a virtual data layer that provides access to all data assets regardless of location and format. The data remains connected, maintaining strong associations and relationships, and the virtual layer lets organizations document the data and offer it in the formats business users need.

Data volumes have grown at ever-faster rates over the years. A recent Forbes article reports that 90 percent of the world's data was created in just the last two years. With the proliferation of new digital devices, the continued growth of online commerce, and the rise of IoT, the variety of new data types continues to grow as well.

In addition, because of mobile and edge computing, machine-generated data, and the ability to capture the activity of transactional systems, data is being refreshed at ever-accelerating rates. The volume, variety, and velocity that characterize big data have grown exponentially and show no signs of slowing down.

Attempts to Solve the Big Data Challenge and Avoid Silos

Hadoop, which can store massive amounts of data while applying enormous processing power, seemed purpose-built to address the big data problem. It sits at the center of a rich ecosystem of big data technologies used primarily to support advanced analytics initiatives, including predictive analytics, data mining, and machine learning (ML).

Hadoop systems can handle various forms of structured and unstructured data, giving users more flexibility for collecting, processing, and analyzing data than relational databases and data warehouses provide. The larger problem remained, however: it was just too difficult and time-consuming to physically integrate all the sources of data into a Hadoop system.

The original data lakes attempted to solve the challenges of big data by loading all the data into a central repository, thus eliminating data silos. Data was kept in its raw form and transformed only when needed -- a pattern often called schema-on-read. SQL-on-Hadoop engines such as Spark and Presto promised to provide the magic that would turn the contents of big data lakes into usable business information.
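Here is a minimal PySpark sketch of that schema-on-read pattern, assuming a hypothetical bucket path and column names: the raw JSON stays untouched in the lake, and a schema is inferred and a transformation applied only when a question is asked.

    from pyspark.sql import SparkSession

    # Start a Spark session; in a real deployment this would run on the cluster.
    spark = SparkSession.builder.appName("schema-on-read-demo").getOrCreate()

    # Schema-on-read: infer a schema from the raw JSON at query time; the files
    # in the lake are never rewritten. The path is hypothetical.
    orders = spark.read.json("s3a://example-lake/raw/orders/")
    orders.createOrReplaceTempView("orders")

    # The transformation happens when the question is asked, not at load time.
    spark.sql("""
        SELECT customer_id, SUM(amount) AS total_spend
        FROM orders
        GROUP BY customer_id
        ORDER BY total_spend DESC
        LIMIT 10
    """).show()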

New offerings from major cloud vendors are beginning to blend the concepts of SaaS with big data, blurring the lines between HDFS, Amazon S3, and Azure Data Lake Storage. Next-generation cloud MPP systems such as Snowflake and Amazon Redshift are now almost indistinguishable from SQL-on-Hadoop platforms built on engines such as Spark and Presto (think Qubole or Databricks, to name a few). As organizations shift from viewing data as a static resource to viewing it as a resource in motion, they will seek out cloud-based iterations of the traditional data lake architecture.

Unfortunately, even the most modernized data lake fails to solve the silo issue, in part because of the very ease with which data can be added to the lake. Data indiscriminately ingested into a data lake loses its provenance and its unique associations, making it difficult for business users to understand and find the information they need.

Good governance is key to a usable data lake, but the "load first, ask later" strategy that guides many a data lake can easily lead to one that is ungoverned, with multiple uncontrolled copies of the same data, stale versions, and unused tables. It is best to begin a data lake implementation by first asking what data should go into the lake, for what purpose, and at what granularity. Too often, however, the lake gets created first, leading to unrealistic expectations for the raw data. Additionally, regulatory restrictions and local laws mean that some data cannot be replicated into the lake and must remain in separate silos.

The Data Lake, Reimagined

To overcome these data-silo issues, many enterprises are turning to a more progressive data integration approach: data virtualization (DV). In fact, data virtualization shares many ideas with data lakes. Both architectures begin with the premise of making all data available to end users. In both architectures, the broad access to large data volumes is used to better support BI, analytics, and other evolving trends such as machine learning and AI.

However, the implementation details of these two approaches are radically different. DV eliminates the need to physically replicate data. In this way, data virtualization enables a "logical data lake" architecture, a mixed approach leveraging a physical data lake with a virtual layer on top.
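To make the contrast concrete, below is a deliberately naive Python sketch of the federation idea, not any vendor's implementation: a "virtual" view reads clickstream facts where they live in the lake and customer records where they live in an operational database, combining only the results. All paths, connection strings, and table names are hypothetical.

    import pandas as pd
    from sqlalchemy import create_engine

    # A toy "virtual layer" view: each source is queried in place and only the
    # results are combined, so nothing is replicated into the lake.
    def customer_clicks(customer_ids):
        # Source 1: clickstream facts stored in the physical data lake (Parquet).
        clicks = pd.read_parquet("s3://example-lake/clickstream/2024/")
        clicks = clicks[clicks["customer_id"].isin(customer_ids)]

        # Source 2: customer master data left in the operational database and
        # fetched on demand rather than copied into the lake.
        engine = create_engine("postgresql://user:pass@crm-host/crm")
        customers = pd.read_sql(
            "SELECT customer_id, name, segment FROM customers", engine
        )

        # The derived view: a join computed at query time across both systems.
        return clicks.merge(customers, on="customer_id", how="left")

A production virtual layer adds query optimization and pushdown so that filters execute inside each source rather than on the client, but the principle is the same: the join happens at query time, not at load time.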

This approach offers many advantages over physical data lakes, specifically:

  • It uses a logical approach to provide access to all data assets, regardless of location and format, and without replication. Copying data becomes an option rather than a necessity.
  • It enables stakeholders to define complex, derived models that use data from any of the connected systems while keeping track of the data's lineage, transformation history, and definitions (see the sketch after this list).
  • It is centered around a big data system (the physical data lake), so it can leverage its processing power and storage capabilities in a smarter way.
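
As a rough illustration of the second point above, here is a minimal Python sketch of the metadata a virtual layer can attach to a derived model: its upstream sources, the transformation that defines it, and per-field lineage. The model, source, and field names are all invented for illustration.

    from dataclasses import dataclass, field

    # Minimal metadata a virtual layer might record for each derived model.
    @dataclass
    class DerivedModel:
        name: str
        sources: list          # upstream physical tables or other views
        transformation: str    # the definition, e.g. a SQL statement
        field_lineage: dict = field(default_factory=dict)

    # A hypothetical derived model joining a lake table with a CRM table.
    revenue_by_region = DerivedModel(
        name="revenue_by_region",
        sources=["lake.sales_raw", "crm.customers"],
        transformation=(
            "SELECT c.region, SUM(s.amount) AS revenue "
            "FROM lake.sales_raw s JOIN crm.customers c USING (customer_id) "
            "GROUP BY c.region"
        ),
        field_lineage={
            "region": "crm.customers.region",
            "revenue": "SUM(lake.sales_raw.amount)",
        },
    )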

Logical data lakes can be implemented in the cloud or on premises and do not require the costly, time-consuming replacement of physical hardware. Due to the flexibility of data virtualization, logical data lakes can complement the existing data infrastructure of any organization at any stage in the modernization journey. More important, data virtualization overcomes the issues that limit the success of data lakes in enterprise analytics initiatives.

About the Author

Ravi Shankar is senior vice president and chief marketing officer at Denodo, a provider of data virtualization software. He is responsible for Denodo’s global marketing efforts, including product marketing, demand generation, communications, and partner marketing. Ravi brings to his role more than 25 years of marketing leadership from enterprise software leaders such as Oracle and Informatica. Ravi holds an MBA from the Haas School of Business at the University of California, Berkeley. You can contact the author at [email protected].

