Clearing Up Cloud Confusion for Data Warehousing (Part 2 of 2)

Before moving your data warehouse or lake to the cloud, it’s vital to have clear skies and a good view of the horizon.

In my previous article, I explored two modern architectural patterns -- the cloud data warehouse and the data lakehouse -- that have been capturing mind share in the overall cloud data warehousing market. Both align closely with the traditional, logically centralized concept of a data warehouse, even though the physical implementation may well take advantage of some aspects of distributed storage or processing in the cloud.


In this article, I will discuss the other two patterns -- data fabric and data mesh -- that complete the lineup of modern data warehousing approaches. They stray further from logical centralization. Indeed, in the case of the data mesh, distributed thinking forms the core of the approach.

Along the way, we will hopefully create some clear blue sky for thinking about the path that makes sense for you, as described in more depth in my new book Cloud Data Warehousing, Volume I.

The Data Fabric

The data fabric pattern is most simply seen as a modernization of the logical data warehouse (LDW) pattern first described in the early 2010s. This approach was the industry’s first attempt to take a step away from a fully physically centralized data warehouse. It allowed for a variety of data stores -- many in the relational paradigm -- to be part of the larger data warehouse concept.

The LDW itself emerged from the data virtualization products that had appeared more than a decade earlier. These tools offered users a SQL query interface that could connect in real time to a wide variety of sources -- both relational and non-relational -- distributed across the IT environment. The LDW simply placed a traditional, physically centralized data warehouse at the heart of the system as a core repository of reconciled, historical, and dependable data.

The fundamental architectural challenge of the LDW is to ensure that the transformations used on the fly in data virtualization are identical to those already used to populate the central warehouse (and associated marts). If not, different routes to the same data lead to inconsistent results.
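
To make that consistency requirement concrete, here is a minimal Python sketch (all names and business rules are invented for illustration) in which a single shared transformation function is reused by both the batch load into the central warehouse and the on-the-fly virtualized query path, so both routes to the data agree by construction.

```python
# Minimal sketch (hypothetical names and rules): one shared transformation
# definition feeds both the batch path that populates the central warehouse
# and the virtual path that transforms source rows at query time.

def standardize_revenue(record: dict) -> dict:
    """Single statement of the business rule: report revenue in EUR, net of tax."""
    rate = 0.92 if record["currency"] == "USD" else 1.0   # illustrative fixed FX rate
    return {
        "order_id": record["order_id"],
        "revenue_eur": round(record["gross_amount"] * rate * 0.8, 2),  # assumed 20% tax
    }

def batch_load(source_rows: list) -> list:
    """ETL path: transform once and persist the result in the central warehouse."""
    return [standardize_revenue(r) for r in source_rows]

def virtual_query(source_rows: list) -> list:
    """Data virtualization path: apply exactly the same rule on the fly."""
    return [standardize_revenue(r) for r in source_rows]

rows = [{"order_id": 1, "currency": "USD", "gross_amount": 100.0}]
assert batch_load(rows) == virtual_query(rows)   # both routes agree by construction
```

The moment the virtual path reimplements the rule independently, the two routes can silently diverge -- which is exactly the gap the data fabric aims to close.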

The data fabric addresses this issue by applying AI and machine learning to “activate” the metadata that underpins all population of and access to the distributed data stores. Such active metadata is extensive and deep, integrated, consistent, and up-to-the-second. It is built on a knowledge graph, incorporating appropriate ontologies and semantics. It further allows automation and easier maintenance of all data preparation, storage, and access activities.
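
As a rough illustration of what “activating” metadata can mean in practice, the following Python sketch (datasets, predicates, and rule names are all hypothetical) holds lineage and semantic facts as knowledge-graph-style triples and walks them to automate impact analysis when a source changes.

```python
# Hypothetical sketch: metadata kept as (subject, predicate, object) triples,
# then queried to drive automation such as impact analysis or stale-data checks.

triples = {
    ("sales_orders", "feeds", "warehouse.fact_sales"),
    ("warehouse.fact_sales", "feeds", "mart.revenue_by_region"),
    ("warehouse.fact_sales", "uses_rule", "standardize_revenue_v2"),
    ("mart.revenue_by_region", "owned_by", "finance_domain"),
}

def downstream_of(dataset: str) -> set:
    """Follow 'feeds' edges to find everything directly or indirectly derived from a dataset."""
    found, frontier = set(), {dataset}
    while frontier:
        step = {obj for subj, pred, obj in triples if pred == "feeds" and subj in frontier}
        frontier = step - found
        found |= step
    return found

# A schema change in 'sales_orders' can now trigger metadata-driven alerts or rebuilds.
print(downstream_of("sales_orders"))
# -> {'warehouse.fact_sales', 'mart.revenue_by_region'}
```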

Promoted principally by Gartner since 2020 and supported by a wide range of vendors, the data fabric pattern offers a viable path to cloud data warehousing from existing on-premises data warehouses, either centralized or logical.

The Data Mesh

Data mesh, in contrast, proposes a completely new approach to meeting the needs of cloud data warehousing. Introduced by Zhamak Dehghani, then of Thoughtworks, in two blog posts in 2019 and 2020, the data mesh has become a heavily hyped approach to delivering cloud data warehousing and analytics. It is particularly popular in cloud-savvy software engineering communities.

In contrast to all the other patterns, the purist data mesh approach strongly opposes any centralization of data, processing, or responsibility. Many definitions of a data mesh exist, but one of the clearest was that provided by Thoughtworks in 2020: “a domain-driven analytical data architecture where data is treated as a product and owned by teams that most intimately know and consume the data ... apply[ing] the principles of modern software engineering and the learnings from building robust, internet-scale solutions to unlock the true potential of enterprise data.”

The first part of the definition draws attention to the governance of data creation, which is handled in a highly distributed manner by the teams and functions most familiar with the data, i.e., the “data owners.” In essence, this is a focus on responsibility for data quality, and it is, of course, a welcome and vital aspect of delivering data and information for decision-making support. The data mesh pattern proposes that such governance, and all data storage and manipulation, should be entirely distributed, on the basis that centralized solutions can become bottlenecks to innovation and change.
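
By way of illustration only, the sketch below (domains, fields, and locations are invented) shows how a domain team might publish a self-describing data product contract -- ownership, schema, and a freshness commitment -- rather than handing raw data to a central team.

```python
# Hypothetical sketch: a domain team publishes its dataset as a "data product"
# with an explicit owner, schema, and quality commitment.

from dataclasses import dataclass, field

@dataclass
class DataProduct:
    name: str
    owning_domain: str              # the team that best knows and consumes the data
    schema: dict                    # column name -> logical type
    freshness_hours: int            # service-level commitment made by the owners
    endpoint: str                   # where consumers read the product
    tags: list = field(default_factory=list)

orders = DataProduct(
    name="orders",
    owning_domain="e-commerce",
    schema={"order_id": "string", "placed_at": "timestamp", "total_eur": "decimal"},
    freshness_hours=1,
    endpoint="s3://ecommerce-domain/orders/v3/",   # illustrative location only
    tags=["pii:none", "tier:gold"],
)

# Federated governance can then be automated against such contracts, for example
# checking that every published product declares an owner and a freshness target.
assert orders.owning_domain and orders.freshness_hours > 0
```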

The second part of the definition calls for the use of tools and techniques, particularly those based on microservices architecture concepts, that have proven successful in building distributed, cloud-centric applications with a domain-driven approach. However, such applications have been almost entirely operational in nature, and the wisdom of applying these principles to the informational environment remains to be proven.
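
For readers coming from the operational microservices world, the analogy runs roughly as follows (again, a sketch with invented names): a data product exposes a versioned output port that consumers in other domains program against, while storage and processing details stay hidden behind the interface.

```python
# Hypothetical sketch: a data product's output port treated like a versioned
# service interface; consumers depend on the contract, not on the storage.

from typing import Iterator, Protocol

class OutputPort(Protocol):
    version: str
    def read(self) -> Iterator[dict]: ...

class OrdersV1:
    """One domain's implementation; the backing store could be files, tables, or a stream."""
    version = "1.0"

    def __init__(self, backing_rows: list):
        self._rows = backing_rows

    def read(self) -> Iterator[dict]:
        yield from self._rows

def count_orders(port: OutputPort) -> int:
    """A consumer in another domain uses only the published interface."""
    return sum(1 for _ in port.read())

print(count_orders(OrdersV1([{"order_id": 1}, {"order_id": 2}])))   # -> 2
```

Whether this style scales to the heavily historical, cross-domain queries typical of analytics is, as noted above, still an open question.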

The required software and methodologies for a data mesh are still in an early emergent stage, and implementing the pattern as originally defined will likely require advanced software engineering skills. Many data warehousing vendors do claim to support the data mesh, although their approach is often to focus on distributed governance -- based on domain-driven design and data-as-a-product -- overlaid virtually on a more traditional and more centralized physical implementation.

Sunshine and Clouds

On this short flight to the cloud, we have seen four different modern approaches to data warehousing that have emerged in the past few years. Each has its strengths and weaknesses, trading off the benefits of centralized versus decentralized architectures; modern, open-source components versus more traditional database technologies; and even data management versus software development skills. Furthermore, your specific starting point will influence how easy your journey will be and how readily you can reach the desired endpoint.

To arrive at the sunlit uplands of successful cloud data warehousing, therefore, the first step on your journey is to investigate and compare in more depth the architecture patterns briefly described here and to understand which one best fits your business needs, bridges your current and future technology environments, and matches your existing and available skills.

Bon voyage!

About the Author

Dr. Barry Devlin is among the foremost authorities on business insight and was one of the founders of data warehousing in 1988. With over 40 years of IT experience, including 20 years with IBM as a Distinguished Engineer, he is a widely respected analyst, consultant, lecturer, and author of "Data Warehouse -- from Architecture to Implementation" and "Business unIntelligence -- Insight and Innovation beyond Analytics and Big Data," as well as numerous white papers. As founder and principal of 9sight Consulting, Devlin develops new architectural models and provides international, strategic thought leadership from Cornwall. His latest book, "Cloud Data Warehousing, Volume I: Architecting Data Warehouse, Lakehouse, Mesh, and Fabric," is now available.

