Throughout the history of data warehousing, data integration processes have been critical both for sheltering transaction processing systems from additional processing loads so they can meet performance service-level agreements (SLAs), and for ensuring that data originating from different sources can be combined to enable more robust and complete enterprise reporting and analytics. Many existing data warehouses rely on a conventional data integration process: extract data from the source systems, move the extracts to a staging area, standardize and cleanse them, then schedule bulk loads into the target data warehouse.
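The conventional batch flow described above can be sketched in a few lines. This is a minimal illustration only; the source records, field names, and the in-memory "warehouse" list are assumptions for the example, not anything from the original text or a specific product.

```python
def extract(sources):
    """Pull raw records from each source system into a staging area."""
    staged = []
    for records in sources:
        staged.extend(records)  # move the extracts to staging
    return staged

def cleanse(staged):
    """Standardize and cleanse the staged records."""
    cleaned = []
    for rec in staged:
        name = rec.get("name", "").strip().title()  # standardize casing/whitespace
        if name:  # drop records missing a required field
            cleaned.append({"name": name, "amount": float(rec.get("amount", 0))})
    return cleaned

def bulk_load(warehouse, rows):
    """Scheduled bulk load into the target warehouse table."""
    warehouse.extend(rows)

# Two hypothetical source extracts, one containing a bad record.
sources = [
    [{"name": " alice ", "amount": "10"}],
    [{"name": "BOB", "amount": "5"}, {"name": "", "amount": "1"}],
]
warehouse = []
bulk_load(warehouse, cleanse(extract(sources)))
print(warehouse)  # two cleansed rows loaded in one batch
```

Note that every step here operates on a complete batch: nothing reaches the warehouse until the whole extract has been staged, cleansed, and loaded, which is exactly the latency problem the rest of this description takes aim at.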
However, these decades-old techniques for data integration are rapidly becoming obsolete as organizations adopt hybrid architectures to support modern real-time analytics and machine learning projects. Whether you need to synchronize data across on-premises/cloud boundaries, integrate data from SaaS/PaaS providers, or ingest and process streaming data in real time, the traditional extract, transform, and load (ETL) approach is insufficient to enable real-time predictive and prescriptive analytics. In this webinar, we discuss how traditional data integration may be impeding modern analytics and explore new ideas for using an enterprise integration fabric to modernize data integration.
Attendees will learn how to: