Best Practices for Cloud Data Pipelines
TDWI Speaker: David Loshin, President of Knowledge Integrity
Date: Wednesday, October 23, 2019
Time: 9:00 a.m. PT, 12:00 p.m. ET
Conventional data warehouse architectures are engineered for lock-step propagation of data into the target warehouse environment. Data sets extracted from operational or transaction processing systems are dumped to a staging area, where transformations are applied prior to warehouse loading. This approach may be satisfactory for on-premises data warehouses that are updated in batch. However, as reporting and analytics environments expand across multiple cloud platforms into a hybrid environment, the conventional extract, transform, and load (ETL) process is unlikely to meet the evolving needs of modern analytics consumers.
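To ground the discussion, here is a minimal sketch of the conventional batch ETL pattern described above, using pandas and SQLAlchemy; the table names, staging path, and connection string are hypothetical, and a real pipeline would add error handling and scheduling:

```python
# Minimal sketch of a conventional batch ETL job: extract to a staging
# area, transform outside the warehouse, then batch-load the result.
# Table names, paths, and the connection string are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

STAGING_PATH = "/staging/orders_extract.csv"         # hypothetical staging file
WAREHOUSE_URL = "postgresql://user:pass@wh-host/dw"  # hypothetical warehouse

def extract(source_engine) -> pd.DataFrame:
    # Dump the operational extract to the staging area before transforming.
    df = pd.read_sql("SELECT * FROM orders", source_engine)
    df.to_csv(STAGING_PATH, index=False)
    return df

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Apply transformations in the staging layer, outside the warehouse.
    df = df.dropna(subset=["order_id"])
    df["order_total"] = df["quantity"] * df["unit_price"]
    return df

def load(df: pd.DataFrame) -> None:
    # Batch-load the transformed result into the target warehouse table.
    warehouse = create_engine(WAREHOUSE_URL)
    df.to_sql("fact_orders", warehouse, if_exists="append", index=False)
```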
In this webinar, we explore alternatives for developing and managing cloud data pipelines and consider the characteristics expected of a modern analytics environment. Attendees will learn about:
- Ingesting and handling data in real time
- The complexity of custom-coded data pipelines
- Skills requirements for data pipeline development
- Opportunities for reducing complexity using extract, load, and transform (ELT), as sketched after this list
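To make the ETL/ELT contrast concrete, the sketch below shows the ELT alternative from the last bullet: raw data is loaded into the warehouse unchanged, and the transformation is pushed down as SQL executed by the warehouse engine itself, eliminating the separate transformation step. The table names and connection string are again hypothetical:

```python
# Minimal ELT sketch: load the raw extract as-is, then let the
# warehouse engine perform the transformation. Names are hypothetical.
import pandas as pd
from sqlalchemy import create_engine, text

warehouse = create_engine("postgresql://user:pass@wh-host/dw")  # hypothetical

# 1. Load: land the raw operational extract in the warehouse untouched.
raw = pd.read_csv("/staging/orders_extract.csv")
raw.to_sql("raw_orders", warehouse, if_exists="append", index=False)

# 2. Transform: push the work down to the warehouse's SQL engine, so no
#    separate transformation server or custom transformation code is needed.
with warehouse.begin() as conn:
    conn.execute(text("""
        INSERT INTO fact_orders (order_id, quantity, unit_price, order_total)
        SELECT order_id, quantity, unit_price, quantity * unit_price
        FROM raw_orders
        WHERE order_id IS NOT NULL
    """))
```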