Prerequisite: None
Steve Fuller
VP Solutions Consulting and Engineering
Stardog
Brian Jones
Senior Manager, Analytics Data Architects
Qlik
Rama Ryali
VP – Product Evangelism & Strategy
RightData
Modern data engineering processes—also known as DataOps pipelines—continuously integrate, transform, and prepare data for production deployment.
To reduce the time and cost of building, deploying, and managing ETL and other workflows, modern DataOps pipelines are being equipped with AI/ML-driven automation of many repeatable functions. By leveraging AI/ML to provision resources proactively and to respond to technical, workload, and performance issues before they become showstoppers, automation also enables organizations to scale up their data pipelines while scaling down the human effort needed to manage them 24/7.
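As one illustration of what such automation can look like, consider anomaly detection on pipeline run metrics, a repeatable function well suited to ML. The sketch below is a minimal, hypothetical example, not drawn from any panelist's platform: it uses scikit-learn's IsolationForest to learn what healthy runs look like and to hold suspicious runs for human review. The metrics, thresholds, and the gate_pipeline_run helper are all illustrative assumptions.

```python
# A minimal sketch of one repeatable DataOps function that lends itself to
# AI/ML-driven automation: flagging anomalous pipeline runs (e.g., a sudden
# drop in row counts or a spike in runtime) before they reach production.
# All metric names and thresholds here are illustrative assumptions.

import numpy as np
from sklearn.ensemble import IsolationForest

# Historical per-run metrics: [rows_processed, runtime_seconds].
rng = np.random.default_rng(42)
history = np.column_stack([
    rng.normal(1_000_000, 25_000, 200),  # typical daily row counts
    rng.normal(300, 15, 200),            # typical runtime in seconds
])

# Fit an anomaly detector on past healthy runs.
detector = IsolationForest(contamination=0.02, random_state=42)
detector.fit(history)

def gate_pipeline_run(rows_processed: float, runtime_seconds: float) -> bool:
    """Return True if the run looks normal and may be promoted."""
    verdict = detector.predict([[rows_processed, runtime_seconds]])
    return verdict[0] == 1  # 1 = inlier, -1 = anomaly

# A run with a suspicious drop in volume is held for human review.
if not gate_pipeline_run(rows_processed=400_000, runtime_seconds=310):
    print("Run flagged as anomalous; holding deployment for review.")
```

The design point is that the detector, not a human, decides when a run needs attention, which is how automation scales down the effort of 24/7 pipeline management.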
However, the DataOps pipeline involves many functions, in areas such as discovery, cleansing, and governance, where manual processes are still necessary to ensure proper handling and oversight. In this sponsor panel, TDWI senior research director James Kobielus will lead experts in an in-depth discussion of what’s next for DataOps automation.
The discussion will focus on several key issues:
- What is the business case for automating the DataOps pipeline?
- What are the principal approaches for automating the DataOps pipeline?
- Which DataOps functions are the most highly automated, and which resist further automation?
- What core enabling platforms and tools must enterprises adopt to automate their DataOps pipelines?
- How difficult, costly, and time-consuming is it to automate a DataOps pipeline from end to end?
- Do DataOps professionals need to deepen their AI/ML competency to automate their pipelines?
- How can DataOps pipeline automation augment the productivity of enterprise data management professionals?