Data integration is the foundation of modern business. When organizations manage the ingestion, cleansing, and transformation of disparate data sources within high-performance data integration pipelines, they can bring analytics insights to every decision.
To keep pace with fast-changing business requirements, enterprises must stay current with best practices in data integration. Chief among these is migrating data integration pipelines to scalable, resilient, and agile cloud platforms.
In this keynote presentation, TDWI senior research director James Kobielus will discuss:
- What are the principal trends driving evolution of data integration tools, platforms, and practices?
- What are the architecture, capabilities, and benefits of a modern enterprise data integration stack?
- How will the enterprise data integration team of the future need to be organized, and what roles, skills, tools, and processes will be most essential to its core functions?
- What will be the role of no-code, visual, and AI-driven data integration tools in enabling non-traditional data integration professionals—the “citizen data engineers”—to do their jobs effectively?
- What are the most important new data integration practices for ingesting unstructured, semistructured, and other new data sources and for supporting sophisticated new use cases in artificial intelligence, distributed analytics, and low-latency streaming?
- To what extent will data integration, machine learning operationalization, and DevOps pipelines, practices, and teams need to converge?
- What is the potential impact of generative AI and large language models on how data integration processes are built, orchestrated, managed, and governed?
- What new automation approaches will have the greatest impact on reducing the cost, improving the agility and resiliency, and boosting the performance of data integration processes?
- For what use cases will reverse ETL take root in most enterprise data integration environments alongside ETL and ELT?
- To what extent can enterprises leverage or reuse components from their current data engineering platforms as they migrate toward the AI-driven, streaming, and cloud-native data integration pipelines of the future?