
Course Description

T6P Data Engineering for Batch, Streaming, and Data Pipelines

May 12, 2020

2:15 pm - 5:30 pm

Duration: Half Day Course

Level: Intermediate to Advanced

Prerequisite: None

John O'Brien


Radiant Advisors

Data integration has evolved from traditional batch ETL into data engineering, with new concepts and technologies that are proving to be more agile, scalable, and affordable. Understanding the key concepts and relationships of modern data integration will allow data integration teams to evolve into data engineering teams that manage batch ETL, streaming data ingestion, and data pipelines in a single architecture.

This course places additional emphasis on organizing and simplifying data ingestion environments by exploring how database replication can stream data from operational databases non-intrusively. We will also look at DataOps principles and the data integration design patterns teams should adopt to deliver the data needed in a portable architecture across on-premises and cloud environments.
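To make the non-intrusive replication idea concrete, here is a toy sketch of log-based change data capture: rather than querying source tables, a replica stays in sync by replaying an ordered change log. The event format and field names below are illustrative only, not any specific product's wire format.

```python
# Toy sketch of log-based replication (the idea behind CDC tooling):
# a replica is kept in sync by replaying an ordered change log,
# so the operational database is never queried directly.

change_log = [
    {"op": "insert", "key": 1, "row": {"name": "Ada", "dept": "eng"}},
    {"op": "insert", "key": 2, "row": {"name": "Lin", "dept": "ops"}},
    {"op": "update", "key": 2, "row": {"name": "Lin", "dept": "eng"}},
    {"op": "delete", "key": 1},
]

def apply_change(replica, event):
    """Apply one change event to the replica, keyed by primary key."""
    if event["op"] == "delete":
        replica.pop(event["key"], None)
    else:  # insert and update both upsert the row
        replica[event["key"]] = event["row"]
    return replica

replica = {}
for event in change_log:
    apply_change(replica, event)

print(replica)  # {2: {'name': 'Lin', 'dept': 'eng'}}
```

Because the log is ordered and replayable, the same stream can feed a warehouse, a cache, and an audit sink without touching the source system again.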

You Will Learn

  • Essential data ingestion factors including database replication, streams, and managed files
  • How streaming data hubs act as a data broker with Apache Kafka
  • How to engineer data pipelines with integration design patterns
  • How to develop, deploy, and manage data pipelines with DataOps
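As a rough illustration of the "data hub as broker" idea in the list above, the following in-memory sketch shows the pattern a system like Apache Kafka provides at production scale: producers append records to named topics, and independent consumers read the same ordered stream at their own offsets. All names here are illustrative, not Kafka's actual API.

```python
# Minimal in-memory sketch of the data-hub / broker pattern:
# producers append to topic logs; consumers read independently.
from collections import defaultdict

class ToyBroker:
    def __init__(self):
        self.topics = defaultdict(list)  # topic name -> ordered log

    def produce(self, topic, record):
        self.topics[topic].append(record)

    def consume(self, topic, offset=0):
        """Each consumer tracks its own offset, so readers are decoupled."""
        return self.topics[topic][offset:]

broker = ToyBroker()
broker.produce("orders", {"id": 1, "amount": 40})
broker.produce("orders", {"id": 2, "amount": 75})

# Two independent consumers read the same topic without interfering.
analytics = [r["amount"] for r in broker.consume("orders")]
audit = [r["id"] for r in broker.consume("orders")]

print(sum(analytics))  # 115
```

The point of the pattern is decoupling: new consumers can be added to a topic without changing producers, which is what lets one ingestion architecture serve batch, streaming, and pipeline workloads.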

Geared To

  • Integration architects
  • DW/ETL developers
  • Data engineers
  • Data scientists
  • Developers
  • Database administrators
  • Data architects
