Available On-Demand - This webinar has been recorded and is now available for download.

Evolution of the Data Lake—Implementing Real-Time Change Data in Hadoop

TDWI Speaker: Krish Krishnan, Founder, Sixth Sense Advisors

Date: Wednesday, December 20, 2017

Time: 9:00 a.m. PT, 12:00 p.m. ET

Webinar Abstract

A ten-fold increase in worldwide data by 2025 is one of many predictions about big data.

With such growth rates, the “data lake” is a very popular concept today. Everybody touts their platform’s capabilities for the data lake, and the conversation is all about Apache Hadoop. With its proven, highly scalable, and reliable means of storing vast data sets on cost-effective commodity hardware regardless of format, Hadoop seems to be the ideal analytics repository. However, the power of discovery that comes with the lack of a schema also creates a barrier to integrating well-understood transaction data that is more comfortably stored in a relational database. Rapidly changing data can quickly turn a data lake into a data swamp.

Apache Kafka to the rescue! Rapidly becoming an enterprise standard for information hubs, Kafka builds on the same foundation of commodity hardware for highly scalable, reliable storage, but it goes beyond Hadoop with the addition of a schema registry, log-compacted storage that understands the concept of a record “key,” and other characteristics that assume data will change. The combination of Kafka and Hadoop can be the key to delivering the next-generation data lake platform.
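
To make the “key” idea concrete, here is a minimal sketch (not part of the webinar material) that creates a log-compacted Kafka topic and produces keyed change records to it, using the standard Apache Kafka Java clients. The broker address, topic name, and record payloads are hypothetical placeholders.

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class CompactedTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker

        // A compacted topic retains the latest record per key, so the topic
        // behaves like a continuously updated table of current state rather
        // than an ever-growing append-only log.
        try (AdminClient admin = AdminClient.create(props)) {
            NewTopic topic = new NewTopic("customer.changes", 3, (short) 1)
                    .configs(Map.of("cleanup.policy", "compact"));
            admin.createTopics(List.of(topic)).all().get();
        }

        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Each change record is keyed by the row's primary key; replaying
        // the compacted topic yields the latest state of every row.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("customer.changes", "cust-42",
                    "{\"name\":\"Acme Corp\",\"status\":\"active\"}"));
            producer.send(new ProducerRecord<>("customer.changes", "cust-42",
                    "{\"name\":\"Acme Corp\",\"status\":\"closed\"}")); // supersedes prior value
        }
    }
}
```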

Intrigued? Confused? Unsure? Yes, you will be, and you should be. Join us for this webinar to explore both the puzzle and its solution.

TDWI will introduce:

  • What is Kafka?
  • What are the differences between traditional ETL tools and Kafka?
  • Why put Kafka at the heart of an information hub?
  • Delivering operational data value—IoT transformation success
  • A market perspective: Kafka extensions to commercial Hadoop

IBM will introduce:

  • Dynamic transaction data delivery directly to the Hadoop data lake or to a Kafka landing zone and information hub
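
As an illustration of the landing-zone pattern, the sketch below (a generic example, not IBM's replication offering) consumes change records from a hypothetical Kafka landing-zone topic; a real loader would batch the records and write them into the Hadoop data lake rather than print them.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class LandingZoneConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker
        props.put("group.id", "lake-loader");             // hypothetical consumer group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("customer.changes")); // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // A production loader would buffer these records and land
                    // them in HDFS in batches; printing stands in here.
                    System.out.printf("key=%s value=%s offset=%d%n",
                            record.key(), record.value(), record.offset());
                }
            }
        }
    }
}
```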

You will leave this session understanding:

  • Why Kafka is an ideal companion to the Hadoop data lake
  • Ways to use Kafka as an information hub
  • The challenges of managing the schemas of operational data destined for the lake
  • The challenges of maintaining transactional integrity when analyzing the data in the lake
  • Considerations around maintaining an audit trail
  • How IBM data replication offerings can help
