CEO Q&A: Democratizing Data with Self-Service Platforms
Building applications that take advantage of streaming data can be tricky. Kenny Gorman, cofounder and CEO of Eventador, offers tips for removing barriers and making sure the systems don't become obsolete quickly.
- By James E. Powell
- March 30, 2020
Streaming data isn't just about big data -- it's about fast data as well. How can organizations lower barriers to application developers who need to work with streaming data? How can tech teams support those developers? We asked Kenny Gorman, cofounder and CEO of Eventador, for his insight.
Upside: What are the barriers to entry for building solutions that process/handle/analyze streaming data?
Kenny Gorman: Companies have figured out that streaming data holds more value than data at rest. They are building applications that make use of streaming data because these applications are more compelling for customers.
However, streaming data isn't without complications. New architectures, mindsets, and toolsets are required -- data engineers, data scientists, and application developers must adapt to this new paradigm.
Apache Kafka has emerged as the de facto backbone of streaming data systems. Although stream processors and applications are excellent back-end components, inspecting, querying, and building them are still difficult, time-consuming, and expensive tasks. They typically require deep expertise in low-level languages as well as a number of new microservices and components.
Continuous SQL, sometimes called "Streaming SQL," is a powerful yet simple mechanism to create stream processors, computations, and applications on streams of data. Developers don't need to write processor scaffolding and logic in low-level languages such as Java and Scala. Instead, teams can focus on solving core business problems in a higher-level language such as SQL.
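To make the idea concrete, here is an illustrative sketch (not taken from the interview) of what a Continuous SQL statement might look like in Flink-style windowed syntax; the table and column names (`payments`, `card_id`, `event_time`) are hypothetical:

```sql
-- Hypothetical Continuous SQL job: count payment events per card
-- over a one-minute tumbling window and emit cards that exceed
-- ten transactions, a common shape for fraud-detection pipelines.
SELECT
  card_id,
  TUMBLE_END(event_time, INTERVAL '1' MINUTE) AS window_end,
  COUNT(*) AS txn_count
FROM payments
GROUP BY
  card_id,
  TUMBLE(event_time, INTERVAL '1' MINUTE)
HAVING COUNT(*) > 10;
```

Unlike a batch query, a statement like this runs continuously against an unbounded stream, emitting a result row each time a window closes, so there is no processor scaffolding to write in Java or Scala.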
How can organizations lower the barriers to entry for application developers who need to build solutions based on streaming data?
Companies across industries are looking for higher-level tooling and systems that accelerate development time frames, expand the number of employees using the data, and utilize the cloud to its full benefit. They simply cannot keep up if they rely on bespoke systems for every new project. They need some sort of common platform to make use of streaming data and democratize it across the company.
This has led to the creation of streaming data systems that allow for self-service of timely streaming data for use in projects and applications across the enterprise. Customers are given a simple interface to run Continuous SQL on streams of data to route, filter, aggregate, and mutate the data for use in a variety of ways.
How can technical teams best support them?
As companies adopt streaming technologies, the teams they rely on to build applications using streaming data quickly become overwhelmed. To combat this, back-end engineering teams need to enable users to create the data feeds their applications (whether built or bought) need.
Giving application developers the ability to create computations on streaming data by using familiar SQL is one of the most powerful things the technical, back-end engineering team can do to support the teams building what are often business-critical streaming applications.
What are some of the most important considerations for building streaming data systems that don't become obsolete in a year?
It's true that the streaming data ecosystem is moving at a rapid pace. The toolset and technology platform a team chooses are critical. They need to solve today's problem and have an eye toward the future as well. We're seeing large and growing adoption from fintech (usually for fraud detection pipelines), IoT companies, and network security companies. In fact, organizations across a broad range of industries are building out full streaming data ecosystems.
How does streaming data change the role of developers, data scientists, and others in the data pipeline?
The adoption of streaming data has challenged teams with the new paradigms and design patterns that come with adopting a new technology. As they start to use streaming data in their day-to-day work, they need a platform to ease the transition and make it easier to deal with the massive firehose of data that streams deliver. Ultimately, using streaming data technologies will make their work more useful and valuable to the organization and drive entirely new areas of innovation.
What's the future for streaming data and how will it impact different teams across the organization?
Apache Kafka started the streaming data revolution, and now companies are flocking to implement a corporate data infrastructure that uses streaming data as the backbone.
However, without a scalable, production-grade streaming platform that allows users across the enterprise to easily develop applications on boundless streams of data, the enterprise cannot keep up with the competition. As companies adopt streams and the self-service platforms that make them widely useful, organizations will flourish. We can't wait to see the next generation of amazing applications built with streaming data at their core.
Tell us about your company's streaming data solution.
The Eventador Platform is built on best-of-breed open source technologies such as Apache Flink and lets users build stream processors declaratively using tried-and-true ANSI SQL, which has been the standard for querying data for decades and is widely known. The platform helps users tap the benefits of technologies such as Apache Kafka and Apache Flink while delivering a continuous SQL engine that solves the problem of querying and materializing results on boundless streams of data.
James E. Powell is the editorial director of TDWI, including research reports, the Business Intelligence Journal, and the Upside newsletter.