TDWI Articles

Q&A: An Introduction to Streaming AI

Generative AI is getting all the buzz lately, but a different kind of AI -- streaming AI -- may have a greater impact on an enterprise’s analytics. Lenley Hensarling, chief product officer at Aerospike, explains.

Upside: Interest in AI is certainly high now, but with all the problems, e.g., fictitious legal citations, how can AI play a trusted role within an enterprise?

For Further Reading:

Why Generative AI Will Change Employee Provisioning, Dynamics, and Conflict

The Problem and Promise of Generative AI

Why Conversational AI Is a Game Changer for Support at Scale

Lenley Hensarling: Large language models (LLMs) are a set of techniques at the heart of generative AI. The term “generative” implies that the model produces outputs that appear to result from intelligent thought. What is actually going on is a relatively simple algorithm that searches for the next likely “thought” to tack on to a series -- for example, a string of text, a sequence of video frames, or a set of options for a drug compound. This ability to quickly generate and compare multiple paths against a desired outcome is amazing.
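
As a rough illustration, the "pick the next likely thing" loop can be sketched with a toy bigram model in Python. This is a deliberately simplified stand-in, nothing like a production LLM, but it shows the same shape: learn which tokens tend to follow which, then extend a seed one step at a time.

```python
import random

def train_bigrams(corpus):
    """Count which word follows which in a tiny corpus."""
    model = {}
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate(model, seed, length=5):
    """Extend the seed by repeatedly sampling a plausible next word."""
    out = [seed]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break  # dead end: no observed continuation
        out.append(random.choice(options))
    return " ".join(out)

model = train_bigrams("the cat sat on the mat the cat ran")
print(generate(model, "the"))
```

The output looks fluent at this tiny scale for the same reason LLM output does: every step is locally plausible, whether or not the whole is true.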

However, human intelligence is a combination of biological, ethical, social, and cultural factors that develop over time and occur within the context of family, community, and nation -- not to mention that this all moves through time, in real time, with many facets that shape our problem-solving methods.

The AI community calls inaccurate outcomes, such as the made-up legal case citations you mentioned, “hallucinations” and stresses the importance of grounding these outcomes in “truths.” Unfortunately, in this case, the necessary grounding was lacking, and users of the technology encouraged the hallucinations. The approach, which relied on a subjective test, “looks good to you, looks good to me,” is insufficient.

The legal citations example makes for a good cautionary tale but doesn’t tell the entire story. If you feed LLMs the right data, they can provide helpful results, but those results still need to be fact-checked and edited. Can it save time? Sure, but it’s important to question the quality of the work and, more important, its adherence to a code of conduct. It is akin to handing a brilliant child a text and asking them questions outside of their experience and broader sense of the world -- some would call it an incomplete education.

The value of AI and machine learning (ML) lies in more bounded cases: clear, logical models coupled with neural learning and guided by limits coded into decisioning programs. These models are improving, gaining capability and fidelity as they handle more up-to-date data. Data from many sources is streamed in real time, and the most recent “learnings” from processing those patterns are applied to make predictions and decisions -- for example: is this machine going to fail, how will it fail, and what can we do about it? For known patterns or unexpected correlations observed by the models, AI/ML technology of the “old” kind can alert and guide us to a solution -- all in near real time.

What is streaming AI and how is it different from generative AI?

Streaming AI is about continuously training ML models using real-time data, sometimes with human involvement. The incoming data streams from many sources are analyzed, combined with contextual information, and matched against features that carry condensed information and intelligence specific to the given problem. ML algorithms continually generate these features using the most current data available.
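
A minimal sketch of such a continuously refreshed feature, assuming a hypothetical stream of sensor readings and using an exponential moving average as the condensed, problem-specific value (the class and field names here are illustrative, not any particular product's API):

```python
class SensorFeature:
    """A feature value that is updated as each event streams in."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha   # weight given to the newest reading
        self.value = None    # current feature value

    def update(self, reading):
        if self.value is None:
            self.value = reading
        else:
            # Newer data counts more, so the feature tracks current conditions.
            self.value = self.alpha * reading + (1 - self.alpha) * self.value
        return self.value

feature = SensorFeature()
for reading in [10.0, 10.5, 11.0, 30.0]:  # a spike arrives in real time
    feature.update(reading)
print(round(feature.value, 2))
```

The point is that the feature is never "trained and frozen" -- each arriving event nudges it, so downstream decisions always reflect the most current data available.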

On the other hand, as noted earlier, generative AI focuses on generating responses based on a “seed” and then a pattern for finding the next thing to tack on. This works to generate content that conforms to certain parameters the model has “learned.” It is bounded, but not in a way that the boundaries can be easily understood. Until the recent rise of LLMs, considerable effort was invested in making ML models explainable to humans. The question was: how does the model arrive at its result? The “I have no idea” response is hard for humans to accept. In the made-up legal case citations example, the LLM program generated a motion that argued a point, but when asked to explain or validate its path, it just made some stuff up. This is like the inexperienced child analogy I mentioned.

What are the benefits of streaming AI? What are its drawbacks and limitations?

Streaming AI provides organizations with unparalleled situational awareness by leveraging real-time data from multiple sources. This enables proactive decision-making and timely responses to changing conditions. Streaming AI optimizes operations by automating tasks, improving efficiency, and reducing manual intervention. It can enhance personalization and improve customer experiences by tailoring interactions and recommendations based on real-time data. It also applies to proactive maintenance of equipment, including automobiles.

Streaming AI does have drawbacks and limitations. Implementing it requires a robust infrastructure capable of handling large volumes of data while ensuring consistent performance. Additionally, the quality and accuracy of the data sources are crucial for reliable outcomes, which can pose challenges for enterprises struggling with incomplete or outdated data. In this more traditional application of AI/ML, human expertise usually serves as a checkpoint, augmenting rather than replacing pure AI.

How does streaming AI achieve these benefits (such as optimizing operations)?

Streaming AI achieves its benefits by processing and analyzing real-time data streams from specific points within an enterprise and melding them with key external data sources to place the decisions in financial, ecological, governmental, or societal contexts. Streaming AI can identify patterns, detect anomalies, and generate insights in real time by continuously ingesting and analyzing events as they happen. This enables organizations to take immediate action, make more accurate decisions, and optimize operations based on current information. Waste is minimized by determining the need for change or intervention at the earliest point.
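
As a toy illustration of detecting anomalies in a live event stream, the sketch below flags a value that sits far outside the recent window's distribution. The window size and threshold are arbitrary choices for the example, not a specific product's method:

```python
from collections import deque
import statistics

class StreamAnomalyDetector:
    """Flag events that deviate sharply from the recent window of values."""

    def __init__(self, window=20, threshold=3.0):
        self.window = deque(maxlen=window)  # rolling view of recent events
        self.threshold = threshold          # how many std devs counts as anomalous

    def check(self, value):
        anomalous = False
        if len(self.window) >= 5:  # need some history before judging
            mean = statistics.fmean(self.window)
            stdev = statistics.pstdev(self.window)
            if stdev > 0 and abs(value - mean) / stdev > self.threshold:
                anomalous = True
        self.window.append(value)
        return anomalous

detector = StreamAnomalyDetector()
flags = [detector.check(v) for v in [10.0, 10.1, 9.9, 10.2, 9.8, 100.0]]
```

Because the check runs per event, the alert fires at the earliest possible point -- which is exactly where the waste-minimization benefit comes from.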

To maximize the potential of any AI, including streaming AI, isn't data quality more critical than ever?

Data quality is crucial for maximizing the potential of AI systems, and enterprises must address the challenges of incomplete or outdated data. Organizations can incorporate data validation and data cleansing processes into their data pipelines, augmenting and filtering the continuous flow of real-time data. This ensures the data used to feed AI models is as reliable and accurate as possible. This modern data management practice helps mitigate the impact of poor data quality and improves the reliability of AI-driven insights and decisions.
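
A minimal sketch of validation and cleansing stages in such a pipeline, with hypothetical field names (`device_id`, `temperature`): records that fail validation are dropped before they can feed the model, and the rest are normalized.

```python
def validate(record):
    """Reject records that are incomplete or have the wrong types."""
    return (
        isinstance(record.get("temperature"), (int, float))
        and bool(record.get("device_id"))
    )

def cleanse(record):
    """Normalize the fields that passed validation."""
    return {
        "device_id": str(record["device_id"]).strip().lower(),
        "temperature": float(record["temperature"]),
    }

def pipeline(events):
    for record in events:
        if validate(record):        # drop incomplete or malformed records
            yield cleanse(record)   # normalize what remains

events = [
    {"device_id": " Pump-7 ", "temperature": 71.3},
    {"device_id": "pump-9"},                        # missing reading: filtered out
    {"device_id": "pump-7", "temperature": "n/a"},  # bad type: filtered out
]
clean = list(pipeline(events))
```

In a real deployment these stages would sit inside the streaming platform itself, but the principle is the same: quality checks run continuously on the flow, not as a one-time batch cleanup.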

Typically, we ask guests about their predictions for technology in the next one to two years, but AI is evolving rapidly. Where do you foresee the technology being in the next six months?

Over the next six months, we can expect AI's continued advancement and adoption. With the rapid pace of AI development, breakthroughs and innovations are likely to emerge. The focus will likely revolve around improving the accuracy, reliability, and explainability of AI systems. With streaming AI, we will see more advanced techniques for processing real-time data streams.

In the world of generative AI, we will see more targeted applications of the technology. It will come into its own as it becomes more focused, with AI/ML applications that act as fact-checkers, compare recommendations or generated content with previous decisions and outcomes, and flag inconsistencies.

There will also likely be an increased emphasis on ethical considerations and regulatory frameworks. As AI evolves, organizations must remain updated on the latest developments and best practices to effectively leverage AI technologies in their operations. Organizations will increasingly recognize the value of all AI/ML techniques to optimize operations, drive informed decision-making, and provide a competitive edge in the dynamic business landscape. The more focused the application of the technology, the greater the immediate benefits will be.

[Editor’s note: Lenley Hensarling is the chief product officer at Aerospike. He has more than 30 years of experience in engineering management, product management, and operational management at both startups and large successful software companies. He previously held executive positions at Novell, Enterworks, JD Edwards, EnterpriseDB, and Oracle. You can reach him at [email protected] or via LinkedIn.]
