

In the Know on Generative AI with Fern Halper

Fern Halper, TDWI’s vice president and senior director of research for advanced analytics, breaks down some of the key terms and concepts involved in today’s generative AI movement.

In this “Speaking of Data” podcast, TDWI’s Fern Halper looks at key terms and concepts related to generative AI. Halper is vice president and senior director of research for advanced analytics at TDWI. [Editor’s note: Speaker quotations have been edited for length and clarity.]


Halper began with a look at the concept of foundation models, which she defined as “models pretrained on huge amounts of data and then fine-tuned for specific tasks, such as natural language processing or computer vision.”

One example Halper offered was GPT-4, the large language model that serves as the foundation model for ChatGPT -- an AI tuned specifically for conversational text interactions. With this tuning, she explained, the model predicts which words are most likely to follow one another.
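
That word-by-word prediction can be seen directly. Below is a minimal sketch, assuming the Hugging Face transformers library; it uses the small, openly available distilgpt2 model as a stand-in for a much larger model such as GPT-4 and prints the tokens the model considers most likely to come next.

    # A minimal sketch of next-token prediction, assuming the Hugging Face
    # transformers library; distilgpt2 is an openly available stand-in for
    # a much larger model such as GPT-4.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
    model = AutoModelForCausalLM.from_pretrained("distilgpt2")

    inputs = tokenizer("The weather today is", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # scores for the next token only

    probs = logits.softmax(dim=-1)
    top = torch.topk(probs, k=5)                 # five most likely continuations
    for p, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode([int(token_id)])!r}: {p:.3f}")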

In the context of generative AI, pretrained and fine-tuned have particular meanings, Halper noted. Pretraining is the stage where the model is trained on a massive amount of data to learn the structure of a language, facts about the world, and some general reasoning capacity. Fine-tuning then trains the model on a narrower, more specialized data set to prepare it for a specific task. As an example, Halper said a general language model can be fine-tuned on a company’s marketing and sales data to create an AI-powered chatbot that answers questions about the company’s products and services.
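
To make that split concrete, here is a minimal sketch of the fine-tuning stage, assuming the Hugging Face transformers and datasets libraries; the distilgpt2 base model, the toy product Q&A pairs, and the training settings are illustrative assumptions, not details from the podcast.

    # A minimal fine-tuning sketch, assuming the Hugging Face transformers and
    # datasets libraries; model choice, data, and settings are illustrative.
    from datasets import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              Trainer, TrainingArguments)

    # 1. Start from a model that has already been *pretrained* on broad text.
    tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
    tokenizer.pad_token = tokenizer.eos_token   # distilgpt2 has no pad token
    model = AutoModelForCausalLM.from_pretrained("distilgpt2")

    # 2. *Fine-tune* on a narrow data set -- here, toy product Q&A pairs
    #    standing in for a company's marketing and sales data.
    examples = [
        "Q: What does the AcmeWidget do? A: It monitors pipeline pressure.",
        "Q: Is the AcmeWidget waterproof? A: Yes, it is rated IP67.",
    ]

    def tokenize(batch):
        tokens = tokenizer(batch["text"], truncation=True,
                           padding="max_length", max_length=64)
        tokens["labels"] = tokens["input_ids"].copy()  # causal LM objective
        return tokens

    train_data = Dataset.from_dict({"text": examples}).map(tokenize, batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="widget-bot", num_train_epochs=3),
        train_dataset=train_data,
    )
    trainer.train()   # the resulting model now specializes in widget Q&A

In practice the fine-tuning set would be far larger and the loss would typically mask padding and prompt tokens; the sketch only shows where pretraining ends and fine-tuning begins.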

One key issue with foundation models, Halper pointed out, is the expense -- both financial and computational -- of developing them.

“At this point,” she said, “many companies are playing with subscription-based versions of open source tools, but they’re realizing that they may not want to allow their proprietary data out of their control in that way. In response to this, some vendors are offering versions of these tools that companies can pull behind their firewalls to keep their data secure.” Cloud providers are a good place to look for this option, Halper added.

This brought up the subject of prompt engineering -- the real heart of generative AI use.

“A prompt in generative AI is the input to the model -- the instruction or cue that generates the result you’re after,” Halper said. “A well-engineered prompt needs to be clear and specific and may include several examples of your desired result.”
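In code, a prompt is simply the text handed to the model. The sketch below shows one common pattern, assuming the OpenAI Python SDK (the v1-style client) and an API key in the environment; the model name and prompt wording are illustrative assumptions.

    # A minimal prompting sketch, assuming the OpenAI Python SDK (v1 client)
    # and an OPENAI_API_KEY environment variable; the model name and the
    # prompt wording are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            # A clear, specific prompt: task, audience, and format spelled out.
            "content": "Summarize our product FAQ for a nontechnical customer "
                       "in exactly three bullet points.",
        }],
    )
    print(response.choices[0].message.content)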

As prompt engineering has grown in importance, several techniques have emerged. Halper described three, illustrated in the sketch that follows this list:

  • Zero-shot: a prompt that includes no examples or data from the training set -- just the instruction or question itself

  • Few-shot: a prompt that includes several examples of properly classified data to help the model correctly classify the data the user is interested in

  • Chain-of-thought: a prompt that breaks a more complex problem into a series of subsidiary steps for the model to follow
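
The sketch below shows what these three techniques look like as actual prompt text; the sentiment-classification and arithmetic tasks are invented examples, not ones discussed in the podcast.

    # Illustrative prompt patterns; the tasks and wording are invented examples.

    # Zero-shot: the instruction alone, with no examples.
    zero_shot = ("Classify the sentiment of this review as positive or "
                 "negative: 'The battery dies in an hour.'")

    # Few-shot: a handful of labeled examples guide the model before the
    # real input.
    few_shot = (
        "Review: 'Setup took five minutes and it just works.' Sentiment: positive\n"
        "Review: 'Support never answered my emails.' Sentiment: negative\n"
        "Review: 'The battery dies in an hour.' Sentiment:"
    )

    # Chain of thought: ask the model to work through intermediate steps.
    chain_of_thought = ("A customer bought 3 widgets at $40 each and returned "
                        "one. What was the net charge? Think through it step "
                        "by step before giving the final answer.")

Any of these strings could be passed as the user message in the earlier API sketch; what changes between techniques is the structure of the prompt, not the call.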

However, Halper pointed out, just as many people who analyze data aren’t data scientists, not everyone who writes prompts and uses generative AI needs to be thought of as a prompt engineer.

“Anyone who uses generative AI would benefit from some kind of prompt engineering training,” she said, “to help them track their train of thought and get to the insights they’re looking for faster, which is the goal of the process anyway.”

Another aspect of generative AI that Halper said all users must be mindful of is errors in the model’s output.

“A hallucination is any case of incorrect output from an AI model,” she said. “For example, when Google first demonstrated its Bard chatbot, there was a very public instance of it providing incorrect information in response to a question about the James Webb Space Telescope. There’s also the example of the lawyers who used ChatGPT to write a brief that wound up citing cases that didn’t exist.”

All this shows that users can’t automatically trust the output of generative AI tools, she said. They still need to understand the subject matter themselves and have some way of verifying the results.

The conversation wrapped up with a brief overview of embeddings as they relate to generative AI and large language models.

[Editor’s note: You can listen to the complete podcast on demand.]
