Navigating the New Era with Generative AI Literacy
Data literacy has been a strong focus among data-savvy organizations. However, with the advent of advanced generative AI tools, this focus is quickly shifting. Learn the components you should focus on in the new era of generative AI literacy.
- By Troy Hiltbrand
- February 2, 2024
Over the last decade, the term heard across analytics organizations was data literacy. Companies invested in ensuring that their staff understood the basics of working with data, turning that data into information, and using that information to drive effective business decisions.
However, as technology has evolved, that focus on data literacy has quickly transitioned into a focus on generative AI literacy -- a new breed of data literacy built on its core tenets: data collection and curation, data visualization, and interpretation.
With the advent of generative AI tools from industry leaders such as OpenAI, Google, Microsoft, and Anthropic, companies need their employees to know how to leverage these tools to create business value. Ultimately, data literacy and generative AI literacy have the same goals -- to drive effective business decision-making and to create organizational value.
Let’s look at four concepts to focus on as you develop a workforce with generative AI literacy: prompt engineering, hallucinations, ethics, and innovative thinking.
At the heart of generative AI lies the ability to provide it with clear and concise instructions. Generative AI uses statistical modeling to generate results based on a set of parameters. These parameters are usually in the form of a prompt. The prompt can be text that represents a question or command combined with ancillary background information. The prompt can also be graphical, in the form of a mock-up sketch, photo, or visual representation of the problem you are trying to solve. When a prompt is entered into the model, the tool generates a response.
Creating a prompt is akin to having a conversation with a business analyst to define requirements. The more clearly the requirements are defined, the better, more comprehensive, and more targeted to your problem the output will be.
Many of these tools can process a prompt based on previous prompts within the same session. The series of prompts further refines the generative AI tool’s ability to produce valid and useful output. As we have learned over decades of requirements gathering, the higher the quality of inputs, the higher the quality of output. The better that employees can get at refining their prompts, the better they are able to generate results that can have meaningful business impact.
Generative AI tools are amazing because they can generate large amounts of high-quality output very quickly. At the same time, they take a greedy optimization approach to problems. Their aim is to generate a response as often as possible and as quickly as possible. This involves making guesses at what constitutes a useful response based on the prompt and the trillions of diverse data points that make up the model.
However, sometimes the guesses the model makes are not correct. These are called hallucinations. Examples of hallucinations include quoting sources that don’t actually exist, generating images that look good but have surreal attributes (such as people with extra fingers), or paraphrasing information in a way that introduces inaccuracies.
Having blind faith in the results coming from a generative AI tool can be very dangerous. Users must be able to review the generated content and recognize hallucinations. Users must learn where to use generative AI tools, where to trust the content coming from them, and where to apply a “trust but verify” mentality. Delegating responsibilities entirely to generative AI tools without sufficient quality assurance could well result in a situation that subtracts business value rather than adds to it.
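The “trust but verify” step can be made concrete. Below is a minimal, hypothetical sketch (the function name and the catalog entries are illustrative, not drawn from any real system): before accepting generated output, screen the sources the model cites against a catalog of references you know to exist, and route anything unrecognized to human review.

```python
def triage_citations(cited_sources: list[str], known_catalog: set[str]) -> dict[str, list[str]]:
    """Split sources cited by a model into verified entries and suspects needing human review."""
    verified = [s for s in cited_sources if s in known_catalog]
    suspect = [s for s in cited_sources if s not in known_catalog]
    return {"verified": verified, "suspect": suspect}

# Hypothetical internal catalog of approved references
catalog = {"2023 Annual Report", "Q3 Market Analysis"}
result = triage_citations(
    ["2023 Annual Report", "Global Synergy Review 2022"],  # second source may be hallucinated
    catalog,
)
```

A check this simple will not catch every hallucination, but it captures the posture the article argues for: generated content is a draft to be verified, not a finished product to be trusted.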
Companies have long worked to achieve an ethical workforce, but with generative AI, they will need to step up their game significantly in terms of training and education.
Because generative AI tools produce content almost indistinguishable from that of humans, many ethical questions must be addressed: What level of human work should be replaced with generative AI? What information is ethical to use as input for both model development and prompt engineering? Where should AI-generated content be used? Does it ultimately enhance society and the business, or does it harm them?
As generative AI becomes easier to use and more accessible, these ethical decisions quickly leave the confines of the companies that create the tools and the regulatory agencies that govern them and become problems that everyday users face. This requires that, in addition to teaching your employees how to use the technology, you establish guidelines, policies, and procedures to ensure that their usage in your business context falls within the bounds of ethical activity.
Generative AI’s power is due in part to its ability to accept such a wide array of inputs and prompts, but this also requires that employees learn to expand their thinking. As repetitive tasks are automated away, employees will be free to think more innovatively, which is not always intuitive for them. Educational institutions have long focused on teaching students facts but are now being required to teach them how to think in terms of problem sets, alternative approaches, and innovative solution discovery.
Until this next generation of innovative thinkers is fully embedded in the workforce, it is the responsibility of businesses to teach their employees to think innovatively. This requires more than a curriculum covering the basics of innovation. It also requires that businesses establish environments where employees can practice innovative thinking, with both the freedom and the guidelines to practice it effectively.
Over the coming decade, our efforts to develop data literacy within our workforce will evolve into developing generative AI literacy. The basic building blocks of data literacy will continue to be important but will not be sufficient to take our employees into the future. Understanding how and when to use generative AI tools will become a new vector of education and training among our staff. The abstract concepts of ethics and innovation will need to be embedded into our culture and the way we do business to ensure that we remain competitive and have a workforce that can help us continually generate business value.
Troy Hiltbrand is the chief information officer at Amare Global where he is responsible for its enterprise systems, data architecture, and IT operations. You can reach the author via email.