
Three Things Leaders Must Consider for the Coming Age of Artificial General Intelligence

Artificial general intelligence, a term coined by scientists, describes computing systems that could eventually be as smart as humans. Practically speaking, what does this mean, and what are its implications?

With progress comes change, and artificial general intelligence (AGI) will create opportunities never before available in human history. We believe there are three important considerations for global leaders readying for AGI: how our data will shape it, why built-in ethics and values are so important, and what role AGI will play in human society.

What follows are collected excerpts from our new book, The Datapreneurs, exploring these three questions’ profound implications.

Our Data Shapes AI

AGI will mine humanity’s available data from everywhere we allow it access. With that information, it can provide new insights, raise questions not considered before, and help society in countless ways.

Complex data is a collection of many distinct formats, such as video, still images, audio, music, books, and various forms of text communication. A computer’s ability to handle and understand these complex data types will open a new world of applications that extract information from each source and potentially combine it with other data types to reveal new insights.

Organizations of all types, including governments, can develop elaborate digital models defining their business, operations, laws, rules, and cultures. At the most basic level, models represent how we believe entities and processes work in the physical world or how we want them to work.

As our computing techniques and data volumes advance, we can accurately model the world around us -- not just buildings, machines, and products but also organizations, business processes, human language, art, laws, and policy. In addition, we can model potential future situations, run simulations, and use them to improve how our businesses and social systems work.

These models can improve continuously using real-world data. With the right information, they adapt to shifting conditions and guide us toward the future. In other words, a model-driven world drives its own evolution.
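
To make this concrete, the following is a minimal sketch, in Python, of the model-simulate-improve loop described above. The fulfillment process, demand range, and staffing levels are hypothetical illustrations chosen for this example, not details from the book.

```python
# A toy "model" of a business process: simulate possible futures under
# uncertain demand, then compare decisions before committing to one.
# All numbers here are illustrative assumptions.
import random

def simulate_fulfillment(staff_count: int, days: int = 90, seed: int = 42) -> float:
    """Return the fraction of orders fulfilled on time in a simple simulation."""
    rng = random.Random(seed)
    fulfilled, total = 0, 0
    for _ in range(days):
        orders = rng.randint(80, 140)        # uncertain daily demand
        capacity = staff_count * 10          # assume each worker handles ~10 orders/day
        fulfilled += min(orders, capacity)
        total += orders
    return fulfilled / total

# Run the same model under alternative futures (staffing levels).
for staff in (8, 10, 12):
    print(f"staff={staff:>2}  on-time rate={simulate_fulfillment(staff):.1%}")
```

As real operational data arrives, the demand assumptions in such a model can be re-estimated, which is the continuous-improvement loop the excerpt describes.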

We need specific mechanisms to manage data governance effectively, such as the ability to inventory all data assets, model data and business concepts, and handle data quality issues. Most importantly, these mechanisms must also provide secure access control and rights management, and they must account for privacy implications.
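
As a rough illustration of those mechanisms, here is a minimal sketch in Python of a data catalog that inventories assets, tracks quality issues, flags privacy-sensitive data, and enforces role-based access. The class and field names are hypothetical; production data governance platforms are far more capable.

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    name: str                                             # inventoried asset (table, feed, document set)
    owner: str                                            # accountable data steward
    contains_pii: bool = False                            # flag assets with privacy implications
    quality_issues: list[str] = field(default_factory=list)
    allowed_roles: set[str] = field(default_factory=set)  # access control list

class Catalog:
    def __init__(self) -> None:
        self._assets: dict[str, DataAsset] = {}

    def register(self, asset: DataAsset) -> None:
        """Inventory an asset so it can be governed at all."""
        self._assets[asset.name] = asset

    def can_access(self, asset_name: str, role: str) -> bool:
        """Enforce access control; sensitive data stays behind explicit grants."""
        return role in self._assets[asset_name].allowed_roles

    def open_issues(self) -> dict[str, list[str]]:
        """Surface known data quality problems for remediation."""
        return {a.name: a.quality_issues for a in self._assets.values() if a.quality_issues}

catalog = Catalog()
catalog.register(DataAsset("customer_profiles", owner="cdo_office",
                           contains_pii=True, allowed_roles={"analyst"}))
print(catalog.can_access("customer_profiles", "analyst"))  # True
print(catalog.can_access("customer_profiles", "intern"))   # False
```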

Therefore, the creation of those models comes with significant responsibility.

Build from Solid Ethics and Values

Because social orders are under constant stress, the values, laws, and regulations that embody social contracts require reexamination and modification when new factors come into play.

The machines we create will someday exceed our capabilities, and they will embrace the values we define. They have the potential to deliver profoundly positive benefits to humanity and to help everyone lead happier, more productive lives.

Many are familiar with science fiction writer Isaac Asimov’s Three Laws of Robotics. However, he also created another set of laws that he felt should govern human conduct, which he laid out in an essay in Robot Visions. He called these the Laws of Humanics.

  • First Law: A human being may not injure another human being, or, through inaction, allow a human being to come to harm.

  • Second Law: A human being must give orders to a robot that preserve robotic existence, unless such orders cause harm or discomfort to human beings.

  • Third Law: A human being must not harm a robot, or, through inaction, allow a robot to come to harm, unless such harm is needed to keep a human being from harm or to allow a vital order to be carried out.

As we advance into the model-driven era, “ethics models” need to define proper and improper actions and provide boundaries for software applications and robots. Developing successful ethics models will be critical to the future relationship between people and machines.
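
One way to picture an ethics model is as an explicit boundary check that an application or robot consults before acting. The sketch below, in Python, illustrates that idea with two hypothetical rules; real guardrails would be far more nuanced and context-aware.

```python
from typing import Callable

# A rule inspects a proposed action and returns a violation message,
# or None if the action is acceptable.
Rule = Callable[[dict], str | None]

def no_harm(action: dict) -> str | None:
    if action.get("risk_of_human_harm", 0.0) > 0.0:
        return "action risks harm to a human being"
    return None

def requires_consent(action: dict) -> str | None:
    if action.get("uses_personal_data") and not action.get("consent_obtained"):
        return "personal data used without consent"
    return None

ETHICS_MODEL: list[Rule] = [no_harm, requires_consent]

def is_permitted(action: dict) -> tuple[bool, list[str]]:
    """Evaluate an action against every rule; deny if any rule is violated."""
    violations = [msg for rule in ETHICS_MODEL if (msg := rule(action)) is not None]
    return (not violations, violations)

ok, reasons = is_permitted({"uses_personal_data": True, "consent_obtained": False})
print(ok, reasons)  # False ['personal data used without consent']
```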

One important consideration is access. Individuals with easy access to AGI systems will possess tremendous advantages in acquiring education, knowledge, financial resources, and other things that drive success. Unless access to these capabilities is available to everyone, the few will become even richer and more powerful. Modern societies could start to look like the feudal states of the Middle Ages. That kind of power-and-wealth dynamic is incompatible with democracy and unsustainable.

We can and will overcome these challenges, and the rising tide can lift all boats. But these issues will not solve themselves. We must think deeply about them and design solutions before the disruptions take full force.

The new social contract must consist of a set of rules, agreed to by the world’s governments, businesses, and other institutions, that defines what intelligent machines can and cannot do and how people can and cannot use them.

Consider What Role AGI Will Fill

Today, people and machines co-exist. Each has strengths and weaknesses; the combination is at a tenuous equilibrium. However, the scales are tilting, ultimately leading to a potential imbalance with profound ethical implications.

In the future, it seems likely that robots will be capable of performing most physical tasks, and intelligent models within them will be capable of performing most intellectual tasks.

What is the societal impact in a world where smart machines are general-purpose, matching the capabilities of people and exceeding them in many ways?

How will humans earn a living when intelligent machines do most manufacturing, transportation, distribution, and knowledge-work jobs? In modern capitalism, tremendous gaps exist between the powerful and wealthy few and the not-so-powerful and not-wealthy many. These inequities could become even more pronounced as machines take over much of the work to drive the economy and improve human well-being. Corporations and their leaders will own these fantastic productivity tools, and most people will be customers.

Therefore, leaders must help determine how to strike the right balance among people, corporate power, and profitability.

A Bright Future Ahead -- With the Proper Guidance

Machines will likely possess artificial general intelligence within the next decade; the question is not if, but when.

People will develop solutions to the profound ethical issues raised by tomorrow’s robots and intelligent machines. The process will be messy. Throughout history, every major technological advance has been used for both good and ill. Ultimately, though, common sense prevails, and society establishes laws and regulations that oversee the use of technology.

These questions will likely be among society’s most critical policy issues in the decades ahead. Computer scientists, business leaders, government officials, academics, ethicists, and theologians must work together.

[Editor’s note: Copyright © 2023 Bob Muglia and Steve Hamm. Excerpted by permission of Skyhorse Publishing, Inc.]
