

Should AI Require Societal Informed Consent?

In many aspects of our lives, people must grant their permission before something can happen -- agreeing to the terms of a contract or completing a transaction, for example. Should similar informed consent be part of our use of AI as well?

Nobody asks bystanders to sign a consent form before they get hit by a self-driving car. The car just hits them. The driver had to sign consent forms to purchase the car, letting the corporation off the hook for much of what goes wrong. However, the driver -- perhaps the person most likely to be killed by it -- never secures the consent of everyone else exposed to that vehicle; these innocent bystanders get no say in whether they agree to be exposed to possible harm.


Informed consent is a core concept holding together the rules-based order. If you sign a contract, you are legally bound to its terms. If you undergo a medical procedure, you read the forms and sign your name, absolving the medical practitioners of liability. If you download an app from the App Store, you accept an end-user license agreement that protects the app developer, not you.

However, if you create a new piece of technology that might endanger, harm, or kill people, there is no consent form for the public to sign. We accept that risk despite the logical inconsistency. Why?

The concept of societal informed consent has been discussed in engineering ethics literature for more than a decade, and yet the idea has not found its way into society, where the average person goes about their day assuming that technology is generally helpful and not too risky.

In most cases, technology is indeed helpful and not too risky -- but not in all cases. As artificial intelligence grows more powerful and is applied to new fields (many of which may be inappropriate for it), these exceptions will multiply. How will technology producers know when their technologies are not wanted if they never ask the public?

Giving a detailed consent form to everyone in the U.S., for example, is wildly impractical. One of the characteristics of a representative democracy, however, is that -- at least in theory -- our elected officials look out for the well-being of the public. We can think of innumerable issues where the government already does this work: foreign policy, education, crime, and so on. In principle, government could grant or withhold consent to risky technologies on society's behalf as well.

It is time for the government and the public to have a new conversation, one about technology -- specifically artificial intelligence. In the past we have always given technology the benefit of the doubt; tech was “innocent until proven guilty,” and a long-familiar phrase in and around Silicon Valley has been “better to ask forgiveness than permission.” We no longer live in that world.

Interestingly, in light of cases such as Theranos, FTX, and Silicon Valley Bank, it is tech leaders themselves who are pushing this conversation about risk, with many focusing on the long-term “runaway” AI scenarios that movies have long depicted. Society certainly does not consent to such outcomes, and the government clearly ought to figure out how to prevent these doomsday risks.

Short of the doomsday scenario, though, there are other technological changes to which people may or may not consent. Should we, as a society, let AI in social media act as a weapon of social-psychological mass destruction, spreading misinformation, propaganda, and more? Should we, as a society, use AI in cars, knowing that occasionally they will kill bystanders? Should we, as a society, use AI in medicine, knowing that it may allow patients to die? If medical professionals ask patients to consent to the use of AI in some cases but not others, how do we decide which ones?

Someone will decide, and it is most likely to be the technology producer's corporate lawyers. They will assuredly not have the public's best interests at heart as they write consent forms for users (not for everyone else) that place all the risk on the user and none on the technology producer. Bystanders be damned. The rest of society, and its conception of what the world should properly look like, never enters into consideration.

Society needs to have a conversation about technology. We are already having this conversation in fragmented form in many localities, but it needs to be a society-wide conversation because all of society is at stake. No one gets to live in peace, unaffected by these new technologies; none of us can escape, whether the threat is a fever-dream doomsday scenario, a neighbor radicalized by social media, or a self-driving car hitting a pedestrian.

Let’s have this conversation as a society and work together to decide what kind of future we all want.

About the Author

Brian Patrick Green is director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University in Santa Clara, CA. Green is the author of the book Space Ethics and co-author of Ethics in the Age of Disruptive Technologies: An Operational Roadmap. You can reach him at the Markkula Center website or on LinkedIn.

