The Impact of Artificial Intelligence on Human Rights
Adopting AI can affect not just your workers but how you deal with privacy and discrimination issues.
- By Ben Hartwig
- June 29, 2020
As humans become more reliant on machines to make processes more efficient and inform their decisions, the potential for a conflict between artificial intelligence and human rights has emerged. If left unchecked, artificial intelligence can create inequality and can even be used to actively deny human rights across the globe. However, if used optimally, AI can enhance human rights, increase shared prosperity, and create a better future for us all.
It is ultimately up to businesses to carefully consider the opportunities new technologies provide and how they can best leverage these opportunities while being conscious of the impact on human rights.
Here are some of the possible consequences of adopting artificial intelligence and factors you may want to consider.
Job Displacement
AI often replaces the role of humans. Technology makes processes more efficient by allowing a machine to take over some of the manual and low-level tasks that humans once performed, such as assembly line work. An estimated 47 percent of jobs are at high risk of automation and could be taken over by machines by 2030. New technology is even replacing some higher-order tasks, such as driving or filling prescription orders.
The use of AI could ultimately result in mass job losses and greater income disparity. However, automation does not need to have a net negative effect; it can often lead to an overall positive effect on the workforce by creating economic growth and reducing prices. Business leaders can then transition their workforce to new jobs that require higher-level thinking and soft skills such as interpersonal skills and emotional intelligence.
Lack of Privacy
AI gathers massive amounts of information, including streams of data from mobile devices and other electronics, and extrapolates from it so that professionals can make data-driven decisions based on unique insights. When companies hold such massive amounts of information about their existing clientele, potential customers, and competitors, individuals' right to privacy may be threatened, especially as AI evolves and new ways to use personal information are discovered.
Companies that want to avoid compromising privacy should put additional safeguards in place, such as data anonymization techniques or actively screening algorithms for privacy issues.
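For teams weighing the anonymization safeguard mentioned above, a minimal sketch of one such technique, pseudonymization, might look like the following. The field names, salt, and schema here are illustrative assumptions, not any specific product's API:

```python
import hashlib

# Illustrative secret salt; in practice, keep this out of source control.
SALT = "replace-with-a-secret-salt"

def pseudonymize(record, identifier_fields=("name", "email")):
    """Return a copy of the record with direct identifiers
    replaced by salted, truncated SHA-256 hashes."""
    cleaned = dict(record)
    for field in identifier_fields:
        if field in cleaned:
            digest = hashlib.sha256(
                (SALT + str(cleaned[field])).encode("utf-8")
            ).hexdigest()
            # A truncated hash serves as a stable pseudonym for joins.
            cleaned[field] = digest[:16]
    return cleaned

customer = {"name": "Jane Doe", "email": "jane@example.com", "zip": "94105"}
safe = pseudonymize(customer)  # direct identifiers hashed, other fields kept
```

Pseudonymization of this kind reduces, but does not eliminate, re-identification risk; stronger guarantees require techniques such as k-anonymity or differential privacy, and a salted hash is only as secret as the salt itself.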
Discrimination Against Job-Seekers and Customers
AI may result in discriminatory outcomes when algorithms are applied to hiring and firing decisions. Hiring practices may violate federal or state discrimination laws, especially if the training data is based on a homogenous group. For example, if a company gathers information from public records, it should avoid assuming anything about potential candidates' personalities or skills based solely on those records. A human evaluation in conjunction with the results derived from AI can more accurately determine whether a candidate matches the business's vision.
Total reliance on AI can have disastrous effects for businesses. For example, some AI programs that use facial recognition for theft protection may discriminate against shoppers of certain races or ethnicities. Although the GDPR requires the AI user to explain how the algorithm works and affects the final decision, the CCPA does not currently require this extra layer of protection for consumers. To insulate themselves from potential claims of discrimination caused by total reliance on AI, companies may wish to have a human verify any results before taking action.
Freedom of Expression
One particular human rights concern regarding AI is the possibility that whole groups may be silenced by its use. Social media platforms use algorithms that decide which viewpoints will receive traction online. In one Facebook experiment, researchers manipulated the messaging that users received, which shaped how those users perceived the world.
In some cases, computer-assisted writing software uses AI to prepare news stories and other content, so a human may not even be involved in the dissemination of information. If public opinion values objective journalism, companies may prioritize balancing freedom of expression against the desire for more efficient information systems. Likewise, social media channels may want to maintain a public persona of inclusivity and diversity by being careful not to restrict minority viewpoints or the freedom of expression.
A Framework for the Fusion of AI and Human Rights
Most human rights advocates who focus on this issue recommend conducting an assessment of how AI may impact human rights and argue that corporations play an instrumental part in maintaining these rights. Businesses that want to instill confidence in consumers often do so by appearing transparent and innovative. By embracing technology and being clear about how it helps the business to make decisions, businesses can often appeal to a larger group of consumers and retain their existing clientele.
Additionally, companies that use AI should consider providing additional safeguards to mitigate the potential risk of discrimination. To avoid claims regarding the sale or misuse of personal information, companies can take additional steps to protect the privacy of their users.
By being conscious of the risks to human rights, you can use AI to better your business, promote growth, and improve customer interaction while avoiding some of the consequences that may result from the implementation of AI.
Ben Hartwig is a web operations director at InfoTracer. He authors guides on marketing and cybersecurity posture and enjoys sharing best practices. You can contact him via LinkedIn.