
 

Human Values in the Loop: Design Principles for Ethical AI

             


“This has to be a human system we live in.”

–Sandy Pentland

This essay from Deloitte Review attempts to illustrate that ethical principles can serve as design principles for organizations seeking to deploy innovative AI technologies that are economically profitable as well as beneficial, fair, and autonomy-preserving for people and societies. Specifically, we propose impact, justice, and autonomy as three core principles that can usefully guide discussions around AI’s ethical implications.

 

“FIRST, DO NO HARM”

“DATA FOR GOOD”

“JUSTICE: TREATING PEOPLE FAIRLY”

 

"For example, few drivers or airline pilots fully understand the inner workings of their semiautonomous vehicles. But through a combination of training, assurances provided by safety regulation, the manufacturer’s reputation for safety, and tacit knowledge acquired from using their vehicles, the user develops a working sense of the conditions under which the vehicles can be trusted to help them achieve their goal of safely getting from point A to point B. It is notable that recent examples of semiautonomous vehicle crashes have resulted from unwarranted levels of trust placed in driver assistance systems. For example, recall the state agency that, in using an AI algorithm to flag potentially fraudulent UI claims, chose to selectively “nudge” claimants toward honest behavior rather than selectively cut off benefits based on the algorithm’s output.

This use of behavioral nudges allowed the agency to avoid the unintentional maleficence of denying AI Themes.

 

Intelligibility, transparency, trustworthiness, accountability

Examples

• Explainable AI algorithms helping judges or hiring managers make better decisions

• A vehicle operator understanding when to trust autopilot technology

• An AI-based tool informing decisionmakers when they are being “nudged”

• A chatbot not masquerading as a real human needed benefits to legitimate claimants. But nudging can also have implications for autonomy.

For example, nudge interventions shouldn’t mislead with false information or otherwise manipulate people to act in ways that go against their self-interest. Recall the ethical imperative to avoid “manipulative or distorting external forces.”

https://www2.deloitte.com/content/dam/insights/us/articles/6452_human-values-in-the-loop/DI_DR26-Human-values-in-the-loop.pdf
