As humans and AI increasingly work together to make decisions, researchers are looking at ways to ensure human bias does not affect the data or algorithms used to inform those decisions.
The MIT-IBM Watson AI Lab’s efforts on shared prosperity draw on recent advances in AI and computational cognitive modeling, such as contractual approaches to ethics, to describe the principles people use in decision-making and to determine how human minds apply them. The goal is to build machines that apply certain human values and principles in decision-making. IBM scientists have also devised an independent bias rating system that can determine the fairness of an AI system.
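To make the idea of rating fairness concrete, the sketch below computes one widely used fairness metric, the demographic parity difference: the gap in positive-outcome rates between two groups. The data and group labels are illustrative assumptions, not details of any specific IBM system.

```python
# Minimal sketch of a fairness metric: demographic parity difference,
# i.e. the gap in positive-prediction rates between two groups.
# The example data below is purely illustrative.

def demographic_parity_difference(predictions, groups):
    """Return the absolute gap in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs
    groups: parallel list of group labels (e.g. "A" or "B")
    """
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    values = list(rates.values())
    return abs(values[0] - values[1])

# Example: group A receives positive outcomes 75% of the time, group B 25%,
# so the demographic parity difference is 0.5.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value near 0 indicates the model treats the two groups similarly on this axis; a larger gap flags a disparity worth investigating. Real bias-auditing systems combine several such metrics rather than relying on any single one.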
Identifying and mitigating bias in AI systems is essential to building trust between humans and machines that learn. As AI systems find, understand, and point out human inconsistencies in decision-making, they could also reveal the ways in which we are partial, parochial, and cognitively biased, leading us to adopt more impartial or egalitarian views. In the process of recognizing our biases and teaching machines about our common values, we may improve more than AI. We might just improve ourselves.