Building a better society with better AI

“As humans, we are highly biased,” says Beena Ammanath, the global head of the Deloitte AI Institute, and tech and AI ethics lead at Deloitte. “And as these biases get baked into the systems, there is a very high likelihood of sections of society being left behind—underrepresented minorities, people who don’t have access to certain tools—and it can drive more inequity in the world.”

Projects that begin with good intentions — to create equal outcomes or mitigate past inequities — can still end up biased if systems are trained with biased data or researchers aren’t accounting for how their own perspectives affect lines of research.

Thus far, adjusting for AI biases has often been reactive, with biased algorithms or underrepresented demographics discovered only after the fact, says Ammanath. But companies now have to learn how to be proactive: to mitigate these issues early on and to take accountability for missteps in their AI endeavors.

Algorithmic bias in AI

In AI, bias appears in the form of algorithmic bias. “Algorithmic bias is a set of several challenges in constructing an AI model,” explains Kirk Bresniker, chief architect at Hewlett Packard Labs and vice president at Hewlett Packard Enterprise (HPE). “We can have a challenge because we have an algorithm that is not capable of handling diverse inputs, or because we haven’t gathered broad enough sets of data to incorporate into the training of our model. In either case, we have insufficient data.”

Algorithmic bias can also come from inaccurate processing, data being modified, or someone injecting a false signal. Whether intentional or not, the bias results in unfair outcomes, perhaps privileging one group or excluding another altogether.

As an example, Ammanath describes an algorithm designed to recognize different types of shoes, such as flip-flops, sandals, formal shoes, and sneakers. When it was released, however, the algorithm couldn’t recognize women’s shoes with heels. The development team was a group of recent college graduates, all male, who never thought to train it on women’s heels.

“This is a trivial example, but you realize that the data set was limited,” Ammanath said. “Now think of a similar algorithm using historical data to diagnose a disease or an illness. What if it wasn’t trained on certain body types or certain genders or certain races? Those impacts are huge.”

Critically, she says, “If you don’t have that diversity at the table, you are going to miss certain scenarios.”
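
To make the data-gap failure mode concrete, a lightweight pre-training audit of label coverage can surface exactly the kind of omission in the shoe example. The sketch below is illustrative only; the category names, threshold, and sample data are assumptions rather than details from the project Ammanath describes.

```python
# Illustrative sketch (hypothetical data): audit a training set's label
# coverage before training, the kind of check that would have flagged the
# missing "heel" category.
from collections import Counter

# Categories the classifier is expected to handle in production.
EXPECTED_CLASSES = {"flip_flop", "sandal", "formal", "sneaker", "heel"}

# Labels attached to the (hypothetical) training images.
training_labels = ["sneaker", "sandal", "flip_flop", "formal", "sneaker", "sandal"]

def audit_label_coverage(labels, expected_classes, min_share=0.05):
    """Return classes missing entirely and classes below min_share of the data."""
    counts = Counter(labels)
    missing = expected_classes - counts.keys()
    total = len(labels)
    underrepresented = {
        cls: n for cls, n in counts.items() if n / total < min_share
    }
    return missing, underrepresented

missing, underrepresented = audit_label_coverage(training_labels, EXPECTED_CLASSES)
if missing:
    print("Missing classes entirely:", sorted(missing))   # ['heel']
if underrepresented:
    print("Underrepresented classes:", underrepresented)
```

An audit like this does not remove bias on its own, but it forces the question of who or what is missing from the data to be answered before a model ships, rather than after.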
