Opinion #HR
18 April 2022
How we speak about diversity matters – we do unintended harm when we ignore this

Diversity in data is more than an intellectual challenge – it’s a very human one

Businesses rely on data to make decisions every day. Good data can be the difference between guesswork and intelligence – it can drive strategy and help shape brands’ products and image. However, data is only useful in the context of the rules we create.

In the mid-50s, Kodak – the photography company that sold almost all the film used in US cameras at the time – introduced the Shirley Card. This picture of a female model, glamorously dressed in the latest fashions and furs, was used by photo labs as a reference to calibrate skin tones, shadows and light when developing a consumer’s snaps.

The original “Shirley” was a Kodak employee. But other models would come to feature on the cards over the next couple of decades – all standard-bearers for colour reproduction. The common factor between them all? They were white.

As a result, Kodak’s film was calibrated for white skin and therefore unable to capture the full range of skin tones accurately. The data – in this case a picture – was biased.

21st-century bias

The Shirley Card is by no means the only example of bias – unconscious or not – leading a business to create a flawed product or advertising that marginalises or excludes a group of people. Nor should it come as a surprise that non-diverse groups tend to make non-diverse decisions.

Kodak eventually began to address its film’s flaws in the 70s, partly in response to advertisers who complained the film stock was not accurately reproducing the full dynamic range of their products. Even so, in 1978 filmmaker Jean-Luc Godard refused to shoot on Kodak film because he deemed it racist.

Today, most companies are working towards more inclusive cultures, and we have far more sophisticated data tools at our disposal. But that doesn’t make tackling bias any less complicated, nor the need for more representative teams any less pressing.

Skin tones in photography remain a challenge, for example. In 2018, the “Gender Shades” project examined three gender classification algorithms from IBM, Microsoft and Face++ to see how accurately they classified gender. Photos of subjects were grouped by gender, by skin type – using the six Fitzpatrick types, split into lighter and darker groups – and by the intersection of gender and skin type. All three algorithms performed better on subjects with lighter skin, with a gap in error rates of 11.8% to 19.2% between the lighter and darker groups. They also all performed better at identifying male faces, and misgendered darker-skinned women most often.

“Automated systems are not inherently neutral,” concluded the researchers. “They reflect the priorities, preferences, and prejudices – the coded gaze – of those who have the power to mould artificial intelligence.”
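
To make that kind of intersectional audit concrete, the sketch below shows how a team might measure a classifier’s error rate for each combination of gender and skin-type group rather than relying on a single aggregate figure. The data and column names are purely illustrative and are not drawn from the Gender Shades study itself.

    # Illustrative intersectional audit: error rates per gender x skin-type group.
    # The data frame and its labels are hypothetical, not the Gender Shades data.
    import pandas as pd

    results = pd.DataFrame({
        "gender":    ["female", "female", "female", "male",   "male",    "male"],
        "skin_type": ["darker", "lighter", "darker", "darker", "lighter", "lighter"],
        "predicted": ["male",   "female",  "female", "male",   "male",    "male"],
    })

    # Flag each misclassification, then summarise overall and by subgroup.
    results["error"] = results["predicted"] != results["gender"]
    overall = results["error"].mean()
    by_group = results.groupby(["gender", "skin_type"])["error"].mean()

    print(f"Overall error rate: {overall:.1%}")
    print(by_group)

A single headline accuracy figure would hide exactly the kind of disparities the researchers describe; the per-group breakdown is what surfaces them.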

Diverse data requires diverse teams

So, how do we avoid these prejudices and refocus the lens to truly reflect our society? We can’t – at least not entirely. Data is about identifying patterns, so it ceases to be useful if it becomes too granular. Even at a sociological level, bias can be useful because it helps us operate in unfamiliar environments.

But we can take steps to reduce bias and improve the overall quality of our data. We can collect more of it to make it more representative and test it rigorously, or we can employ comparative models. We can even layer on algorithms to adjust for bias, and then run algorithms on those algorithms.
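
As one deliberately simple illustration of that “layering on”, under-represented groups in a training set can be up-weighted so a model is not optimised for the majority alone. The group labels and weights below are hypothetical, and this is only one of many possible corrections, each of which needs testing in its own right.

    # Minimal sketch: inverse-frequency sample weights to counter under-representation.
    # Group labels are hypothetical; choosing the groups and validating the effect is the hard part.
    from collections import Counter

    groups = ["A", "A", "A", "A", "B"]   # group membership of each training sample
    counts = Counter(groups)
    n, k = len(groups), len(counts)

    # Weight samples so every group contributes equally to the training signal overall.
    weights = [n / (k * counts[g]) for g in groups]

    print(weights)  # group A samples get 0.625 each; the single group B sample gets 2.5
    # Many training APIs accept such weights, e.g. scikit-learn's fit(X, y, sample_weight=weights).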

There will still be unknown unknowns. Some of these may surface during development, and others may not be apparent until you go live. Unfortunately, such mistakes can be very public. For example, Apple made the headlines in 2014 when it introduced a Health app that tracked several metrics – including users’ sodium intake – yet neglected to factor in the menstrual cycle. Ultimately, this goes to show that while data provides insight, it still requires people to make decisions.

I’m conscious that, as a white, male managing director of a data science company, I need a diverse team that can bring their own perspectives and subjective insight. An important way to avoid bias is to employ diverse teams – or, at the least, to ensure a diverse range of people review the data. Despite being a relatively small business, our team is both highly international and split 50/50 by gender. Across the UK tech industry, however, women make up only 19% of the workforce, and just 8.5% of senior leaders in the sector come from a BAME background.

Unless we address these imbalances, mistakes will continue to be made.

A diversity of experience helps ensure you don’t miss the bigger picture or the finer details that are fundamental to how a model performs in the real world. It can also provide a check on the ethical use of technology like facial recognition.

Aside from the moral considerations, unchecked bias means a business might be missing a significant proportion of its audience because it is not representing them well. Companies that build their plans around 100% of their customers have a competitive advantage: less biased data leads to better decisions.

Data can and should be used as a force for good. But good intentions must also be matched by the full spectrum of human experience. In this, we have some way to go. But recognising bias and building more diverse teams will help move us in the right direction.

Matt Andrew is UK managing director at Ekimetrics.