It’s not AI you should be worried about—it’s your own bias

AI can be scary, especially when it comes to decision making that concerns actual people. When done right, though, AI can be the most useful instrument in the D&I toolbox.

By Adam Etzion, HR Analyst @ Gloat

After all, nobody wants their fate to be decided by an uncaring machine, which can only take cold data into account, leaving the human element completely unacknowledged.

Because of this, AI and automation can seem like the enemy of diversity and inclusion – but in actuality, when done right, AI can be the most useful instrument in the D&I toolbox.

How?

It all comes down to bias.

Everyone’s a little bit biased

Bias is an inherent part of human cognition, and uncovering, acknowledging, and addressing it is a constant effort. That’s one of the reasons D&I is such an important business function: if we could simply stop being biased, we would – but addressing bias is an ongoing process, not an “over-and-done-with” deal.

Because bias is so inherent to human thinking, even the most informed and well-meaning managers are fallible when it comes to opening up new opportunities, and can accidentally overlook qualified people, even in a diverse workforce. For under-represented employees, that kind of oversight, even if it isn’t malicious, can make the work environment feel non-inclusive, inaccessible, and even hostile – undoing D&I efforts and diminishing the organization’s ability to benefit from the diversity of its human capital.

So how can organizations overcome this very human limitation and create a truly inclusive environment – one with equal access to career advancement opportunities, and an atmosphere in which employees feel safe and empowered to raise a hand and get involved?

If the “human element” is the problem with bias, AI-based tools just might be the solution.

An impartial platform

It’s true that AI doesn’t “see” the “human element” in whatever decision-making process it’s involved in – but that also means it doesn’t insert bias into that process, either.

If used correctly, AI applied to HR can therefore create a truly inclusive, color-, gender-, and background-neutral work environment, allowing individuals to flourish on the merit of their professional abilities alone. In fact, if they perform as intended, AI-based HR platforms can significantly raise employees’ confidence and trust in their organization – ensuring they’re treated equally and fairly by the “institution” and, perhaps more importantly, that they have access to a system that allows them to be seen and heard when they need to be.
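
To make this concrete, here is a minimal sketch of one way such a platform might keep identifying attributes out of its matching logic entirely. The field names and the scoring rule are hypothetical illustrations, not a description of any real product:

```python
# A minimal sketch of "blind" matching: protected attributes are stripped
# from a profile before it ever reaches the scoring step. Field names and
# scoring logic are hypothetical illustrations, not any vendor's platform.
PROTECTED_FIELDS = {"name", "gender", "age", "ethnicity", "photo_url"}

def redact(profile: dict) -> dict:
    """Return a copy of the profile with protected attributes removed."""
    return {k: v for k, v in profile.items() if k not in PROTECTED_FIELDS}

def match_score(profile: dict, required_skills: set) -> float:
    """Score a candidate purely on skill overlap with the role."""
    skills = set(profile.get("skills", []))
    return len(skills & required_skills) / max(len(required_skills), 1)

candidate = {
    "name": "…",        # never seen by the scorer
    "gender": "…",      # never seen by the scorer
    "skills": ["python", "sql", "mentoring"],
}
print(round(match_score(redact(candidate), {"python", "sql", "dashboards"}), 2))  # 0.67
```

The design choice here is simple but important: the scoring function cannot be biased by attributes it never receives.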

It’s not just employees who benefit from this, either; managers may discover employees they would never have considered viable candidates for projects and jobs before. An inclusive, bias-free HR platform doesn’t just open up opportunities for employees – it unlocks previously under-utilized parts of the workforce for the organization.

But while all of this sounds idyllic, AI is not without its pitfalls when it comes to bias – which is why the way AI is implemented needs to be thoroughly thought through.

Bias creep

The biggest problem with bias, from an AI perspective, is a concept called “bias creep.”

Put simply, if an AI model is trained on datasets that reflect a biased reality, it may perpetuate that bias in its own decision-making.
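
Before looking at a real-world case, here’s a toy illustration of how this happens. Everything below – the groups, the labels, the model – is synthetic and hypothetical, not any production system:

```python
# A toy demonstration of bias creep, using synthetic data only: a model
# trained on biased historical promotion decisions reproduces that bias,
# even for equally skilled people. All names and numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

group = rng.integers(0, 2, n)   # two groups, A (0) and B (1)
skill = rng.normal(0, 1, n)     # skill is identical across groups

# Historical labels were biased: group A got a hidden +1 advantage.
promoted = (skill + 1.0 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

# Train on the biased history, with group membership as a feature.
model = LogisticRegression().fit(np.column_stack([skill, group]), promoted)

# Score two equally skilled, otherwise identical batches of candidates.
for g in (0, 1):
    X = np.column_stack([np.zeros(1000), np.full(1000, g)])
    rate = model.predict_proba(X)[:, 1].mean()
    print(f"group {'AB'[g]}: predicted promotion rate {rate:.2f}")
# The gap between the two printed rates is inherited bias, not merit.
```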

One of the most famous cases of AI perpetuating human bias comes from a camera manufacturer that trained an algorithm to take photos only when subjects weren’t closing or squinting their eyes. Because the development team was not diverse, the subjects the algorithm was trained on all had the rounder eye structures predominantly associated with people of European ancestry. When the cameras were sold to consumers, the feature didn’t work as intended for people with other eye shapes – predominantly people of Asian descent – and incorrectly instructed them not to squint.

This is a great example of why a diverse workforce is important – one would have prevented this problem from the get-go – but it’s also a telling example of how bias can creep into algorithms inadvertently, even when no one intentionally set out to put it there.

Today, most large enterprises have a Chief Diversity Officer in their management – someone whose job is to attend to diversity in every part of the business. That means more than just building an inclusive and diverse workforce; it means product, marketing, manufacturing, sales – and every other function – treat diversity and inclusion as major considerations in how they operate.

It also means that D&I professionals need to be able to tweak and update relevant AI functions as new diversity and inclusion considerations arise.

Bias check

To overcome bias creep, AI-based systems – especially HR ones – need to routinely check and re-examine their function in light of the D&I officer’s changing specifications.

There are many ways to do this (at Gloat, for instance, we maintain an Anti-Bias Dataset that our data science team routinely updates and checks against, and that the D&I officer can modify according to their company’s specific needs) – but above all, it’s important that these checks and balances exist.
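
To make the idea of such a check concrete, here is a minimal sketch of one common fairness audit: a “four-fifths rule” comparison of recommendation rates across groups. The function names, data, and threshold here are illustrative assumptions, not Gloat’s actual implementation:

```python
# A minimal sketch of a recurring bias audit, assuming a log of which group
# each candidate belongs to and whether the system recommended them. The
# names and the 0.8 ("four-fifths rule") threshold are illustrative only.
from collections import defaultdict

def selection_rates(recommendations):
    """recommendations: list of (group, was_recommended) pairs."""
    shown = defaultdict(int)
    total = defaultdict(int)
    for group, recommended in recommendations:
        total[group] += 1
        shown[group] += int(recommended)
    return {g: shown[g] / total[g] for g in total}

def passes_four_fifths(recommendations, threshold=0.8):
    """Flag the system if any group's rate falls below `threshold`
    times the highest group's rate."""
    rates = selection_rates(recommendations)
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values())

# Example audit run on hypothetical data:
log = [("A", True)] * 50 + [("A", False)] * 50 \
    + [("B", True)] * 30 + [("B", False)] * 70
print(selection_rates(log))     # {'A': 0.5, 'B': 0.3}
print(passes_four_fifths(log))  # False -> flag for review
```

A check like this can run on a schedule, with any failure routed to the D&I officer and the data science team for review.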

Once these safeties are in place, however, an AI-based HR environment is more than just a “nice-to-have” feature; it can become a critical tool that ensures no one in your workforce is overlooked or disregarded, and an essential step in creating a more inclusive, fair, and dependable organization.
