Is it possible to prevent unintentional bias in AI?

Connie Schiefer
Artificial Intelligence, Bias & Diversity

A top priority for many businesses today is creating a diverse workplace. Companies are increasingly tapping into artificial intelligence (AI) to help them connect with a more diverse candidate pool and improve their recruiting processes. But recent events, such as Amazon scrapping its recruiting AI tool because it showed bias against women, have raised red flags about bias in AI and underscored the importance of taking action to prevent it.

While AI can offer tremendous benefits to the recruitment process, organizations must be thoughtful and deliberate about preventing bias. Even though most AI teams are well-intentioned, many do not consciously put processes in place to assess and track potential bias in how questions are asked, interpreted, and responded to. Without these checks and balances, they may be unknowingly setting themselves up for trouble.

Consider this example: An airline has a height requirement for cabin crew members for safety reasons and plans to ask for a candidate’s height as part of its vetting process. There is no bias issue here. However, if the airline wanted to apply different height requirements to men and women, I would explore with them: are you asking this for safety reasons or for cosmetic ones? The latter could create an unfair bias in the hiring process.

Here’s another example of bias—and how it can impact diversity. A company wants to ensure that employees have a way to get to work and asks candidates if they have reliable transportation, which is an unbiased question. However, asking candidates if they have a car, and then disqualifying them if they don’t, is different. An individual may still be able to do the job even if they don’t have a car. This question can create bias against certain populations who may not have cars, such as low-income applicants.

AI can help

When used correctly, conversational AI (CAI), also called intelligent bots, can help organizations make better recruiting decisions and boost diversity in hiring. In fact, CAI is more likely than human recruiters to prevent bias in those first conversations with candidates. Consider what happens when we meet someone for the first time: most of us make unconscious judgments based on feelings and intuition. AI doesn’t experience those human emotions, and it can screen applicants more fairly based on objective criteria.

Preventing Unintentional Bias

As companies increasingly develop AI tools, it’s important that they begin with the right mindset: being vigilant about preventing bias before, during, and after building a product. There are two big ways they can do this:

Create a diverse AI team.

One of the best ways to prevent unintentional bias is to ensure that the team building the tools represents a variety of perspectives. Be thoughtful about who you hire for your AI team, and look for people with different backgrounds, genders, ethnicities, and years of experience.

Establish governance.

Creating processes that check for bias—both before a tool goes into production and once it’s launched—can help an organization identify and address issues quickly. Have your team review how questions are being asked and the responses that the system is providing.

For example, are you asking questions in a way that enables a man to answer more positively than a woman by the way it’s being phrased? Are there cultural or regional differences, such as English not being a candidate’s first language, that need to be taken into consideration so that the system is able to answer, acknowledge, and understand the intent of responses?
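One widely used check of this kind, once a tool is in production, is the “four-fifths” (adverse-impact) guideline: compare each group’s screening pass rate to the highest group’s rate and flag any ratio below 0.8. Below is a minimal sketch in Python; the group labels and outcome data are purely illustrative, and this is an assumed implementation of the general guideline, not Mya’s actual tooling.

```python
from collections import Counter

# Hypothetical screening outcomes as (group, passed_screen) pairs.
# Group names and results are illustrative only.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(outcomes):
    """Compute the screening pass rate for each group."""
    totals, passes = Counter(), Counter()
    for group, passed in outcomes:
        totals[group] += 1
        if passed:
            passes[group] += 1
    return {g: passes[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Each group's pass rate divided by the highest group's pass rate.
    The four-fifths guideline flags ratios below 0.8 for review."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

rates = selection_rates(outcomes)      # group_a: 0.75, group_b: 0.25
ratios = adverse_impact_ratios(rates)  # group_a: 1.0, group_b: ~0.33
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # ['group_b'] — this group's outcomes warrant a closer look
```

A flagged ratio is a prompt for human review of the questions and conversation design, not proof of bias on its own; sample sizes and job-related requirements matter.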

We spend a lot of time at Mya creating these governance structures and analyzing conversational design to prevent and address any unintentional bias. Our security and privacy team works with our AI team to address not just legal but also ethical considerations around candidate questions. We also have ‘conversation reviewers’ who regularly review how our system interacts with candidates. This enables us to identify any issues right away and continuously improve the AI.

The rise of AI continues to transform the way we work and live. Being mindful and vigilant about preventing bias is an important step in developing successful AI tools that have a positive impact on your company’s goals and priorities, such as creating a diverse workforce. At Mya, we are excited to be on this AI journey with you.
