Is It Possible to Prevent Unintentional Bias in AI?

By Ameya Deshmukh

A top priority for many businesses today is creating a diverse workplace. Companies are increasingly tapping into artificial intelligence (AI) to help them connect with a more diverse candidate pool and improve their diversity recruiting processes. But recent events, such as Amazon scrapping its recruiting AI tool because it showed bias against women, have raised red flags about bias in AI and the importance of actively preventing it.

While AI can offer tremendous benefits to the recruitment process, organizations must be thoughtful and deliberate about preventing bias. Since the inception of Mya, the mission of our company has been to create a more equitable and accessible world of job possibilities for all. This mission is one of the things that sets us apart. From the way we educate the market to the ways we build our AI, hire our teams, and enable our customers, everything we do is aligned around our mission.

The 3 Challenges for Developing Recruiting AI that Supports D&I

The first challenge we’ve noticed is that even though most recruiting AI companies are well-intentioned, many have not consciously created a strategy to assess and track potential bias in the way their AI asks questions, interprets answers, and responds to candidates. There is currently no way to do this automatically, and no way to do it quickly at scale.

At Mya, we tackled this problem early on by creating a strategy we call “Conversation Design.” For several years we’ve maintained a team of professional linguists and conversation reviewers. They are our Conversation Design Team, and they prevent unconscious bias from affecting our AI’s questions, interpretations, and responses.

The second challenge we noticed was that unsupervised machine learning was not suitable for recruiting AI. The infamous Amazon example made it clear to us that even supervised machine learning could cause problems without proper governance. To put it simply, the data you train a recruiting AI on has to be free of bias itself.

We solve this problem at Mya through our AI Governance Processes, which ensure that the data we use to train Mya is representative and diverse, and that its selection is free from unconscious bias. This is how our AI engineers have been able to steadily improve Mya’s learning over time.
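Here’s a minimal sketch, with invented field names and toy data, of one kind of check a governance process like this can include: flagging training data whose outcome labels skew sharply across demographic groups before any model learns from it.

```python
from collections import Counter

# Toy illustration: the field names and records are invented for this sketch,
# not taken from Mya's actual (unpublished) governance tooling.
def label_rates(records, group_key="gender", label_key="advanced"):
    """Share of positive outcome labels per demographic group."""
    totals, positives = Counter(), Counter()
    for record in records:
        group = record[group_key]
        totals[group] += 1
        positives[group] += record[label_key]
    return {group: positives[group] / totals[group] for group in totals}

training_data = [
    {"gender": "female", "advanced": 1},
    {"gender": "female", "advanced": 0},
    {"gender": "male", "advanced": 1},
    {"gender": "male", "advanced": 1},
]

# A large gap between groups is a signal to investigate how the data
# was selected before training on it.
print(label_rates(training_data))  # {'female': 0.5, 'male': 1.0}
```

A check like this doesn’t prove the data is fair, but it surfaces skew early, while the fix is still a data-selection decision rather than a retraining project.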

The third challenge we noticed was that the culture of the organization that creates an AI has an outsize impact on that AI. The effect of an AI company’s culture on its AI is similar to how a parent passes many of their characteristics on to their child. We believe that a diverse company, with diverse teams and a strong mission-driven culture, is necessary to create the type of AI that our customers can trust with their employer brand. So that’s exactly what we built.

In the rest of this article, I’ll dive deeper into conversation design, governance, and diversity, and you’ll learn how each impacts our AI. Whether you’re thinking of building your own recruiting AI or exploring vendors, this article will arm you with the right processes for building and the right questions for evaluating.

Conversation Design Is Essential to Prevent Bias in AI

All bias is either explicit or implicit. With explicit bias, a person is aware of their feelings or attitudes. Implicit bias, on the other hand, is an unconscious bias that operates outside a person’s awareness. Naturally, implicit bias is much more challenging to tackle, and it is more common in the hiring process.

One way to prevent implicit bias is through careful structuring of questions and responses. At Mya, our conversation design team consists of several highly trained professional linguists. They’ve spent the last 2 years meticulously designing Mya’s conversations to prevent implicit bias, increase engagement, and provide the best possible candidate experience. These conversations are hosted in a feature of our conversational AI platform called the Conversation Cloud.

The Conversation Cloud is how we provide our customers with access to conversation templates covering hundreds of job titles. Choosing Mya means choosing an AI recruiting assistant optimized to prevent unconscious bias. Furthermore, our conversation design team provides conversation design services to our customers: they take a template, consult with you, and ensure any customization you want maintains the integrity of your automated hiring process.

Without Investing in Conversation Design, Building Your Own AI Is Risky

If you’re considering building your own recruiting chatbot in-house and you’re concerned about unconscious bias, I advise you to reconsider unless you’re willing to make a large investment in conversation design. Here are 2 examples of how small details in conversation design can create gender and economic bias.

Consider this example: A single number in your AI algorithm can cause bias 

An airline has a height requirement for hiring cabin crew members due to safety reasons and plans to ask for a candidate’s height as part of its vetting process. A flight attendant who’s too tall, or one too short to reach the overhead compartments, would be unsafe. There is no bias issue here.

However, if the AI qualified candidates based on different height requirements for men versus women, this would cause bias. Is the question being asked for safety reasons or for cosmetic ones? The latter creates an unfair bias in the hiring process. A 5’3” female flight attendant would be just as safe as a male flight attendant of the same height.
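To make this concrete, here’s a minimal sketch in Python of the two rules. The thresholds and function names are invented for illustration, not drawn from any real screening system:

```python
# Hypothetical screening rules; all thresholds are invented.

MIN_HEIGHT_IN = 62  # shortest height that can reach overhead compartments
MAX_HEIGHT_IN = 75  # tallest height that fits safely in the cabin

def is_qualified_fair(height_in: float) -> bool:
    """Unbiased: one safety-based range, applied to every candidate."""
    return MIN_HEIGHT_IN <= height_in <= MAX_HEIGHT_IN

def is_qualified_biased(height_in: float, gender: str) -> bool:
    """Biased: the gendered thresholds encode a cosmetic preference,
    not a safety requirement."""
    if gender == "female":
        return 64 <= height_in <= 71  # a stricter range just for women
    return MIN_HEIGHT_IN <= height_in <= MAX_HEIGHT_IN

# A 5'3" (63 in) woman passes the safety rule but fails the biased one.
print(is_qualified_fair(63))              # True
print(is_qualified_biased(63, "female"))  # False
```

The entire difference between a defensible rule and a discriminatory one is that single per-gender threshold, which is exactly why these details need dedicated review.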

Here’s another example: A single wrong word in a question can cause bias

A company wants to ensure that employees have a way to get to work and asks candidates if they have reliable transportation, which is an unbiased question. However, asking candidates if they have a car, and then disqualifying them if they don’t, is different.

An individual may still be able to do the job even if they don’t have a car. This question can create bias against populations who may not have cars, such as low-income applicants.
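Here’s the same idea as a minimal sketch, again with invented field names rather than anything from a real screening flow:

```python
# Hypothetical pre-screening checks; the candidate fields are invented.

def passes_biased_screen(candidate: dict) -> bool:
    # Screening on car ownership excludes candidates who rely on
    # transit, carpools, or cycling -- often lower-income applicants.
    return candidate.get("owns_car", False)

def passes_fair_screen(candidate: dict) -> bool:
    # Asks about the actual job requirement: reliably getting to work.
    return candidate.get("has_reliable_transportation", False)

commuter = {"owns_car": False, "has_reliable_transportation": True}
print(passes_biased_screen(commuter))  # False: wrongly disqualified
print(passes_fair_screen(commuter))    # True
```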

If you try to build a recruiting chatbot yourself, or purchase a prebuilt solution that does not use conversation design, it’s likely your automation will overlook these details. But we know, and our customers know, that when it’s important for your company to prevent unconscious bias, the small details have outsize impacts.

All AI Solutions Improve Efficiency, But Some Come With a Price

As we become more aware of the impact of unconscious bias in hiring, it’s good to recognize the potential for AI technology to reduce that impact in initial candidate engagement and pre-screening. However, it’s dangerous to assume all AI solutions are equally effective at preventing bias. It’s true that most AI solutions will improve efficiency. But when it comes to preventing unconscious bias, basic chatbots, resume parsing tools, and enhanced chatbots simply won’t do.

For example, AI resume parsing scans resumes for the keywords your team is looking for in a candidate. This seems like a non-issue on the surface, but resume parsing is rife with bias. In fact, Amazon’s resume parsing AI project was scrapped because it developed a preference for male applicants’ resumes. Depending on the keywords and language the AI looks for, and the data its machine learning algorithms are trained on, an AI resume parser can completely rule out more diverse talent.
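As a minimal illustration of how this happens, consider naive keyword scoring. If the keyword list is mined from the resumes of a historically homogeneous workforce, the scorer rewards how past hires described their work rather than job-relevant skill. The keywords below are invented for this sketch, though verbs like “executed” and “captured” were reportedly favored in the Amazon case:

```python
import re

def score_resume(text: str, keywords: set[str]) -> int:
    """Count distinct target keywords that appear in a resume."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return len(words & keywords)

# If these keywords were harvested from past (mostly male) hires,
# candidates who describe the same work differently score lower.
past_hire_keywords = {"executed", "captured", "dominated"}
print(score_resume("Executed and dominated quarterly launches", past_hire_keywords))  # 2
print(score_resume("Led and delivered quarterly launches", past_hire_keywords))       # 0
```

Both resumes describe the same accomplishment; only the vocabulary differs. That is the mechanism by which a parser trained on historical hires quietly filters out diverse talent.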

Many recruiting automation and candidate engagement companies have started to offer recruiting chatbots. The questions you must ask are: Do they have conversation design teams? Do they have AI Governance? Do they offer a Conversation Cloud and Conversation Design Services to their customers? Are they aware of unconscious bias in recruiting and in AI?

Using any form of recruiting chatbot or conversational AI without Governance and Conversation Design is a risky proposition.


2 Ways We’re Preventing Unconscious Bias at Mya

As companies increasingly develop AI recruitment tools, it’s important that they begin with the right mindset: being vigilant about preventing bias before, during, and after building a product. At Mya, our mission is to create a world of job possibilities for all. Our AI reflects that mission and our customers hold themselves to the same high standards of diversity and inclusion as we do in our culture and product.

Our conversational AI is the ideal choice for leaders who want to use recruiting automation to gain efficiency and support their diversity and inclusion initiatives. 

1) A Commitment to Diversity and Inclusion Is in Our DNA

Unintentional bias in AI starts from the initial motives and creation of the software and the data it’s built upon. One of the best ways to prevent unintentional bias is to ensure that the team building the tools represents a variety of perspectives.

We’ve been thoughtful about who we hire for our product teams as well as the entire company. Our recruiting team has a mandate to look for people with different backgrounds, genders, ethnicities, and years of experience. We believe diverse teams create a representative and inclusive culture – and contribute to better performance in AI.

We’re also committed to educating ourselves, the market, and our customers about unconscious bias. We’ve raised awareness around this topic extensively through popular articles, ebooks, videos, and podcasts. Our articles on unconscious bias are among the top 3 entry points to our website, and we’re the leading voice on the subject in AI recruiting.

2) We Follow a Rigorous Conversation Design Process

Processes that check for bias in our AI, both before it goes into production and after it’s launched, make sure we identify and address issues proactively. Our team regularly reviews how questions are being asked and the responses Mya is providing.

A few of the things we consider: Are we phrasing questions in a way that enables a man to answer more positively than a woman? Are there cultural or regional differences, such as English not being a candidate’s first language, that need to be taken into account so that Mya can acknowledge and understand the intent of a response?

We spend a lot of time at Mya creating these governance structures and analyzing our conversation design to prevent and address any unintentional bias. Our processes involve several teams beyond the AI and Conversation Design teams.

Our security and privacy team works with our AI team to address not just the legal but also the ethical considerations around candidate questions. At Mya, we also have Conversation Reviewers who regularly review how our system is interacting with candidates. Their work has enabled us to identify issues right away and continuously improve our AI.

The rise of AI continues to transform the way we work and live. Being mindful and vigilant about preventing bias is an important step in developing successful AI tools that have a positive impact on your company’s goals and priorities, such as creating a diverse workforce. At Mya, we are excited to be on this AI journey with you.

This article was originally published on August 12, 2019, and updated on July 31, 2020.

