Why recruiting teams should care about the quality of their AI

Quality is paramount when it comes to recruiting AI and chatbots. Even a single bad interaction with a recruiting chatbot can cause a candidate to drop off. In a recent Digitas survey, 73% of respondents said they would never use a chatbot again if they had a single bad experience during their first encounter. Success in using AI for recruiting hinges on your chatbot's ability to understand, recognize, and seamlessly handle the many variables in human conversation.

It's important for HR AI buyers to be able to evaluate chatbot quality. However, some reports indicate that 60 to 80% of HR professionals feel they have only a limited understanding of AI, and purchasing a chatbot solution successfully requires HR leaders to recognize both the type of chatbot they need and the type of experience they'd like to create.

Most AI vendors in the talent acquisition market have built reputable products. Many recruiting chatbots available today will elevate your employer brand and deliver a positive experience to each candidate. But this is largely because the bar has been set so low: 85% of candidates are used to not hearing back from employers at all after submitting a job application. Against that backdrop, providing candidates with any response that allows two-way dialogue is a step in the right direction.

But as more and more companies adopt chatbots, simply having a recruiting chatbot won't be enough to differentiate yourself. So when purchasing a recruiting chatbot, spare yourself and your IT team the headache of having to upgrade and reintegrate with a superior solution in two years' time. Focus on quality now, and acquire a chatbot solution that can deliver the best possible candidate experience.

I've put together the following four examples to illustrate some of the key conceptual differences in the conversational experiences that different chatbots can create. Most of these examples, with the exception of Amazon, are from years ago, and the chatbots involved were deployed in broad consumer-facing roles. Still, the concept covered in each example will help you empathize with candidates and deepen your understanding of recruiting chatbots.

Siri won't let you talk about music the way you like to talk about music

Users engage Siri on an almost infinite variety of topics. The level of understanding Siri has is quite impressive considering the breadth of functions it can perform. However, that breadth often comes at the cost of depth on any given topic. In this example, a user has engaged Siri in a conversation about music. Siri has only a surface-level understanding of the language humans use to talk about music, so if users want Siri to play them music, they have to communicate with it in a very specific and limited way.

When we feel like we aren't being understood, we get frustrated. Imagine being stuck in a foreign country where you don't speak the language and urgently needing to find a public restroom. It would be an extremely frustrating experience. You'd try every word for restroom you knew in your native language, to no avail, and eventually be reduced to pantomiming to reach your goal. The conversational environment would constrain you to a narrow set of options. It doesn't feel good to have no options, and it doesn't feel good to be unable to communicate in your natural way of speaking.

Similarly, candidates don't feel good when they are restricted to a narrow set of options. Like the traveler in the restroom example, candidates urgently need something: a job. And rather than being reduced to the equivalent of pantomiming, they would much rather speak to someone who understands their language and their intent. Bear with me here. One category of recruiting chatbots, known simply as bots, offers candidates only yes/no or multiple-choice questions as communication options.

Bots create a restrictive experience that is better than no communication at all, but they don't create the best possible experience for candidates. Bots aren't bad; for some companies, they may be the ideal solution. But if you want your recruiting chatbot to create the best possible candidate experience, basic bots are not your best bet.
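To make the distinction concrete, here is a minimal, hypothetical sketch in Python. It is not any vendor's actual implementation; the menu options, keywords, and intent names are all assumptions made for illustration. It contrasts a scripted multiple-choice bot with one that maps free text onto intents.

```python
# Illustrative sketch only: not any vendor's implementation.

# A scripted "bot" can only branch on a fixed menu of replies.
SCRIPTED_MENU = {
    "1": "Great! Which location are you interested in?",
    "2": "Thanks for your time. Goodbye!",
}

def scripted_bot(candidate_reply: str) -> str:
    # Anything outside the menu is a dead end.
    return SCRIPTED_MENU.get(candidate_reply.strip(),
                             "Sorry, please reply with 1 or 2.")

# An intent-based approach interprets free text instead.
# (Real systems use statistical language models; keywords keep the sketch short.)
INTENT_KEYWORDS = {
    "job_search": ["job", "opening", "position", "role", "hiring"],
    "application_status": ["status", "heard back", "update"],
}

def classify_intent(candidate_reply: str) -> str:
    text = candidate_reply.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "unknown"

print(scripted_bot("Do you have any engineering roles?"))    # dead end: asks for "1 or 2"
print(classify_intent("Do you have any engineering roles?")) # "job_search"
```

Even this toy intent classifier gives the candidate room to write naturally, while the scripted bot dead-ends on anything outside its menu.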

A person strays from Poncho's decision tree...what happens next?

Poncho, a now-defunct chatbot deployed via Facebook Messenger, was designed to tell you about the weather. It is another example of a chatbot that can't tolerate the different ways users may phrase things.

Here it recognizes only the first part of the question, "What's the weather like in Brooklyn," and misses the modifier "this weekend." After that, it is unable to pick up on the user's clarifying statements, creating a frustrating experience.

It's important to note that this interaction would never happen with a conversational AI (CAI), because a CAI would understand how the modifier "this weekend" affects the question "What's the weather like?" For example, a CAI built for recruiting would be able to correctly answer "Do you have any jobs available in San Francisco?" and variations of it.
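As an illustration only (this is not Poncho's or any CAI's real pipeline), here is a tiny Python sketch of why the modifier matters: the intent stays the same, but an extracted "timeframe" slot changes what the bot should actually look up. The function name and regex patterns are hypothetical and chosen just for this example.

```python
# Hypothetical sketch: separate the intent from its modifiers ("slots").
import re

def parse_weather_question(text: str) -> dict:
    """Extract a location and an optional time modifier from a weather question."""
    text_lower = text.lower()
    location_match = re.search(r"in ([a-z\s]+?)(?:\s+this\s+\w+)?\??$", text_lower)
    time_match = re.search(r"this (weekend|week|evening|morning)", text_lower)
    return {
        "intent": "get_weather",
        "location": location_match.group(1).strip() if location_match else None,
        "timeframe": time_match.group(1) if time_match else "now",
    }

print(parse_weather_question("What's the weather like in Brooklyn?"))
# {'intent': 'get_weather', 'location': 'brooklyn', 'timeframe': 'now'}
print(parse_weather_question("What's the weather like in Brooklyn this weekend?"))
# {'intent': 'get_weather', 'location': 'brooklyn', 'timeframe': 'weekend'}
```

A production conversational AI relies on statistical language understanding rather than hand-written patterns, but the principle is the same: the modifier must change the answer, not be silently dropped.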

TayTweets learns the wrong things and says the wrong things

In the AI world, Tay is infamous. Tay was an AI that Microsoft released on Twitter that used machine learning to evolve its communication style over time. Its machine learning algorithms were fed general unstructured data from the Twitter community, with no screening by the Microsoft team.

Within a short time of launching, Tay went from writing mild tweets such as "So glad to meet you I love people" to sending out derogatory and racist messages through its account. Tay is a great case study in what can go wrong when you don't have the right governance around the data you're using to power machine learning.

While Tay is an extreme example that's quite unlikely to recur with any recruiting chatbot, the consequences of running machine learning algorithms without proper oversight are serious. Just look to Amazon's scrapped recruiting AI project.

It was trained on a dataset made up almost entirely of resumes from men and developed a bias against female engineers. The resulting fallout was disastrous for Amazon's employer brand, drawing a massive outcry from the public, from media outlets, and from potential candidates on Twitter. The story is still talked about to this day.

This would never happen with Mya, as we have a rigorous internal review process that all of the data we use to train our AI goes through before it is exposed to Mya. Our internal processes are built to keep unintentional bias out of Mya's algorithms. Bias and its impact on diversity is only one of many issues that can crop up from bad data.
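By way of illustration only (this is not Mya's actual review process), a pre-training data audit can be as simple as checking whether any value of a sensitive attribute dominates the dataset before a model ever sees it. The field name, threshold, and sample data below are hypothetical.

```python
# Hypothetical pre-training data audit: flag dominant attribute values
# before the data reaches a machine learning model.
from collections import Counter

def audit_training_records(records, attribute, max_share=0.7):
    """Return warnings for attribute values that dominate the dataset."""
    counts = Counter(record.get(attribute, "unknown") for record in records)
    total = sum(counts.values())
    warnings = []
    for value, count in counts.items():
        share = count / total
        if share > max_share:
            warnings.append(
                f"'{value}' accounts for {share:.0%} of records; "
                f"a model trained on this data may learn to favor it."
            )
    return warnings

# Example: a resume dataset skewed heavily toward one group.
sample = ([{"self_reported_gender": "male"}] * 85
          + [{"self_reported_gender": "female"}] * 15)
for warning in audit_training_records(sample, "self_reported_gender"):
    print(warning)
```

A check like this is only a starting point; human review of what the data contains, not just how it is distributed, is what catches the problems a simple count cannot.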

Don't choose a chatbot: get a conversational AI

Automating recruiting functions with a basic chatbot can leave you stuck with a bot that doesn't understand context, can't hold a conversation, and performs unpredictably. Engaging with such a bot frustrates candidates and, in some cases, can even lead them to stop doing business with you entirely.

By entrusting your candidate experience to a basic chatbot, you run the risk of your time-to-hire rising, candidate satisfaction scores plummeting, and engagement rates dropping.

However, investing in the right type of chatbot, such as a conversational AI, ensures that automation improves the candidate experience rather than detracting from it.

If you’re ready to give your talent acquisition team their time back and place the best candidates into open positions, say hello to Mya and learn how we can help you do just that.