What happens when AI makes a mistake?

When your “smart” assistant isn’t that smart

People make mistakes – but so do AI-powered systems. What can you do when your smart assistant fails to deliver?

In many situations, people want their AI-powered system – such as a smart speaker or a voice assistant – to be a tool: something that helps them perform a task faster or focus on more important things. When cooking, driving or entertaining guests, people expect AI to assist from the background, not to take center stage.

But sometimes the AI-powered system fails at its seemingly simple task: it plays completely the wrong kind of music, or directs you to the coffee shop three towns over instead of the one you visit every day.


“If I ask my smart speaker to convert Fahrenheit degrees to Celsius degrees, it will tell me that in order to convert Fahrenheit to Celsius you need to multiply by this and subtract this from that. And I am like, I do not need this information.”

Female, 32, Helsinki
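

To see the speaker’s point, the conversion being recited is trivial to compute directly. A minimal Python sketch of the answer the assistant could simply have given (the function name is illustrative, not any vendor’s API):

    def fahrenheit_to_celsius(fahrenheit):
        # Standard conversion: subtract 32, then multiply by 5/9
        return (fahrenheit - 32) * 5 / 9

    # What the user wants is the answer, not the recipe:
    print(f"77 F is {fahrenheit_to_celsius(77):.1f} C")  # prints: 77 F is 25.0 C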



When people are engaged in an activity, they expect a convenient, frictionless experience from their AI-powered helper. When it does not deliver, they have to switch from a passive role to an active, instructing one.

Do I need to change or can I change the algorithm?

People have been using various AI-powered systems for years now and understand that the algorithms are not perfect. They also know that the system needs some input from them to produce more accurate results.

But providing that input is often difficult, if possible at all: giving feedback to the system is laborious and ineffective. There may be nothing you can do in the moment to fix the problem or to teach the AI so that the error does not happen again.

People are even willing to adjust their own behaviour to get better results next time: how should I rephrase my question to get the answer I want? Yet often there is no way to tell the system that the result it gave was plain wrong, leaving people frustrated that they cannot help it improve.

A little interaction can make a big difference

Being let down by the “smart” assistant makes the user unhappy. If you cannot interact with the system, you have to make do with its flawed help or turn it off and look for help elsewhere.

At the heart of the problem is interaction: people want quicker and more effective ways to let the system know what it is doing wrong. They also want to understand why the system failed; left in the dark, they start questioning its responses and wondering whether a hidden commercial logic lies behind the oddly behaving algorithm.

By allowing instant feedback that the AI service takes into account and adjusts its responses to, companies can increase people’s satisfaction with imperfect algorithmic decision-making. Instead of shouting at a black box, people feel they are actively engaging with their AI-powered system – and hopefully getting better answers along the way.


Everyday AI is a collaboration between Alice Labs and the Centre for Consumer Society Research, University of Helsinki, in partnership with Reaktor. The Engaging With EverydAI webinar took place on 5th May at 9am CET / 10am EET.