Expert-in-the-loop: AI needs the right copilot too

Making sure the right human is embedded in your AI loop


AI, as we all know, can do great things. Its ability to 'learn', produce output and support almost any activity or product is growing daily. Used in the right way, it is hugely capable. Used in the wrong way, it is hugely problematic, and mitigating those problems is the subject of this article.

Almost all AI training courses or insights will talk about ‘Human-in-the-loop’ – where all AI-generated content, processes or other outputs are subject to human review, often at multiple stages. This guards against issues the AI itself may not pick up: inappropriate content or language, incorrect findings (hallucinations), misunderstood concepts, impossible activities or pictures, and many other things. The human reviews the AI's output to notice and correct these, and their corrections are often fed back as further training to help prevent similar issues in future.

This, obviously, is a vital concept. What is less discussed, though, is the skills and capabilities of the human (or humans) in the loop. How do we make sure we have the right human in that loop: the person or people best placed to spot and correct the AI as needed, to guide it towards the most helpful outputs, and to ensure that the mistakes made and corrected are fed back in the most advantageous way to help train the AI? In the same way that we’d look to an expert for specialist tasks like coding, piloting your holiday flight, advising on company strategy or fixing a step (shout out to Phil Dunphy), I think you’d want an appropriately trained human performing this role, especially where the output matters for customer contacts, strategic input or other critical tasks.

I think the phrase, whilst meaningful, does not convey the importance of the role, or the fact that the humans and the AI involved in a task are at least equally important to a successful outcome. ‘Expert-in-the-loop’ is far more descriptive of the real impact. The expert is the human who is responsible, or even accountable, for getting the best possible outcome from the AI. Not just any human will do; you want the best person for the job.

For any task where an organisation is going to use AI, I’d suggest:

  • Properly understand the process the AI is involved in, and map the process stages where the AI's output is influential
  • Clearly document the context of these stages and the knowledge a human will need to accurately review the output
  • Document the requirements of that expert in a role description, including the domain knowledge and AI expertise needed to contribute to the ongoing AI training (see the sketch after this list)
  • Continuously review usage of the AI tool(s) to ensure that the human review element does not expand beyond the expertise of the humans in the loop
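To make this concrete, here is a minimal sketch (in Python, and purely illustrative) of how such a stage-and-expert record might be kept. Every name in it, from the fields to the example stage, is a hypothetical assumption rather than a prescribed format; the point is simply that each stage where AI output is influential should record what the output is, why it matters, what expertise its reviewer needs, and who that reviewer is:

    from dataclasses import dataclass

    @dataclass
    class ReviewStage:
        """One process stage where AI output is influential (hypothetical schema)."""
        stage: str                       # where in the process the AI contributes
        ai_output: str                   # what the AI produces at this stage
        risk_if_wrong: str               # why an unreviewed mistake matters here
        domain_knowledge: list[str]      # expertise needed to judge the output
        ai_expertise: list[str]          # skills needed to feed corrections back
        assigned_expert: str | None = None  # named reviewer; None marks a gap

    # Hypothetical example: AI-drafted replies to customer complaints.
    register = [
        ReviewStage(
            stage="Draft customer complaint response",
            ai_output="Suggested reply text",
            risk_if_wrong="An incorrect or inappropriate reply reaches a customer",
            domain_knowledge=["complaints policy", "regulatory obligations"],
            ai_expertise=["prompt refinement", "feedback labelling"],
            assigned_expert=None,  # unfilled: no qualified expert in this loop yet
        ),
    ]

    # A simple audit: surface any stage that lacks an expert-in-the-loop.
    gaps = [s.stage for s in register if s.assigned_expert is None]
    print("Stages lacking an expert-in-the-loop:", gaps)

Keeping the register as data rather than prose means the last point above, continuously reviewing usage, can become a routine check: any stage with no assigned expert, or whose required expertise has outgrown its reviewer, shows up immediately.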

AI is clearly going to play a role in the future of work – the jury is still out on exactly what impact it will have and where it will deliver the most benefit. While that evolves, we must remember to think about our process for using and assuring AI, so that it produces better (and not just different) outputs.
