Human-in-the-Loop

May 11, 2023

It isn’t practical to manually correct your AI system each time it uses incorrect data.

The world of artificial intelligence is rapidly expanding, and with it comes a new set of challenges. One of these is the impracticality of correcting an AI system each time it uses incorrect data. Today's modern data stack is designed primarily for business intelligence: reports and dashboards consumed by human decision-makers. Humans apply their domain knowledge and common sense to interpret the data, and if the data looks wrong, a person will double-check the values, correct any mistakes they find, or apply their intuition instead of naively using the data. This is known as human-in-the-loop decision-making.

However, most of the time, AI systems make individual automated decisions at scale without human review. Involving a business subject matter expert in individual decisions is rarely feasible when the system must operate at scale or keep services affordable for customers. Instead, a data scientist trains the system, a business subject matter expert reviews its behavior, and the AI system is then authorized for deployment into production. This is known as human-over-the-loop governance.

The problem is that AI has no common sense and limited domain knowledge: it only knows the historical data it was trained on. When new data falls outside the range of that training data, the outputs can be unexpected, even dangerous.
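To make this concrete, here is a minimal sketch of an out-of-range guard that defers a decision to a human when an input falls outside the value ranges observed during training. The feature names, ranges, and defer behavior are illustrative assumptions, not a specific product's API:

```python
# Minimal sketch of an out-of-range guard for automated decisions.
# Feature names and ranges below are hypothetical, for illustration only.

TRAINING_RANGES = {
    # feature: (min observed in training, max observed in training)
    "customer_age": (18, 95),
    "monthly_spend": (0.0, 25_000.0),
}

def out_of_range_features(record: dict) -> list:
    """Return the features whose values fall outside the training range."""
    flagged = []
    for feature, (lo, hi) in TRAINING_RANGES.items():
        value = record.get(feature)
        if value is None or not (lo <= value <= hi):
            flagged.append(feature)
    return flagged

def decide(record: dict) -> str:
    """Automate the decision only when inputs resemble the training data."""
    flagged = out_of_range_features(record)
    if flagged:
        # Out-of-distribution input: defer to a human instead of deciding blindly.
        return "DEFER_TO_HUMAN (out-of-range: " + ", ".join(flagged) + ")"
    return "AUTOMATED_DECISION"

print(decide({"customer_age": 42, "monthly_spend": 1_200.0}))   # AUTOMATED_DECISION
print(decide({"customer_age": 42, "monthly_spend": 90_000.0}))  # DEFER_TO_HUMAN ...
```

A guard like this does not give the AI common sense, but it restores a narrow version of the human-in-the-loop safety net for exactly the inputs the model is least equipped to handle.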

The lack of a human in the loop results in lower day-to-day governance standards and weaker risk protections, and the saying "Garbage in, garbage out" applies even more to AI than to business intelligence.

We need to start building AI-ready data pipelines based on interpretable feature engineering with built-in data quality checks, audit trails, and monitoring.
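As a rough illustration of what built-in data quality checks, audit trails, and monitoring can look like inside a pipeline, here is a hedged sketch in plain Python. The specific checks, the JSON audit format, and the metrics counters are assumptions made for this example, not FeatureByte's implementation:

```python
import json
import logging
from datetime import datetime, timezone
from typing import Optional

# Illustrative sketch only: checks, feature names, and audit format are
# assumptions for this example.

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("feature_pipeline.audit")

def check_features(row: dict) -> list:
    """Run simple data quality checks; return a list of failure descriptions."""
    failures = []
    if row.get("monthly_spend") is None:
        failures.append("monthly_spend is missing")
    elif row["monthly_spend"] < 0:
        failures.append("monthly_spend is negative")
    if not 18 <= row.get("customer_age", -1) <= 120:
        failures.append("customer_age outside plausible bounds")
    return failures

def process_row(row: dict, metrics: dict) -> Optional[dict]:
    """Validate a row, write an audit record, and update monitoring counters."""
    failures = check_features(row)
    audit_record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "row_id": row.get("id"),
        "passed": not failures,
        "failures": failures,
    }
    audit_log.info(json.dumps(audit_record))  # audit trail entry
    metrics["rows_total"] = metrics.get("rows_total", 0) + 1
    if failures:
        metrics["rows_rejected"] = metrics.get("rows_rejected", 0) + 1
        return None  # quarantine the row instead of feeding garbage downstream
    return row

metrics = {}
process_row({"id": 1, "customer_age": 42, "monthly_spend": 1_200.0}, metrics)
process_row({"id": 2, "customer_age": 42, "monthly_spend": -5.0}, metrics)
print(metrics)  # e.g. {'rows_total': 2, 'rows_rejected': 1}
```

The point of the design is that bad data is caught, logged, and counted inside the pipeline itself, so governance does not depend on a human happening to notice a wrong number downstream.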
