Breaking Free from Confirmation Bias in AI: A Call for a Better Approach to All Things AI

Robert Engels
3 min read · Mar 15, 2024


Being in San Francisco, this vibrant, living city full of ideas and willingness to move onward, I could not help getting into a thought or two on the sociology of trends and commercial interests. In our current society we often rely on verification and confirmation to shape our ideas and opinions. Social media, news, and opinion platforms tend to reinforce our beliefs, and even academic resources like arXiv.org can unintentionally contribute to this echo chamber by not being peer-reviewed. The trend is clearly visible in the AI industry, where big ideas, big money, and authoritative messaging can overshadow the importance of critical evaluation and scientific inquiry. Instead of acknowledging and solving observed problems, issues with AI are made undiscussable with arguments like: “Why are you so negative? The issue will be solved if we only have more (more data, more time, more compute, more money, more whatever).”

As a seasoned AI scientist, I am concerned that our collective focus on confirmation may hinder progress in a field full of (real) potential. That feeling gets stronger when walking through San Francisco, amid richness, beauty, tents, people sleeping in cars for lack of housing, supercars, and fentanyl addicts beside them. The glass of the tech scene here always seems to be half-full, while the issues that surface when you really apply the tech seem to get marginalised (just like the problems you see around you on the streets). Our eagerness to show that a method works can lead us to overlook underlying issues that need addressing. When we become too attached to our hypotheses, we may fail to consider alternative solutions or improve upon our existing models.

The consequences of such confirmation bias are significant. We may set unrealistic expectations for the people around us and start to believe our own biases, all of which leads to disillusionment when those expectations are not met. This pattern is all too familiar in the world of AI, and I fear we are on the brink of another wave of disappointment with generative AI if we do not heed the warnings we are currently getting.

To break free from this cycle, we must adopt a more scientific approach. Only if we acknowledge issues and document our findings, clearly identifying where things go wrong instead of waiting for the “next round” of deliveries in the form of new “black boxes”, will we start to deliver real value. By doing so, we can ensure that our AI solutions are grounded in evidence, reproducible, and open to falsification. This approach encourages us to challenge our assumptions, explore alternative solutions, and continuously improve upon our work.
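To make that concrete: here is a minimal sketch, in Python, of what treating a model claim as a falsifiable hypothesis can look like. All the names in it (model_predict, eval_report.json, the toy cases) are hypothetical placeholders, not any particular system. The point is the shape of the workflow: state the acceptance criterion before the run, pin the seed so the result is reproducible, and write every failure to disk instead of arguing it away.

```python
import json
import random

# A falsifiable claim, stated up front, before any results come in.
HYPOTHESIS = "model answers >= 90% of held-out cases correctly"
THRESHOLD = 0.90
SEED = 42  # pinned seed so the run is reproducible

def model_predict(question: str) -> str:
    # Placeholder for whatever system is under test (an LLM call,
    # a pipeline, ...). Here it guesses, to keep the sketch self-contained.
    return random.choice(["yes", "no"])

def evaluate(cases: list[dict]) -> dict:
    random.seed(SEED)
    failures = []
    for case in cases:
        answer = model_predict(case["question"])
        if answer != case["expected"]:
            failures.append({"case": case, "got": answer})
    accuracy = 1 - len(failures) / len(cases)
    return {
        "hypothesis": HYPOTHESIS,
        "accuracy": accuracy,
        "confirmed": accuracy >= THRESHOLD,
        "failures": failures,  # documented, not explained away
    }

if __name__ == "__main__":
    cases = [
        {"question": "Is water wet?", "expected": "yes"},
        {"question": "Is fire cold?", "expected": "no"},
        {"question": "Is 2 + 2 = 4?", "expected": "yes"},
    ]
    report = evaluate(cases)
    # The full report, including every failure, goes on record
    # whether or not it confirms the hypothesis.
    with open("eval_report.json", "w") as f:
        json.dump(report, f, indent=2)
    print(report["hypothesis"], "->",
          "confirmed" if report["confirmed"] else "rejected")
```

Nothing in this sketch is specific to AI; that is the point. It is the ordinary scientific loop, applied to a model instead of a lab experiment, and it makes “it will be solved with more data” an empirical claim rather than a conversation-stopper.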

By adopting the working methods of science, we can build AI solutions that are not only innovative and efficient but also robust, reliable, and responsible when applied in practice. As we navigate the ever-evolving landscape of AI, let us commit to a culture of documented inquiry and evidence-based decision-making. What’s more, it will force more openness in the development of new AI technology. Instead of starting with grand concepts and substantial investments designed to block competition, which can push development down an unfavorable path, we should explore, implement, and test new directions openly, with proactive involvement early on. Doing so will lead you to the right use cases for the AI technology currently around you, instead of chasing the pot of gold at the end of the rainbow, creating overinflated expectations and hindering the positive sides of AI from being implemented.


Robert Engels

Broad interest in topics like Semantics, Knowledge Representation, Reasoning, and Machine Learning, and putting it all together in intelligible ways.