
This chapter covers
- Hallucinations, one of AI’s most important limitations
- Why hallucinations occur
- Whether we will be able to avoid them soon
- How to mitigate them
- How hallucinations can affect businesses and why we should keep them in mind whenever we use AI
Chapter 1 provided an overview of how current AI works. We now turn to its limitations, which will help us better understand what AI can and cannot do and how to use it more effectively.
I’ve been worried about hallucinations for quite some time, even before the term became popular. In my book, Smart Until It’s Dumb: Why Artificial Intelligence Keeps Making Epic Mistakes (and Why the AI Bubble Will Burst) (Applied Maths Ltd, 2023), I called them “epic fails” or “epic mistakes,” and I expressed my skepticism that they would ever be fully resolved:
It seems to me that every time an epic fail is fixed, another one pops up. . . . As AI keeps improving, the number of problematic cases keeps shrinking and thus it becomes more usable. However, the problematic cases never seem to disappear. It’s as if you took a step that brought you 80% of the way toward a destination, and then another step covering 80% of the remaining distance, and then another step to get 80% closer, and so on; you’d keep getting closer to your destination but never reach it.
It also seems that each step is much harder than the previous ones; each epic fail we find seems to require an increasingly complicated solution to fix.
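To make the analogy concrete: if each step closes 80% of the remaining gap, then after n steps a fraction 0.2^n of the distance remains, which shrinks toward zero but never reaches it. Here is a minimal Python sketch of that geometric tail (an illustration of the analogy, not code from the book):

```python
# Each step covers 80% of the remaining distance, so the leftover gap
# shrinks geometrically (0.2 ** n) without ever reaching zero.
remaining = 1.0  # fraction of the distance still to cover
for step in range(1, 6):
    remaining *= 0.2  # close 80% of what is left
    print(f"After step {step}: {remaining:.5f} of the distance remains")
```

After five steps, only 0.032% of the distance is left, yet some gap always remains; in the same way, each round of fixes makes AI more usable without ever eliminating the problematic cases.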