Can AI Ever Not Hallucinate?

The quick answer is no. There are many layers to why AI will always suffer from hallucination.

At the core, however, is probabilistic machine learning, the engine at the heart of AI. It is randomness that allows the machine to learn patterns without needing to be rigorous and complete.

If we expected the machine to learn by being complete, we would default to exhaustive rule-based programming, making data analysis and trend detection computationally & humanly impossible.

The genius of machine learning is that the machine learns randomly and incompletely and yet predicts very well. We humans accept that we do not know exactly what the machine has learnt, hence the idea that AI/ML is a black box whose output we cannot explain. The randomness of AI also makes it non-deterministic, i.e. AI gives different answers to the same question, as the sketch below illustrates.
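To make this concrete, here is a minimal, purely illustrative sketch. The vocabulary and probabilities are made-up assumptions, not taken from any real model; the point is only that sampling-based decoding can return a different answer on each run, while greedy decoding merely hides the underlying uncertainty.

```python
import numpy as np

# Toy next-token distribution, as a language model might produce.
# The vocabulary and probabilities are hypothetical.
vocab = ["2021", "2022", "2023"]
probs = [0.5, 0.3, 0.2]

# Sampling-based decoding: each run can return a different answer.
for _ in range(3):
    print("sampled answer:", np.random.choice(vocab, p=probs))

# Greedy decoding always returns the most likely token, but the model
# is still probabilistic underneath - the other answers simply have
# non-zero probability.
print("greedy answer:", vocab[int(np.argmax(probs))])
```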

This is not something that can be solved outright. There are many attempts to minimise it, but randomness is the core of ML/AI. Some researchers force AI to be deterministic by running the model with a fixed random seed, but that does not remove the randomness; it merely pins the model to one version of it.
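A small sketch of that point, reusing the same toy distribution as above (again, the numbers are illustrative assumptions): fixing the seed makes the output repeatable, but a different seed gives a different, equally "deterministic" answer.

```python
import numpy as np

vocab = ["2021", "2022", "2023"]
probs = [0.5, 0.3, 0.2]

# Two generators with the same seed reproduce the same "answer"...
rng_a = np.random.default_rng(seed=42)
rng_b = np.random.default_rng(seed=42)
print(rng_a.choice(vocab, p=probs) == rng_b.choice(vocab, p=probs))  # True

# ...but the answer is still drawn from a distribution: a different
# seed can land on a different token. The randomness has been pinned,
# not removed.
rng_c = np.random.default_rng(seed=7)
print(rng_c.choice(vocab, p=probs))
```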

Some agents add retrieval-augmented generation (RAG) to ground responses in documentation. In my experience this still fails even when the dates and numbers are clearly stated in the documents. Such grounding and self-verification will improve, of course, but the extra steps that rein in randomness have to be hard-coded, which is again a non-ML philosophy and resource-intensive to the point of being borderline impossible; and each added verification step or model brings its own randomness and hallucinations. The sketch below shows why grounding narrows the problem rather than removing it.
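This is a minimal, hypothetical RAG sketch; the documents, the naive keyword retriever, and the generate_answer placeholder are all names I have invented for illustration. Retrieval narrows the context the model sees, but the final answer still comes from a probabilistic generation step, which is why a clearly stated date can still be misread.

```python
# Hypothetical RAG pipeline: retrieve a passage, then generate from it.
documents = {
    "contract.txt": "The agreement was signed on 12 March 2021.",
    "minutes.txt": "The board met quarterly throughout 2022.",
}

def retrieve(query: str) -> str:
    # Naive keyword overlap standing in for a real vector search.
    scores = {name: sum(word in text.lower() for word in query.lower().split())
              for name, text in documents.items()}
    return documents[max(scores, key=scores.get)]

def generate_answer(query: str, context: str) -> str:
    # Placeholder for an LLM call; in a real system this step samples
    # from a distribution and can still misread the retrieved context.
    return f"Based on '{context}', the answer to '{query}' is ..."

query = "When was the agreement signed?"
print(generate_answer(query, retrieve(query)))
```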

AI is good for light-hearted content and banter, but when it comes to anything serious, such as extracting dates, statements, or other content from documents for professional use cases, it is important to be wary.

Recently Deloitte had to refund part of a fee after "AI-ing" one of its reports. This obviously comes across as unprofessional and discounts any effort put into making the report. Further, hallucination is one of the main reasons AI is not deployed more widely in professional settings, fuelling talk of AI being a bubble.

A point I would like to make here is that AI seems good for light-hearted content and banter because there is no absolute truth to such content: while AI may not be wrong, it need not be right either. The risk of hallucination is still there even for light banter; it is just muted and goes unnoticed.

It reminds me of the second Harry Potter book, where a diary writes back to Ginny Weasley and Harry Potter. While AI is not as devious (yet!), its engine is quite random and chaotic!