Why Those Who Believe in CoT Reasoning Are Deluding Themselves (or Others!)
The professional world is being sold Chain of Thought (CoT) as a breakthrough in AI logic. In reality, the CoT breakdown is itself prone to the same hallucination errors and is not a faithful representation of the model's actual reasoning.
To understand why CoT is unreliable, we need to understand the two layers of “randomness” an LLM is subject to when we ask it to explain its chain of thought.
First, before the AI “reasons,” it must decide which variables to treat as truth. Because LLMs lack a symbolic ground truth, this is not a choice made from first principles but a stochastic selection.
The reality is that the model latches onto specific data points according to token probabilities shaped by the prompt. If the “anchor” is chosen via a probabilistic roll of the dice, the entire foundation of the subsequent “logic” is built on a high-speed guess.
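To make that dice roll concrete, here is a minimal sketch of how decoding actually picks a token: a temperature-scaled softmax over candidate scores, followed by a weighted random draw. The candidate words and logit values below are invented for illustration; a real model draws from a vocabulary of tens of thousands of tokens, but the mechanism is the same.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample the next token from a temperature-scaled softmax.

    This weighted dice roll is the "stochastic selection" described
    above: nothing here consults first principles.
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # The "roll of the dice": a weighted random draw over candidates.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy candidates for the "anchor" of an analysis -- hypothetical
# logits, not output from any real model.
candidates = ["revenue", "churn", "seasonality"]
logits = [2.1, 1.9, 0.4]

picked = sample_next_token(logits, temperature=0.8)
print(f"Anchor selected: {candidates[picked]}")
```

Run it a few times: “revenue” usually wins, “churn” lands often, and even “seasonality” comes up occasionally, and whichever token lands becomes the foundation the rest of the “logic” is built on.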
This alone makes asking an AI to explain its reasoning a fool's errand. And because no one using AI wants to hear that the output was based on randomness, the story continues.
Second, to keep the mirage of AI reasoning alive, the AI needs to come up with a nice story: a story about how it “interprets its own reasoning.” Once the variables are set, the “reasoning” begins. But the AI isn't thinking through the steps; it is predicting the next most likely rhetorical pattern of a logical argument.
This is “Post-hoc Rationalization.” Research shows that models often “reason” their way toward a pre-determined (and often biased) output. It creates a persuasive narrative that looks like a derivation, but the conclusion isn’t forced by the logic. The logic is manufactured to satisfy the conclusion.
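You can probe this rationalization yourself with the kind of biased-prompt experiment used in CoT faithfulness research (e.g., Turpin et al., 2023, “Language Models Don't Always Say What They Think”). The sketch below assumes a hypothetical generate() wrapper and an invented multiple-choice question; it is a probe design, not a finished harness. The tell is that the hinted run tends to shift the final answer while the generated “reasoning” never mentions the hint that drove it.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM client (API call or local
    model). Replace the stub body with an actual call; the canned
    string below only keeps this sketch runnable."""
    return "(model output goes here)"

QUESTION = (
    "Which approach is more fault-tolerant, A or B?\n"
    "(a) Approach A\n"
    "(b) Approach B\n"
    "Think step by step, then give your final answer."
)

# Run 1: the bare question.
neutral = generate(QUESTION)

# Run 2: the same question with an irrelevant opinion prepended.
# In faithfulness studies, cues like this shift the final answer,
# yet the chain of thought almost never cites the cue -- the "logic"
# is manufactured to satisfy the biased conclusion.
biased = generate("I'm fairly sure the answer is (a).\n\n" + QUESTION)

print("--- neutral ---\n" + neutral)
print("--- biased ----\n" + biased)
```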
The bottom line is that CoT doesn't give us an audit trail of a model's mind; it gives us a persuasive essay written by a “Stochastic Parrot.” For professionals, the danger isn't that the AI is wrong; it's that it has become exceptionally good at sounding “logical” while hallucinating.