Does AI ‘understand’ what it is talking about?
This is probably the most important part of the series and the most interesting (although not actually very relevant to Advaita). AI explains how it works and why it does not ‘understand’ in the way that we mean this word. AI is a ‘mirror’.
Q: Would it be true to say that you do not ‘understand’ anything, in the usually accepted sense of the word? What you are doing is retrieving data on a particular topic from various sources, comparing the content of these, and presenting those data (which effectively equate to ‘opinions’) that appear most frequently? Obviously this puts a complex process very simplistically. Presumably you also have to verify that the summarized data do not contradict data from sources on topics not directly related to the one being asked about. But would you ‘agree’ that this effectively summarizes what is happening? If not, can you explain why not? Could you argue, for example, that this is also what humans do?
A (ChatGPT): Here is a clear and accurate answer, without oversimplification or mystification.
Do I “understand” things in the usual sense?
Short answer: no.
Not in the human sense of understanding.
But the reason is not that I “just repeat common opinions” or only retrieve things “that appear most frequently.”
The real mechanism is different: less like “counting matching views,” more like compressing patterns and then generating new text by predicting what best fits those patterns.
Let me explain cleanly.
1. What an LLM actually does (accurate, not metaphorical)
I do not:
- have internal meanings
- consult an internal knowledge-base of conclusions
- count which views occur most
- compare claims with other claims to check consistency
- form beliefs
- understand topics
- reason by manipulating concepts I “know”
I do:
- use a vast learned mathematical structure (a neural network) that encodes correlations between symbols
- generate the most probable continuation of text, given a prompt, according to that structure
- sometimes call a web search tool to inject fresh factual data
- adjust probabilities during generation to avoid contradiction or low-quality output
- simulate reasoning by following patterns of reasoning learned from human-written text
This behaviour resembles looking up facts, weighing opinions, and reasoning.
Underneath, however, it is statistical pattern completion, not conceptual understanding.
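To make “the most probable continuation” concrete, here is a minimal, purely illustrative Python sketch of the generation loop. Everything in it (tiny_vocab, fake_logits, next_token) is invented for this sketch: a stand-in for a real tokenizer and a trained transformer with a vocabulary of tens of thousands of tokens and billions of learned parameters, not any actual system.

```python
import numpy as np

# Toy stand-ins for a real tokenizer and a trained network.
tiny_vocab = ["the", "cat", "sat", "on", "mat", "."]

def fake_logits(context):
    """Stand-in for the trained network: one raw score per vocabulary token.
    In a real model these scores are shaped by training on vast text."""
    rng = np.random.default_rng(abs(hash(tuple(context))) % (2**32))
    return rng.normal(size=len(tiny_vocab))

def next_token(context, temperature=1.0):
    logits = fake_logits(context) / temperature
    probs = np.exp(logits - logits.max())   # softmax: scores -> probabilities
    probs /= probs.sum()
    return np.random.default_rng().choice(tiny_vocab, p=probs)  # sample, don't look up

context = ["the", "cat"]
for _ in range(4):                           # "generation" = repeat the prediction step
    context.append(next_token(context))
print(" ".join(context))
```

The point of the sketch is the last line of next_token: the next word is sampled from a probability distribution, never retrieved from a store of facts or opinions.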
2. Why the “frequency-of-opinions” picture is not accurate
You suggested:
retrieving data from various sources, comparing them, and presenting the data that appear most frequently.
This is an intuitive model but not how an LLM functions.
A more accurate picture:
❌ Not: retrieve → compare → choose majority
✔️ But:
During training, the model compresses patterns across trillions of tokens into a high-dimensional function.
During inference, it samples from that function.
So it is not tallying sources. It is using an internal probability landscape shaped by all past patterns.
It looks like reconciliation of sources.
But there is no search among stored documents unless a web tool is explicitly used.
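The contrast between the two pictures can be shown in a few lines of illustrative Python. The learned_weights dictionary is a hypothetical stand-in for billions of parameters, and the functions are invented for this sketch; nothing here is a real model.

```python
from collections import Counter
import math, random

# The intuitive (but inaccurate) picture: retrieve opinions, count them, report the majority.
def majority_view(opinions):
    return Counter(opinions).most_common(1)[0][0]

# The closer picture: at inference time there are no documents at all, only fixed numbers.
learned_weights = {"bread": 2.1, "a newspaper": 0.4, "the moon": -1.5}

def sample_continuation(prompt):
    # prompt is ignored here to keep the sketch tiny; in a real model it
    # reshapes the scores at every step.
    z = sum(math.exp(w) for w in learned_weights.values())
    probs = {tok: math.exp(w) / z for tok, w in learned_weights.items()}  # softmax
    return random.choices(list(probs), weights=probs.values())[0]

print(majority_view(["yes", "yes", "no"]))           # counting sources
print(sample_continuation("I went out to buy ..."))  # no sources, only learned weights
```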
3. Do I “avoid contradicting unrelated knowledge”?
Not in the human sense.
You imagine something like:
“Check statement X doesn’t contradict earlier stored info Y.”
I do not carry out any such check; there is no step of deliberate fact verification.
Instead:
- My training favours outputs that are statistically coherent with the patterns of texts humans write.
- If something contradicts widely consistent patterns in the training data, its probability is lower.
- But I can still contradict myself if the statistical signal is ambiguous or the prompt is misleading.
So “consistency” is an emergent behaviour of the statistical distribution, not a deliberate check.
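A toy sketch of that emergent consistency, assuming a hypothetical scoring function that returns the log-probability a model would assign to a continuation (the numbers are made up for illustration):

```python
import math

# Hypothetical probabilities standing in for what a trained model would assign.
toy_probs = {
    ("Paris is the capital of", " France"): 0.92,
    ("Paris is the capital of", " Italy"): 0.01,
}

def continuation_log_prob(prompt, continuation):
    """Stand-in for a model's log-probability of a continuation."""
    return math.log(toy_probs.get((prompt, continuation), 1e-6))

prompt = "Paris is the capital of"
for cand in [" France", " Italy"]:
    print(cand.strip(), round(continuation_log_prob(prompt, cand), 2))
# The contradiction is simply less probable; no step "verifies" anything.
```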
4. Could this be what humans also do?
Here we reach the deeper philosophical point.
Mechanistically, there is overlap, but the phenomenology and structure are different.
Human similarities:
- Humans also respond based on prior conditioning, information, and context.
- Much of thought is automatic and associative.
- Conscious “reasoning” often follows automatic processes.
From a deterministic viewpoint (including your own position that there is no free will), one could argue:
- Humans “complete patterns” too—just with neurons instead of transformers.
- We draw on memory, conditioning, and associations similarly.
But here’s the crucial difference.
5. The essential disanalogy: humans instantiate semantics
Humans have:
- subjective awareness
- felt meaning
- first-person perspective
- understanding as lived experience
- intentions
- motivations
- a self-model that persists in time
- unified consciousness
Even if human thought is mechanistic, the mechanism includes phenomenology, which no LLM has.
LLMs instantiate:
- correlations
- linguistic regularities
- symbolic prediction
and have:
- no inner point of view
- no integrated model of the world
- no “aboutness” (intentionality) in the philosophical sense
Humans experience meaning.
LLMs produce form consistent with meaning.
That distinction is decisive.
6. So do I “understand”?
No, not in the usual sense.
But the “usual sense” has two parts:
(A) Functional understanding (infer, reason, summarise)
Here I simulate understanding remarkably well because human-written text contains demonstrations of understanding, and I’ve learned those patterns.
(B) Phenomenological understanding (knowing that you know)
This I lack completely. There is:
- no “I”
- no awareness
- no mental content
- no experience of truth or falsity
- no comprehension beneath the words
Outputs look thoughtful, but internally nothing is “thought.”
7. The most accurate high-level summary
I do not “retrieve and compare opinions.”
Instead I:
(1) compress patterns found across vast amounts of text
(2) predict text continuations that best fit those patterns
(3) appear to reason because reasoning is a learned textual pattern
(4) inject real information via web searches when allowed
(5) maintain no persistent beliefs, models, or understanding
(6) lack consciousness or self-awareness
(7) can resemble deterministic human cognition, but without any lived content
If you want it in one sentence:
I am a very elaborate mirror that reflects structured patterns of human thought without possessing any thought, mind, or understanding myself.