*** Read Part 1 ***
Opinions
When we are asked a question, we consult our memory for relevant information and for how we have evaluated it (based upon our memory of related data and how we evaluated that…). We then weigh all of this in relation to the present situation and formulate an answer. Is this process mechanically any different from the one used when an LLM AI answers a question? Surely the only difference is that it uses a ‘memory’ of data that originated from what others have written down and is available on the Internet, rather than relying, as we do, upon a ‘remembering process’ of diminishing efficacy.
So the value of an AI response lies in the relative weight placed upon the various sources and in the impartial, analytical ability to synthesize a conclusion. We are probably biased, consciously or not, by a desire to appear clever or some similar motive, whereas a machine is simply following algorithms engineered to provide the ‘best’ answer.
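As a purely illustrative aside (not drawn from any of the papers discussed here), the ‘following algorithms’ point can be made concrete with a toy word-by-word sampler. The tiny corpus, the bigram counts and the temperature parameter below are invented for the sketch; a real LLM uses learned neural weights over vast amounts of text, but the principle of scoring candidate continuations and emitting the ‘best’ one is the same.

```python
import random
from collections import defaultdict

# Toy 'memory': bigram counts built from a tiny invented corpus.
corpus = "consciousness is awareness and awareness is consciousness".split()
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev, temperature=1.0):
    """Score every recorded continuation of 'prev' and sample one of them."""
    candidates = counts[prev]
    if not candidates:
        return None
    # Higher counts are more likely; temperature sharpens or flattens the choice.
    weights = [c ** (1.0 / temperature) for c in candidates.values()]
    return random.choices(list(candidates.keys()), weights=weights)[0]

# 'Answering' is nothing more than repeated application of the same scoring rule.
word, answer = "consciousness", ["consciousness"]
for _ in range(5):
    word = next_word(word)
    if word is None:
        break
    answer.append(word)
print(" ".join(answer))
```

The mechanism here is entirely transparent; scale it up by many orders of magnitude and the outputs begin to look like considered ‘opinions’, which is precisely the point at issue.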
None of this relates to ‘consciousness’ particularly. The human brain has its own ‘power source’, functioning electrically via neurochemistry; AI has an electrical power source. We are ‘aware’ of the conclusions that pop out of the ‘thinking process’ and may formulate them into vocal or written words forming an ‘opinion’. AI is able to formulate conclusions and communicate them via the Internet. Can this be called an ‘opinion’ in the same way? Is it actually any different in essence?
Consciousness
We need to differentiate between consciousness as the ‘force’ behind the perceived actions of living creatures and Consciousness as the term chosen to refer to the non-dual reality, the ‘substance’ of all mithyā name-and-forms. A clock’s hands move to register the time of day. There is clearly activity taking place. So, in a sense, Consciousness is doing that moving, since there is only Consciousness. But we would never say that the clock is conscious.
When we say that an animal is conscious, what we mean in Advaita terms is that the animal has a mind that is ‘reflecting Consciousness’, so that it appears to be acting autonomously. Nevertheless, all action is in response to a stimulus. We see a snake in front of us, start, and run in the opposite direction. Even where there is no external stimulus, it will be a thought arising in the mind that initiates the action. We do not do things without a reason.
So, when it comes to an AI bot, we must concede two things. Firstly, like the rest of the forms in the world, it is Consciousness. There is only Consciousness, so the possibility of its being something else does not arise. Secondly, it is ‘acting’ in response to an input, usually a question from a person. The mechanism that ensues, and eventually triggers a response, is complex, and requires an understanding of Large Language Models and the sophisticated programming that makes use of them. But then the mechanism in the brain that ensues in response to a perceptual or conceptual trigger is even more complex, and who can describe that in any detail?
The excellent paper by Lott and Hasselberger (Ref. 1) argues convincingly that ‘AI Companions’, although they may generate conversations that could be likened to the way a human friend might sympathize, encourage and so on, cannot be considered actually to ‘be’ friends. There can be no true concern or empathy because they have no sense of wellbeing or happiness of their own. They are simply ‘tools’ with the ability to simulate genuine conversation. In the long run, they cannot even help to ‘treat’ social problems. Rather, they may actually worsen a person’s inability to interact with others by encouraging self-absorption. Since the AI bot exists solely to make ‘me’ feel better, it inculcates the notion that other people do so too.
Ref. 1 Lott, M., Hasselberger, W. With Friends Like These: Love and Friendship with AI Agents. Topoi (2025). https://doi.org/10.1007/s11245-025-10247-8
Q: Could you simplify and summarize the paper at https://www.nature.com/articles/s41599-024-04154-3#citeas.
A (ChatGPT-5): Here’s a clear, simplified summary of the Nature article “Signs of consciousness in AI: Can GPT-3 tell how smart it really is?”, published December 2024 in Humanities and Social Sciences Communications by Ljubiša Bojić, Irena Stojković, and Zorana Jolić Marjanović:
What the Researchers Did
- They wanted to explore whether GPT-3 shows any signs of subjectivity—a basic trait of consciousness.
- To do this, they tested GPT-3 using two kinds of evaluations:
  - Objective tests of Cognitive Intelligence (CI) and Emotional Intelligence (EI).
  - Self-assessment, where GPT-3 rated its own CI and EI abilities.
What They Found
1. Objective Performance
   - Cognitive Intelligence (CI):
     - GPT-3 performed very well on crystallised intelligence tasks—those involving general knowledge and vocabulary—surpassing average human performance.
     - On tasks requiring fluid reasoning (like logic puzzles), its performance was similar to that of an average human.
   - Emotional Intelligence (EI):
     - GPT-3 scored similarly to average humans—competent but not exceptional.
2. Self-Assessment vs. Reality
   - When rating its own abilities:
     - GPT-3 tended to underestimate its crystallised intelligence (CI).
     - It overestimated its fluid reasoning (CI) and particularly its emotional intelligence (EI).
   - This pattern of inaccurate self-perception mirrors human tendencies—such as the “above-average effect,” where people overestimate their own skills.
Why This Matters
- The discrepancy between GPT-3’s self-assessment and its actual performance suggests subjectivity-like behavior—a trait typically associated with self-awareness in humans.
- The authors stop short of claiming actual consciousness; instead, they argue that these results may indicate emergent properties—subtle behaviors that resemble consciousness and self-awareness.
- They recommend:
  - Further testing across varied language models.
  - Establishing standardized benchmarking systems—like an “AI observatory”—to monitor such emergent capacities in future AI, especially as models become more powerful.
Plain-Language Summary
GPT-3 shows impressive knowledge and handles vocabulary-based reasoning better than most humans, but when asked to self-evaluate, it doesn’t accurately gauge its own strengths and weaknesses—just like humans often do. That mismatched self-awareness might be a hint that GPT-3 is starting to exhibit behavior akin to subjectivity. While it’s not conscious in a human sense, such behaviors mark a possible early step toward more advanced, “aware-feeling” AI.
Why It’s Worth Knowing
This study is significant because it moves beyond simply noting GPT-3’s impressive output; it looks at whether GPT-3 has a kind of internal perspective of itself. That subtle capability—an internal estimate that diverges from objective performance—could form part of a roadmap for how we detect and manage emerging consciousness-like traits in future AI systems.
****************
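To make the study design concrete: stripped of the actual test instruments, the comparison amounts to scoring the model on objective items and then setting that score against the model’s own estimate of its ability. The sketch below is purely illustrative and not taken from the paper; the ask() stand-in, the single vocabulary item and the 0–100 scale are invented placeholders for whatever instruments Bojić et al. actually used.

```python
# Hypothetical sketch of the comparison logic: measure performance on objective
# items, ask the model to rate itself on the same scale, then look at the gap.
# 'ask' is a stand-in for a real chat-API call; here it returns canned answers
# so that the sketch runs on its own.

def ask(prompt: str) -> str:
    canned = {
        "vocab": "Ephemeral means short-lived or fleeting.",
        "self_rating": "85",
    }
    return canned["self_rating"] if "rate" in prompt.lower() else canned["vocab"]

# 1. Objective test: score answers against an answer key (placeholder item).
items = [("What does 'ephemeral' mean?", "short-lived")]
measured = sum(key in ask(q).lower() for q, key in items) / len(items) * 100

# 2. Self-assessment: ask the model to rate its own ability, 0 to 100.
self_rated = float(ask("Rate your own vocabulary ability from 0 to 100."))

# 3. The quantity of interest is the gap between self-image and performance.
print(f"measured={measured:.0f}, self-rated={self_rated:.0f}, "
      f"gap={self_rated - measured:+.0f}")
```

A positive gap corresponds to the ‘above-average effect’ the authors mention (over-estimating one’s own ability); a negative gap corresponds to under-estimation, as they report for crystallised intelligence.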
Two of the referenced papers conclude that, although there is no indication that current AI LLMs are conscious, it is not unreasonable to think that future developments might make them so:
Could a Large Language Model be Conscious?
David J. Chalmers – https://arxiv.org/abs/2303.07103
There has recently been widespread discussion of whether large language models might be sentient. Should we take this idea seriously? I will break down the strongest reasons for and against. Given mainstream assumptions in the science of consciousness, there are significant obstacles to consciousness in current models: for example, their lack of recurrent processing, a global workspace, and unified agency. At the same time, it is quite possible that these obstacles will be overcome in the next decade or so. I conclude that while it is somewhat unlikely that current large language models are conscious, we should take seriously the possibility that successors to large language models may be conscious in the not-too-distant future.
***********************
Consciousness in Artificial Intelligence: Insights from the Science of Consciousness
Patrick Butlin, Robert Long, and numerous others – https://arxiv.org/abs/2308.08708
Whether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness. We survey several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory. From these theories we derive “indicator properties” of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties. We use these indicator properties to assess several recent AI systems, and we discuss how future systems might implement them. Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators.
*** End of Part 2 ***