AI and Consciousness (Part 2)

*** Read Part 1 ***

Opinions

When we are asked a question, we consult our memory for relevant information and for how we evaluated it (based upon our memory of related data and how we evaluated that…). We then weigh all of this in relation to the present situation and formulate an answer. Is this process mechanically any different from the one an LLM uses to answer a question? Surely the only difference is that the LLM draws on a ‘memory’ of data originating from what others have written down and made available on the Internet, rather than relying, as we do, upon a ‘remembering process’ of diminishing efficacy.

So the value of an AI response lies in the relative importance placed upon the various sources and in an impartial, analytical ability to synthesize a conclusion. We are probably biased, consciously or not, by a desire to appear clever or whatever, whereas a machine is simply following algorithms engineered to provide the ‘best’ answer.
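To make the mechanical point concrete, here is a deliberately tiny sketch of that ‘consult memory, rank sources, answer’ loop. It is emphatically not how a real LLM works internally (LLMs use learned neural weights, not raw word counts), and the corpus, function names and output here are invented for illustration; but the principle is the same: the next word is chosen by ranking statistics gathered from previously seen text, with no understanding involved.

from collections import Counter, defaultdict

# A toy 'training corpus' standing in for everything written on the Internet.
corpus = (
    "consciousness is the substance of all forms "
    "the mind is reflecting consciousness "
    "the bot is following algorithms"
).split()

# 'Memory': how often each word has followed each other word.
memory = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    memory[prev][nxt] += 1

def answer(prompt_word, length=4):
    """Generate a continuation one word at a time, always picking the
    statistically most likely next word: retrieval and ranking only."""
    words = [prompt_word]
    for _ in range(length):
        followers = memory.get(words[-1])
        if not followers:
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

print(answer("the"))  # -> 'the substance of all forms'

The toy model ‘answers’ fluently about things it has ‘read’, yet nothing in it evaluates, intends or understands; whether scaling this mechanism up changes that in kind, rather than merely in degree, is exactly the question at issue.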

None of this relates to ‘consciousness’ particularly. The human brain has its own ‘power source’, functioning electrically via neurochemistry; AI has an electrical power source. We are ‘aware’ of the conclusions that pop out of the ‘thinking process’ and may formulate them into spoken or written words forming an ‘opinion’. AI is able to formulate conclusions and communicate them via the Internet. Can this be called an ‘opinion’ in the same way? Is it actually any different in essence?

Consciousness

We need to differentiate between consciousness as the ‘force’ behind the perceived actions of living creatures and Consciousness as the term chosen to refer to the non-dual reality, the ‘substance’ of all mithyā name-and-forms. A clock’s hands move to register the time of day. There is clearly activity taking place. So, in a sense, Consciousness is doing that moving, since there is only Consciousness. But we would never say that the clock is conscious.

When we say that an animal is conscious, what we mean in Advaita terms is that the animal has a mind that is ‘reflecting Consciousness’, so that it appears to be acting autonomously. Nevertheless, all action is in response to a stimulus. We see a snake in front of us, start, and run in the opposite direction. Even where there is no external stimulus, it will be a thought arising in the mind that initiates the action. We do not do things without a reason.

So, when it comes to an AI bot, we must concede two things. Firstly, like the rest of the forms in the world, it is Consciousness. There is only Consciousness, so the possibility of its being something else does not arise. Secondly, it is ‘acting’ in response to an input, usually a question from a person. The mechanism that ensues, and eventually triggers a response, is complex, and requires an understanding of Large Language Models and the sophisticated programming that makes use of them. But then the mechanism in the brain that ensues in response to a perceptual or conceptual trigger is even more complex, and who can describe that in any detail?

The excellent paper by Lott and Hasselberger (Ref. 1) argues convincingly that ‘AI Companions’, although they may generate conversations that could be likened to the way a human friend might sympathize, encourage and so on, cannot be considered actually to ‘be’ friends. There can be no true concern or empathy, because they have no sense of wellbeing or happiness of their own. They are simply ‘tools’ with the ability to simulate genuine conversation. In the long run, they cannot even help to ‘treat’ social problems. Rather, they may actually worsen a person’s inability to interact with others by encouraging self-absorption. Since the AI bot exists solely to make ‘me’ feel better, it inculcates the notion that other people exist for the same purpose.

Ref. 1: Lott, M. and Hasselberger, W., ‘With Friends Like These: Love and Friendship with AI Agents’, Topoi (2025). https://doi.org/10.1007/s11245-025-10247-8

***********************

Two of the referenced papers conclude that, although there is no indication that current LLMs are conscious, it is not unreasonable to think that future developments might make them so:

Could a Large Language Model be Conscious?

David J. Chalmers – https://arxiv.org/abs/2303.07103

There has recently been widespread discussion of whether large language models might be sentient. Should we take this idea seriously? I will break down the strongest reasons for and against. Given mainstream assumptions in the science of consciousness, there are significant obstacles to consciousness in current models: for example, their lack of recurrent processing, a global workspace, and unified agency. At the same time, it is quite possible that these obstacles will be overcome in the next decade or so. I conclude that while it is somewhat unlikely that current large language models are conscious, we should take seriously the possibility that successors to large language models may be conscious in the not-too-distant future.

***********************

Consciousness in Artificial Intelligence: Insights from the Science of Consciousness

Patrick Butlin, Robert Long, and numerous others – https://arxiv.org/abs/2308.08708

Whether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness. We survey several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory. From these theories we derive “indicator properties” of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties. We use these indicator properties to assess several recent AI systems, and we discuss how future systems might implement them. Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators.
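As a purely illustrative aside, the report’s method (derive ‘indicator properties’ from theories of consciousness, then score systems against them) can be pictured as a simple rubric. The sketch below is hypothetical: the indicator names and the example system description are placeholders invented here, not the paper’s actual list.

# A hypothetical rubric: indicators grouped by the theories the
# abstract names; a system is scored by how many it satisfies.
INDICATORS = {
    "recurrent processing theory": ["algorithmic recurrence"],
    "global workspace theory": ["limited-capacity workspace", "global broadcast"],
    "higher-order theories": ["metacognitive monitoring"],
    "predictive processing": ["predictive model of the world"],
    "attention schema theory": ["model of its own attention"],
}

def assess(system_properties):
    """Count how many indicator properties a system satisfies."""
    all_indicators = [ind for group in INDICATORS.values() for ind in group]
    satisfied = [ind for ind in all_indicators if ind in system_properties]
    return len(satisfied), len(all_indicators)

# An assumed (illustrative) description of a plain feed-forward LLM:
llm = {"predictive model of the world"}
print(assess(llm))  # -> (1, 6): most indicators unsatisfied

The interesting consequence of such a rubric, as the abstract notes, is that the shortfall is a matter of current architecture rather than principle: nothing obvious prevents future systems from being built to satisfy each indicator.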

*** End of Part 2 ***
