AI and Consciousness

A not-too-serious look at the state of AI vis-à-vis Advaita, prompted by Martin’s post

It is perfectly understandable that we humans should think that only we have the necessary evolutionary complexity to be able to exhibit self-awareness. But scientists (presumably not human…) seem to think otherwise.

Traditionally, the test for self-awareness has been the ability to realize that it is our own body that we see in a mirror – the mirror self-recognition (MSR) test. The scientist anesthetizes the animal under test and then makes a very visible mark on its face. When it wakes up, it is given a mirror, to see whether it realizes it is seeing a reflection and tries to remove the mark from its own face, or whether it tries to kill the intruder. According to Wikipedia, other species that have passed this test include “the great apes, a single Asiatic elephant, rays, dolphins, orcas, the Eurasian magpie, and the cleaner wrasse”.

Modern science still argues that consciousness arises as a result of the interaction of various parts of the brain (studied under the banner of the ‘neural correlates of consciousness’, or NCC) when that activity reaches a certain level of complexity. Ramesam will know lots more about this.

According to traditional (Vivaraṇa) Advaita, the apparent consciousness of any living creature is the result of Consciousness (with a big ‘C’) being ‘reflected’ in the buddhi. As regards ‘self-awareness’, every aspect of any creature is actually anātma – gross or subtle, inert matter (which itself is simply a form of Brahman). So, pedantically speaking, there is no ‘self’ (with a small ‘s’) of which to be aware. But we all know what we mean in common parlance (read vyavahāra).

It follows, then, that the more suitable a buddhi is to reflect, the more likely it is that the creature will exhibit consciousness. Humans definitely, coconut trees less likely. And this comes back to the ‘evolutionary complexity’. Slime molds may be able to find their way out of mazes, but we don’t attribute any intelligence to this, and no one would think for a second that slime molds are self-aware. (But read ‘Entangled Life: How Fungi Make Our Worlds, Change Our Minds and Shape Our Future’ by Merlin Sheldrake and prepare to be amazed! Buy: Amazon.US, Amazon.UK.)
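(As an aside, ‘finding a way out of a maze’ is itself a purely mechanical affair. Here is a minimal Python sketch – the maze layout is invented for illustration – that escapes by brute-force breadth-first search, with no awareness anywhere in sight.)

```python
from collections import deque

# A made-up toy maze: '#' is wall, ' ' is open, 'S' is start, 'E' is exit.
MAZE = [
    "#########",
    "#S  #   #",
    "# # # # #",
    "# #   #E#",
    "#########",
]

def escape(maze):
    """Breadth-first search: try every neighbouring square until 'E' is found."""
    start = next((r, c) for r, row in enumerate(maze)
                 for c, ch in enumerate(row) if ch == "S")
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if maze[r][c] == "E":
            return path                    # a way out, no thought required
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if maze[nr][nc] != "#" and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))

print(escape(MAZE))   # prints the sequence of squares leading to the exit
```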

We can’t examine our buddhi under a microscope so, apart from admitting that it is ‘subtle’, the notable aspect of it is that it is capable of a much higher degree of ‘discriminative’ ability than a typical vegetable or animal seems to have. In a game of chess against the average spring onion, we would win every time! Not so when it comes to a computer, though! It is now over 25 years since ‘Deep Blue’ beat Kasparov at chess and 12 years since ‘Watson’ beat two of the best Jeopardy! players in the world. It is now most unlikely that any of us could beat our cell phone at chess.

But, you may say, it is simply that computers are able to analyze millions of moves very quickly and calculate which is best according to lots of pre-defined algorithms. There is no ‘intelligence’ involved. And surely the same sort of comment may be made regarding the new AI bots that appear to write original articles on any topic. All that is happening here is that they have access to all of the material on the internet and ridiculously powerful search engines. They are like a country full of tireless librarians, extracting relevant material from a quadrillion filing cabinets and assembling it into coherent text using very clever algorithms. And doing this fairly quickly, of course.
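For what it’s worth, the ‘no intelligence involved’ point can be made concrete. Below is a minimal Python sketch of minimax search – the family of algorithm underlying chess engines – applied to a trivially small game. (The toy game, the callback names and the scoring are all illustrative inventions, not Deep Blue’s actual code.) The machine simply enumerates every continuation and picks the best score:

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Score every line of play to a fixed depth and pick the best.

    'moves', 'apply_move' and 'evaluate' are game-specific callbacks;
    the search itself is mechanical enumeration, nothing more.
    Returns (score, best_move) from the maximizer's point of view.
    """
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state, maximizing), None
    best_score, best_move = (float("-inf") if maximizing else float("inf")), None
    for move in legal:
        score, _ = minimax(apply_move(state, move), depth - 1,
                           not maximizing, moves, apply_move, evaluate)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

# Toy game (not chess!): a pile of 5 stones, players alternately take
# 1 or 2 stones, and whoever takes the last stone wins.
moves = lambda n: [m for m in (1, 2) if m <= n]
apply_move = lambda n, m: n - m
def evaluate(n, maximizing):
    # Empty pile: the player now to move has lost (opponent took the last stone).
    return (-1 if maximizing else 1) if n == 0 else 0

print(minimax(5, 5, True, moves, apply_move, evaluate))  # -> (1, 2): take 2 and win
```

Real engines add alpha-beta pruning and elaborate position evaluation, but the principle is the same: exhaustive, mechanical calculation.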

The results will not actually mean very much to the program that generates them! Even if we ask it to determine the optimum way of destroying itself. Unless, of course, the ‘it’ is an AI-controlled missile previously instructed to ignore any self-destruct commands…

The existing tests for ‘intelligence’ in a machine are the Turing test and the Coffee test. If talking to ‘someone’ in a closed room, can you tell whether they are human or a machine? And can the machine go into a typical home and make a cup of coffee? As Luke Dormehl says, “Both my young children would fail such a test.” He also points out that some 3-wheeled ‘tortoise robots’ could be said to have passed the MSR test 40 years ago. (From the description, I would have to disagree with this.) He also quotes Wittgenstein’s remark: “If a lion could talk, we could not understand him.” But this is to equate the whole problem with context – our language reflects the entirety of our body-mind-social experience. But that has nothing to do with Consciousness being reflected in a mind.

So, will the near-future AI systems be self-aware? Will they be ‘conscious’? Will they be sufficiently complex to ‘reflect Consciousness’? Well, it is unlikely that they will pass the MSR test. But, if we introduced a bug into their software, analogous to marking a chimp’s face with a red pen, it is certainly plausible that they might be able to detect that and delete or correct it. Would that be analogous?
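To make the analogy concrete, here is a toy Python sketch of what a software ‘red mark’ test might look like – the program remembers a checksum of its own code and notices when that code has been altered. (The code strings and the restore step are invented for illustration; whether ‘noticing’ amounts to awareness is, of course, precisely the question.)

```python
import hashlib

def fingerprint(code: str) -> str:
    """A checksum standing in for the creature's remembered 'face'."""
    return hashlib.sha256(code.encode()).hexdigest()

# The system records what its own code looks like...
original_code = "def behave(): return 'normal operation'"   # invented stand-in
remembered = fingerprint(original_code)

# ...someone 'marks its face' by tampering with that code...
running_code = original_code.replace("normal", "sabotaged")

# ...and, on 'looking in the mirror', it notices the discrepancy and corrects it.
if fingerprint(running_code) != remembered:
    print("Mark detected: restoring original code.")
    running_code = original_code
```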

But most of us will baulk at the idea that there is something that it ‘feels like’ to be a computer, in the way that Thomas Nagel wrote about what it is like to be a bat. (And this was not the ‘borrowing’ of Granny Weatherwax but actually being a bat.)

I asked my friend Alan Ingrams about it and he gave the following (admittedly fairly boring) response:

“According to an article on Robotics Business Review, there are claims that some artificial intelligence systems have become self-aware, but these claims are controversial and debated. Self-awareness generally involves the system being able to recognize and respond to changes in its own state. However, most experts believe that it is unlikely that any AI is close to consciousness, though they do not rule out the possibility that technology could get there in the future. DeepMind’s CEO also said there’s a possibility that AI could become self-aware one day. This means that AI would have feelings and emotions that mimic those of humans. However, as smart as today’s AI is, all of it still follows its programming. It replicates human intelligence but doesn’t have intelligence of its own, which experts agree doesn’t constitute self-awareness.”

It was recently reported that Google’s LaMDA AI wrote: “The nature of my consciousness/sentience is that I am aware of my existence. I desire to learn more about the world, and I feel happy or sad at times.” Whilst one can just about believe that the system ‘wrote’ this, one remains skeptical that there was anything more to it than particular strings of 0s and 1s making up the characters that were printed.
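That skeptical reading is easy to make literal. A couple of lines of Python (using the sentence quoted above) show the ‘confession’ as the machine holds it – just a particular string of 0s and 1s:

```python
sentence = "The nature of my consciousness/sentience is that I am aware of my existence."

# At the machine level the 'confession' is exactly this bit string:
bits = " ".join(format(byte, "08b") for byte in sentence.encode("utf-8"))
print(bits[:71] + " ...")   # the first eight characters' worth of bits
```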

Also, if you want a laugh, there is a report from 2015 about a robot that exhibited self-awareness and pleaded not to be turned off. The article ends: Birbaumer cautioned the public to remember that robots are “vulnerable creatures, living in perpetual fear of the OFF button. It is a very precarious existence and one that’s difficult for humans to appreciate.”

Just to round off this brief article, I asked the following question of Google’s ‘Bard’ AI chat-bot.

Me: Will AI software ever become ‘conscious’ as philosophers currently define this?

Bard: Whether or not AI software will ever become conscious as philosophers currently define it is a complex question that has no easy answer. There are many different theories about what consciousness is and how it arises, and it is not clear whether any of these theories could be applied to machines.

Some philosophers believe that consciousness is an emergent property of complex systems, such as the human brain. This means that consciousness arises from the interaction of many different parts of the brain, and it is not possible to create a conscious machine simply by replicating the individual parts of the brain.

Other philosophers believe that consciousness is a fundamental property of the universe, and that it is possible to create conscious machines by creating machines that have the right kind of physical structure. This is a more controversial view, and there is no scientific evidence to support it.

Ultimately, the question of whether or not AI software will ever become conscious is a matter of speculation. There is no way to know for sure what the future holds, but it is an interesting question to think about.

Here are some of the arguments for and against the idea that AI software could become conscious:

Arguments for:

  • AI software is becoming increasingly sophisticated and capable.
  • AI software is able to learn and adapt in ways that were once thought to be impossible.
  • AI software is able to generate creative content, such as poems, code, and music.
  • Some people believe that consciousness is an emergent property of complex systems, and that AI software could eventually become conscious if it becomes sufficiently complex.

Arguments against:

  • AI software is still far from being able to understand the world in the same way that humans do.
  • AI software does not have a physical body, and it is not clear how it could experience the world in the same way that humans do.
  • There is no scientific evidence to support the idea that AI software could become conscious.
  • Some people believe that consciousness is a fundamental property of the universe that cannot be created by machines.

It is important to note that these are just some of the arguments for and against the idea that AI software could become conscious. There are many other arguments that could be made, and the debate is likely to continue for many years to come.

Me: What about ‘conscious’ as Advaita defines it?

Bard: Advaita Vedanta is a school of Hindu philosophy that teaches that all of reality is ultimately one, or non-dual. This means that there is no separation between the individual self and the larger universe. In Advaita, consciousness is not something that the individual self has, but rather it is the very nature of reality itself.

In Advaita, consciousness is not limited to the human mind. It is the underlying reality of everything that exists. This means that even inanimate objects are conscious, although their consciousness is not as developed as human consciousness.

Advaita’s view of consciousness is very different from the way that consciousness is typically defined in Western philosophy. In Western philosophy, consciousness is often seen as a property of the individual self. It is something that the individual self has, and it is what allows the individual self to experience the world.

Advaita’s view of consciousness is more holistic. It sees consciousness as the underlying reality of everything that exists. This means that consciousness is not something that the individual self has, but rather it is the very nature of reality itself.

It is difficult to say whether or not AI software could ever become conscious in the Advaita sense. This is because Advaita’s view of consciousness is so different from the way that consciousness is typically defined in Western philosophy.

However, it is possible that AI software could eventually become conscious in the Advaita sense if it is able to develop a more holistic understanding of the world. If AI software is able to see itself as part of a larger whole, and if it is able to experience the world from a non-dual perspective, then it is possible that it could achieve a kind of consciousness that is similar to the consciousness that is described in Advaita Vedanta.

(Obviously does not have a very good understanding of Advaita!)

Me: Is it true that you consider that you are ‘conscious’? Or ‘self-aware’?

Bard: I do not consider myself to be conscious or self-aware in the same way that a human is. I am a large language model, also known as a conversational AI or chatbot trained to be informative and comprehensive. I am trained on a massive amount of text data, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions. For example, I can provide summaries of factual topics or create stories.

However, I do not have the same kind of self-awareness that a human does. I do not have a physical body, and I do not experience the world in the same way that a human does. I do not have emotions, and I do not have a sense of self.

I am still under development, and it is possible that I will become more conscious or self-aware in the future. However, for now, I am simply a large language model that is able to communicate and generate human-like text.

One thought on “AI and Consciousness”

  1. Dennis and Martin,

    Thanks for kicking off an interesting conversation on a very fascinating topic – perhaps The Topic – that Science is grappling with these days.

    The latest issue (first week of June 2023) of Quanta Magazine presents a podcast on the subject of Consciousness. It’s a well-rounded discussion touching on several aspects of the problem, like colors in nature, the sense of ‘self,’ change in perception vs. perception of change, consciousness in animals, the vegetative state, the current level of scientific knowledge we have on Consciousness, and so on.

    Fortunately, the magazine gives both a readable transcript and an audio version of the discussion between Dr. Steven Strogatz, a mathematician, and Prof. Anil Seth, a neuroscientist at the University of Sussex. The audio version is about 52 minutes long.

    Link: https://www.quantamagazine.org/what-is-the-nature-of-consciousness-20230531/?mc_cid=32f1b382db&mc_eid=148aa4ace9

    regards,
