AI and Consciousness (Supplementary)

*** Go to Part 1 *** *** Go to Part 10 ***

Does AI ‘understand’ what it is talking about?

This is probably the most important part of the series and the most interesting (although not actually very relevant to Advaita). AI explains how it works and why it does not ‘understand’ in the sense that we normally mean by the word. AI is a ‘mirror’.

Continue reading

AI and Consciousness (Part 10)

This is ALMOST the final part of the series and is the essential summary of the key points of the entire discussion. (I have just asked a supplemental question, which I shall post next.)
*** Go to Part 1 *** *** Go to Part 9 ***

***********************

Continue reading

AI and Consciousness (Part 9)

*** Go to Part 1 *** *** Go to Part 8 ***

This is the crucial part of the series. ChatGPT explains why it is not conscious – in Advaitic terms.

Continue reading

AI and Consciousness (Part 8)

*** Go to Part 1 *** *** Go to Part 7 ***

***********************


Continue reading

AI and Consciousness (Part 7)

*** Go to Part 1 *** *** Go to Part 6 ***

Continue reading

AI and Consciousness (Part 6)

*** Go to Part 1 *** *** Go to Part 5 ***

[Note that, if you are only interested in Advaita-related aspects, you can safely ignore this part and the next and wait for Part 8.]

************************

Continue reading

AI and Consciousness (Part 5)

*** Go to Part 1 *** *** Go to Part 4 ***

Continue reading

AI and Consciousness (Part 4)

*** Go to Part 1 *** *** Go to Part 3 ***

Continue reading

AI and Consciousness (Part 3)

*** Go to Part 1 *** *** Go to Part 2 ***

Continue reading

AI and Consciousness (Part 2)

*** Read Part 1 ***

Opinions

When we are asked a question, we consult our memory for relevant information and how we have evaluated that (based upon our memory of related data and how we evaluated that…). And we evaluate all of this in relation to the present situation and formulate an answer. Is this process mechanically any different from that used when an LLM AI answers a question? Surely the only difference is that it uses a ‘memory’ of data that originated from what others have written down and which is available on the Internet, rather than relying, as we do, upon a ‘remembering process’ of diminishing efficacy.

So the value of an AI response lies in the relative importance placed upon the various sources and the impartial and analytic ability to synthesize a conclusion. We are probably biased, consciously or not, by a desire to appear clever or whatever, whereas a machine is just following algorithms engineered to provide the ‘best’ answer.
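The weighting-and-synthesis process just described can be caricatured in a few lines of code. This is a toy sketch for illustration only: the answers, source names, and weights below are all invented, and a real LLM does not score named sources like this – its ‘weighting’ is learned implicitly during training.

```python
# Toy illustration of 'weighted synthesis': pick the candidate answer
# with the greatest total support from its (hypothetical) sources.
# All answers, source names, and weights here are invented examples.

def synthesize(candidates):
    """Return the candidate whose sources carry the most total weight."""
    return max(candidates, key=lambda c: sum(w for _, w in c["sources"]))

candidates = [
    {"answer": "A", "sources": [("textbook", 0.9), ("forum post", 0.3)]},
    {"answer": "B", "sources": [("blog comment", 0.4)]},
]

best = synthesize(candidates)
print(best["answer"])  # "A" wins: total weight 1.2 versus 0.4
```

The point of the caricature is that the machine applies its weighting impartially, by rule, whereas our own weighting of sources is coloured by the biases mentioned above.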

None of this relates to ‘consciousness’ particularly. The human brain has its own ‘power source’, functioning electrically via neurochemistry; AI has an electrical power source. We are ‘aware’ of the conclusions that pop out of the ‘thinking process’ and may formulate them into vocal or written words forming an ‘opinion’. AI is able to formulate conclusions and communicate them via the Internet. Can this be called an ‘opinion’ in the same way? Is it actually any different in essence?

Continue reading