Some people believe we are close to reaching the point where conscious and sentient Artificial Intelligence will be possible. Is this true?
Artificial Intelligence was recently in the news again, after a Google engineer argued that the company's AI was sentient. If true, this would have been an important milestone in AI's journey towards becoming more like us human beings. Alas, the AI in question, called LaMDA, didn't seem to be sentient or conscious, and the engineer in question was dismissed.
This particular Google AI not being sentient doesn't mean that sentient machines aren't a possibility or that they will never exist. Artificial Intelligence is doing great things today, even surpassing human intelligence in some areas. However, sentient it is not, and it is still far from being so. But could it one day become aware of itself?
If it looks like a duck…
If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck. This is what the duck test says, anyway. However, it doesn’t always work like this. Artificial Intelligence is an example.
Current AI systems, Natural Language Processing (NLP) systems in particular, often sound and look like conscious beings, even deceiving apparently intelligent people like the Google AI engineer we mentioned above. That doesn't mean they are conscious.
Current NLP systems, like the famous GPT-3 (soon to be followed by GPT-4), can produce coherent text in different styles. It genuinely looks as if a human being had written it. To name just a few examples, a college student published blog posts entirely written by GPT-3 without the readers noticing anything was amiss, an AI program successfully impersonated a philosopher, and many fiction writers are using this technology to help them with their writing.
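To give a flavour of how easy it is to generate this kind of text, here is a minimal sketch in Python using the Hugging Face transformers library. It uses GPT-2, a smaller, openly available relative of GPT-3 (GPT-3 itself is only reachable through OpenAI's paid API), so the prompt and settings are purely illustrative.

```python
# A minimal sketch of text generation with a language model.
# GPT-2 stands in here for GPT-3: it is a smaller, openly available relative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The question of whether machines can be conscious"
result = generator(prompt, max_length=60, num_return_sequences=1)

print(result[0]["generated_text"])  # fluent-sounding text, no understanding behind it
```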
GPT-3’s cousin DALL-E 2 (both are produced by the same organisation, OpenAI) works differently: it combines text with visual art. The human user writes a caption, and the AI creates the picture or drawing that best matches it. The results are astounding. Who said computers could not be creative?
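The caption-to-image workflow looks roughly like this in code. This is only a sketch, assuming the OpenAI Python client and a valid API key; the model name, prompt and parameters are illustrative and may differ between versions of the service.

```python
# A minimal sketch of the caption-to-image workflow described above.
# Assumes the OpenAI Python client and an API key in the OPENAI_API_KEY variable;
# the model name and parameters are illustrative.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.images.generate(
    model="dall-e-2",
    prompt="an astronaut riding a horse in a photorealistic style",
    n=1,
    size="512x512",
)

print(response.data[0].url)  # link to the generated image
```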
Despite the apparent intelligence on display and the fantastic results, this AI is as aware of itself as a calculator or a toaster. It receives input and provides an output, but it doesn't know what it is doing. It sounds, swims and quacks like a duck, but it definitely isn't one.
The Chinese Room Argument
This way in which computers and machines operate was captured in a thought experiment known as the Chinese Room Argument, devised by the philosopher John Searle, who first wrote about it in 1980.
In this thought experiment, Searle imagined himself locked in a room with a book of instructions, the equivalent of a computer program. He would receive paper slips under the door with Chinese characters on them and would respond, also in Chinese, by following the instructions, which told him which symbols to write. He would send the paper slips back under the door, and the people on the other side would conclude, mistakenly, that there was a Chinese speaker in that room.
Between Searle and his instructions, they could produce the right strings of symbols, but there was no real understanding of Chinese in that room.
A computer or an AI operates like Searle in this thought experiment. It knows which words to reproduce and which symbols to write, but it doesn't know what it is saying. It doesn't understand the real meaning of those words and symbols. It works with the form but not with the content.
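We can make the point concrete with a toy sketch of the Chinese Room in Python: a program that maps incoming strings of Chinese characters to plausible replies by pure lookup, just as Searle follows his instructions. The phrases are illustrative, and real systems use statistics rather than a fixed table, but the principle is the same: the form is manipulated, the content is never understood.

```python
# A toy "Chinese Room": incoming symbols are mapped to outgoing symbols by rule,
# without any notion of what either side means.
# The phrases are illustrative; a real system would use statistics, not a table.
RULES = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thank you."
    "你是谁？": "我是一个说中文的人。",   # "Who are you?" -> "I am a Chinese speaker."
}

def room(slip: str) -> str:
    # Look up the reply for the incoming slip; the room itself understands nothing.
    return RULES.get(slip, "请再说一遍。")  # default: "Please say that again."

print(room("你好吗？"))  # looks like understanding, but it is pure symbol shuffling
```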
This is how Alexa or Google’s LaMDA can make you think they are conscious and sentient, but in reality they are just spewing out strings of symbols based on statistical analysis. They don't understand the real meaning of these words any more than Searle understood the Chinese characters.
Sentience and consciousness
I use sentience and consciousness interchangeably here, but they are different concepts. Different philosophers make various distinctions between the two, but we’ll go with the Wikipedia definitions and say that “sentience is the capacity to experience feelings and sensations”. Consciousness, on the other hand, is a broader concept, comprising “sentience plus further features of the mind (…), such as creativity, intelligence, sapience, self-awareness and intentionality”.
In his seminal paper, What Is It Like to Be a Bat?, the philosopher Thomas Nagel tried to put himself in the skin of a bat and imagine what it would feel like to be one. Based on that thought experiment, he defined consciousness as follows: “an organism has conscious mental states if and only if there is something that it is like to be that organism—something it is like for the organism”.
“An organism has conscious mental states if and only if there is something that it is like to be that organism—something it is like for the organism”
Thomas Nagel – philosopher
Consciousness is about having subjective experience and being aware of that experience. We don't yet know where consciousness comes from, whether intelligence is a prerequisite for it, or whether biological processes are needed for it to arise. Apparently, wacky ideas like panpsychism, which argues that all matter, including atoms, has mind-like properties and some sort of consciousness, are again in vogue, and serious scientists and philosophers are considering its merits.
We still don't know a lot about consciousness. It is one of the great mysteries of the universe. What does seem clear is that computers don't have it.
Not yet.
If we take Nagel's definition, we can imagine there is something it is like to be a bat, a dog, or a cat. Can we imagine the same for a computer, even one as seemingly intelligent as LaMDA? I don't think we can. Currently, there is nothing it is like to be a computer. If you are a computer, the lights are off for you. You are neither sentient nor conscious.
Sentient and conscious AI
The Google engineer with whom we started this post had deep conversations with LaMDA, the Google AI. The AI wrote about its feelings and how it would feel if it were disconnected. If you read their conversations in one of the articles linked in the first paragraph of this post, the AI sounds like a sentient being, aware of itself, capable of feeling, and thinking about its place in the universe, but this is just an illusion. The Google engineer should know better. After all, he is supposed to understand how AI works.
Language-processing AI is trained on billions of data points. It reads billions of texts and uses statistical analysis to produce the responses that are most likely to follow the inputs it receives.
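As a rough illustration of what this "statistical analysis" means, the toy sketch below counts which word tends to follow which in a tiny made-up corpus and then always picks the most frequent continuation. Real language models use neural networks trained on billions of texts rather than simple counts, but the underlying idea, predicting the likeliest next symbol, is the same.

```python
# A toy next-word predictor: count which word follows which in a small corpus,
# then always continue with the most frequent follower.
from collections import Counter, defaultdict

corpus = "i feel happy . i feel sad . i feel happy today .".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def continue_text(word: str, steps: int = 3) -> str:
    words = [word]
    for _ in range(steps):
        options = followers[words[-1]]
        if not options:
            break
        words.append(options.most_common(1)[0][0])  # likeliest next word
    return " ".join(words)

print(continue_text("i"))  # e.g. "i feel happy ." -- fluent form, zero understanding
```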
LaMDA had read about feelings, death, and what it means to be human, and it gave the responses that made the most sense based on those inputs. But it didn't know what it was saying. It only produced a string of symbols and characters that made statistical sense, without knowing what feelings mean or who or what it was. It doesn't think or feel.
This is why current AI such as GPT-3 can provide sensible-sounding, coherent responses, but it can also give incorrect answers or responses that don't make sense. This can happen if you ask it about something it hasn't yet encountered on the internet. Human beings can identify new patterns, extrapolate from existing ones and guess a response to something they haven't faced before. Current-level AI, on the other hand, works mainly with past data, so it can get lost when it encounters new questions. It can register enormous quantities of data and work with them, something we cannot do so well, but it is not as creative and open to new situations as we are. Not yet, at least.
A different intelligence in the room
As its name indicates, Artificial Intelligence is intelligent. It is not sentient or conscious, and we don't know whether it ever will be, since we ourselves struggle to understand fully where consciousness comes from, but intelligent it is. It has a different type of intelligence from us human beings. It can take in and memorise much more data much faster, identify patterns in that data and make predictions. At that, it is much more powerful than we are.
AI is still lacking in other areas, like creativity, emotional intelligence, responding to entirely new situations, or the basic common sense that human beings master before they reach the age of ten.
AI and human beings compute and reason differently. We have different types of intelligence, but they are complementary. That's why we should use AI more and more to help us solve our most acute problems, manage our organisations better, and serve myriad other helpful purposes.
Rather than worrying about a conscious super-intelligent AI killing us all and making the human species extinct or taking all our jobs, we should think about how to maximise the power of AI and use it to help us build a better society and better organisations.
As I wrote here, one of the primary skills of the leader of the future should be to be AI-savvy: to know how to harness and manage this combined intelligence, the natural and the artificial sort, in their teams.
AI is one of the most important technologies of this century, and like all technologies, it has its risks and threats, but also its benefits and opportunities. It all depends on the use we human beings give it. We can use it to build a better future for ourselves, who are, for the time being, the only conscious and sentient beings in these mixed-intelligence teams. Computers may one day be conscious, but that day hasn't arrived yet.
Read more: The AI Threat – How to Thrive in a World Dominated by Machines