If capitalism really gets hold of software that can manipulate human behaviour, then game over.
By Phil Hall
Soviet science fiction authors, when they were not Communist Party suck-ups, were often serious people, scientists. At the very least they were social scientists. In one story written by a Soviet author, an AI becomes sentient. The story explains how: the computer is incredibly powerful, it has many sensory inputs, and it has an effective iterative learning programme that adjusts at every go-round to create a better representation of the world.
Easy to imagine, hard to design. This was basically the thought underlying Demis Hassabis' DeepMind, and the results have been spectacular for games that can be abstracted into axiomatic systems. Software systems like DeepMind perform at ever higher levels. In the future they will win any game that can be abstracted and defined according to a set of identifiable rules.
What can be gamed?
Well, tumour imaging and identification and other medical applications could be 'gamed'. So could everything of increasing complexity that exists in physical reality, right up to modelling whole-world weather systems. Modelling Gaia itself is a prospect.
O.K., these future applications will require more than the 2016 Lenovo ThinkPad Carbon Hassabis ran AlphaGo on. They will require parallel computing, quantum computing, heaven knows what else. Moore's Law must hold for a few more decades.
Ultimately, this idea of AI is just about pattern recognition and computing systems based on optimality. Now, these are the ideas that underlie a Bongard test, but not necessarily a Turing test. If you could trick someone into thinking they were talking to a human, that would be a different kind of game.
Linguists and philosophers usually disagree with electronics and software engineers about what can and cannot be defined as intelligence and consciousness. For philosophers, consciousness is something we will only understand when we fully understand the workings of the human mind.
But, people attracted to working with computers are, on the whole, empiricists. They care about results. So, if you make the mistake of saying that chess and Go are characteristic abilities of the human being, then you are just setting up skittles for these new empiricists to knock down. DeepMind wins at chess and Go. Is it intelligent? Of course it is. Is it conscious? Of course it isn’t. Does the definition of intelligence include a definition of consciousness? It depends on who is defining intelligence.
Can Human Beings be Gamed?
This is the conceit behind the film Ex Machina. Is there an example of an actual social robot? Well, in fact, there is: yet another Russian turned American, Eugenia Kuyda, has designed a companion chatbot, Replika. There is also the famous social 'robot' Sophia. Now, if an algorithm tells a chatbot to produce a sentence that cannot be distinguished from one produced by a human, can the chatbot be considered to have human-like intelligence, or human-like consciousness?
The chatbot, of course, has no awareness because it is part of a machine, an inanimate object. In an example used by the philosopher John Searle in his lecture at Google: just because a pen makes marks on paper doesn't mean the pen knows how to write. Neither the pen nor the chatbot is alive.
However, for many of our superstitious ancestors – and quite a few of our superstitious, technology-worshipping contemporaries – a social robot might seem to be alive, though I feel Socrates would have seen through the smokescreen of words generated by a non-sentient machine pretty quickly.
For me, Kuyda has borrowed an idea from Douglas Hofstadter, one that he discusses in his book I Am a Strange Loop. Hofstadter mourns his wife, who died of cancer. His idea is that he has somehow preserved a working copy of his wife that lives on in his mind. Kuyda, who also lost someone close to her, attempts to operationalise Hofstadter's insight. She cannot actually do this, because she cannot capture the mind of her departed friend, but her insight is that an AI can develop a working model of its user by interacting constantly with that user. This has become her business model for developing companion robots.
Imagine the following, rather anodyne, conversation between an AI and a human.
Hi, I am feeling sad.
I’ve lost my credit card.
So why does that make you sad?
Well, I was planning on going shopping this weekend.
Why don’t you just go to the bank?
I’m really busy.
And so on… and it's hard to tell who is the AI and who is the human, and which words are produced by a consciousness and which by a machine. Nevertheless, the machine is not alive in any way. The Turing test is a test for human-like intelligence, not human-like consciousness.
Put your consciousness in a machine and it will not live. It might say some of the things you say and do some of the things you do, and comfort your relatives and friends, but it will be inanimate.
Eugenia Kuyda has a secret strategy to win the ultimate game of conversation. The strategy is to use Theory of Mind. Except, again, the strategy is merely empirical. It focuses on outcomes, regardless of philosophical truth or insight. For an empiricist a zombie machine is, for all intents and purposes, alive.
To have a conversation with a human, we need a theory of what is in the other person's mind. The robot develops a theory of mind by finding out everything it can about the person it talks to: for example, it tries to find the associations that its human interlocutor regularly makes between people, events, places and emotions. Perhaps it studies the subject's Facebook and online activity, among other things. Then it deploys this knowledge in conversation.
Imagine that, in addition, the companion robot has a vast database of hard observable knowledge about you. It knows everything that you have actually said and done. It has a record of every place you have ever been. It has real input for every moment of your life. It tracks every action you have taken. It has a record of all your vital statistics.
This is the problem: now that the AI has a pretty good theory of what's in your mind, it may well be able to anticipate many of the things that you would do. The robot will be able to game you; it could 'outplay' you. It might not actually be intelligent, in the sense that it is conscious. The robot is certainly not alive. It might just be a programme. However, the machine will have enough information to predict your next move with some degree of accuracy. This is the underlying fear of AI.
If capitalism's elite gets hold of software that can game humanity, if Kuyda succeeds, then it is game over. We will see the start of a thousand-year Reich. It is interesting to note that Eugenia Kuyda has no Wikipedia entry herself – almost every famous person has one – and she seems to have done her best to scrub information about herself from the Internet. Clearly, she has no intention of being gamed.
Phil Hall is a university lecturer. He is a committed socialist and humanitarian. Phil was born in South Africa, where his parents were in the ANC. There, his mother was imprisoned and his father was the first journalist from a national paper to be banned. Phil grew up in East Africa and settled in Kingston-upon-Thames. He has also lived and worked in Ukraine, Spain and Mexico. Phil has blogged for the Guardian, the Morning Star and several other publications, and he has written stories for The London Magazine.