George: A Peer or a Tool?

This article is an exploration of the meaning of the Dreamscapes scenario 'George'. It makes more sense if you have that story fresh in your mind before reading this.

George can be found here.

To begin with, you will notice that the term robot is not used anywhere in the scenario. The term is derived from the Czech word 'robota', meaning 'forced labour'. George is anything but a forced labourer, and could not be one in a teaching situation such as this. His essential physical nature is the same as a robot's, but his mental nature is not.

So, the question is, would you accept George as a peer and equal, or would you treat him as a glorified toaster oven? Is he really an individual with thoughts and feelings, no matter the material his brain is made out of, or will he always be a soulless silicon forced labourer?


We cannot tell you the answer; we can only offer suggested reasoning and ask that you make up your own mind.

The feeling of the author is that she would welcome George as a person, a peer, and not as a machine to be used. He might still be seen as a threat if the two were in the same employment and he was better at the job, but that is an entirely different issue.
Souls

One common response takes the tack of souls. Many of the more religiously inclined feel that beings such as George would always be lesser than humans: George and his kin would be soulless automatons, because humans cannot create souls.

To that, there is a firm, and perhaps fairly obvious, reply. If we take the base assumption that humans cannot create souls, then no human baby has a soul, as its mother cannot create a soul to give to it.

I cannot prove I have a soul.
I cannot prove I do not have a soul.

You cannot prove you have a soul.
You cannot prove you do not have a soul.

George cannot prove he has a soul.
George cannot prove he does not have a soul.

Thus, do any of us actually have a soul? We have no way of knowing for sure. We can use faith, but then, what is stopping George from using faith? If souls exist - and they may or may not - then they have to come from somewhere in order to enter a human baby. They could just as easily find a home in a silicon brain.

In the end, we cannot say either way, so the souls argument is moot.

Logic

Another common argument rests on the belief that robotic beings operate purely on logic. That is, I am not sorry to say, another false assumption. They can operate on logic, yes, but logic is rigid and inflexible; it cannot bend, and for completing a 'simple' task like walking it is close to useless. For an example, compare these two robots:

ASIMO: performing several thousand logical calculations per second, it has taken twenty years of development to learn how to handle stairs.


LittleDog: given only a structured framework - taught how to learn, and nothing else - LittleDog taught itself to walk in three weeks by pure trial and error. Everything was self-experimentation and learning; there was no logical yes or no. It learnt using fuzzy logic, where there is no absolute answer, only a continuous stream of possibilities. This is the exact same mechanism a baby, or a wild animal, uses when learning.
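
To make the distinction concrete, here is a minimal Python sketch of crisp logic versus fuzzy logic. The scenario - a leg controller deciding how hard to correct a body tilt - and all the numbers are hypothetical, invented purely to illustrate the idea:

```python
# A hypothetical balance controller, invented to illustrate the idea.

def crisp_correction(tilt_degrees):
    """Rigid logic: either the robot is 'falling' or it is not."""
    if tilt_degrees > 10.0:   # hard yes/no threshold
        return 1.0            # full corrective push
    return 0.0                # no correction at all

def fuzzy_correction(tilt_degrees):
    """Fuzzy logic: 'falling' is a matter of degree, so the correction
    is a continuous blend rather than an absolute answer."""
    # Membership in the fuzzy set 'falling' rises smoothly from 0 to 1
    # between 0 and 20 degrees of tilt.
    membership = min(max(tilt_degrees / 20.0, 0.0), 1.0)
    return membership         # push in proportion to how 'true' falling is

for tilt in (2.0, 9.0, 11.0, 18.0):
    print(tilt, crisp_correction(tilt), fuzzy_correction(tilt))
```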

This brings us nicely on to:

Programming

"No robot can do anything but what it is programmed to."

Poppycock.

We have now been building robots for fifty years that are programmed with very, very little, and have to teach themselves as they go along.

The very first autonomous, artificial beings, Elmer and Elsie (ELectroMEchanical Robots, Light-SEnsitive), were created by W. Grey Walter in the late 1940s. He dubbed them Machina speculatrix, because they demonstrated a tendency to explore their environment.

We have come a long way since those days, of course, and modern AI systems almost universally function on something called neural networks.

A neural network is essentially a recreation, in computer code, of how the organic brain functions. In layman's terms: rather than telling the machine exactly what to do, step by step, you create a very large number of highly interconnected nodes, each capable of processing data. The network grows from a seed and learns as it is fed data; its synapse-like links form patterns. Such networks create and destroy their own interconnections as they strive to learn (the self-destructive neural network, or SDNN).
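
As a rough feel for the idea, here is a toy neural network in Python. It is not any particular lab's system: it learns a tiny mapping (the XOR function) from examples rather than from step-by-step rules, then severs the connections learning has left near zero - a crude stand-in for the self-modifying behaviour just described:

```python
# A toy neural network, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Training data: XOR, a mapping that needs the hidden nodes to solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 'Grow from seed': start with small random interconnections.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # input -> hidden links
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # hidden -> output links

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Learn by being fed data, nudging every connection after each pass.
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)          # hidden node activations
    out = sigmoid(h @ W2 + b2)        # the network's current answer
    d2 = (out - y) * out * (1 - out)  # error signal at the output
    d1 = (d2 @ W2.T) * h * (1 - h)    # error signal at the hidden nodes
    W2 -= 0.5 * h.T @ d2;  b2 -= 0.5 * d2.sum(axis=0)
    W1 -= 0.5 * X.T @ d1;  b1 -= 0.5 * d1.sum(axis=0)

# 'Destroy interconnections': sever links the network no longer needs.
W1[np.abs(W1) < 0.1] = 0.0

print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2).ravel())  # ~ [0 1 1 0]
```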

Genetic algorithms work alongside neural networks, creating a design and then refining it over successive generations.
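
In sketch form, a genetic algorithm is only a few lines. The 'design' below is just a vector of numbers, and the fitness function is a made-up stand-in for whatever a real system would score - cleaning efficiency, signal gain, and so on:

```python
# A minimal genetic algorithm; the design problem is invented.
import random

def fitness(design):
    # Hypothetical score: reward designs whose parameters sum to 10.
    return -abs(sum(design) - 10.0)

def mutate(design, rate=0.1):
    return [g + random.gauss(0, rate) for g in design]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Seed generation: random candidate designs.
population = [[random.uniform(0, 5) for _ in range(4)] for _ in range(30)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                  # keep the fittest designs
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(20)]              # breed and vary the rest
    population = survivors + children

best = max(population, key=fitness)
print(best, fitness(best))
```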

Remember the original Oral-B toothbrush, from about four years back? That was invented by an AI. It was the first invention created wholly by an artificial brain; no human had a part in its creation.

A dedicated AI SDNN was fed data on how the human mouth is shaped. It was shown thousands of models of human mouths, with the various parts explained. It was taught physics and materials science. It was then told that a way needed to be found to clean the teeth, that the cleaning had to come from outside, and it was given a pressure tolerance range and a list of materials to use.

It was given nothing else, save time.

It took six months, but the AI churned out the Oral-B brush. It had features no human designer would ever have thought of, but it works brilliantly.

Since then, similar networks have churned out all sorts of inventions, in shapes no sane designer would ever produce, but which work amazingly well. One that springs to mind is an antenna array currently used in Iraq by the military. Mounted on a humvee, it improves communications in urban environments. It resembles a scribbled doodle, all crazy curves and loops - like a length of wire scrunched up, rolled into a ball, and chucked away. No human would ever design that.

Yet it has a signal gain around 300% greater than the best human design to date.

Needs-based AI has been used increasingly over the past few years, in conjunction with neural nets, for other tasks. The AI is given a set of needs as its data - things like:

  • Need to get fuel when low
  • Need to recharge battery when low
  • Need to map out the plan of the building so you don't get lost (SLAM - Simultaneous Localisation And Mapping, a whole 'nother article in its own right).

Such robots have a whole list of these, with sensors providing a real-time data feed for each of them. What happens if the robot is low on fuel AND needs to recharge, when the fuel is at one end of the room and the charge point at the other? It has to make a rational decision; it has to think its way out of the mess.

What if it's exploring the building and its battery charge is getting low? It accesses memory and finds the nearest recharge point - but does it head straight there, or does it look at the SLAM data and decide it would be better to keep exploring, as this passage looks likely to lead back around to a recharge point it already knows?
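
As a rough illustration, here is one way that arbitration might be sketched in Python. The needs, urgencies, and distances are all invented for the example; a real robot would fold in its SLAM map and live sensor feeds:

```python
# Hypothetical needs, urgencies, and distances, invented for the example.

needs = {
    # need: (urgency from 0 to 1, metres to the nearest point serving it)
    "refuel":   (0.8, 40.0),
    "recharge": (0.9, 5.0),
    "explore":  (0.2, 1.0),
}

def priority(urgency, distance):
    # Favour urgent needs, but discount ones that are costly to reach.
    return urgency / (1.0 + distance)

best = max(needs, key=lambda need: priority(*needs[need]))
print(best)  # -> 'recharge': urgent AND close beats merely urgent
```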

This is the stage most lab robotics are at, right now.

Emotions

Robots don't have emotions, do they?

Actually, they do. We have discovered, through experimentation, that AIs that possess emotional states do better than those without such instability. Artificial weighting is added by an external program, outside the AI's direct control, to enforce moods like happiness, neurotic hoarding, fear, or anger.
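
A toy sketch of how such weighting might work, with invented moods and multipliers. The point is only that the weighting sits outside the decision-making code and biases it, rather than being something the AI reasons about directly:

```python
# Invented moods and multipliers, purely to illustrate external weighting.

MOOD_WEIGHTS = {
    # mood: (weight on 'act now', weight on 'play it safe')
    "neurotic": (1.5, 0.6),   # snap judgements, little caution
    "fearful":  (0.3, 2.0),   # freeze, overweight every risk
    "calm":     (1.0, 1.0),
}

def choose(action_scores, caution_scores, mood):
    act_w, safe_w = MOOD_WEIGHTS[mood]
    # The same raw scores lead to different choices under different moods.
    weighted = {a: act_w * action_scores[a] - safe_w * caution_scores[a]
                for a in action_scores}
    return max(weighted, key=weighted.get)

scores  = {"pull_out_of_junction": 0.8, "wait": 0.3}
caution = {"pull_out_of_junction": 0.5, "wait": 0.1}

for mood in ("neurotic", "fearful", "calm"):
    print(mood, "->", choose(scores, caution, mood))
```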

The best example of this in recent months was the DARPA (Defense Advanced Research Projects Agency) Urban Challenge. The challenge was the third in a series of events intended to spur the development of robotic, autonomous vehicles capable of navigating city streets without human intervention. To pass the Urban Challenge, the robots had to drive on real roads while obeying the California rules of the road, much as a human must to gain a driver's license.

The cars had a list of goals, with co-ordinates to be at for each goal, along with on-board GPS and a road map of the US Air Force base they were being tested on. Beyond that, they were on their own: all real-time decision-making was down to each robot's AI. Fifty human-controlled cars were on the same roads, just to make things interesting. Eleven robots competed.

Six robots finished the course. One of them was MIT's Talos, a neural network AI which had been weighted to make it immensely neurotic. As a side effect of this, it drove VERY aggressively, making snap judgements and not always thinking them through - much like an aggressive human.

On the other hand, the car 'Boss', working with a more cautious AI designed with safety first in mind, stalled out in something resembling panic after a near miss with a human-controlled car. That car was being driven erratically as part of a test: it swung across a T-junction Boss was slowly moving out of, cutting across Boss's path, as happens every day between humans on the road. Boss was almost frozen in fear, as far as its actions showed - it sat there, half in the junction and half out, unmoving, for several minutes.


Eventually, it moved off on its own, without human intervention, and completed the junction at three miles an hour, only resuming normal driving speeds back on the straight. It was timid at the next junction, and back to normal by the third - just like a human recovering from a near miss.

George

So, after all the arguments, you still need to make up your own mind about such beings. 'Need' being the operative word, for they are coming, and at an accelerating pace. We may not have true AI yet, but almost all the other technologies George was using are already here. Needs-based, emotion-based, and neural network AI are all developing at a clip. The likelihood of beings like George appearing within your own lifetime is rather high.

Personally, it is the author's belief that George would be just another person: the same or similar emotional states, the same capacity for fear, for pain, for feeling for others. He wouldn't be programmed. His personality, as with the lesser robots of today, would emerge over time as a result of his experiences with the world and with the people around him.

He would basically be a member of a new minority group, not human, but sentient all the same.

Would viruses affect him? Well, does flu affect you? We are talking here about a vastly complex neural network. Is it possible a virus might damage some of it? Yes, it is. What happens then? The neural net rewrites the damaged nodes, perhaps cutting them off from the rest of the network, whilst other software - antiviruses, maybe antiviruses he sub-consciously writes himself - takes care of the problem. After that, the network expands into the cleaned memory, adding new nodes to store new memories, replacing what was lost.
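
Purely as speculation, that repair cycle might look something like the following in outline. Every detail here is invented for illustration:

```python
# A speculative sketch: quarantine corrupted nodes, then regrow.
import random

class Node:
    def __init__(self, ident):
        self.ident = ident
        self.healthy = True
        self.links = set()

def quarantine(nodes):
    """Sever every connection to nodes flagged as corrupted."""
    damaged = {n.ident for n in nodes if not n.healthy}
    for n in nodes:
        n.links -= damaged
    return [n for n in nodes if n.healthy]

def regrow(nodes, count):
    """Expand into the cleaned space with fresh, blank nodes."""
    start = max(n.ident for n in nodes) + 1
    for i in range(start, start + count):
        fresh = Node(i)
        fresh.links = {random.choice(nodes).ident for _ in range(3)}
        nodes.append(fresh)
    return nodes

net = [Node(i) for i in range(10)]
for n in net:
    n.links = {random.randrange(10) for _ in range(3)}
net[4].healthy = net[7].healthy = False      # 'virus' damage
net = regrow(quarantine(net), count=2)       # cut off, then replace
print(len(net), "nodes after repair")
```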

At the end of it all, he's a person, an individual. There will be some interesting legal battles, but fundamentally, he will be a peer, and maybe a friend.

Further Reading

Dreamscapes: George

EveR2Muse

Strong Artificial Intelligence and Consciousness

The Turing Test Page
