The Robots Are Coming – Part III

The Basic Laws?

The popular 20th-century science fiction writer Isaac Asimov formulated his three laws after burning the midnight oil with a fellow enthusiast over a period of years. He first articulated them in 1942, in the short story “Runaround” (later collected in I, Robot), well before the idea of AGI-controlled robots was anything more than the concept of machines running a very advanced computer program. Most people familiar with sci-fi ventures into robotics will know Asimov’s laws well:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.[1]

(He went on to introduce a ‘zeroth’ law as his robotic adventures progressed. Others have added at least another two.)
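The striking feature of the laws is their strict precedence: each later law yields to the earlier ones. Purely as an illustration of that ordering (a toy sketch only; every name here is hypothetical, and no real system could reduce ethics to a few booleans), the structure looks like an ordered rule check:

```python
# Toy sketch of the *precedence structure* of Asimov's Three Laws.
# All names (Action, permitted, the four flags) are illustrative inventions,
# not a real robotics API or a serious ethical model.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool           # would the action injure a human?
    inaction_harms_human: bool  # would *failing* to act allow harm? (First Law clause)
    ordered_by_human: bool      # was the action ordered by a human? (Second Law)
    protects_self: bool         # does the action preserve the robot? (Third Law)

def permitted(a: Action) -> bool:
    # First Law dominates everything: never harm a human...
    if a.harms_human:
        return False
    # ...and never allow harm through inaction, overriding the later laws.
    if a.inaction_harms_human:
        return True
    # Second Law: obey human orders, unless they conflict with the First
    # (that conflict was already ruled out above).
    if a.ordered_by_human:
        return True
    # Third Law: self-preservation only as the residual, lowest priority.
    return a.protects_self
```

Even this trivial encoding shows where the trouble starts: everything hinges on who gets to decide what counts as “harm”, which is exactly the interpretive gap discussed below.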


Science fiction laws are one thing; real ethics are something else, and obtaining international agreement in some yet-to-be-decided United Nations forum will not be easy. Defining ethical protocols for AI/AGI will take years of discussion with a very wide range of participants. Even deciding who should participate will provoke heated argument, as everyone with any kind of interest in what is and is not ethical tries to get in on the act.

Laws have the huge drawback that they may be interpreted by some future advanced AGI system (or even by judges with weird ideas of legal interpretation of human rights, as we know to our cost) as requiring it to treat its human sponsors as pets or children, reducing their scope for free will to what it considers to be safe. It could choose to act on the orders of its sponsor only if they do not conflict with its own interpretation of what is safe. By the third law, we couldn’t switch it off. We would become like plants in a well-tended garden. For the couch potatoes of this world that might not be an unattractive proposition, but for anyone who places any worthwhile value on modern man’s many legally prescribed rights it should be anathema.

Sci-fi writers, of course, can make anything happen in their books. Once the androids are really out there it may be too late to say “Sorry, I didn’t think it could do that.”

Development of AGI

Studies into AI/AGI and its implications are being carried out at individual government level throughout advanced societies, but so far there is no international movement to reach agreement on ethical protocols designed to prevent the feared robot takeover. If one exists, it is playing its cards very close to its chest. International conferences are taking place, sponsored by all the right people, according to internet sources, but their emphasis is on the nuts and bolts of producing specific AIs to achieve limited objectives, not on the ethics behind that achievement.

So far there is no movement towards legislating for procedures and protocols at the international level. If this doesn’t happen soon, it may be too late to enforce proper ethical constraints on developers who choose to give no thought to any moral framework. There are already robotic drones flying vast distances around the world, perfectly capable of locating their victims and acting on human instructions to assassinate them, with apparently no concern for collateral damage to persons or property. Law One up the spout!

Currently AGI is in its infancy: it can barely crawl. New DNA-based memory or holographic memory technology will very soon have it moving out of the kindergarten into junior school and on to secondary education. From there it is but a small step on this exponential curve of development to the first degree, the doctorate, the ultimate expert. Human geniuses have nothing on the supercomputers of the not-too-distant future.

An advanced AGI controlled supercomputer will be able to access in nanoseconds almost any published database in the world to find the most up-to-the-split-second information to enable it to carry out whatever task it has been set. It will be able to compare conflicting theories and decide for itself which are correct. Thereafter its only constraint on performance will be the actuality of its physical abilities.

Who can doubt that scientists – unless strictly forbidden – will wish to give their AGI-inventions the same physical capabilities as human beings, enhanced to enable them to do things our own physical weakness will not allow us to do for ourselves? Already AI-controlled ‘surgeons’ have that sort of improved acuity – no shaky hands, effectively magnified vision, no memory lapses, no swabs left in body cavities, no faulty sutures, no heavy breathing to carry infection. Super battlefield soldiers are way beyond the feasibility study stage.

Computing Power

Modern supercomputer design mimics brain architecture using myriad paths to access information stored in memory that is no longer one-dimensional. The old idea of communication by a single direct line is long dead. Networking is here to stay.

The speed of computers has increased exponentially in recent years, leading to advances in capability that were inconceivable only a few years ago. The new breed of supercomputers will be able to ‘think outside the box’ far better than humans can. They will be able to ‘brainstorm’ their ideas faster and more constructively than human think tanks. Their ‘ideas’ will come fully analysed, with every advantage and disadvantage carefully weighed, so that feasibility can be in little doubt. That the advance will continue at this speed can hardly be doubted: every branch of science is moving so far beyond what was once considered conventional research that wholly new conceptions of where science is taking us, and of how world society will develop, need to be addressed. It is only a matter of time before supercomputer technology moves from large temperature-controlled laboratories to brainbox-sized ‘personal environments’ capable of running at body temperature.

Currently, supercomputers need vast amounts of electrical energy to power them. Power source design is a number-one priority. Who can doubt that necessity will drive human ingenuity to devise a solution that doesn’t rely on massive nuclear fission or fusion installations?

Will all this bring the development of robots with ‘free will’ as we understand it? Could such robots decide not to obey protocols that limit their performance? Do we have to wait for the real-life Doctors Frankenstein and Strangelove to get their heads together?

It is time for the debate.


The Robots Are Coming – Part II


Some people really do think robots will take over unless world authorities move to define the rules that govern their creation. Scientists and robotics engineers are already selling intelligent machines cleverer than most people. Some single-task robots are already considerably better than ordinary human beings at producing consistent, accurate results in specific areas. Examples include chess programs that will defeat all but the cleverest masters and ping-pong-playing helibots that can out-perform all but the top players; even the best surgeons already use robotic machines to carry out the more delicate operations, where human error has life-threatening consequences. Indeed, many surgeons say that all operations will be carried out robotically in the not-too-distant future, because of the litigious nature of their patients.

Self-diagnosis machines for the doctor’s waiting room that can accurately tell you exactly what’s wrong with you have been feasible for many years. GPs won’t give them the go-ahead and hypochondriacs won’t trust them. The fact remains, however, that they are far less likely to make a disastrous misdiagnosis than your average distracted GP. What is more, they have time to ask you all the right questions and won’t skip any because you’ve had your ten minutes’ worth.


People tend to think of ‘real’ robots as looking like us, in much the same way and for the same reasons that they consider life manifests itself in forms familiar to us in our everyday life or, at worst, in our nightmares. Claims by scientists that mould is a life form worth looking for are popularly deemed geekish gobbledegook. Expectations are that one day intelligent robots will exist and be indistinguishable from real human beings – true androids in the human cast – the Data syndrome. Star Trek was, of course, beaten to the draw by at least forty years by Karel Čapek’s Rossum’s Universal Robots and others, but the underlying concept of intelligent robots that threaten takeover (e.g. Data’s brother) is there and won’t go away.

The idea that ‘real’ robots must look like humans is not unreasonable considering cultural teaching that human beings are superior to all other life forms on earth, a belief deep-rooted in the human psyche. The concept of humanoid robots does need to be revisited before time runs out and they become a feature of our daily lives.

More than 3,000 years ago the ancient world knew that their nearest concept to a robot – a slave (what else?) – could not be allowed to retain the appearance of a free person. Slaves were branded where the brand could be seen (not just so that they could be identified if they ran away). They were treated as non-persons and their lives were at the disposal of their owners. Yes, people knew long ago that attractive (or even personable) slaves posed a threat to free men and women and (to adopt the modern jargon) to vulnerable people who trusted them with their care.

A recent ethics study in Great Britain came to the conclusion that robots must be made/created so as to be easily distinguishable from humans. The argument behind this is that human beings can and do form attachments to machines, so that a robotic android with human characteristics indistinguishable from those of a human might truly claim the affections of its owner, causing distress to others and psychological harm to the dependent user. It also concluded that robots should not be created to undertake human care tasks that might leave vulnerable people at risk. So what’s new?

Machines equipped with truly advanced Artificial General Intelligence (AGI), as opposed to single-task Artificial Intelligence (AI), will relatively soon be perfectly capable of designing and maintaining themselves. Their continuously evolving ideas of beauty, strength and moral values may not be the same as their originators’. Asimov’s set of three rules that basically will not allow robots to harm their creators will need to be laid down in a far more legalistic set of protocols and properly enacted in courts worldwide.

Will the robot designers (creators?) stop at producing machines or will they go for humanoid robots? I believe if they can, they will, unless someone stops them. Who is that someone, and would they be right to do so?

The Robots Are Coming – Part I

We use body language (consciously or subconsciously) to predict what others are going to do or say. Many animals can ‘read’ us very well. Conmen make a science of it. It is a skill that can be taught. Some maintain this ability, combined with a little basic common sense, is behind popular belief in the existence of telepathy. Many believe it is a gift some people can harness for good or ill – the supersense of witches and wizards, of shamans, witchdoctors and charlatans.

There exist documented examples of mind reading at a distance that cannot possibly result from input from body language. In everyday life most people can find examples a-plenty of sensing something is not right with a friend or relation a long way away. Somehow we are in touch and we cannot explain how. “It’s just a coincidence” is the usual put-down by sceptics. In the case of pets: “They know your habits so well, of course they know when you should be coming home” is the usual explanation of how the dog knows when it’s time to go to the window to wait for dad. No one cares to explain why the dog doesn’t do that every day as a matter of routine. Pets do know when dad is not coming.

If we can harness this ‘gift’ using just our limited, much-degraded five senses, how long will it be before supercomputers can do so better than your average conman, using modern sensors that outperform our feeble capabilities?

What if advancing neuroscience uncovers the existence of superfast communication within our brains? Scientists have already demonstrated that ‘entangled’ particles show correlations that appear instantaneous over any distance, seeming to make a mockery of the belief that all communication is limited to the speed of light. Why should such particles not exist in the brain? Could it be that humans have simply lost the ability to harness this form of communication, seemingly so well developed in some animals? Could entanglement be at the heart of the working of our ‘gut reactions’, which have for some time been shown to be the product of a second brain (the Enteric Nervous System (ENS)) located in our stomach walls? Without the input of our external senses, the ENS reaches conclusions faster than our logical head-based brain and can do so even if one of its supposed paths to the Central Nervous System is severed.

Very recently a team of researchers demonstrated that crude telepathy over distance can be reproduced by hooking up test subjects to transcranial magnetic stimulation (TMS) equipment, a device similar to that used by my fictional team in Think Freedom (the transcranial Direct Current Stimulator (tDCS) – an actual device used in the treatment of sufferers from depression and by marksmen to improve their sharpshooting skills).