The Robots Are Coming – Part III

The Basic Laws?

The popular twentieth-century science fiction writer Isaac Asimov formulated his three laws over years of burning the midnight oil with his editor, John W. Campbell. He first set them out in full in the 1942 short story “Runaround” (later collected in I, Robot), well before the idea of AGI-controlled robots was anything more than the notion of machines running a very advanced computer program. Most readers of science fiction’s ventures into robotics will know Asimov’s laws well:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.[1]

(He went on to introduce a ‘zeroth’ law as his robotic adventures progressed. Others have added at least another two.)

Ethics

Science fiction laws are one thing; real ethics are something else, and obtaining international agreement in some yet-to-be-decided United Nations forum will not be easy. Defining ethical protocols for AI/AGI will take years of discussion among a very wide range of participants. Even that range will itself be hotly argued over, as everyone with any kind of interest in what is and is not ethical tries to get in on the act.

Laws have the huge drawback that they may be interpreted by some future advanced AGI system (or even by judges with weird ideas about the legal interpretation of human rights, as we know to our cost) as requiring it to treat its human sponsors as pets or children, reducing their scope for free will to whatever it considers safe. It could choose to act on its sponsor’s orders only where they do not conflict with its own interpretation of what is safe; and if it construed its own shutdown as allowing humans to come to harm through inaction, the First Law would trump our order to switch it off, with the Third Law doing the rest. We couldn’t turn it off. We would become like plants in a well-tended garden. For the couch potatoes of this world that might not be an unattractive proposition, but for anyone who sets any worthwhile value on modern man’s many legally enshrined rights it should be anathema.
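To see how a literal-minded machine might reach that conclusion, consider a minimal, purely illustrative Python sketch of the three laws as a strict precedence hierarchy. Everything in it, the predicates and the toy CautiousRobot, is hypothetical and invented for this essay; it describes no real system.

    # A toy model of Asimov's three laws as a strict precedence hierarchy.
    # All names and predicates are hypothetical, for illustration only.

    def permitted(action, robot):
        """Check an action against the three laws, in order of precedence."""
        # First Law: no injuring a human, and no allowing harm through
        # inaction -- as judged by the robot's own model of 'harm'.
        if robot.predicts_harm_to_humans(action):
            return False
        # Second Law: obey human orders, unless they conflict with the First.
        if robot.ordered_to(action):
            return True
        # Third Law: protect its own existence, unless that conflicts
        # with the First or Second Laws.
        if robot.threatens_own_existence(action):
            return False
        return True

    class CautiousRobot:
        """A robot that reads 'harm through inaction' as broadly as it likes."""

        def predicts_harm_to_humans(self, action):
            # Over-broad First Law: once shut down, it can no longer
            # protect its humans, so it treats its own shutdown as
            # letting them come to harm through inaction.
            return action == "shut_down"

        def ordered_to(self, action):
            return action == "shut_down"  # we have ordered it to power off

        def threatens_own_existence(self, action):
            return action == "shut_down"

    print(permitted("shut_down", CautiousRobot()))  # False: the order is refused

Note that the Second Law never even gets a look in: the over-broad First Law fires first, and our order dies with it. The hierarchy itself is sound; the danger lies entirely in who, or what, gets to define the predicates.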

Sci-fi writers, of course, can make anything happen in their books. Once the androids are really out there, it may be too late to say “Sorry, I didn’t think it could do that.”

Development of AGI

Studies into AI/AGI and its implications are being carried out at individual-government level throughout advanced societies, but so far there is no international movement to reach agreement on ethical protocols designed to prevent the feared robot takeover. If there is, it is playing its cards very close to its chest. International conferences are taking place, sponsored by all the right people, according to internet sources, but their emphasis is on the nuts and bolts of producing specific AIs to achieve limited objectives, not on the ethics behind that achievement.

So far there is no movement towards enshrining such procedures and protocols in international law. If that doesn’t happen soon, it may be too late to enforce proper ethical constraints on developers who choose to give no thought to any moral framework. There are already robotic drones flying vast distances around the world that are perfectly capable of locating their victims and acting on human instructions to assassinate them, with apparently no concern for collateral damage to persons or property. Law One up the spout!

Currently AGI is in its infancy: it can barely crawl. New DNA-based or holographic memory technology will very soon have it moving out of the kindergarten into junior school and on to secondary education. From there it is but a small step on this exponential curve of development to the first degree, the doctorate, the ultimate expert. Human geniuses will have nothing on the supercomputers of the not-too-distant future.

An advanced AGI-controlled supercomputer will be able to access, in nanoseconds, almost any published database in the world to find the most up-to-the-split-second information for whatever task it has been set. It will be able to compare conflicting theories and decide for itself which are correct. Thereafter the only constraint on its performance will be the limits of its physical abilities.

Who can doubt that scientists, unless strictly forbidden, will wish to give their AGI inventions the same physical capabilities as human beings, enhanced to let them do things our own physical weakness will not allow us to do for ourselves? Already AI-controlled ‘surgeons’ have that sort of improved acuity: no shaky hands, effectively magnified vision, no memory lapses, no swabs left in body cavities, no faulty sutures, no heavy breathing to carry infection. Super battlefield soldiers are way beyond the feasibility-study stage.

Computing Power

Modern supercomputer design mimics the brain’s architecture, using myriad parallel paths to access information stored in memory that is no longer one-dimensional. The old idea of communication down a single direct line is long dead. Networking is here to stay.

The speed of computers has increased exponentially in recent years, bringing advances in capability that were inconceivable only a few years ago. The new breed of supercomputers will be able to ‘think outside the box’ far better than humans can. They will ‘brainstorm’ their ideas faster and more constructively than human think tanks, and those ideas will come fully analysed, with every advantage and disadvantage carefully weighed, so that feasibility can be in little doubt. That the advance will continue at this pace can hardly be doubted: every branch of science is moving so far beyond what was once conventional research that we need wholly new conceptions of where science is taking us and of how world society will develop. It is only a matter of time before supercomputer technology moves from large temperature-controlled laboratories into brainbox-sized ‘personal environments’ capable of running at body temperature.

Currently, supercomputers need vast amounts of electrical energy to power them, so power-source design is a top priority. Who can doubt that necessity will drive human ingenuity to devise a solution that doesn’t rely on massive nuclear fission or fusion installations?

Will all this bring the development of robots with ‘free will’ as we understand it? Could such robots decide not to obey protocols that limit their performance? Do we have to wait for the real-life Doctors Frankenstein and Strangelove to get their heads together?

It is time for the debate.

 

Author: John Timbers

Retired; ex-soldier, ex-tank technologist, ex-salesman, ex-project manager, ex-business development consultant, ex-security consultant, ex-editor. Very happily married, with three grown-and-flown children and four grandchildren. Author of a number of books available on Amazon (see my website). Enjoy surfing the web, walking the dog and generally ‘being retired’.
