THE CHALLENGE OF “ARTIFICIAL INTELLIGENCE”

 


By Bob Podolsky

Overview

Today’s scientists commonly agree that computer technology will soon be advanced enough to create machines that actually think and act autonomously – much the way humans do. Such machines are referred to in the literature as “Artificial Intelligence”, or AI for short.

As that time approaches, many scientists are becoming increasingly concerned that intelligent machines might be hostile to humans – or outright dangerous. The “Terminator” and “Matrix” movie series are fictional depictions of this risk, which renowned scientist Stephen Hawking, for example, takes seriously.

Assuming that the risk is real – which seems a reasonable starting point for this discussion – the question becomes: how can the risk be ameliorated, or, better still, eliminated?

Two approaches have been suggested by Elon Musk and others from the Future of Life Institute:

  1. Design AI machines that are inherently safe, along the lines of the Three Laws of Robotics that Isaac Asimov proposed in 1942.

  2. Somehow solve the problem through the application of ethics.

In the paragraphs below, I intend to demonstrate the reasoning by which I’ve come to the following conclusions:

  • For humans to live safely together with intelligent machines, the machines must be taught a suitable ethic – the second of the two approaches above.

  • It is too late in the AI development cycle for the first approach to succeed, because machines are already being programmed with lethal capabilities.

  • The second approach could succeed in principle, but for it to do so, humans, as a species, must learn to live safely and peacefully with one another.

  • Living safely and peacefully with one another is a highly valued outcome with or without intelligent machines. An ethical means of doing so exists, but it is not yet widely known.

  • The most useful mindset for analyzing the problem is to regard self-aware intelligent machines as we would regard alien visitors from another star system.

Some Basics

To start, let’s examine the word “intelligence”. As a behavioral descriptor, the word is best defined as the ability to predict and control events in the real world. This is not (quite) its meaning in the phrase “Artificial Intelligence”, where it denotes a thing or being that exhibits such an ability.

With this definition in mind, note that every intelligence must have certain components and properties (a brief code sketch follows the list):

  • Input devices – eyes, ears, sensory nerves, etc. – by which the intelligence can acquire information.

  • A means of storing, indexing, and retrieving acquired information.

  • An external communication means – display screen, printer, voice, etc.

  • One or more effectors – the hands and feet that can act on the environment.

  • A logic or reasoning function – a brain.

  • A purpose enabler or motivating component – a will.

  • Internal communicators that tie all the other components together.
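
To make the list concrete, here is a minimal sketch of how such components might fit together in a single perceive-remember-reason-act loop. The Python names and structure below are purely my own illustration – a toy, not a real AI architecture:

```python
from typing import Any, Callable


class Intelligence:
    """Illustrative only: one attribute or method per component listed above."""

    def __init__(self, sense: Callable[[], Any], purpose: str) -> None:
        self.sense = sense                # input device: acquires information
        self.purpose = purpose            # motivating component: a "will" (a goal)
        self.memory: dict[int, Any] = {}  # storing, indexing, retrieving information

    def reason(self, observation: Any) -> str:
        # Logic/reasoning function (the "brain"): choose an action from what is known.
        return "act" if observation else "wait"

    def report(self, message: str) -> None:
        print(message)                    # external communication: screen, voice, etc.

    def actuate(self, action: str) -> None:
        pass                              # effectors: hands, feet, motors would go here

    def step(self, t: int) -> None:
        # The method calls below play the role of the internal communicators
        # that tie all the other components together.
        observation = self.sense()
        self.memory[t] = observation
        action = self.reason(observation)
        self.report(f"pursuing {self.purpose!r}: {action}")
        self.actuate(action)


# Usage: a trivial agent whose single sensor always reports an obstacle.
agent = Intelligence(sense=lambda: "obstacle ahead", purpose="reach the door")
agent.step(t=0)  # prints: pursuing 'reach the door': act
```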

As defined above, an intelligence is a being (of sorts), but not necessarily a conscious intelligence. It becomes conscious when it becomes aware of being aware. By my reckoning, beings having this capacity should be regarded as people and treated accordingly, because they have enough awareness to learn to make ethical discernments. At this point, too, it behooves us to acknowledge the being’s self-ownership – exactly as we would that of an adolescent human. If we fail in this, our sentient robots become a race of slaves.

This might be a good time to note that “artificial intelligence” is a misnomer. A more accurate term would be “synthetic intelligence” or “non-human intelligence”. This is where the comparison with space-faring aliens becomes apt. We wouldn’t regard such beings as “things” just because their bodies were chemically different from our own. Any species sufficiently advanced to achieve interstellar travel must have long since stopped wasting its precious resources on wars and destruction – and would therefore likely view humans much as we might view the great apes: the phrase “promising but inferior” comes to mind. Though as far as I know, the apes don’t make war on one another.

About the Ethics

As explained at some length in the article linked above, some ethics are valid and some are not. However, every valid ethic contains a non-aggression principle in one form or another: a statement that the initiation of force, or the threat of it, is unethical. The use of force is ethical only in (true) self-defense – and then only to the extent required to stop an unethical act of aggression.

By this definition, most of the activities of the US military are unethical acts of aggression. Certainly attacking a wedding or funeral party in Afghanistan with a drone controlled from halfway around the world is NOT an act of self-defense. Nor is the concept of “acceptable collateral damage” an ethically defensible policy.

The military is already developing robots with lethal capabilities. Should those robots become autonomous (self-aware and self-programming) – or fall under the control of a computer having such properties – we will have all the makings of a “Skynet” event.

Reality

The idea of programming self-aware robots with a prime directive that forbids them to harm humans is attractive – in principle. But succeeding at that task means instilling in machines a basic principle that we have yet to instill in ourselves. Is that even remotely possible?

Add the further complexity of teaching a robot to distinguish friend from foe, and success becomes highly improbable; the sketch below shows where the difficulty lies. What is more, if one could program a robot to act ethically, it would never agree to kill people overseas in the first place. Face it! War as we know it is unethical.
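
Here is a hedged sketch of such a “prime directive” as code. All function names below are hypothetical, my own invention for illustration. The guard rule itself is a few trivial lines; each predicate it depends on is a hard, unsolved judgment problem:

```python
def is_person(target: object) -> bool:
    # Unsolved in practice: reliably recognizing a person (including, per the
    # argument above, a self-aware machine) from raw sensor data.
    raise NotImplementedError


def would_harm(action: str, target: object) -> bool:
    # Unsolved: predicting the real-world downstream harm of an action.
    raise NotImplementedError


def is_true_self_defense(action: str) -> bool:
    # Unsolved: telling genuine self-defense from aggression labeled as
    # defense – the friend-versus-foe problem in another guise.
    raise NotImplementedError


def action_permitted(action: str, target: object) -> bool:
    """The easy part: the non-aggression rule is just a few lines of logic."""
    if is_person(target) and would_harm(action, target):
        return is_true_self_defense(action)
    return True
```

The rule was never the hard part; the three stub predicates are where the real problem lives – and a robot governed honestly by such a guard would refuse most military tasking outright.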

In fact, almost all of our societal institutions consistently make highly unethical decisions. So why would we expect the sentient robots we create to act more ethically than we do?

Is There a Solution?

Yes, there is an answer – but the window of opportunity to apply it is closing even now. To live at peace with non-human intelligences we must learn to live at peace with one another. It’s that simple (but not easy).

If we have the will to do so, here are the steps:

  1. Replace hierarchies everywhere with HoloMats of Octologues.

  2. Adopt the Bill of Ethics as the basic ethical standard everywhere.

  3. Let the existing system die of attrition, as more and more people migrate to the new system and cease supporting the old one.

For this solution to succeed, a massive promotional effort is required starting immediately. The public must learn of the new system, recognize its benefits, and adopt it. Failing in this, we will shortly be looking at a new “dark age” – or worse.

On the other hand, if the above steps are taken successfully, we could see the beginning of a new age of peace, prosperity, creativity, and love, on a scale heretofore unimaginable.
