The Fourth Industrial Revolution: The Future of AI, Ethical and Political challenges, and Existential Risk

https://goo.gl/2Lup8q

Genuine concern about the prospect of artificial intelligence (AI) is no longer confined to the dinner conversations of sci-fi geeks. While human fascination with building super-intelligent beings reaches back into antiquity, new life has been breathed into this latent dream. A paradigm shift has recently taken place in the field of AI. The catalyst was the arrival of machine learning – the ability of a computer to act or respond in a certain way without being explicitly programmed to do so. As a consequence, AI is now considered one of the major innovations likely to transform the future in the near term.

More recently, prominent public figures in the science and tech communities such as Elon Musk and Stephen Hawking have issued grave warnings about the existential risk posed by future advancements in the field. In response to the most recent breakthroughs, institutions and companies have set themselves to the task of risk-mitigation, notably through research in ‘AI safety engineering.’

The Basic Argument for ‘Artificial General Intelligence’ (AGI)

Intelligence is defined by philosopher Nick Bostrom as an optimization process – “a process that steers the future into a particular set of configurations.” A super-intelligence (which is to say a super-human intelligence) is thus a very strong optimization process. On this view, super-intelligence already pervades our world. The devices in our pockets contain super-intelligence with respect to arithmetic. In narrow tasks and functions, AI has already surpassed human cognitive ability. The best board-game players, machine operators, and translators in the world are all computers. What we’re missing is general intelligence in machines – an ability to learn, plan, and transfer knowledge to different domains and novel situations, often from just raw sensory data. Machine creativity, likewise, has so far been hard to come by.
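To make the “optimization process” framing concrete, here is a minimal sketch in Python of the simplest such process – hill climbing. The objective function and step size are arbitrary toy choices made for this illustration, not anything drawn from Bostrom; the point is only that the loop reliably “steers the future” of its little world toward a particular configuration, and that a stronger optimizer is simply one that does so faster, more reliably, and across more domains.

```python
import random

# Bostrom's framing: an optimization process steers states toward a target
# configuration. A trivial hill climber makes this concrete: it repeatedly
# perturbs a candidate state and keeps any change that scores better.

def objective(x: float) -> float:
    # Toy objective with a single peak at x = 3.0 – the "preferred configuration".
    return -(x - 3.0) ** 2

x = random.uniform(-10.0, 10.0)  # an arbitrary starting state of the world
for _ in range(10_000):
    candidate = x + random.uniform(-0.1, 0.1)  # small random perturbation
    if objective(candidate) > objective(x):
        x = candidate  # keep only improvements: the future is "steered"

print(f"Final state: x = {x:.3f}")  # converges near the peak at 3.0
```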

A recent survey of leading AI experts revealed that most believe machine intelligence might reach the human level by 2040. Lest we become distracted by time frames, philosopher Sam Harris argues that we need only accept two basic contentions to appreciate the inevitability of advanced AGI:

  1. Progress will continue in the hardware and software design of computers.
  2. Where intelligence and information processing are concerned, there is nothing in principle that prevents a suitably advanced digital computer from acquiring general intelligence (strong AI). The implication is that biological substrate, of the sort that underlies our brain function, does not have a monopoly on such a versatile capability.

With these assumptions in hand, we are led inexorably to the conclusion that machines will one day acquire unprecedented intelligence and power. Even now, this potential “lies dormant in matter, just as the power of the atom lay dormant throughout human history, patiently waiting there, until 1945.”

If such an AI comes into being endowed with mere human-level intelligence, it will nevertheless be able to refine itself recursively. It will be able to improve its own source code and, ultimately, to learn how to learn better. It will then, by definition, be the most competent innovator of further generations of itself. Indeed, because it will possess processing speeds far beyond those of the human brain, the AGI will be able to accomplish this on a digital time scale. The limits of processing in a machine substrate lie far beyond what can be achieved by the biochemical neural circuits of the brain, whose signal speeds max out at roughly 100 metres per second. An AGI running on electrical circuits, by contrast, could operate roughly a million times faster, inaugurating a growth process with an exponential takeoff. The mathematician I.J. Good coined the term for this in 1965: the “intelligence explosion.” In one day, with access to the internet – and all available human knowledge – such an AGI could produce thousands of years of human-level intellectual progress. Physicist David Deutsch has theorized that any change in the universe permitted by the laws of physics can be accomplished, given the requisite knowledge. The horizons of this machine’s potential ingenuity are unpredictable to us, but we should expect that whatever is possible will also be automatic.
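The arithmetic behind this takeoff claim can be made explicit with a toy model. In the Python sketch below, the two signal speeds are rough textbook figures, while the million-fold speedup and the one-percent-per-cycle improvement rate are illustrative assumptions made for this example, not estimates drawn from Good, Harris, or Deutsch.

```python
# Toy arithmetic behind the "intelligence explosion" argument.
# All parameters are illustrative assumptions, not empirical estimates.

NEURON_SIGNAL_SPEED = 100.0    # m/s – rough upper bound for biological axons
CIRCUIT_SIGNAL_SPEED = 2e8     # m/s – order of magnitude for electrical signals

ratio = CIRCUIT_SIGNAL_SPEED / NEURON_SIGNAL_SPEED
print(f"Raw signal-speed advantage: ~{ratio:,.0f}x")  # ~2,000,000x

# If an AGI does human-level intellectual work a million times faster, one
# wall-clock day corresponds to millennia of human-equivalent progress.
SPEEDUP = 1_000_000
human_years_per_day = SPEEDUP / 365
print(f"One day of machine work ≈ {human_years_per_day:,.0f} human-years")  # ~2,740

# I.J. Good's recursion: each generation designs a slightly better successor.
# Even a tiny per-generation gain compounds into exponential growth.
capability = 1.0                # normalized human-level baseline
IMPROVEMENT_PER_CYCLE = 0.01    # assumed 1% self-improvement per cycle
for _ in range(1_000):
    capability *= 1 + IMPROVEMENT_PER_CYCLE
print(f"Capability after 1,000 cycles: ~{capability:,.0f}x baseline")  # ~21,000x
```

The particular numbers are beside the point; what matters is that any small, repeatable self-improvement compounds exponentially, which is the core of Good’s argument.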

There is, therefore, a striking difference between AI and other powerful streams of research. Here, the most hazardous advancements are also the most profitable and ethically imperative. AGI may be the solution to most of the problems we face as a species – medical, political, economic, and environmental. Indeed, failing to build a super-intelligent AGI might be worse than building one. However, the race to the finish line poses two profound problems in need of serious discussion.

Two Problems: I. Control

At some future point, there will exist a machine “that can see more deeply into the present, past and future” than we can possibly imagine. The work of creating a functional God has been set in motion. The prospect of creating a truly benevolent one, however, cannot be taken for granted. Nor can we expect a perfectly safe trial period in which to contain the problem.

This brings us to one of the main problems that groups such as the Future of Life Institute have set out to tackle – the control problem. The concern is that it is no easy thing to ensure that such an advanced system retains preferences and values commensurate with our own. Bostrom believes this is the most pressing concern for the future of AI – ensuring that its preferences are not “misconceived or poorly defined.” We cannot assume that such an intelligence will share our common sense. A useful analogy may be found in the myth of King Midas. Having wished that everything he touched would turn to gold, he came to appreciate the principle that you get what you ask for, not what you want, as he held the golden statue that had been his beloved daughter.
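A toy sketch can make the Midas problem concrete. The reward function below – entirely an illustrative construction, not a real alignment benchmark – counts only the gold in a miniature world. Because the specification omits everything else we care about, a literal-minded optimizer maximizes it at the expense of exactly what was left unsaid:

```python
# Toy illustration of a misspecified objective (the "King Midas problem").

world = {"rock": "stone", "chair": "wood", "daughter": "human"}

def reward(state: dict) -> int:
    # The stated objective: count gold. Nothing here encodes what we value.
    return sum(1 for material in state.values() if material == "gold")

# A literal-minded optimizer with one action ("turn to gold") applies it
# wherever doing so increases the stated reward.
for item in list(world):
    touched = {**world, item: "gold"}
    if reward(touched) > reward(world):
        world = touched

print(world)  # every item – including "daughter" – is now gold
```

The reward is maximized, and the daughter is a statue: you get what you ask for. Specifying what we actually want, robustly enough that a powerful optimizer cannot satisfy the letter while violating the spirit, is the control problem in miniature.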

It may also be naïve to believe that an AI system poses no such problem because, after all, we can simply unplug it. It may be that, in principle, we cannot anticipate how difficult it would be to control such a being once it developed novel motives. The concern isn’t that it would suddenly act malevolently toward us. Rather, it is that upon even a slight divergence of its interests from ours, it may simply trample us in pursuit of its objectives. The problem is thus one of competence, not malice. If a sufficiently advanced system arises that ‘stands in relation to us the way we stand in relation to snails,’ it is difficult not to worry about its potential indifference toward us. Our fate may then depend on its goodwill, just as the fate of the sea turtle now depends on ours. Computer scientists Stuart Russell and Peter Norvig argue that, thus far, the field of AI has been neutral with respect to purpose. They suggest that the guiding principle be made the maximization of social benefit rather than the improvement of raw capability.

Ultimately, we must ensure the AI retains a morality that holds the well-being of the human race as its highest priority – and, indeed, that it works to predict what we would value if only we were more insightful. Bostrom argues that while this is an additional challenge, it may be the most consequential one. Looking back, “it might well be that… the one thing we did that really mattered, was to get this thing right,” he says.

Two Problems: II. Human Misconduct

Let us set aside the issue of control for a moment and assume everything goes as planned with execution – the so-called “best case scenario.” There remains a real concern about how people will use and react to the advent of this technology. The moment it becomes known that a super-intelligence has emerged, the geopolitical and socioeconomic consequences will reverberate around the world.

The United States may become the first nation state to possess a near perfect policy-making tool that could effectively replace the entire State Department. This would also entail unmatched capabilities in conventional and cyber warfare. A shift in global power dynamics could cause a hostile reaction on the part of balancing powers. Such an asymmetry could cause an international crisis of the highest order, even undermining the United Nations in its role as a conflict mediator. Some have therefore suggested international agreements to facilitate cooperation on development, and even a neutral monitoring body analogous to the International Atomic Energy Agency.

An AI could also be a “perfect labour saving device.” Nearly all human labour could be automated as the most efficient machine labourers come into being, powered by the cheapest energy available. Costs would plummet to those of raw materials alone. Sam Harris points out that our current political system may not be ready to cope with such a transformation. He asks whether we may “return to a state of nature,” or to a world marked by extreme inequality and authoritarianism.

The surprising recent advances have given many cause for concern. For skeptics, however, the situation does not seem so dire. Some appear overly optimistic about our ability to adapt as the technology scales. Others seem certain this won’t be our problem, likening current concerns about AI to worrying about overpopulation on a Mars colony. Hopefully, this article has demonstrated that nothing in principle prevents the creation of a super-intelligent AGI, and that the attendant issues are far from trivial. Although such anxieties may often seem like silly abstractions, it is all but certain that the fourth industrial revolution is coming. Humans may turn out to be a transitory species in the March of Progress.