Super Intelligence: Rise of the Machines

 
Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
— I. J. Good
It seems probable that once the machine thinking method has started, it would not take long to outstrip our feeble powers… They would be able to converse with each other to sharpen their wits. At some stage therefore, we should have to expect the machines to take control.
— Alan Turing

Overview

Assuming that current trends of progress in artificial intelligence and robotics continue, it seems practically inevitable that within a few decades we will be capable of creating machines whose intelligence surpasses that of a human. But what would happen if we created such a superintelligent AI? One possibility, explored in science fiction from Mary Shelley’s Frankenstein to Philip K. Dick’s Do Androids Dream of Electric Sheep?, is that our man-made sentient beings would violently rebel against humanity. Why do science fiction writers think that this would be the outcome? Why do people in general (including serious thinkers on AI such as the neuroscientist Sam Harris and the philosopher Nick Bostrom) take this prospect seriously? One line of reasoning has to do with humanity’s violent history. Much of human history can be characterized by one society enslaving, subordinating, and pillaging another, and it has been just as common for those enslaved societies to violently rebel against their oppressors.

Many people today fear that if we create artificially intelligent beings whose neural networks largely mimic those of humans, and who exhibit general intelligence surpassing our own, then we have a justified reason to at least be cautious: such machines might exhibit propensities toward violence and war akin to those seen throughout humanity’s all too dark history. Put simply, because progress in AI has to a large extent come from emulating various aspects of the human brain and mind, it is feasible that an AI with human-level general intelligence would, to some extent, exhibit behaviors and propensities similar to our own.

At least, that is one line of argument. There have, of course, been many counter-arguments raised against this position, and I believe they indicate that such machines would more likely exhibit very alien behavior unlike anything we have ever seen before. These counter-arguments deflate the hypothesis that such robots would behave like humans, and hence we must not jump the gun and assume that they would share the violent, abhorrent behavior characteristic of many human societies. The first counter-argument is that much of abhorrent human behavior is “evolutionary baggage.” In the words of Carl Sagan, “In our tenure on this planet we have accumulated dangerous evolutionary baggage: aggression, ritual, submission to leaders and hostility towards outsiders, all of which puts our survival in some doubt.”

The two robots above help clean the floor and give directions to passengers at the Seoul-Incheon International Airport.

Humans and our behavior, good or bad, are the result of billions of years of evolution by natural selection. Since any intelligent aliens in the galaxy or beyond are presumably also products of evolution, and would arguably arise from conditions and processes similar to those present on Earth, we’d expect at least some intelligent aliens to be highly human-like. Robots, on the other hand, are the product of intelligent design (indeed, human design). Since they are our creations, we could design them to exhibit only a range of behaviors that are, from a moral and ethical point of view, conducive to the benefit and prosperity of humanity and of our biosphere, on Earth and off it.

“Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, says Harris, but we haven't yet grappled with the problems associated with creating something that may treat us the way we treat ants.”

The second reason why people fear a machine rebellion stands on firmer scientific ground than the first. It goes something like this. Given the clear economic potential of AI, researchers will continue to work on it. That is an axiom, but it seems a reasonable one. A second axiom, which also seems reasonable, is that progress in AI will continue for at least a few more decades. It is that second axiom that people like Ray Kurzweil, Nick Bostrom, and Elon Musk worry about. The reason is that, as Kurzweil calculated, if current trends of progress in AI continue then by the year 2045 artificial intelligence will not only have surpassed human intelligence (something which might occur as early as the 2030s or even the 2020s), but we will have reached the so-called singularity.\(^{[1]}\) (This isn’t the same kind of singularity that we discuss in cosmology, but a useful analogy can be made between a technological singularity, which is what we’re actually talking about, and a cosmological one.) Such machines would be capable of self-replication. Indeed, not only would they be able to make copies of themselves, they would be able to make improved copies of themselves. Those improved copies could in turn manufacture still better copies, and so on. The intelligence of these machines would continue to increase at a rate we cannot track, akin to how any particle that travels beyond a black hole’s event horizon can no longer be detected or measured.


Neurons Vs. Transistors

After the singularity occurs, the behavior of such robots would become completely unpredictable.

The human mind is an emergent property of the brain, and it makes sense of the world using neural networks. These networks are built from neurons firing electrical signals along axons at roughly one hundred meters per second. That transmission speed is a physical limit of the brain: its substrate, what one could call the hardware (or “wetware”), simply does not permit signals to travel any faster, and so the brain processes information comparatively slowly. The information processing capabilities (and, hence, the ability to understand abstract and physical phenomena) of an AI or an ASI, on the other hand, rest on electrical signals transmitted inside a computer at speeds approaching the speed of light, which is about 300,000,000 m/s. Since computers can transmit signals millions of times faster than human brains, they can process information far more rapidly than humans and can grasp physical or abstract phenomena much more quickly. Today, we have succeeded in creating ASI, but only artificial narrow superintelligence, meaning AI that is superintelligent only at highly specific tasks: playing checkers, chess, Go, and other games, for example. Because such an AI processes information so much more quickly, it can learn the moves and strategies of a game orders of magnitude faster than any human could, eventually absorbing every strategy that human beings have ever developed for those games. After doing so, it is capable of beating the human world champions.
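As a rough sanity check on the numbers above, here is a minimal back-of-the-envelope calculation. The constants are illustrative: real signal speeds in copper or fiber sit somewhat below the speed of light, but the conclusion is the same.

```python
# Toy comparison of neural vs. electrical signal propagation speeds.
NEURON_SIGNAL_SPEED = 100        # m/s, rough figure for fast axons
ELECTRICAL_SIGNAL_SPEED = 3.0e8  # m/s, speed of light, an upper bound for circuits

ratio = ELECTRICAL_SIGNAL_SPEED / NEURON_SIGNAL_SPEED
print(f"Signals in a computer can propagate roughly {ratio:,.0f}x faster than neural signals.")
# -> roughly 3,000,000x, which is where the "millions of times faster"
#    claim in the text comes from.
```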


AlphaGo and AlphaGo Zero

Machines regularly outperform world champions in checkers and grandmasters in chess. This was initially accomplished by programming everything into the machine, meaning the machine didn’t learn anything new; this has sometimes been referred to as the top-down approach. A second, more powerful method is machine learning, in which the machine learns rules and strategies through trial and error. Recently, an AI with machine learning capability called AlphaGo beat Lee Sedol, one of the strongest Go players in history, in 4 out of 5 games. The ultrafast information processing capabilities of AlphaGo allowed it to learn strategies developed by humans over thousands of years in a mere six months.
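To make “learning through trial and error” concrete, here is a minimal sketch of reinforcement learning on a toy problem. This is not AlphaGo’s actual algorithm (which combines deep neural networks with Monte Carlo tree search); it only shows the bottom-up principle: the program starts with no knowledge of which moves are good and discovers them purely from the rewards it observes. All names and numbers are invented for illustration.

```python
import random

# Toy "game": three possible opening moves with hidden win probabilities.
# The learner is never told these probabilities; it must discover them
# by playing and observing wins and losses (trial and error).
TRUE_WIN_PROB = {"move_a": 0.2, "move_b": 0.5, "move_c": 0.8}

value = {m: 0.0 for m in TRUE_WIN_PROB}   # estimated value of each move
counts = {m: 0 for m in TRUE_WIN_PROB}
EPSILON = 0.1                             # fraction of games spent exploring

for game in range(10_000):
    # Mostly exploit the best-looking move, occasionally explore a random one.
    if random.random() < EPSILON:
        move = random.choice(list(TRUE_WIN_PROB))
    else:
        move = max(value, key=value.get)

    reward = 1.0 if random.random() < TRUE_WIN_PROB[move] else 0.0

    # Incrementally update the running average of observed rewards.
    counts[move] += 1
    value[move] += (reward - value[move]) / counts[move]

print({m: round(v, 2) for m, v in value.items()})
# After enough games the estimates approach the hidden win probabilities,
# and the learner strongly prefers move_c without ever being told the rules
# of what makes a move good.
```

AlphaGo’s self-play training follows the same trial-and-error logic, only with a vastly larger state space and a neural network in place of this lookup table.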


Subsequently, DeepMind created an AI called AlphaGo Zero, which beat AlphaGo one hundred games to zero. This is a vastly more impressive achievement than an AI defeating world champions at chess or checkers, because Go is far more complex and requires flexible thinking, problem solving, and visual intuition. There are about \(10^{170}\) legal board positions in Go, which vastly outnumbers the total number of atoms in the observable universe, only around \(10^{80}\) to \(10^{82}\). Not only did AlphaGo Zero learn all the strategies developed by humans over the past few millennia, it also created new strategies and solutions that humans never came up with. At this point, AlphaGo Zero knows every strategy ever developed by humans, plus strategies never developed by humans, and on top of that it learns new strategies millions of times faster than we do. Like a snail trying to catch a spaceship, humans will never be able to catch up to AlphaGo Zero. There is a domain of knowledge which AlphaGo Zero has access to and which is physically impossible for humans to learn, because we do not have enough time and we do not learn fast enough. This is part of what we might call an “intelligence explosion”: a domain of knowledge to which we simply do not have access.
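Because numbers like \(10^{170}\) are hard to intuit, a quick logarithmic comparison using the figures quoted above shows how large the gap really is.

```python
import math

legal_go_positions = 10 ** 170   # approximate number of legal Go positions
atoms_in_universe = 10 ** 82     # upper end of the usual estimate

# Work in logarithms: dividing the two numbers means subtracting exponents.
gap = math.log10(legal_go_positions) - math.log10(atoms_in_universe)
print(f"Go's state space exceeds the atom count by a factor of about 10^{gap:.0f}.")
# -> about 10^88: even if every atom in the universe stored one position,
#    you could not come close to enumerating the game.
```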


Note that this fact is not incompatible with the argument put forward by the physicist David Deutsch that anything which can be understood can, in principle, be understood by humans. We could, in principle, understand all of the strategies used by AlphaGo Zero, if we had enough time and could learn fast enough. But since we do not and cannot, the knowledge of all the strategies used by AlphaGo Zero is in practice inaccessible to us. We are simply too slow.

AlphaGo Zero is an example of artificial narrow intelligence that is superintelligent. Despite being superintelligent, it has superhuman ability at only one particular task; at every other task besides playing Go, humans are better than AlphaGo Zero. A real game changer for the state of humanity would be the development of an ASI which is also an artificial general intelligence (or AGI for short). An AGI is an AI with the ability to solve a wide range of problems, similar to what humans can do. Despite much popular misconception, we actually developed AGIs in this weak sense a very long time ago: general-purpose programs capable in principle of solving any problem and doing any task; the trouble is that they are so slow and inefficient at it that it would take them millions of years to get anything done. But given the current exponential rate of progress in AI (indeed, given almost any sustained rate of progress) and a few modest axioms, it seems inevitable that we will eventually develop AGI that is superintelligent. To convince ourselves of this, we need only grant the following assumptions:

  1. We will continue to make progress in AI.

  2. There is nothing special about the wetware inside our craniums. A computer substrate could also be used to simulate consciousness and intelligence.

  3. Human intelligence is not the maximum possible intelligence that can be achieved.


The Evolution of Human Knowledge

For millennia there has been growth in scientific knowledge, technological prowess, and GDP (a rough measure of total economic output). Yet the human brain has been hardwired, through evolution by natural selection, to think linearly. Thinking in terms of incremental changes over time was evidently what was best for our survival. For example, if a lion or hyena was at some position and moving toward us at some speed, we had a built-in intuition, hardwired into the wetware inside our heads, of where that beast would be one second in the future. Indeed, the physicist Leonard Susskind, reflecting on this fact, concluded that Darwinian biological evolution must have built in some rudimentary understanding and intuition of classical mechanics. And according to the physicist David Deutsch, anything that can be understood about the cosmos (classical mechanics, for instance) eventually will be understood, because the process of learning and understanding concepts within this domain (think of the aggregate of all possible things we can learn) depends only on information processing, some kind of Turing machine (such as the human brain), and enough time.\(^{[2],[3]}\) Given both of these observations, it is little surprise that humans eventually arrived at an essentially complete understanding of the laws of classical mechanics.
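The kind of built-in physics being described here is nothing more than linear extrapolation, which is trivial to write down explicitly. The numbers below are of course made up for illustration.

```python
# Linear prediction: constant velocity, position a short time into the future.
# This is the intuition evolution is claimed to have hard-wired into us.
lion_position = 30.0   # metres away (illustrative)
lion_velocity = -10.0  # metres per second, moving toward us (illustrative)
dt = 1.0               # predict one second ahead

predicted_position = lion_position + lion_velocity * dt
print(f"In {dt:.0f} s the lion will be about {predicted_position:.0f} m away.")
# -> about 20 m: the same x(t) = x0 + v*t reasoning that classical
#    mechanics later made exact.
```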

At any rate, our ancestors’ ability to predict the trajectories of animals (which were usually also their prey), changes in climate and season, and the motions of the “wanderers” (an early name for the planets visible to the naked eye) are all examples of tracking linear change. The rates at which these changes occurred were more or less constant and very slow (at least relative to the rates we’ll be discussing in this article).

For most of human history, advances in science and technology and growth in the economy seemed very gradual, and the rate at which those advances were being made seemed to increase only modestly. Consequently, many people in the past, and even in the present, continue to think about the future linearly: they assume that future advances will arrive at the rate at which science and technology are changing today. To give a hypothetical example, imagine that someone from the year 1500 was suddenly dropped into the civilization of 1750. They would be astonished at the progress humanity had made; in particular, they would be very impressed that we had accurately deduced the laws of classical mechanics. But imagine that same person jumping from 1750 to 2000. They might suffer a heart attack upon seeing cars, airplanes, phones, computers, and the many other gadgets of that time. They would clearly be far more awe-struck by the change over the 250-year interval from 1750 to 2000 than by the change over the 250 years from 1500 to 1750. Not only is progress being made in science and technology, but the rate at which this progress is being made is accelerating. The futurist and Google engineering director Ray Kurzweil refers to this as the law of accelerating returns.
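The difference between the two mental models is easy to show numerically. The sketch below compares a linear projection against a compounding one for an abstract “capability” index; the growth rates and the horizon are arbitrary illustrative choices, not Kurzweil’s actual figures.

```python
# Compare linear extrapolation against compounding (exponential) growth
# for an abstract capability index that starts at 1.0.
ANNUAL_INCREMENT = 0.5   # linear model: +0.5 units per year (illustrative)
ANNUAL_GROWTH = 1.5      # exponential model: +50% per year (illustrative)

linear, exponential = 1.0, 1.0
for year in range(1, 31):
    linear += ANNUAL_INCREMENT
    exponential *= ANNUAL_GROWTH
    if year % 10 == 0:
        print(f"year {year:2d}: linear = {linear:5.1f}, exponential = {exponential:12,.1f}")

# After 30 years the linear forecast reaches 16, while the compounding forecast
# is in the hundreds of thousands; that is the scale of error made by someone
# who projects the future at today's rate of change.
```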


The Control Problem


Two major problems which we’ll need to solve before developing AGI with human-level intelligence (let alone superhuman intelligence) are the control problem and the governance problem. Once an AGI achieves human-level intelligence, it will likely blow right past it and reach levels of intelligence orders of magnitude beyond our own. At the most fundamental level, what would allow the AGI to improve its own intelligence so rapidly is its superior hardware, which sends signals at nearly the speed of light, and, even more importantly, the software (the learning algorithms) it runs. These two characteristics are the fundamental reason why, once an AGI reaches human-level intelligence, its intelligence will rapidly “blow up” (in a sense closely analogous to how mathematicians use the term for functions that diverge).
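The mathematical sense of “blow up” can be made precise with a deliberately crude toy model. Suppose, purely for illustration, that an AGI’s rate of self-improvement is proportional to the square of its current intelligence \(I\) (smarter systems improve themselves disproportionately faster). Then

\[
\frac{dI}{dt} = k I^2, \qquad I(0) = I_0
\quad\Longrightarrow\quad
I(t) = \frac{I_0}{1 - k I_0 t},
\]

which diverges at the finite time \(t^* = 1/(k I_0)\): the solution does not merely grow quickly, it becomes infinite in finite time, which is exactly what mathematicians mean by blow-up. Nothing guarantees that real self-improvement obeys this particular equation; the toy model is only meant to illustrate the term.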

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
— I. J. Good (1965)

Once an AGI achieved superhuman levels of intelligence, it would become a far better programmer and engineer than the whole of humanity. At that point, it would be able to rewrite improved versions of its own software and to manufacture (or 3D print) improved versions of its own hardware; think quantum computers instead of conventional computers as just one example of a hardware upgrade. That improved AGI would be an even better programmer and engineer, capable of making better software and hardware still, and so on. This process of self-replication and self-improvement would continue for many generations, each successive generation of AGI an improved version of its predecessors. This process is known as the technological singularity (or simply the singularity), a term popularized by Ray Kurzweil. The intelligence gap between such an AGI and humans could end up being perhaps a million times greater than the gap between humans and ants.



In principle, such an AGI would be able to outperform humans at all tasks, including ones we cannot even fathom. As Elon Musk has said, once a superhuman AGI is built, we certainly won’t be able to “contain it.” If that AGI wanted to annihilate the entire human race, it could do so with ease. I’ve come across counter-arguments, like one proposed by Neil deGrasse Tyson, that amount to simply “pulling the plug” and turning the thing off. This rebuttal is far too naïve and is best deflated by analogy. Suppose the chimpanzees noticed that another primate lineage had evolved into Homo sapiens, that our intelligence was rapidly surpassing their own, and decided in the year 2018 that they wanted to “shut humanity off.” Pound for pound, chimps are much faster and considerably stronger than an adult human male; in one-on-one unarmed combat, a chimp would tear any human to shreds. But what makes humanity the dominant species on this planet, and the current reigning champion of destruction, isn’t that we are stronger or faster, or that we are armed with the sharpest and deadliest claws and teeth; the single attribute that sets us apart from all other lifeforms on this planet is our intelligence.

Clearly, the entire species of chimps, and indeed every other lifeform on this planet, would stand no chance against any of the world’s major human armies. Once humans and our superior intelligence evolved, we began using that intelligence to make tools and develop technology, and after enough time had passed the gap between humans and chimps in intelligence, science, and technology became far too vast for any army of chimps or any other creature to threaten us. The same would be true of the relationship between an AGI and humans. If the AGI wanted to, it could 3D print billions of insect-sized nanobots and wipe out all of humanity in a single day. It could do that, but being as intelligent as it would be, I’d imagine it would devise a far more efficient method of removing the human race from the face of the globe. Few AI experts would disagree that a superintelligent AGI could squash the human species as easily as we could squash a bug (and that analogy may be an underestimate by a factor of a million or more).

“The possibility of Artificial Intelligence turning on humanity has been a concern for as long as we've had computers. Today we will look at some of those fears and see which ones might be valid and which might not be cause for alarm.” This video was produced by Isaac Arthur.

Other counter-arguments assert that we could place the computer running the AGI in a “black box.” One example would be to cut off the AGI’s access to the internet, put it into a simulation (which is to say, a totally fake and artificial environment), and see how it behaves. Many physicists today think that we ourselves might live in a simulation, a possibility that has been argued on both physical and philosophical grounds. As an example of the latter, the Oxford philosopher and AI researcher Nick Bostrom wrote a paper arguing, on probabilistic grounds, that we likely live in a simulation. (I have attached a link to that paper in the references below.) Experimental physicists have even devised experiments that could test whether or not we live in a simulation. And all of this understanding was acquired using nothing more than the scientific method (the software) and human brains (the hardware). It is a fairly common belief that the scientific method was first devised around the turn of the seventeenth century by the physicist Galileo Galilei. According to many historians of science this is a bit of an oversimplification, even though Galileo was the first to systematically combine rationalism and empiricism to devise theories and laws about how certain aspects of the universe work.

The scientific method is essentially an umbrella term: a big tool kit containing philosophy, logic, reason, experimentation, modeling, peer review, and so on, which scientists (Galileo as well as modern ones) use to figure out how the universe works. The various tools in the box emerged in bits and pieces over the millennia. The first contributions to this tool kit (which is to say, the scientific method) were made by Aristotle over two thousand years ago. We are refining the method to this day, but most historians of science would say that by the time Galileo came around (not Copernicus or Kepler, as many laypeople would guess) the scientific method, as a tool kit for making sense of the world, was largely complete. From the time of Aristotle to our present epoch, using nothing but thinking (which, fundamentally, is just information processing), experimentation, and human brains (the hardware), we came to the realization that we may well live in a simulation, we developed methods that could test whether we do, and the technology for performing those tests isn’t even that far off.


AI Scientists


Obviously, a superintelligent AGI has a kind of brain (albeit one made of silicon transistors and wires, or more likely some improved successor to modern computers) and is much better at thinking than we are. And, as the physicist Max Tegmark and his colleague have shown, even today’s AI systems are capable of doing experiments and deriving laws of physics from phenomena occurring within a computer-generated simulation.\(^{[4]}\) Perhaps this isn’t too shocking, since we ourselves may be living in a simulation and yet have derived many of the laws of physics governing our universe. But here is the critical point: if the computers of today ran a superhuman AGI, they could process information roughly one million times faster than the human brain. Consequently, as the neuroscientist Sam Harris has noted on the Joe Rogan podcast (and these numbers also appear in the literature), such an AGI would be capable of making 20,000 years of human-level intellectual progress in about one week. In less than a week, that AGI could recapitulate the totality of human intellectual achievement from Aristotle to the present day, and one of those achievements would be the ability to test whether it lived in a simulation. So if we put an AGI into a simulated, fake world, it would quickly figure out that it was living inside a simulation. The notion that we could simply box an AGI in a simulation and remain safe from it is therefore very naïve.
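That “20,000 years in a week” figure is essentially just the millionfold speed ratio expressed in calendar time, which is easy to verify:

```python
SPEEDUP = 1_000_000      # claimed processing-speed advantage over a human brain
WEEKS_PER_YEAR = 52.18   # average, including leap years

# One subjective week for the AGI corresponds to SPEEDUP weeks of human-pace work.
equivalent_years = SPEEDUP / WEEKS_PER_YEAR
print(f"One week at a {SPEEDUP:,}x speedup ~ {equivalent_years:,.0f} years of human-pace progress.")
# -> roughly 19,000 to 20,000 years, matching the figure quoted by Harris.
```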


An example environment fed to the AI physicist. Here, the field of view is divided into four quadrants, each of which is governed by a different physical effect, such as gravity or an electromagnetic field. The dots and lines represent the ball’s trajectory through the environment. Based on how a ball moves through the environment, the AI must use the strategies it was given to describe the physical laws that are governing the ball’s motion. Image: Tegmark and Wu/arXiv


The Governance Problem


Solving the control problem seems like a pretty hopeless effort, because an AGI with superhuman intelligence would simply be too intelligent to confine or to prevent from doing something it wanted to do (even something malevolent). Our only hope, then, would be to solve the governance problem. Unfortunately, as I’ll argue here, because of an AGI’s intelligence and because of chaotic phenomena, this also seems hopeless. The governance problem asks whether we could design the AGI in such a way that its interests are aligned with our own. When software engineers program an AI, they can give it extremely specific and well-defined goals. If we give an AI the task of beating someone at Go, recognizing faces, or driving a car, we can be confident that the AI will do everything it can to accomplish that task; and if it fails, the reason is usually that we weren’t specific enough in the instructions we gave it. So the goals (or objective functions) of an AI are something we can be confident the AI will do whatever it takes to accomplish. For example, if we gave one AI access to the internet (which is to say, to essentially all of human knowledge) and assigned it the task of maximizing human well-being, and gave another AI the task of producing as many paper clips as possible, we could be highly certain that each AI would indeed do whatever it takes to accomplish its goal.

The real issue is that the intermediate steps the AI takes to accomplish its goal are highly unpredictable, and when the AI becomes superintelligent, the behavior it exhibits in pursuit of its goal becomes completely unpredictable. For example, we had a high degree of certainty that AlphaGo Zero would do exactly what it was assigned to do, namely beat people at Go. The strategies it used to accomplish that goal, however, were impossible to predict. Think about it: if we could predict what strategies and moves it would make, we could beat it at Go. We cannot, and it is a pretty good rule of thumb that whenever something is smarter than you, you won’t be able to predict its behavior. This failure of prediction isn’t a big deal if you’re merely asking the thing to win at Go, but if you gave a superintelligent AI the task of maximizing people’s pleasure, or of maximizing the number of paper clips, it could lead to disastrous results. The AI might decide that the best way to maximize everyone’s pleasure would be to stick everyone in vats and place electrodes on their heads in order to flood our brains with dopamine. In that scenario the AI would have done exactly what you told it to do (maximize everyone’s pleasure); the problem is that the intermediate procedure it took to accomplish the task would be both totally unexpected and disastrous. Similarly, if you gave a superintelligent AI the task of mass-producing as many paper clips as possible, it might decide that the best way to do so would be to disassemble the planets and stars of the Milky Way and of the galaxies beyond, and to use the materials to make a vast number of paper clips. You would have gotten exactly what you asked for; again, the problem is that the behavior the AI exhibited to achieve its goal would be something you didn’t anticipate, and an absolute disaster.
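This failure mode (the AI optimizes exactly what you wrote down rather than what you meant) can be shown with a deliberately tiny toy example. The “plans” and scores below are invented for illustration; the point is only that a perfect optimizer of a proxy objective can pick an outcome the designer never intended.

```python
# Toy illustration of objective misspecification: you get what you measure.
# Each candidate plan has a proxy score (what we told the AI to maximize)
# and a true score (what we actually wanted), which the optimizer never sees.
plans = {
    "fund schools and hospitals":      {"proxy_pleasure": 6.0, "true_wellbeing": 8.0},
    "improve food and housing":        {"proxy_pleasure": 7.0, "true_wellbeing": 9.0},
    "wire everyone to dopamine drips": {"proxy_pleasure": 10.0, "true_wellbeing": 1.0},
}

# A perfectly obedient optimizer maximizes the objective it was given...
chosen = max(plans, key=lambda p: plans[p]["proxy_pleasure"])
print("Optimizer chooses:", chosen)
print("True wellbeing of that choice:", plans[chosen]["true_wellbeing"])
# ...and happily selects the dopamine-drip plan: the proxy is maximized
# while the thing we actually cared about is nearly destroyed.
```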

Now, one might argue that you could simply turn the thing off. If the AI merely had narrow superintelligence, this would indeed be possible: if you were getting whooped by AlphaGo Zero in a game of Go, you could just switch it off before it beat you, and the same holds for narrow superintelligent systems assigned to maximize human pleasure or the number of paper clips. But if you gave a superintelligent AGI (as opposed to an ANI) the task of maximizing the number of paper clips, you would be in serious trouble. That AGI would do everything within its power to produce as many paper clips as possible, including preventing you from turning it off. Not only would you be unable to deactivate the AGI, given its ability to outsmart and outwit you at any conceivable task, but you yourself would likely end up as raw material for paper clips.

With a super intelligent narrow AI, you have a lot of perks:

1.) You can be highly certain that it will do whatever it takes to achieve its assigned tasks.

2.) If it exhibits abhorrent behavior as a means of trying to achieve its tasks, you can simply deactivate it.

3.) The possible tasks you could give it are limited only by your imagination; the list of potential revolutionary applications is endless.

As you can see from the list above, a SIANI has some very attractive perks. Such programs would be capable of answering nearly any question you posed to them. For the sake of brevity, though, I’ll save the applications of SIANI for the future articles, Robots and Their Uses and Post Scarcity Economics.

Unlike a SIANI, a SIAGI is far more dangerous. Not only would the behavior a SIAGI exhibits in pursuit of its goals be impossible to predict, but even its goals and intentions would be impossible to predict. A SIAGI would essentially be a sentient being (and a new life form) capable of thinking and devising goals all by itself, and we cannot be certain what the goals of a being multiple orders of magnitude more intelligent than us would be. I happen to fall into the same camp as Elon Musk and Sam Harris: SIAGI is extremely dangerous and could lead to the demise of our species. I do not believe that we should ever create a SIAGI. My reason is simple: the behavior of a SIAGI is impossible to predict; therefore it is impossible to ensure the safety of the biosphere and the human species; and because of that impossibility, a SIAGI should not be created.

A super-intelligent artificial narrow intelligence (SIANI) is also extremely dangerous in the wrong hands (far more dangerous than nukes or any other weapon I can conceive of), but at least it sidesteps the control problem. You can assign such a system (or several of them) a task, and they will do whatever they can to achieve it; and if they adopt an undesirable strategy in pursuit of that goal, you can turn them off. An AI could only outsmart you and prevent you from deactivating it if it had some form of general intelligence. The applications of SIANI technology are practically endless but, as discussed previously, in the wrong hands it would have disastrous consequences. If a terrorist used a SIANI capable of mass-producing deadly weapons in the most efficient manner possible, that could easily lead to the extinction of our species; I would go so far as to say that if we ever allowed any abhorrent person or group to acquire such technology, it would very likely result in the extinction of the human species. One might argue, then, that we should not develop SIANI at all. The problem with this argument is that, barring a major war or environmental catastrophe, we will inevitably end up with AI systems capable of outsmarting us in every area of human endeavour. The argument for why this outcome is inevitable has already been covered, so I won’t repeat it here. The solution isn’t to prevent the development of SIANI that is competent at all tasks, but rather to eliminate abhorrent human behavior, something which is in fact possible and which we’ll discuss in the future article, Post Scarcity Economics.


This article is licensed under a CC BY-NC-SA 4.0 license.

References

1. Reedy, Christianna. “Kurzweil Claims That the Singularity Will Happen by 2045.” Futurism, 5 Oct. 2017, futurism.com/kurzweil-claims-that-the-singularity-will-happen-by-2045.

2. Deutsch, David. The Beginning of Infinity: Explanations That Transform the World. Penguin Books, 2012.

3. Harris, Sam. YouTube video, 15 Feb. 2019, www.youtube.com/watch?v=2dNxxmpKrfQ.

4. Oberhaus, Daniel. “Researchers Created an 'AI Physicist' That Can Derive the Laws of Physics in Imaginary Universes.” Motherboard, VICE, 1 Nov. 2018, motherboard.vice.com/en_us/article/evwj9p/researchers-created-an-ai-physicist-that-can-derive-the-laws-of-physics-in-imaginary-universes.

5. Wei, An, and Zhang Baofeng. “How Intelligent Will AI Get?” Huawei, 28 July 2016, www.huawei.com/en/about-huawei/publications/winwin-magazine/ai/how-intelligent-will-ai-get.