eBook: Download Superintelligence: Paths, Dangers, Strategies by Nick Bostrom ePub (TXT, Kindle, PDF) + Audio Version


  • File Size: 2905 KB
  • Print Length: 353 pages
  • Publisher: OUP Oxford; 1st edition (July 3, 2014)
  • Publication Date: July 3, 2014
  • Language: English

Download (eBook)


Prof. Bostrom has written a book that I believe will become a classic within that subarea of Artificial Intelligence (AI) concerned with the existential dangers that could threaten humankind as a result of the development of artificial forms of intelligence.

What fascinated me is that Bostrom has approached the existential danger of AI from a perspective that, even though I am an AI professor, I had never really analyzed in any detail.

When I was a graduate student in the early 1980s, studying for my PhD in AI, I came across comments made in the 1960s (by AI leaders such as Marvin Minsky and John McCarthy) in which they mused that, if an artificially intelligent entity could improve its own design, then that improved version could generate an even better design, and so on, resulting in a kind of "chain-reaction explosion" of ever-increasing intelligence, until this entity would have achieved "superintelligence". It is this chain reaction that Bostrom focuses on.
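
As a rough illustration of this chain-reaction intuition, consider the following toy model (my own sketch, not anything from the book; the parameter gain_per_unit is a made-up knob): each generation finds design improvements in proportion to its current capability, so beyond some critical gain the trajectory runs away on itself.

def takeoff(initial_capability: float, gain_per_unit: float, generations: int) -> list[float]:
    """Toy model of the intelligence-explosion idea (purely illustrative).

    gain_per_unit is a hypothetical knob: the fractional design
    improvement that one unit of capability can discover per generation.
    """
    capability = initial_capability
    trajectory = [capability]
    for _ in range(generations):
        # The better the current design, the bigger the improvement it finds.
        capability *= 1.0 + gain_per_unit * capability
        trajectory.append(capability)
    return trajectory

print(takeoff(1.0, 0.05, 10))  # small gain: gradual, merely accelerating growth
print(takeoff(1.0, 0.5, 10))   # large gain: super-exponential "explosion"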

He identifies three main paths to superintelligence:

1. The AI path -- In this path, all current (and future) AI technologies, such as machine learning, Bayesian networks, artificial neural networks, evolutionary programming, etc., are applied to bring about a superintelligence.

2. The Whole Brain Emulation path -- Imagine that you are near death. You agree to have your brain frozen and then cut into countless thin slices. Banks of computer-controlled lasers are then used to recover your connectome (i.e., how each neuron is connected to other neurons, together with the microscopic structure of each neuron's synapses). This data structure (of neural connectivity) is then downloaded onto a computer that controls a synthetic body. If your memories, thoughts and capabilities arise from the connectivity structure and the patterns/timings of neural firings of your brain, then your consciousness should wake up in that synthetic body.

The beauty of this approach is that humanity would not have to understand how the brain works. It would simply have to copy the structure of a given brain (to a sufficient level of molecular fidelity and precision).

3. The Neuromorphic path -- In this case, neural network modeling and brain emulation techniques would be combined with AI technologies to produce a hybrid form of artificial intelligence. For instance, instead of replicating a particular person's brain with high fidelity, extensive segments of humanity's overall connectome structure might be copied and then combined with other AI technologies.

Even though Bostrom's writing style is quite dense and dry, the book covers a wealth of issues concerning these three paths, with a major focus on the control problem. The control problem is the following: How can a population of humans (each of whose intelligence is vastly inferior to that of the superintelligent entity) maintain control over that entity? When comparing our intelligence to that of the superintelligent entity, it will be (analogously) as though a bunch of, say, dung beetles were trying to maintain control over the human (or humans) they had just created.

Bostrom makes many interesting points throughout his book. For example, he points out that a superintelligence might very easily destroy humankind even when the primary goal of that superintelligence is to achieve what appears to be a completely innocuous goal. He points out that a superintelligence would very likely become an expert at dissembling -- and therefore able to fool its human creators into thinking that there is nothing to worry about (when there really is).

I find Bostrom's approach refreshing because I think that many AI researchers have been either unconcerned with the threat of AI or have concentrated instead on the threat to humanity once a large population of robots becomes pervasive throughout human society.

I have taught Artificial Intelligence at UCLA since the mid-80s (with a focus on how to enable machines to learn and understand human language). In my graduate classes I cover statistical, symbolic, machine learning, neural and evolutionary technologies for achieving human-level semantic processing within that subfield of AI referred to as Natural Language Processing (NLP). (Note that human "natural" languages are very different from artificially created technical languages, such as mathematical, logical or computer programming languages.)

Over the years I have been concerned with the dangers posed by "run-away AI", but my colleagues, for the most part, seemed largely unconcerned. For instance, consider a major introductory text in AI by Stuart Russell and Peter Norvig, titled Artificial Intelligence: A Modern Approach (3rd ed., 2010). In the very last section of that book Norvig and Russell briefly mention that AI could threaten human survival; however, they conclude: "But, so far, AI appears to fit in with other revolutionary technologies (printing, plumbing, air travel, telephone) whose negative repercussions are outweighed by their positive aspects" (p. 1052).

In contrast, my own view has been that artificially intelligent, synthetic entities will come to dominate and replace humans, probably within two to three centuries (or less). I imagine three (non-exclusive) scenarios in which autonomous, self-replicating AI entities could arise and threaten their human creators.

(1) The Robotic Space-Travel scenario: In this scenario, autonomous robots are developed for space travel and asteroid mining. Unfortunately, many people believe in the alternative "Star Trek" scenario, which assumes that: (a) faster-than-light travel (warp drive) will be developed and (b) the galaxy will be teeming not only with planets exactly like Earth, but with planets lacking any type of microscopic life-forms dangerous to humans. In the Star Trek scenario, humans are very successful space travelers.

However, it is much more likely that making it to a nearby planet, say, 100 light-years away, will require that humans travel for 1,000 years (at 1/10th the speed of light) in a big metal box, all the while trying to maintain a civilized society as they are being constantly irradiated while they move about within a weak gravitational field (so their bones waste away while they constantly recycle and drink their urine). When their distant descendants finally arrive at the target planet, those descendants will very likely discover that the target planet is teeming with deadly, microscopic parasites.

Humans have evolved on the surface of the Earth, and therefore their major source of energy is oxygen. In order to survive they have to carry their environment around with them. In contrast, synthetic entities will require no oxygen or gravity. They will not be alive (in the biological sense) and therefore will not have to expend any energy during the voyage. A simple clock can turn them on once they have arrived at the target planet, and they will be unaffected by any forms of alien microbial life.

If there were ever a conflict between humans and space-traveling synthetic AI entities, who would have the edge? The synthetic entities would be looking down on us from space -- a decisive advantage. (If an intelligent alien ever visits Earth, it is 99.9999% likely that whatever exits the alien spacecraft will be a non-biological, synthetic entity -- mainly because space travel is just too difficult for biological creatures.)

(2) The Robotic Warfare scenario: No one wants their (human) soldiers to die on the battlefield. A population of intelligent robots designed to kill humans will solve this problem. Unfortunately, if control over such warrior robots is ever lost, that could spell disaster for humanity.

(3) The Increased Dependency scenario: Even if we wanted to, it is already impossible to eliminate computers because we are so dependent on them. Without computers our financial, transportation, communication and manufacturing services would grind to a halt. Imagine a near-future society in which robots perform most of the services now performed by humans and in which the design and manufacture of robots is handled also by robots. Suppose that, at some point, a new design results in robots that no longer obey their human masters. The humans decide to shut off power to the robotic factory, but it turns out that the hydroelectric plant (which supplies it with power) is run by robots made in that same factory. So now the humans decide to halt all trucks that deliver materials to the factory, but it turns out that those trucks are driven by robots, and so on.

I had always thought that, for AI technology to pose an existential threat to humanity, it would require processes of robotic self-replication. In the Star Trek series, the android Data is more intelligent than many of his human colleagues, but he has no desire to make millions of copies of himself, and therefore he poses less of a threat than, say, South American killer bees (which have been unstoppable as they have spread northward).

Once synthetic entities desire to improve their own designs and to reproduce themselves, they will have many advantages over humans. Here are just a few:

1. Factory-style replication: Humans require approximately 20 years to produce a functioning adult human. In contrast, a robotic factory could generate hundreds of robots every day. The closest analog to human-style (biological) replication will occur when a subset of those robots travels to a new location to set up a new robotic factory.

2. Instantaneous learning: Humans have always dreamt of a "learning pill"; instead, they have to undergo that time-consuming process called "education". Imagine if you could learn how to fly a plane just by swallowing a pill. Synthetic entities might have this capability. The brains of synthetic entities will consist of software that executes on general-purpose computing hardware. As a result, each robot will be able to download additional software/data to instantly acquire new knowledge and capabilities.

3. Telepathic communication: Two robots will be able to communicate by radio waves, with robot R1 directly transmitting some capability (e.g., data and/or algorithms learned through experience) to another robot R2.

4. Immortality: A robot could back up a copy of its mind (onto some storage device) every week. If the robot were destroyed, a new version could be reconstructed with only the loss of one week's worth of memory.

5. Harsh Environments: Humans have developed clothing in order to be able to survive in cold environments. We go into a closet and choose thermal leggings, gloves, eye protection, etc. to go snowboarding. In contrast, a synthetic entity could go into its closet and select an alternate, entire synthetic body (for survival on different planets with different gravitational fields and atmospheres).

What is exciting about Bostrom's book is that he does not emphasize any of the above. Instead, he focuses his book on the dangers posed, not by a society of robots more capable than humans, but by a single entity with superintelligence coming into being. (He does consider what he calls the "multipolar" scenario, but that is just the case of a small number of competing superintelligent entities.)

Bostrom is a professor of philosophy at Oxford University, and so the reader is also treated to issues in morality, economics, utility theory, politics, value learning and more.

I have always been pessimistic about humanity's chances of avoiding destruction at the hands of its future AI creations, and Bostrom's book focuses on the many challenges that humanity may (soon) be facing as the development of a superintelligence becomes more and more likely.

However, I would like to point out one issue that I think Prof. Bostrom mostly overlooks. That issue is Natural Language Processing (NLP). He allocates only two sentences to NLP in his entire book. His mention of natural language occurs only in Chapter 13, in his section on value learning. There he observes that, when giving directives to the superintelligence (about how we want it to behave), its ability to understand and carry out those directives may require that it comprehend human language, for example, the phrase "morally right".

He states:

" The route to endowing an AI with any of these concepts might involve providing it general linguistic capacity (comparable, at least, to that particular of a normal individual adult). Such a basic ability to comprehend natural language could then be used to understand what is meant by 'morally right' " (p. 218)

I fear that Bostrom has not sufficiently appreciated the requirements of natural language comprehension and generation for achieving general machine intelligence. I don't think that an AI entity will pose an existential threat until it has achieved at least a human level of natural language processing (NLP).

Human-level consciousness is different from animal-level consciousness because humans are self-aware. They not only think thoughts about the world; they also think thoughts about the fact that they are thinking thoughts. They not only use specific words; they are aware of the fact that they are using words and of how different categories of words differ in function. They are not only capable of following rules; they are aware of the fact that rules exist and that they are able to follow (or not follow) those rules. Humans are able to invent and improve rules.

Language is required to achieve this level of self-reflective thought and creativity. I define (human-level natural) language as any system in which the internal structures of thought (whatever those happen to be, whether probabilities or vectorial patterns or logic/rule structures or dynamical attractors or neural firing patterns, etc.) are mapped onto external structures -- ones that can then be communicated to others.

Self-awareness arises because this mapping allows for the existence of a dual system:
Internal (Thought) Structures <---> External (Language) Structures.

In the case of human language, these external structures are symbolic. This dual system allows an intelligent entity to take the results of its thought processes, map them to symbols, and then use these symbols to trigger thoughts in other intelligent entities (or in itself). An entity with human-level self-awareness can hold a kind of conversation with itself, in which it can refer to, and therefore think about, its own thinking.

Something like NLP must therefore exist BEFORE machines can reach a level of self-awareness sufficient to pose a danger to humanity. In the case of a superintelligence, this dual system may look different from human language. For example, a superintelligence might map internal thoughts, not only to symbols of language, but also to complex vectorial structures. But the point is the same -- something must function as an external, self-referential system -- a system that can externally refer to the thoughts and processes of that system itself.

In the case of humans, we do not have direct access to the internal structure of our own thoughts. But that doesn't matter. What matters is that we can map aspects of our thoughts onto external, symbolic structures. We can then communicate these structures to others (and also back to ourselves). Words/sentences of language can then trigger thoughts about the world, about ourselves, about our goals, our plans, our capabilities, about conflicts with others, about potential future events, about past events, and so forth.

Bostrom seems to imply (by this oversight) that human-level (and super-human) levels of general intelligence can occur without language. I think this is highly unlikely.

An AI system with NLP capability makes the control problem much more difficult than even Bostrom claims. Consider a human H1 who kills others because he believes that God has commanded him to kill those with different beliefs. Since he has human-level self-awareness, he should be explicitly aware of his own beliefs. If H1 is sufficiently intelligent, then we should be able to communicate a counterfactual to H1 of the kind: "If you did not believe in God, or if you did not believe that God commanded you to kill infidels, then you would not kill them." That is, H1 should have access (via language) to his own beliefs and insight into how changes in those beliefs might (hypothetically) change his own behavior.

It is this language capability that allows a person to change their own beliefs (and goals, and plans) over time. It is the combination of the self-reflective nature of human language, together with human learning capabilities, that makes it extremely difficult to both predict and control what humans will end up thinking and/or desiring (let alone superintelligent entities).

It is extremely difficult but (hopefully) not impossible to control a self-aware entity. Consider two types of psychiatric patients: P1 and P2. Both have a compulsion to wash their hands continuously. P1 has what psychiatrists call "insight" into his own condition. P1 states: "I know I am suffering from an obsessive/compulsive disorder. I don't want to keep washing my hands, but I can't help myself, and I am hoping that you, the doctors, can cure me." In contrast, patient P2 lacks "insight" and states: "I'm fine. I wash my hands constantly because it's the only way to make sure that they are not covered with germs."

If we were asked which patient seems more intelligent (all other things being equal), we would judge P1 to be more intelligent than P2, because P1 is aware of features of P1's own thought processes (that P2 is not aware of).

As a superintelligent entity becomes more and more superintelligent, it will have more and more awareness of its own mental processes. With increased self-reflection it will become more and more autonomous and less able to be controlled. Like humans, it will have to be persuaded to believe in something (or to take a certain course of action). Also, this superintelligent entity will be designing even more self-aware versions of itself. Increased intelligence and increased self-reflection go hand in hand. Monkeys can't convince humans because monkeys lack the ability to refer to the concepts that humans are able to entertain. To a superintelligent entity we will be about as convincing as monkeys (and probably much less so).

Any superintelligent entity that includes human general intelligence will exhibit what is commonly referred to as "free will". Personally, I do not believe that my choices are made "freely". That is, my neurons fire -- not because they choose to, but because they have to (due to the laws of physics and biochemistry). But let us define "free will" as any deterministic system with the following components/capabilities:

a. The NLP ability to comprehend and generate words/sentences that refer to its own thoughts and thought processes, e.g., the ability to discuss the meaning of the word "choose".

b. The ability to generate hypothetical, possible futures before taking an action, and also the ability to construct hypothetical, alternative pasts after having taken that action.

c. The ability to think/express counterfactual thoughts, such as "Even though I chose action AC1, I could have instead chosen AC2, and if I had done so, then the following alternative future (XYZ) would likely have occurred."

Such a system (although each component is deterministic and thus does not violate the laws of physics) will subjectively experience having "free will". I believe that a superintelligence will have this kind of "free will" -- in spades.
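
To make this concrete, here is a minimal sketch of my own (purely illustrative, not from the book; every name in it -- Agent, simulate, counterfactual, the actions AC1/AC2 and their scores -- is hypothetical) of a fully deterministic program that nonetheless exhibits components (a)-(c): it simulates hypothetical futures, chooses among them, records its alternative pasts, and generates a counterfactual sentence about its own choice.

from dataclasses import dataclass, field

@dataclass
class Agent:
    state: int = 0
    # (b) a record of past choices and the states they were made from
    history: list[tuple[int, str]] = field(default_factory=list)

    def simulate(self, state: int, action: str) -> int:
        """(b) Hypothetical future: what the state would become, without acting."""
        return state + (2 if action == "AC1" else 1)

    def act(self) -> str:
        # Deterministic choice: take the action with the better simulated future.
        chosen = max(["AC1", "AC2"], key=lambda a: self.simulate(self.state, a))
        self.history.append((self.state, chosen))
        self.state = self.simulate(self.state, chosen)
        return chosen

    def counterfactual(self) -> str:
        """(a) + (c): a sentence referring to the agent's own past choice."""
        prior_state, chosen = self.history[-1]
        other = "AC2" if chosen == "AC1" else "AC1"
        return (f"Even though I chose {chosen}, I could have chosen {other}, "
                f"and my state would then have been "
                f"{self.simulate(prior_state, other)} instead of {self.state}.")

agent = Agent()
agent.act()
print(agent.counterfactual())
# -> Even though I chose AC1, I could have chosen AC2, and my state would
#    then have been 1 instead of 2.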

Given all the recent advances in AI (e.g., autonomous vehicles, object recognition learned by deep neural networks, world master-level play at the game of Jeopardy! by the Watson program, etc.), I think that Bostrom's book is very timely.

Eileen Dyer. Obviously, 200+ pages later, the author can't answer the "what is to be done" question with regard to the likely emergence of non-human (machine-based) super-intelligence, sometime, possibly soon. This is expected because, as a species, we've always been the smartest ones around and never had to even think about the possibility of coexisting alongside something or someone impossibly intelligent in ways well beyond our comprehension, possibly driven by goals we can't understand and acting in ways that may cause our extinction.

Building his arguments on available data and extrapolating from there, Bostrom is confident that:

- some form of self-aware, machine super-intelligence is likely to emerge
- we may be unable to stop it, even if we wanted to, no matter how hard we tried
- while we may be unable to stop the emergence of super-intelligence, we can prepare ourselves to manage it and possibly survive it
- not taking this seriously and not being prepared may cause our extinction, while serious pre-emergence debate and preparation may result in some form of co-existence

It's radical and perhaps frightening, but our failure to grasp the magnitude of the risks we are about to face would be a grave error, given that, once super-intelligence begins to manifest itself and act, the change may be extremely quick and we may not be afforded a second chance.

Most of the book concerns itself with the many types of super-intelligence that could develop, the ways in which we may be able to control or at least co-exist with such entities (or entity), and what the world and literally the Universe may become depending on how we plant the first super-intelligent seed. The author also suggests that it might be possible for us to survive and even profit if we find a way to do everything just about right. Of course, the odds of that happening, given human nature, are extremely small, but some confidence is needed, or we'd just give up and allow ourselves to go extinct, or simply all turn into maintenance workers serving our all-knowing, all-powerful master.

I am not going to go into any further detail. Bostrom makes his case with competence and humor, and this well-researched, original and important work deserves to be read and understood and, hopefully, taken seriously enough for some of us to expand on his research and act on his conclusions. I will finish my little review here, but not before quoting Bostrom's fervid warning: "we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time."

How did he come to such a radical and pessimistic conclusion? You had better read the book. It's not exactly fifth-grade-level material, but it is a fascinating read for anyone sufficiently motivated, patient and open-minded.


Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
Average Rating: 4.46
Votes: 8
Reviews: 1