Unthinkable as it may be, humanity, every last person, could someday be wiped from the face of the Earth. We have learned to worry about asteroids and supervolcanoes, but the more-likely scenario, according to Nick Bostrom, a professor of philosophy at Oxford, is that we humans will destroy ourselves.
Bostrom, who directs Oxford's Future of Humanity Institute, has argued over the course of several papers that human extinction risks are poorly understood and, worse still, severely underestimated by society. Some of these existential risks are fairly well known, especially the natural ones. But others are obscure or even exotic. Most worrying to Bostrom is the subset of existential risks that arise from human technology, a subset that he expects to grow in number and potency over the next century.
Despite his concerns about the risks posed to humans by technological progress, Bostrom is no Luddite. In fact, he is a longtime advocate of transhumanism---the effort to improve the human condition, and even human nature itself, through technological means. In the long run he sees technology as a bridge, a bridge we humans must cross with great care, in order to reach new and better modes of being. In his work, Bostrom uses the tools of philosophy and mathematics, in particular probability theory, to try to determine how we as a species might achieve this safe passage. What follows is my conversation with Bostrom about some of the most interesting and worrying existential risks that humanity might encounter in the decades and centuries to come, and about what we can do to make sure we outlast them.
Some have argued that we ought to be directing our resources toward humanity's existing problems, rather than future existential risks, because many of the latter are highly improbable. You have responded by suggesting that existential risk mitigation may in fact be a dominant moral priority over the alleviation of present suffering. Can you explain why?
Bostrom: Well, suppose you have a moral view that counts future people as being worth as much as present people. You might say that fundamentally it doesn't matter whether someone exists at the current time or at some future time, just as many people think that from a fundamental moral point of view, it doesn't matter where somebody is spatially---somebody isn't automatically worth less because you move them to the moon or to Africa or something. A human life is a human life. If you have that moral point of view that future generations matter in proportion to their population numbers, then you get this very stark implication that existential risk mitigation has a much higher utility than pretty much anything else that you could do. There are so many people that could come into existence in the future if humanity survives this critical period of time---we might live for billions of years, our descendants might colonize billions of solar systems, and there could be billions and billions of times more people than exist currently. Therefore, even a very small reduction in the probability of realizing this enormous good will tend to outweigh even immense benefits like eliminating poverty or curing malaria, which would be tremendous under ordinary standards.
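To make the arithmetic behind that claim concrete, here is a minimal sketch of the expected-value comparison; every number in it is an illustrative assumption chosen only to show the scale effect, not a figure from Bostrom.

```python
# Back-of-the-envelope expected-value comparison (illustrative assumptions only).

future_people = 1e16        # assumed number of future lives if humanity survives
risk_reduction = 1e-6       # assumed one-in-a-million cut in extinction probability
expected_lives_preserved = future_people * risk_reduction

present_day_benefit = 1e7   # assumed lives helped by a large present-day program

print(f"Expected future lives preserved: {expected_lives_preserved:,.0f}")
print(f"Lives helped by the present-day program: {present_day_benefit:,.0f}")
# With these assumptions, the tiny risk reduction corresponds to about 10 billion
# expected future lives, dwarfing the present-day figure by three orders of magnitude.
```

Under any assumptions in this general range, the conclusion is driven by the sheer size of the possible future population rather than by the precise inputs.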
In the short term you don't seem especially worried about existential risks that originate in nature like asteroid strikes, supervolcanoes and so forth. Instead you have argued that the majority of future existential risks to humanity are anthropogenic, meaning that they arise from human activity. Nuclear war springs to mind as an obvious example of this kind of risk, but that's been with us for some time now. What are some of the more futuristic or counterintuitive ways that we might bring about our own extinction?
Bostrom: I think the biggest existential risks relate to certain future technological capabilities that we might develop, perhaps later this century. For example, machine intelligence or advanced molecular nanotechnology could lead to the development of certain kinds of weapons systems. You could also have risks associated with certain advancements in synthetic biology.
Of course there are also existential risks that are not extinction risks. The concept of an existential risk certainly includes extinction, but it also includes risks that could permanently destroy our potential for desirable human development. One could imagine certain scenarios where there might be a permanent global totalitarian dystopia. Once again that's related to the possibility of the development of technologies that could make it a lot easier for oppressive regimes to weed out dissidents or to perform surveillance on their populations, so that you could have a permanently stable tyranny, rather than the ones we have seen throughout history, which have eventually been overthrown.
And why shouldn't we be as worried about natural existential risks in the short term?
Bostrom: One way of making that argument is to say that we've survived for over a hundred thousand years, so it seems prima facie unlikely that any natural existential risk would do us in here in the short term, in the next hundred years for instance. By contrast, we are going to introduce entirely new risk factors in this century through our technological innovations, and we don't have any track record of surviving those.
Now another way of arriving at this is to look at these particular risks from nature and to notice that the probability of them occurring is small. For instance we can estimate asteroid risks by looking at the distribution of craters that we find on Earth or on the moon in order to give us an idea of how frequent impacts of certain magnitudes are, and they seem to indicate that the risk there is quite small. We can also study asteroids through telescopes and see if any are on a collision course with Earth, and so far we haven't found any large asteroids on a collision course with Earth and we have looked at the majority of the big ones already.
You have argued that we underrate existential risks because of a particular kind of bias called observation selection effect. Can you explain a bit more about that?
Bostrom: The idea of an observation selection effect is maybe best explained by first considering the simpler concept of a selection effect. Let's say you're trying to estimate how large the largest fish in a given pond is, and you use a net to catch a hundred fish and the biggest fish you find is three inches long. You might be tempted to infer that the biggest fish in this pond is not much bigger than three inches, because you've caught a hundred of them and none of them are bigger than three inches. But if it turns out that your net could only catch fish up to a certain length, then the measuring instrument that you used would introduce a selection effect: it would only select from a subset of the domain you were trying to sample.
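A small simulation makes the point; the code below is a sketch of the net example, with an assumed pond of fish sizes and an assumed net that can only retain fish under 3.5 inches.

```python
# Sketch of the fish-net selection effect: the sample never contains large fish,
# so the naive inference about the pond's largest fish is badly biased.
import random

random.seed(0)
pond = [random.uniform(1, 12) for _ in range(10_000)]  # true fish sizes, in inches
catch = [size for size in pond if size < 3.5][:100]    # the net only retains small fish

print(f"Largest fish actually in the pond: {max(pond):.1f} inches")
print(f"Largest fish in the catch:         {max(catch):.1f} inches")
# The instrument filtered what could be observed, so the catch says little
# about the upper end of the true distribution.
```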
Now that's a kind of standard fact of statistics, and there are methods for trying to correct for it, and you obviously have to take that into account when considering the fish distribution in your pond. An observation selection effect is a selection effect introduced not by limitations in our measurement instrument, but rather by the fact that all observations require the existence of an observer. This becomes important, for instance, in evolutionary biology. We know that intelligent life evolved on Earth. Naively, one might think that this piece of evidence suggests that life is likely to evolve on most Earth-like planets. But that would be to overlook an observation selection effect. For no matter how small the proportion of all Earth-like planets that evolve intelligent life, we will find ourselves on a planet that did. Our data point---that intelligent life arose on our planet---is predicted equally well by the hypothesis that intelligent life is very improbable even on Earth-like planets as by the hypothesis that intelligent life is highly probable on Earth-like planets. When it comes to human extinction and existential risk, there are certain controversial ways that observation selection effects might be relevant.
How so?
Bostrom: Well, one principle for how to reason when there are these observation selection effects is called the self-sampling assumption, which says roughly that you should think of yourself as if you were a randomly selected observer of some larger reference class of observers. This assumption has a particular application to thinking about the future through the doomsday argument, which attempts to show that we have systematically underestimated the probability that the human species will perish relatively soon. The basic idea involves comparing two different hypotheses about how long the human species will last in terms of how many total people have existed and will come to exist. You could for instance have two hypotheses: to pick an easy example, imagine that one hypothesis is that a total of 200 billion humans will have ever existed at the end of time, and the other hypothesis is that 200 trillion humans will have ever existed.
Let's say that initially you think that each of these hypotheses is equally likely. You then have to take into account the self-sampling assumption and your own birth rank, your position in the sequence of people who have lived and who will ever live. We estimate currently that there have, to date, been 100 billion humans. Taking that into account, you then get a probability shift in favor of the smaller hypothesis, the hypothesis that only 200 billion humans will ever have existed. That's because you have to reason that if you are a random sample of all the people who will ever have existed, the chance that you will come up with a birth rank of 100 billion is much larger if there are only 200 billion in total than if there are 200 trillion in total. If there are going to be 200 billion total human beings, then as the 100 billionth of those human beings, I am somewhere in the middle, which is not so surprising. But if there are going to be 200 trillion people eventually, then you might think that it's sort of surprising that you're among the earliest 0.05% of the people who will ever exist. So you can see how reasoning with an observation selection effect can have these surprising and counterintuitive results. Now I want to emphasize that I'm not at all sure this kind of argument is valid; there are some deep methodological questions about this argument that haven't been resolved, questions that I have written a lot about.
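Since the size of that probability shift is easy to miscompute, here is a sketch of the Bayesian update the self-sampling assumption implies, using the same round numbers; it illustrates the arithmetic of the doomsday argument, not a claim that the argument is sound.

```python
# Doomsday-argument update: equal priors over two totals, conditioned on a
# birth rank of roughly 100 billion under the self-sampling assumption.

birth_rank = 100e9
hypotheses = {"200 billion humans ever": 200e9, "200 trillion humans ever": 200e12}
prior = 0.5
assert all(birth_rank <= total for total in hypotheses.values())

# If you are a random sample of everyone who will ever live, the likelihood of
# any particular birth rank under a hypothesis with N total people is 1/N.
unnormalized = {name: prior / total for name, total in hypotheses.items()}
evidence = sum(unnormalized.values())
posterior = {name: weight / evidence for name, weight in unnormalized.items()}

for name, p in posterior.items():
    print(f"P({name} | birth rank) = {p:.4f}")
# The posterior shifts to roughly 0.999 for the smaller total -- the
# counterintuitive result Bostrom describes, and the one whose validity
# he says remains an open methodological question.
```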
See, I had understood observation selection effects in this context to work somewhat differently. I had thought they had more to do with trying to observe the kinds of events that might cause extinction-level catastrophes---things that by their nature could never have been observed before, because you'd cease to exist after the initial observation. Is there a line of thinking to that effect?
Bostrom: Well, there's another line of thinking that's very similar to what you're describing that speaks to how much weight we should give to our track record of survival. Human beings have been around for roughly a hundred thousand years on this planet, so how much should that count in determining whether we're going to be around another hundred thousand years? Now there are a number of different factors that come into that discussion, the most important of which is whether there are going to be new kinds of risks that haven't existed to this point in human history---in particular risks of our own making, new technologies that we might develop this century, those that might give us the means to create new kinds of weapons or new kinds of accidents. The fact that we've been around for a hundred thousand years wouldn't give us much confidence with respect to those risks.
But, to the extent that one were focusing on risks from nature, from asteroid impacts or risks from, say, vacuum decay in space itself, or something like that, one might ask what we can infer from this long track record of survival. And one might think that any species anywhere will think of itself as having survived up to the current time because of this observation selection effect. You don't observe yourself after you've gone extinct, and so that complicates the analysis for certain kinds of risks.
A few years ago I wrote a paper together with a physicist at MIT named Max Tegmark, where we looked at particular risks like vacuum decay, which is this hypothetical phenomenon where space decays into a lower energy state, which would then cause this bubble propagating at the speed of light that would destroy all structures in its path, and would cause a catastrophe that no observer could ever see because it would come at you at the speed of light, without warning. We were noting that it's somewhat problematic to apply our observations to develop a probability for something like that, given this observation selection effect. But we found an indirect way of looking at evidence having to do with the formation date of our planet, comparing it to the formation date of other Earth-like planets, and then using that as a kind of indirect way of putting a bound on that kind of risk. So that's another way in which observation selection effects become important when you're trying to estimate the odds of humanity having a long future.
One possible strategic response to human-created risks is the slowing or halting of our technological evolution, but you have been a critic of that view, arguing that the permanent failure to develop advanced technology would itself constitute an existential risk. Why is that?
Bostrom: Well, again I think the definition of an existential risk goes beyond just extinction, in that it also includes the permanent destruction of our potential for desirable future development. Our permanent failure to develop the sort of technologies that would fundamentally improve the quality of human life would count as an existential catastrophe. I think there are vastly better ways of being than we humans can currently reach and experience. We have fundamental biological limitations, which limit the kinds of values that we can instantiate in our life---our lifespans are limited, our cognitive abilities are limited, our emotional constitution is such that even under very good conditions we might not be completely happy. And even at the more mundane level, the world today contains a lot of avoidable misery and suffering and poverty and disease, and I think the world could be a lot better, both in the transhuman way, but also in this more economic way. The failure to ever realize those much better modes of being would count as an existential risk if it were permanent.
Another reason I haven't emphasized or advocated the retardation of technological progress as a means of mitigating existential risk is that it's a very hard lever to pull. There are so many strong forces pushing for scientific and technological progress in so many different domains---there are economic pressures, there is curiosity, there are all kinds of institutions and individuals that are invested in technology, so shutting it down is a very hard thing to do.
What technology, or potential technology, worries you the most?
Bostrom: Well, I can mention a few. In the nearer term I think various developments in biotechnology and synthetic biology are quite disconcerting. We are gaining the ability to create designer pathogens and there are these blueprints of various disease organisms that are in the public domain---you can download the gene sequence for smallpox or the 1918 flu virus from the Internet. So far the ordinary person will only have a digital representation of it on their computer screen, but we're also developing better and better DNA synthesis machines, which are machines that can take one of these digital blueprints as an input, and then print out the actual RNA string or DNA string. Soon they will become powerful enough that they can actually print out these kinds of viruses. So already there you have a kind of predictable risk, and then once you can start modifying these organisms in certain kinds of ways, there is a whole additional frontier of danger that you can foresee.
In the longer run, I think artificial intelligence---once it gains human and then superhuman capabilities---will present us with a major risk area. There are also different kinds of population control that worry me, things like surveillance and psychological manipulation pharmaceuticals.
In one of your papers on this topic you note that experts have estimated our total existential risk for this century to be somewhere around 10-20%. I know I can't be alone in thinking that is high. What's driving that?
Bostrom: I think what's driving it is the sense that humans are developing these very potent capabilities---we are doing unprecedented things, and there is a risk that something could go wrong. Even with nuclear weapons, if you rewind the tape you notice that it turned out that in order to make a nuclear weapon you had to have these very rare raw materials like highly enriched uranium or plutonium, which are very difficult to get. But suppose it had turned out that there was some technological technique that allowed you to make a nuclear weapon by baking sand in a microwave oven or something like that. If it had turned out that way then where would we be now? Presumably once that discovery had been made civilization would have been doomed.
Each time we make one of these new discoveries we are putting our hand into a big urn of balls and pulling up a new ball---so far we've pulled up white balls and grey balls, but maybe next time we will pull out a black ball, a discovery that spells disaster. At the moment we have no good way of putting the ball back into the urn if we don't like it. Once a discovery has been published there is no way of un-publishing it.
Even with nuclear weapons there were close calls. According to some people we came quite close to all-out nuclear war, and that was only in the first few decades of having discovered the new technology, and again it's a technology that only a few large states had, and that requires a lot of resources to control---individuals can't really have a nuclear arsenal.
Can you explain the simulation argument, and how it presents a very particular existential risk?
Bostrom: The simulation argument addresses whether we are in fact living in a simulation as opposed to some basement level physical reality. It tries to show that at least one of three propositions is true, but it doesn't tell us which one. Those three are:
1) Almost all civilizations like ours go extinct before reaching technological maturity.
2) Almost all technologically mature civilizations lose interest in creating ancestor simulations: computer simulations detailed enough that the simulated minds within them would be conscious.
3) We're almost certainly living in a computer simulation.
The full argument requires sophisticated probabilistic reasoning, but the basic argument is fairly easy to grasp without resorting to mathematics. Suppose that the first proposition is false, which would mean that some significant portion of civilizations at our stage eventually reach technological maturity. Suppose that the second proposition is also false, which would mean that some significant fraction of those (technologically mature) civilizations retain an interest in using some non-negligible fraction of their resources for the purpose of creating these ancestor simulations. You can then show that it would be possible for a technologically mature civilization to create astronomical numbers of these simulations. So if this significant fraction of civilizations made it through to this stage where they decided to use their capabilities to create these ancestor simulations, then there would be many more simulations created than there are original histories, meaning that almost all observers with our types of experiences would be living in simulations. Going back to the observation selection effect, if almost all kinds of observers with our kinds of experiences are living in simulations, then we should think that we are living in a simulation, that we are one of the typical observers, rather than one of the rare, exceptional basic level reality observers.
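The counting step at the heart of this can be shown with a short sketch; the fractions and simulation counts below are arbitrary assumptions, chosen only to show why even modest numbers make simulated observers dominate.

```python
# If some civilizations reach maturity and some of those run many ancestor
# simulations, simulated histories swamp original ones (illustrative numbers).

frac_reach_maturity = 0.01    # assumed: 1% of civilizations survive (proposition 1 false)
frac_run_simulations = 0.01   # assumed: 1% of mature ones stay interested (proposition 2 false)
sims_per_interested = 1e6     # assumed: each interested civilization runs a million simulations

simulated_histories = frac_reach_maturity * frac_run_simulations * sims_per_interested
original_histories = 1.0      # each civilization has exactly one original history

share_simulated = simulated_histories / (simulated_histories + original_histories)
print(f"Share of observers like us who are simulated: {share_simulated:.2%}")
# Even with 1% of 1% making it through, simulated histories outnumber original
# ones about 100 to 1, which is what pushes us toward proposition 3.
```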
The connection to existential risk is twofold. First, the first of those three possibilities---that almost all civilizations like ours go extinct before reaching technological maturity---obviously bears directly on how much existential risk we face. If proposition 1 is true, then the obvious implication is that we will succumb to an existential catastrophe before reaching technological maturity. The other relationship with existential risk has to do with proposition 3: if we are living in a computer simulation, then there are certain exotic ways in which we might experience an existential catastrophe that we wouldn't fear if we were living in basement level physical reality. The simulation could be shut off, for instance. Or there might be other kinds of interventions in our simulated reality.
Now that does seem to assume that a technologically mature civilization would have an interest in creating these simulations in the first place. To say that these civilizations might "lose interest" implies some interest to begin with.
Bostrom: Right now there are certainly a lot of people that, if they could, would be very happy to do this for all kinds of reasons---people might do it as a sort of scientific study, they might do it for entertainment, for art. Already you have people building these virtual worlds in computer games, and the more realistic they can make them the happier they are. You could have people pursuing virtual historical tourism, or people who want to do this just because it could be done. So I think it's safe to say that people today, had they the capabilities, would do it, but perhaps with a certain level of technological maturity people may lose interest in this for one reason or another.
Your work reminds me a little bit of the film 'Children of Men,' which depicted a very particular existential risk: species-wide infertility. What are some of the more novel treatments you've seen of this subject in mainstream culture?
Bostrom: Well, the Hollywood renditions of existential risk scenarios are usually quite bad. For instance, the artificial intelligence risk is usually represented by an invasion of a robot army that is fought off by some muscular human hero wielding a machine gun or something like that. If we are going to go extinct because of artificial intelligence, it's not going to be because there's this battle between humans and robots with laser eyes. A lot of the stories you see in fiction or in films are subject to the good story bias; there are constraints on what makes for a good story. Usually there has to be a protagonist and the thing you're battling has to be evil, and there are going to be ups and downs, and the humans prevail in the end. So there's a filter for the scenarios that you're going to see in media representations.
Aldous Huxley's Brave New World is interesting in that it created a vivid depiction of a scenario in which humans have been biologically and socially engineered to fit into a dystopian social structure, and it shows how that could be very bad. But on the whole I think the general point I would make is that there isn't a lot of good literature on existential risk, and that one needs to think of these things not in terms of vivid scenarios, but rather in more abstract terms.
Last week I interviewed Cary Fowler with the Svalbard Global Seed Vault. His project is a technology that might be interpreted as looking to limit existential risk. Are there other technological (as opposed to social or political) solutions that you see on the horizon?
Bostrom: Well, there are things that one can do, some that would apply to particular risks and others that would apply to a broader spectrum of risk. With particular risks, for instance, one could invest in technologies to shorten the time it takes to develop a new vaccine, which would also be very valuable to have for other reasons unrelated to existential risk.
With regard to existential risk stemming from artificial intelligence, there is some work that we are doing now to try and think about different ways of solving the control problem. If one day you have the ability to create a machine intelligence that is greater than human intelligence, how would you control it, how would you make sure it was human-friendly and safe? There is work that can be done there.
With asteroids there has been this Spaceguard project that maps out different asteroids and their trajectories. That project is certainly motivated by concerns about existential risks, and it costs only a couple of million dollars per year, with most of the funding coming from NASA.
Then there are more general-purpose things you can do. You could imagine building some refuge, some bunker with a very large supply of food, where humans could survive for a decade or several decades if there were a large impact of some kind. It would be a lot cheaper and easier to do that on Earth than it would be to build a space colony, which some people have proposed.
But to me the most important thing to do is more analysis, specifically analysis to identify the biggest existential risks and the types of interventions that would be most likely to mitigate those risks.
I noticed that you define an existential risk as potentially bringing about the premature extinction of Earth-originating intelligent life. I wondered what you mean by premature. What would count as a mature extinction?
Bostrom: Well, you might think that an extinction occurring at the time of the heat death of the universe would be in some sense mature. There might be fundamental physical limits to how long information processing can continue in this universe of ours, and if we reached that level there would be extinction, but it would be the best possible scenario that could have been achieved. I wouldn't count that as an existential catastrophe; rather, it would be a kind of success scenario. So it's not necessary to survive infinitely long, which after all might be physically impossible, in order to have successfully avoided existential risk.
In considering the long-term development of humanity, do you put much stock in specific schemes like the Kardashev Scale, which plots the advancement of a civilization according to its ability to harness energy, specifically the energy of its planet, its star, and then finally the galaxy? Might there be more to human flourishing than just increasing mastery of energy sources?
Bostrom: Certainly there would be more to human flourishing. In fact I don't even think that particular scale is very useful. There is a discontinuity between the stage where we are now, where we are harnessing a lot of the energy resources of our home planet, and a stage where we can harness the energy of some increasing fraction of the universe like a galaxy. There is no particular reason to think that we might reach some intermediate stage where we would harness the energy of one star like our sun. By the time we can do that I suspect we'll be able to engage in large-scale space colonization, to spread into the galaxy and then beyond, so I don't think harnessing the single star is a relevant step on the ladder.
If I wanted some sort of scheme that laid out the stages of civilization, the period before machine superintelligence and the period after machine superintelligence would be a more relevant dichotomy. When you look at what's valuable or interesting in examining these stages, it's going to be what is done with these future resources and technologies, as opposed to their structure. It's possible that the long-term future of humanity, if things go well, would from the outside look very simple. You might have Earth at the center, and then you might have a growing sphere of technological infrastructure that expands in all directions at some significant fraction of the speed of light, occupying larger and larger volumes of the universe---first in our galaxy, and then beyond as far as is physically possible. And then all that ever happens is just this continued increase in the spherical volume of matter colonized by human descendants, a growing bubble of infrastructure. Everything would then depend on what was happening inside this infrastructure, what kinds of lives people were leading there, what kinds of experiences people were having. You couldn't infer that from the large-scale structure, so you'd have to sort of zoom in and see what kind of information processing occurred within this infrastructure.
It's hard to know what that might look like, because our human experience might be just a small little crumb of what's possible. If you think of all the different modes of being, different kinds of feeling and experiencing, different ways of thinking and relating, it might be that human nature constrains us to a very narrow little corner of the space of possible modes of being. If we think of the space of possible modes of being as a large cathedral, then humanity in its current stage might be like a little cowering infant sitting in the corner of that cathedral having only the most limited sense of what is possible.