Human race 'will be extinct within 100 years'

Moderators: Elvis, DrVolin, Jeff

Re: Human race 'will be extinct within 100 years'

Postby stillrobertpaulsen » Thu Oct 30, 2014 5:26 pm

I thought this thread would be the best place to include my most recent blog entry. It's not exactly about human extinction, but since it deals with some of the philosophical concepts that have been explored on this thread (and many of the links were obtained from here), I felt this would be a better fit than any of the Peak Oil or Global Warming threads.

Wednesday, October 29, 2014

Carbon Crisis 2014 Update: Planet Jenga

While this post continues as an update to last year's post on the same subject, namely that Peak Oil and Global Warming are flip sides of the same coin and should be condensed and renamed the Carbon Crisis, I hope to explore the situation on a more anthropological level, as well as define the impending danger.

On the economic resource front, we face a time of reckoning. Peak Oil seems like a phantom from the past at this point, a crossroads we had to confront back in the summer of 2008, through which we emerged shaken, but not stirred to any meaningful change. Oh, we've got a lot more hybrids, a few electric cars here and there, but no railway revitalization, no permaculture, no economic relocalization, and God forbid we change the way money works! Why should we when this wonderful new invention called fracking can just plow shit out of the deepest, hardest rock to be melted into the fuel we need? Plus, hasn't shale made America 'energy independent' so that the Middle East doesn't matter anymore? All our worries behind us?

Not quite. Thanks to Wombaticus Rex from RI (whose wise estimation: "shale will be much bigger than subprime") for finding this inconvenient piece of analysis:

It’s been only two months since I last – again – addressed the shale industry, but apparently it’s still not clear enough what a predatory scheme it is. Today, Bloomberg adds even more fuel to the fire. If you want to know how the combination of slip-sliding legal standards and ultra-low interest rates has perverted the US – and global – economy, you need look no further than shale.

The central point the Bloomberg article evokes is simple: does the difference between proved reserves, probable reserves and possible reserves (or resource potential), as reported by oil and gas extraction companies, constitute a lie? And the answer is just as simple: no, it doesn’t. But that’s not where the issue ends, it’s where it begins.

That is, if the difference between the two gets too wide, potential investors in company stocks and bonds are not getting the information they are entitled to. The industry may claim, as in the article, that investors are aware of the discrepancy inherent in the numbers, but that’s at best true for most investors, and the bigger ones. Still, the companies shouldn’t be able to use that as some unlimited excuse to claim whatever they wish. Because they can basically throw out any number they want in front of investors, no matter what it’s based on, and it’s legal.

And while there may be a kernel of truth in this bit …

“They’re running a great risk of litigation when they don’t end up producing anything like that,” said John Lee, a University of Houston petroleum engineering professor who helped write the SEC rules and has taught reserves evaluation to a generation of engineers. “If I were an ambulance-chasing lawyer, I’d get into this.”

… there’s also something missing. By the time investors can start any litigation, chances are the companies involved may be long gone. The greater public, and some of the investors, may be fooled, but the industry people themselves? They know about the depletion rates typical of shale wells, of the fact that few wells ever make their owners any real profit, and of the $500 billion(!) the industry lost over the past 5 years.

The shale industry runs on debt, not on energy. And as long as these companies can issue junk bonds at low rates, they will. But that doesn’t mean they will ever be profitable. For their owners, sure, they’re raking in dough like it’s Halloween candy, but for investors in those bonds things don’t look so rosy. Shale is a Ponzi.


http://www.theautomaticearth.com/us-sha ... f-the-law/



Thanks to Raul Ilargi Meijer as well for researching that. So if what we have on our hands is an expanding Ponzi scheme, when is the bubble going to burst? I've seen a number of different prognostications; Thom Hartmann has already written a book called The Crash of 2016. Now I haven't read it, so I'm not sure if he lists the collapse of shale as a reason behind the crash. But I find it interesting how his timing coincides with the analysis of Dennis Meadows, one of the co-authors of the original The Limits to Growth.


Peak Energy & Resources, Climate Change, and the Preservation of Knowledge

Dennis Meadows Collapse inevitable 2015-2020
Posted on June 3, 2014 by energyskeptic

[Dennis Meadows spoke at the ASPO peak oil conference 2006 in Pisa, Italy. Many of the scientists and speakers said Meadows was right about Limits to Growth in their presentations – indeed, his model appeared to be ahead of schedule. Meadows hates to give dates, but when pressed, did say that although he thought 2030 the most likely time-frame for collapse back in 1972 based on various model projections, the exponential use of resources and population growth appeared to have moved the time-frame forward to around 2020. At the "Limits to Growth" conference in 2014 he said the time-frame appears to be 2015-2020.]

Dennis Meadows is a co-author of The Limits to Growth. In 1972, the team of 66 scientists he assembled for the original Limits to Growth study concluded that the most probable result would be a rather sudden and uncontrollable decline in both population and industrial capacity.

Dmitry Orlov on Dennis Meadows's presentation at the Age of Limits conference 2014: “Dennis had agreed to present at this conference reluctantly. He has retired from Club of Rome discussions, and has found more cheerful uses for his time. But he seemed happy with the outcome, saying that this is the first time he faced an audience that did not need convincing. Instead, he took the time to add some details that I think are crucially important, among them the fact that his WORLD3 model is only accurate until the peaks are reached. Once the peaks occur (between 2015 and 2020) all bets are off: past that point, the model’s predictive ability is not to be relied on because the assumptions on which it relies will no longer be valid.”

At the 2014 Age of Limits conference he also said that in 1972 we had reached about 85% of Earth’s carrying capacity and today we are about 125%, and every month we delay in getting back within limits erodes Earth’s further ability to tolerate us. “The reason we don’t have a response to climate change,” he said, “is not because we don’t have better models. It’s because people don’t care about climate change.” That may be our epitaph.

“In 1972 there were two possible options provided for going forward – overshoot or sustainable development. Despite myriad conferences and commissions on sustainable development since then, the world opted for overshoot. The two-legged hairless apes did what they always have done. They dominated and subdued Earth. Faced with unequivocal evidence of an approaching existential threat, they equivocated and then attempted to muddle through.

Global civilization will only be the first of many casualties of the climate that Mother Nature now has coming our way at a rate of change exceeding any comparable shift in the past 3 million years, save perhaps the meteors or supervolcanoes that scattered our ancestors into barely enough breeding pairs to be able to revive. This change will be longer lived and more profound than many of those phenomena. We have fundamentally altered the nitrogen, carbon and potassium cycles of the planet. It may never go back to an ecosystem in which bipedal mammals with bicameral brains were possible. Or, not for millions of years.”

Video Presentation starts at 17:30, slides below

http://deepresource.wordpress.com/2012/ ... s-meadows/

It Is Too Late For Sustainable Development

Smithsonian Institution Washington, DC; February 29, 2012

I will briefly describe what we did in 1970 – 1972 and summarize the main contributions of our study.
Then I will describe five reasons it is too late to achieve sustainable development.

Public discourse has difficulty with subtle, conditional messages.
Growth advocates change the justification for their paradigm rather than changing the paradigm itself.
The global system is now far above its carrying capacity.
We act as if technological change can substitute for social change.
The time horizon of our current system is too short.

As a result, I will suggest that it is essential now to put more emphasis on raising the resilience of the system.

What we did

A team of 16 people worked under my direction to elaborate a computer model representing the causes and consequences of growth in the main physical factors characterizing global development over the period 1900 – 2100. The model was first conceived by Jay Forrester, who described it in his book, World Dynamics. My team wrote and published 3 additional books on the project: The Limits to Growth, Toward Global Equilibrium, and Dynamics of Growth in a Finite World.

Our focus was on:

Population
Nonrenewable resources
Industrial goods
Persistent pollution
Food


Our Main Contributions

We did NOT prove that there are limits to physical growth on a finite planet. We assumed it.
We did present information about a variety of physical limits- water, soils, metals, and other resources – in order to make the idea of limits plausible.
We described the reasons growth of population and industrial output is inherently exponential.
We showed that exponential growth quickly rises to any conceivable limit.
Our computer scenarios demonstrated that prevailing growth policies will lead to overshoot and collapse, not asymptotic approach to limits.
We suggested that changes in the policies could lead to a sustainable state, if the changes dealt with both cultural and technical issues and were implemented soon.
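The point that exponential growth "quickly rises to any conceivable limit" can be illustrated with a few lines of arithmetic (a toy sketch with made-up numbers, not output from the WORLD3 model):

```python
import math

def years_to_hit_limit(initial, growth_rate, limit):
    """Years until a quantity growing at a fixed annual rate reaches a limit.
    Solves initial * (1 + growth_rate) ** t = limit for t."""
    return math.log(limit / initial) / math.log(1 + growth_rate)

# At a steady 3% annual growth rate, raising the ceiling a
# thousandfold buys surprisingly little extra time:
for limit in (10, 1_000, 1_000_000):
    print(f"limit {limit:>9,}: hit in about {years_to_hit_limit(1.0, 0.03, limit):.0f} years")
```

The growth rate and limits here are arbitrary; the point is only that the time to reach a ceiling grows logarithmically with the ceiling, so no plausible increase in resources escapes exponential growth for long.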



The Limits to Growth presented 12 scenarios. Four of them showed a relatively attractive global equilibrium without any collapse. However, it was written in the New York Times: “It is no coincidence that all the simulations based on the Meadows world model invariably end in collapse.” – “The Limits to Growth”, Peter Passell, Marc Roberts, and Leonard Ross, New York Times, April 2, 1972.

We said: “These graphs are not exact predictions of the values of the variables at any particular year in the future. They are indications of the system’s behavioral tendencies only.” – p. 93, The Limits to Growth.

However, a Google search today on “the Club of Rome predicted” yields 13,700 hits, for example: “In 1972 Limits to Growth, published by the Club of Rome, predicted that the world will run out of gold in 1981, mercury in 1985, tin by 1987, zinc by 1990, petroleum by 1992, and copper, lead and natural gas by 1993.”

Growth advocates change the justification for their paradigm rather than changing the paradigm itself. “At every single stage – from its biased arrival, to its biased encoding, to organizing it around false logic, to misremembering and then misrepresenting it to others – the mind continually acts to distort information flow in favor of the usual good goal of appearing better than one really is.” – p. 139, The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life, Robert Trivers, Basic Books, New York, NY, 2011.

Evolution of the criticisms

1970s: There are no effective limits.
1980s: There are limits, but they are far away.
1990s: The limits are near, but technology and markets can evade them easily.
2000s: Technology and markets do not always evade the limits, but the best policy is still to pursue GNP growth, so we will have more resources to solve problems.
2010s: If we had been able to sustain economic growth, we would not have had trouble with the limits.



“Given enough energy, minerals might be reclaimed from under the sea, or from seawater itself. A virtually infinite source of energy, the controlled nuclear fusion of hydrogen, will probably be tapped within 50 years.” – “The Limits to Growth”, by Peter Passell, Marc Roberts and Leonard Ross, New York Times, April 2, 1972.

“Natural resources are not finite in any meaningful economic sense, mind-boggling though this assertion may be. The stocks of them are not fixed but rather are expanding through human ingenuity.” – p. 24, Julian L. Simon, The Ultimate Resource 2, Princeton University Press, Princeton, NJ, 1996.


The global system is now far above its carrying capacity


Image


Avoiding collapse will require a longer time horizon than our current system provides.



The Easy Oil is Gone

Oil discoveries peaked in the 1960s.
Every year since 1984 oil consumption has exceeded oil discovery.
In 2009 discoveries were about 5 billion barrels (bb); consumption was about 31 bb.
Of the world’s 20 largest oil fields, 18 were discovered 1917 – 1968; 2 in the 1970s; 0 since.

Global Oil Production is Nearing the End of its Plateau

1995 – 1999: +5.5%
2000 – 2004: +7.9%
2005 – 2009: +0.4%

- data from the International Statistical Supplement – 2010 edition, International Energy Agency, p. 18

2010 – 2030: -50%*

* Projection from Crude Oil – The Supply Outlook, Energy Watch Group, Feb 2008, p. 12.

“By 2012, surplus oil production capacity could entirely disappear, and as early as 2015, the shortfall in output could reach nearly 10 MBD.” – U S Joint Forces Command, Joint Operating Environment Report, February, 2010

“Peak Oil Production May Already be Here,” - Science, p. 1510, Vol 331, March 25, 2011

It is essential now to put more emphasis on raising the resilience of the system. It is essential now to start changing our behavior.

=====================================

Mukerjee, M. 23 May 2012. Apocalypse Soon: Has Civilization Passed the Environmental Point of No Return? Scientific American.

Meadows holds that collapse is now all but inevitable, but that its actual form will be too complex for any model to predict. “Collapse will not be driven by a single, identifiable cause simultaneously acting in all countries,” he observes. “It will come through a self-reinforcing complex of issues”—including climate change, resource constraints and socioeconomic inequality. When economies slow down, Meadows explains, fewer products are created relative to demand, and “when the rich can’t get more by producing real wealth they start to use their power to take from lower segments.” As scarcities mount and inequality increases, revolutions and socioeconomic movements like the Arab Spring or Occupy Wall Street will become more widespread—as will their repression.

Many observers protest that such apocalyptic scenarios discount human ingenuity. Technology and markets will solve problems as they show up, they argue. But for that to happen, contends economist Partha Dasgupta of the University of Cambridge in the U.K., policymakers must guide technology with the right incentives. As long as natural resources are underpriced compared with their true environmental and social cost—as long as, for instance, automobile consumers do not pay for lives lost from extreme climatic conditions caused by warming from their vehicles’ carbon emissions—technology will continue to produce resource-intensive goods and worsen the burden on the ecosystem, Dasgupta argues. “You can’t expect markets to solve the problem,” he says. Randers goes further, asserting that the short-term focus of capitalism and of extant democratic systems makes it impossible not only for markets but also for most governments to deal effectively with long-term problems such as climate change.

“We’re in for a period of sustained chaos whose magnitude we are unable to foresee,” Meadows warns. He no longer spends time trying to persuade humanity of the limits to growth. Instead, he says, “I’m trying to understand how communities and cities can buffer themselves” against the inevitable hard landing.




Keep in mind, Peak Oil happened in 2006. We're eight years past the global light sweet crude production peak; what Meadows is talking about is resource demand exceeding carrying capacity, resulting in what he describes as "a period of sustained chaos."

It gets worse.

On the environmental front, (thanks to RI's fruhmenschen for the link) Jason Box has done some extensive research that he shares with Slate through some shocking visual aids:






Isn’t ice supposed to be white?
Photo by Jason Box

Jason Box knows ice. That’s why what’s happened this year concerns him so much.
Box just returned from a trip to Greenland. Right now, the ice there is … black:



Image
Dark ice is helping Greenland’s glaciers retreat.
Photo by Jason Box



Image
Crevasses criss-cross the Greenland ice sheet, allowing melt water to descend deep beneath the ice.
Photo by Jason Box



Image
This year, Greenland’s ice was the darkest it’s ever been.
Photo by Jason Box



Image
Box and his team are trying to discover what made this year’s melt season so unusual.
Photo by Jason Box



Image
Box marks his study sites, appropriately, with black flags.
Photo by Jason Box



Image
Box’s ‘Dark Snow’ project is the first scientific expedition to Greenland to be crowdfunded.
Photo by Jason Box



What does it all mean, besides the obvious observation that the Arctic is getting ugly? Well, as Eric Holthaus points out, dark ice melts quicker than regular ice. But a more important factor that I didn't see noted in the article is that dark snow decreases the albedo of the Arctic snow cap. Albedo is the reflective power of the earth's surface. When sunlight hits white snow, most of it is reflected back out through the atmosphere. The darker the snow, the less reflective it is, and consequently the more heat is absorbed. This in turn warms the oceans in a self-reinforcing feedback loop where things just get hotter and hotter.
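The feedback loop can be sketched as a toy iteration (all numbers here are invented for illustration; this is not a climate model): warming lowers the albedo, a lower albedo means more absorbed sunlight, and more absorbed sunlight means further warming.

```python
def run(initial_albedo, darkening=0.0, steps=50):
    """Toy ice-albedo loop: temperature anomaly tracks absorbed sunlight,
    and warming darkens the surface (lowers albedo) in proportion."""
    albedo = initial_albedo
    temp = 0.0  # temperature anomaly, arbitrary units
    for _ in range(steps):
        absorbed = 1.0 - albedo                               # fraction of sunlight absorbed
        temp = 2.0 * absorbed                                 # anomaly proportional to absorption (toy gain)
        albedo = max(0.1, initial_albedo - darkening * temp)  # warming darkens the snow
    return temp

print(run(0.8, darkening=0.0))  # no feedback: anomaly settles at 0.4
print(run(0.8, darkening=0.2))  # with feedback: anomaly grows to about 0.67
```

Even a modest darkening term amplifies the final warming by more than half in this sketch, which is the self-reinforcing character of the loop.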

That is exactly what we're seeing happening right now, according to NOAA:

Saturday, October 4, 2014

Record June–August Global Ocean Surface Temperature
August 2014 record high land and ocean temperature

The combined average temperature across global land and ocean surfaces for August 2014 was record high for the month, at 0.75°C (1.35°F) above the 20th century average of 15.6°C (60.1°F).

June–August 2014 record high land and ocean temperature

June–August 2014, at 0.71°C (1.28°F) higher than the 20th century average, was the warmest such period across global land and ocean surfaces since record keeping began in 1880.
Image
August 2014 record high sea surface temperature

The August global sea surface temperature (SST) was 0.65°C (1.17°F) above the 20th century average of 16.4°C (61.4°F). This record high departure from average not only beats the previous August record set in 2005 by 0.08°C (0.14°F), but also beats the previous all-time record set just two months ago in June 2014 by 0.03°C (0.05°F).

June–August 2014 record high sea surface temperature

The June–August global ocean surface temperature was 0.63°C (1.13°F) above the 20th century average, the highest on record for June–August. This beats the previous record set in 2009 by 0.04°C (0.07°F).


John Davies comments:

This was the warmest August on record, primarily due to very high Sea Surface Temperatures in the Northern Hemisphere.
There is no El Niño event in this period, but some sort of event – hopefully an event, not a climate shift – is taking place. If this is an event, the situation will become more normal when it ends, which will be in less than a year's time at worst. If it is a climate shift, we are in desperate trouble, though I think it is an event.
It is worth noting that these very high Sea Surface Temperatures are likely to lead to high land temperatures soon, as normally land temperatures in the Northern hemisphere can be expected to exceed Sea Surface Temperatures.
The drought affecting California and the whole of the west of North America, Central America, and large parts of the Brazilian rainforest, though preceding this event, was almost certainly down to changes which started before this event but ultimately caused it.

Despite the record high combined average temperature across global land and ocean surfaces for August, the global economy will continue as normal and no specific action can be expected to be taken to curb emissions. This will change if global temperatures continue to rise. Temperatures are high enough to cause global concern, however. More later.



Note: NOAA's most recent (Sep 4, 2014) prediction puts the chance of El Niño at 60-65% during the Northern Hemisphere fall and winter.

Image

Image

Sea surface temperatures (SST) can be expected to remain high in the Arctic Ocean, as SST anomalies are high in the North Atlantic (+1.65°C, image left) and high temperatures are forecast over the Arctic for at least the next seven days (anomalies as high as +2.87°C, image right). For a comparison with October 3 temperatures, see this earlier post.

Additionally, an increasing amount of heat has been going into the deeper parts of the ocean, and the Gulf Stream will for months to come continue to transport water into the Arctic Ocean, and this water will be warmer than the water already there, threatening to unleash ever larger eruptions of methane from the seafloor of the Arctic Ocean, as discussed in this earlier post.





Where did humanity go wrong? Now we get into the anthropological part of the post I mentioned at the beginning. It's a concept I discovered earlier this year: totalitarian agriculture. This clip is the raw footage of an interview featured in a great documentary that belongs on this thread, What A Way To Go: Life at the End of Empire, which I highly recommend.





Which, of course, on a spiritual and philosophical level, ties back to the Michael Ruppert quote I posted earlier this year:

"There is going to be a die-off. That is a balancing. That cannot be averted. I cannot offer some happy Pollyanna solution to that. Love is the only vibration that's a higher vibration than fear. Our physical reality is a product of our consciousness which is a product of what we carry in our hearts. And if we carry fear in our hearts, to the point where the consciousness is one of fear, then all we would manifest would be more destruction. The means to save, to resurrect, to make amends with, to reconcile with, to heal ourselves with Mother Earth and everything that lives here, will only become available to us once we realize that cooperation rather than competition, that love rather than fear is the only state of consciousness in which we can successfully live, and lo and behold, those are the ways our ancestors lived 40,000 years ago."



Image

Have you ever played Jenga? It's a game I only recently discovered when someone brought it to work. You stack 54 wooden blocks into a tower. Each player in their turn takes one block out from the tower and places it at the top, building the tower higher and higher while being careful not to disrupt the rest of the tower. The game ends when someone knocks the tower down, the winner being the last player to successfully put a block on top.

I believe, if I understand totalitarian agriculture correctly, that civilization has effectively turned Earth into Planet Jenga. We've been approaching turbo speed ever since our infrastructure shifted to be predicated on cheap oil production, whereas prior to that we were on cruise control, with our infrastructure predicated on cheap coal production, as it had been since roughly 1750. But we were on that course even prior to that with the advent of agriculture. Not because agriculture itself is inherently environmentally corrosive, but because of the human attitudes that made it totalitarian agriculture as described by Daniel Quinn: "it all belongs to us: everything; every bit of it and we can do with it what we want."

This attitude - this mindset - must be destroyed if there is to be any hope that humanity itself will not be destroyed. That doesn't necessarily mean human existence must return to its hunter/gatherer roots. I don't think that's a very realistic proposition to expect people to willingly switch to. Perhaps that might be the unwilling eventuality for survival in a post-Peak Everything, runaway climate scenario. But as long as civilization continues wheezing along, I'm going to continue advocating a more sophisticated means of destroying the Planet Jenga mindset: change the way money works so that it represents energy instead of debt and foster a culture that values sustainability and vilifies greed.

Not that I think my proposition is very realistic either - it's just the only chance for civilization to survive.
"Huey Long once said, “Fascism will come to America in the name of anti-fascism.” I'm afraid, based on my own experience, that fascism will come to America in the name of national security."
-Jim Garrison 1967

Re: Human race 'will be extinct within 100 years'

Postby stillrobertpaulsen » Mon Nov 17, 2014 8:53 pm

We're Underestimating the Risk of Human Extinction
Ross Andersen Mar 6 2012

Image

Unthinkable as it may be, humanity, every last person, could someday be wiped from the face of the Earth. We have learned to worry about asteroids and supervolcanoes, but the more-likely scenario, according to Nick Bostrom, a professor of philosophy at Oxford, is that we humans will destroy ourselves.

Bostrom, who directs Oxford's Future of Humanity Institute, has argued over the course of several papers that human extinction risks are poorly understood and, worse still, severely underestimated by society. Some of these existential risks are fairly well known, especially the natural ones. But others are obscure or even exotic. Most worrying to Bostrom is the subset of existential risks that arise from human technology, a subset that he expects to grow in number and potency over the next century.

Despite his concerns about the risks posed to humans by technological progress, Bostrom is no luddite. In fact, he is a longtime advocate of transhumanism, the effort to improve the human condition, and even human nature itself, through technological means. In the long run he sees technology as a bridge, a bridge we humans must cross with great care, in order to reach new and better modes of being. In his work, Bostrom uses the tools of philosophy and mathematics, in particular probability theory, to try and determine how we as a species might achieve this safe passage. What follows is my conversation with Bostrom about some of the most interesting and worrying existential risks that humanity might encounter in the decades and centuries to come, and about what we can do to make sure we outlast them.

Some have argued that we ought to be directing our resources toward humanity's existing problems, rather than future existential risks, because many of the latter are highly improbable. You have responded by suggesting that existential risk mitigation may in fact be a dominant moral priority over the alleviation of present suffering. Can you explain why?

Bostrom: Well, suppose you have a moral view that counts future people as being worth as much as present people. You might say that fundamentally it doesn't matter whether someone exists at the current time or at some future time, just as many people think that from a fundamental moral point of view, it doesn't matter where somebody is spatially; somebody isn't automatically worth less because you move them to the moon or to Africa or something. A human life is a human life. If you have that moral point of view that future generations matter in proportion to their population numbers, then you get this very stark implication that existential risk mitigation has a much higher utility than pretty much anything else that you could do. There are so many people that could come into existence in the future if humanity survives this critical period of time: we might live for billions of years, our descendants might colonize billions of solar systems, and there could be billions and billions of times more people than exist currently. Therefore, even a very small reduction in the probability of realizing this enormous good will tend to outweigh even immense benefits like eliminating poverty or curing malaria, which would be tremendous under ordinary standards.
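The arithmetic behind that claim can be made concrete with placeholder numbers (mine, not Bostrom's):

```python
# All quantities are illustrative assumptions, chosen only to show the
# structure of the expected-value comparison.
future_people = 1e16        # potential future lives if humanity survives
risk_reduction = 1e-6       # a "very small" cut in extinction probability
expected_lives_saved = future_people * risk_reduction

present_benefit = 1e9       # a generous stand-in for a huge present-day good

print(f"{expected_lives_saved:.0e} expected future lives")  # 1e+10
print(expected_lives_saved > present_benefit)               # True
```

On this accounting a one-in-a-million reduction in extinction risk is worth ten billion expected lives, which is why Bostrom ranks mitigation above even enormous present-day benefits; the conclusion is only as strong as the assumption that future lives count fully.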

In the short term you don't seem especially worried about existential risks that originate in nature like asteroid strikes, supervolcanoes and so forth. Instead you have argued that the majority of future existential risks to humanity are anthropogenic, meaning that they arise from human activity. Nuclear war springs to mind as an obvious example of this kind of risk, but that's been with us for some time now. What are some of the more futuristic or counterintuitive ways that we might bring about our own extinction?

Bostrom: I think the biggest existential risks relate to certain future technological capabilities that we might develop, perhaps later this century. For example, machine intelligence or advanced molecular nanotechnology could lead to the development of certain kinds of weapons systems. You could also have risks associated with certain advancements in synthetic biology.

Of course there are also existential risks that are not extinction risks. The concept of an existential risk certainly includes extinction, but it also includes risks that could permanently destroy our potential for desirable human development. One could imagine certain scenarios where there might be a permanent global totalitarian dystopia. Once again that's related to the possibility of the development of technologies that could make it a lot easier for oppressive regimes to weed out dissidents or to perform surveillance on their populations, so that you could have a permanently stable tyranny, rather than the ones we have seen throughout history, which have eventually been overthrown.

And why shouldn't we be as worried about natural existential risks in the short term?

Bostrom: One way of making that argument is to say that we've survived for over 100 thousand years, so it seems prima facie unlikely that any natural existential risks would do us in here in the short term, in the next hundred years for instance. Whereas, by contrast we are going to introduce entirely new risk factors in this century through our technological innovations and we don't have any track record of surviving those.

Now another way of arriving at this is to look at these particular risks from nature and to notice that the probability of them occurring is small. For instance we can estimate asteroid risks by looking at the distribution of craters that we find on Earth or on the moon in order to give us an idea of how frequent impacts of certain magnitudes are, and they seem to indicate that the risk there is quite small. We can also study asteroids through telescopes and see if any are on a collision course with Earth, and so far we haven't found any large asteroids on a collision course with Earth and we have looked at the majority of the big ones already.

You have argued that we underrate existential risks because of a particular kind of bias called the observation selection effect. Can you explain a bit more about that?

Bostrom: The idea of an observation selection effect is maybe best explained by first considering the simpler concept of a selection effect. Let's say you're trying to estimate how large the largest fish in a given pond is, and you use a net to catch a hundred fish and the biggest fish you find is three inches long. You might be tempted to infer that the biggest fish in this pond is not much bigger than three inches, because you've caught a hundred of them and none of them are bigger than three inches. But if it turns out that your net could only catch fish up to a certain length, then the measuring instrument that you used would introduce a selection effect: it would only select from a subset of the domain you were trying to sample.

Now that's a kind of standard fact of statistics, and there are methods for trying to correct for it and you obviously have to take that into account when considering the fish distribution in your pond. An observation selection effect is a selection effect introduced not by limitations in our measurement instrument, but rather by the fact that all observations require the existence of an observer. This becomes important, for instance, in evolutionary biology. For instance, we know that intelligent life evolved on Earth. Naively, one might think that this piece of evidence suggests that life is likely to evolve on most Earth-like planets. But that would be to overlook an observation selection effect. For no matter how small the proportion of all Earth-like planets that evolve intelligent life, we will find ourselves on a planet that did. Our data point-that intelligent life arose on our planet-is predicted equally well by the hypothesis that intelligent life is very improbable even on Earth-like planets as by the hypothesis that intelligent life is highly probable on Earth-like planets. When it comes to human extinction and existential risk, there are certain controversial ways that observation selection effects might be relevant.

How so?

Bostrom: Well, one principle for how to reason when there are these observation selection effects is called the self-sampling assumption, which says roughly that you should think of yourself as if you were a randomly selected observer of some larger reference class of observers. This assumption has a particular application to thinking about the future through the doomsday argument, which attempts to show that we have systematically underestimated the probability that the human species will perish relatively soon. The basic idea involves comparing two different hypotheses about how long the human species will last in terms of how many total people have existed and will come to exist. You could for instance have two hypothesis: to pick an easy example imagine that one hypothesis is that a total of 200 billion humans will have ever existed at the end of time, and the other hypothesis is that 200 trillion humans will have ever existed.

Let's say that initially you think that each of these hypotheses is equally likely, you then have to take into account the self-sampling assumption and your own birth rank, your position in the sequence of people who have lived and who will ever live. We estimate currently that there have, to date, been 100 billion humans. Taking that into account, you then get a probability shift in favor of the smaller hypothesis, the hypothesis that only 200 billion humans will ever have existed. That's because you have to reason that if you are a random sample of all the people who will ever have existed, the chance that you will come up with a birth rank of 100 billion is much larger if there are only 200 billion in total than if there are 200 trillion in total. If there are going to be 200 billion total human beings, then as the 100 billionth of those human beings, I am somewhere in the middle, which is not so surprising. But if there are going to be 200 trillion people eventually, then you might think that it's sort of surprising that you're among the earliest 0.05% of the people who will ever exist. So you can see how reasoning with an observation selection effect can have these surprising and counterintuitive results. Now I want to emphasize that I'm not at all sure this kind of argument is valid; there are some deep methodological questions about this argument that haven't been resolved, questions that I have written a lot about.

See I had understood observation selection effects in this context to work somewhat differently. I had thought that it had more to do with trying to observe the kinds of events that might cause extinction level events, things that by their nature would not be the sort of things that you could have observed before, because you'd cease to exist after the initial observation. Is there a line of thinking to that effect?

Bostrom: Well, there's another line of thinking that's very similar to what you're describing that speaks to how much weight we should give to our track record of survival. Human beings have been around for roughly a hundred thousand years on this planet, so how much should that count in determining whether we're going to be around another hundred thousand years? Now there are a number of different factors that come into that discussion, the most important of which is whether there are going to be new kinds of risks that haven't existed to this point in human history---in particular risks of our own making, new technologies that we might develop this century, those that might give us the means to create new kinds of weapons or new kinds of accidents. The fact that we've been around for a hundred thousand years wouldn't give us much confidence with respect to those risks.

But, to the extent that one were focusing on risks from nature, from asteroid attacks or risks from say vacuum decay in space itself, or something like that, one might ask what we can infer from this long track record of survival. And one might think that any species anywhere will think of themselves as having survived up to the current time because of this observation selection effect. You don't observe yourself after you've gone extinct, and so that complicates the analysis for certain kinds of risks.

A few years ago I wrote a paper together with a physicist at MIT named Max Tegmark, where we looked at particular risks like vacuum decay, which is this hypothetical phenomena where space decays into a lower energy state, which would then cause this bubble propagating at the speed of light that would destroy all structures in its path, and would cause a catastrophe that no observer could ever see because it would come at you at the speed of light, without warning. We were noting that it's somewhat problematic to apply our observations to develop a probability for something like that, given this observation selection effect. But we found an indirect way of looking at evidence having to do with the formation date of our planet, and comparing it to the formation date of other earthlike planets and then using that as a kind of indirect way of putting a bound on that kind of risk. So that's another way in which observation selection effects become important when you're trying to estimate the odds of humanity having a long future.

Image
Nick Bostrom is the director of the Future of Humanity Institute at Oxford.

One possible strategic response to human-created risks is the slowing or halting of our technological evolution, but you have been a critic of that view, arguing that the permanent failure to develop advanced technology would itself constitute an existential risk. Why is that?

Bostrom: Well, again I think the definition of an existential risk goes beyond just extinction, in that it also includes the permanent destruction of our potential for desirable future development. Our permanent failure to develop the sort of technologies that would fundamentally improve the quality of human life would count as an existential catastrophe. I think there are vastly better ways of being than we humans can currently reach and experience. We have fundamental biological limitations, which limit the kinds of values that we can instantiate in our life---our lifespans are limited, our cognitive abilities are limited, our emotional constitution is such that even under very good conditions we might not be completely happy. And even at the more mundane level, the world today contains a lot of avoidable misery and suffering and poverty and disease, and I think the world could be a lot better, both in the transhuman way, but also in this more economic way. The failure to ever realize those much better modes of being would count as an existential risk if it were permanent.

Another reason I haven't emphasized or advocated the retardation of technological progress as a means of mitigating existential risk is that it's a very hard lever to pull. There are so many strong forces pushing for scientific and technological progress in so many different domains---there are economic pressures, there is curiosity, there are all kinds of institutions and individuals that are invested in technology, so shutting it down is a very hard thing to do.

What technology, or potential technology, worries you the most?

Bostrom: Well, I can mention a few. In the nearer term I think various developments in biotechnology and synthetic biology are quite disconcerting. We are gaining the ability to create designer pathogens and there are these blueprints of various disease organisms that are in the public domain---you can download the gene sequence for smallpox or the 1918 flu virus from the Internet. So far the ordinary person will only have a digital representation of it on their computer screen, but we're also developing better and better DNA synthesis machines, which are machines that can take one of these digital blueprints as an input, and then print out the actual RNA string or DNA string. Soon they will become powerful enough that they can actually print out these kinds of viruses. So already there you have a kind of predictable risk, and then once you can start modifying these organisms in certain kinds of ways, there is a whole additional frontier of danger that you can foresee.

In the longer run, I think artificial intelligence---once it gains human and then superhuman capabilities---will present us with a major risk area. There are also different kinds of population control that worry me, things like surveillance and psychological manipulation pharmaceuticals.

In one of your papers on this topic you note that experts have estimated our total existential risk for this century to be somewhere around 10-20%. I know I can't be alone in thinking that is high. What's driving that?

Bostrom: I think what's driving it is the sense that humans are developing these very potent capabilities---we are doing unprecedented things, and there is a risk that something could go wrong. Even with nuclear weapons, if you rewind the tape you notice that it turned out that in order to make a nuclear weapon you had to have these very rare raw materials like highly enriched uranium or plutonium, which are very difficult to get. But suppose it had turned out that there was some technological technique that allowed you to make a nuclear weapon by baking sand in a microwave oven or something like that. If it had turned out that way then where would we be now? Presumably once that discovery had been made civilization would have been doomed.

Each time we make one of these new discoveries we are putting our hand into a big urn of balls and pulling up a new ball---so far we've pulled up white balls and grey balls, but maybe next time we will pull out a black ball, a discovery that spells disaster. At the moment we have no good way of putting the ball back into the urn if we don't like it. Once a discovery has been published there is no way of un-publishing it.

Even with nuclear weapons there were close calls. According to some people we came quite close to all out nuclear war and that was only in the first few decades of having discovered the new technology, and again it's a technology that only a few large states had, and that requires a lot of resources to control---individuals can't really have a nuclear arsenal.

Image
Ross Andersen Mar 6 2012, 1:39 PM ET

Unthinkable as it may be, humanity, every last person, could someday be wiped from the face of the Earth. We have learned to worry about asteroids and supervolcanoes, but the more-likely scenario, according to Nick Bostrom, a professor of philosophy at Oxford, is that we humans will destroy ourselves.

Bostrom, who directs Oxford's Future of Humanity Institute, has argued over the course of several papers that human extinction risks are poorly understood and, worse still, severely underestimated by society. Some of these existential risks are fairly well known, especially the natural ones. But others are obscure or even exotic. Most worrying to Bostrom is the subset of existential risks that arise from human technology, a subset that he expects to grow in number and potency over the next century.

Despite his concerns about the risks posed to humans by technological progress, Bostrom is no luddite. In fact, he is a longtime advocate of transhumanism---the effort to improve the human condition, and even human nature itself, through technological means. In the long run he sees technology as a bridge, a bridge we humans must cross with great care, in order to reach new and better modes of being. In his work, Bostrom uses the tools of philosophy and mathematics, in particular probability theory, to try and determine how we as a species might achieve this safe passage. What follows is my conversation with Bostrom about some of the most interesting and worrying existential risks that humanity might encounter in the decades and centuries to come, and about what we can do to make sure we outlast them.

Some have argued that we ought to be directing our resources toward humanity's existing problems, rather than future existential risks, because many of the latter are highly improbable. You have responded by suggesting that existential risk mitigation may in fact be a dominant moral priority over the alleviation of present suffering. Can you explain why?

Bostrom: Well suppose you have a moral view that counts future people as being worth as much as present people. You might say that fundamentally it doesn't matter whether someone exists at the current time or at some future time, just as many people think that from a fundamental moral point of view, it doesn't matter where somebody is spatially---somebody isn't automatically worth less because you move them to the moon or to Africa or something. A human life is a human life. If you have that moral point of view that future generations matter in proportion to their population numbers, then you get this very stark implication that existential risk mitigation has a much higher utility than pretty much anything else that you could do. There are so many people that could come into existence in the future if humanity survives this critical period of time---we might live for billions of years, our descendants might colonize billions of solar systems, and there could be billions and billions of times more people than exist currently. Therefore, even a very small reduction in the probability of realizing this enormous good will tend to outweigh even immense benefits like eliminating poverty or curing malaria, which would be tremendous under ordinary standards.
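The arithmetic behind that claim is simple expected value; the sketch below uses purely illustrative numbers (none of them are estimates from the interview):

```python
# Expected-value sketch of why a tiny cut in extinction risk can dominate.
# All figures are illustrative assumptions, not Bostrom's estimates.
future_people = 1e16        # potential future lives if humanity survives long-term
risk_reduction = 1e-6       # a "very small" reduction in extinction probability
present_benefit = 1e9       # lives improved by a huge conventional success

expected_future_lives = future_people * risk_reduction
print(expected_future_lives)                     # 10 billion expected future lives
print(expected_future_lives > present_benefit)   # True: it dominates by an order of magnitude
```

On this view, the larger you believe the potential future population is, the smaller the risk reduction needs to be before it outweighs any present-day benefit.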

In the short term you don't seem especially worried about existential risks that originate in nature like asteroid strikes, supervolcanoes and so forth. Instead you have argued that the majority of future existential risks to humanity are anthropogenic, meaning that they arise from human activity. Nuclear war springs to mind as an obvious example of this kind of risk, but that's been with us for some time now. What are some of the more futuristic or counterintuitive ways that we might bring about our own extinction?

Bostrom: I think the biggest existential risks relate to certain future technological capabilities that we might develop, perhaps later this century. For example, machine intelligence or advanced molecular nanotechnology could lead to the development of certain kinds of weapons systems. You could also have risks associated with certain advancements in synthetic biology.

Of course there are also existential risks that are not extinction risks. The concept of an existential risk certainly includes extinction, but it also includes risks that could permanently destroy our potential for desirable human development. One could imagine certain scenarios where there might be a permanent global totalitarian dystopia. Once again that's related to the possibility of the development of technologies that could make it a lot easier for oppressive regimes to weed out dissidents or to perform surveillance on their populations, so that you could have a permanently stable tyranny, rather than the ones we have seen throughout history, which have eventually been overthrown.

And why shouldn't we be as worried about natural existential risks in the short term?

Bostrom: One way of making that argument is to say that we've survived for over 100,000 years, so it seems prima facie unlikely that any natural existential risk would do us in in the short term, in the next hundred years for instance. By contrast, we are going to introduce entirely new risk factors in this century through our technological innovations, and we don't have any track record of surviving those.

Now another way of arriving at this is to look at these particular risks from nature and to notice that the probability of them occurring is small. For instance we can estimate asteroid risks by looking at the distribution of craters that we find on Earth or on the moon in order to give us an idea of how frequent impacts of certain magnitudes are, and they seem to indicate that the risk there is quite small. We can also study asteroids through telescopes and see if any are on a collision course with Earth, and so far we haven't found any large asteroids on a collision course with Earth and we have looked at the majority of the big ones already.

You have argued that we underrate existential risks because of a particular kind of bias called observation selection effect. Can you explain a bit more about that?

Bostrom: The idea of an observation selection effect is maybe best explained by first considering the simpler concept of a selection effect. Let's say you're trying to estimate how large the largest fish in a given pond is, and you use a net to catch a hundred fish and the biggest fish you find is three inches long. You might be tempted to infer that the biggest fish in this pond is not much bigger than three inches, because you've caught a hundred of them and none of them are bigger than three inches. But if it turns out that your net could only catch fish up to a certain length, then the measuring instrument that you used would introduce a selection effect: it would only select from a subset of the domain you were trying to sample.

Now that's a kind of standard fact of statistics, and there are methods for trying to correct for it; you obviously have to take it into account when considering the fish distribution in your pond. An observation selection effect is a selection effect introduced not by limitations in our measurement instrument, but rather by the fact that all observations require the existence of an observer. This becomes important, for instance, in evolutionary biology. We know that intelligent life evolved on Earth. Naively, one might think that this piece of evidence suggests that life is likely to evolve on most Earth-like planets. But that would be to overlook an observation selection effect. For no matter how small the proportion of all Earth-like planets that evolve intelligent life, we will find ourselves on a planet that did. Our data point, that intelligent life arose on our planet, is predicted equally well by the hypothesis that intelligent life is very improbable even on Earth-like planets as by the hypothesis that intelligent life is highly probable on Earth-like planets. When it comes to human extinction and existential risk, there are certain controversial ways that observation selection effects might be relevant.
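Bostrom's fish-net example is easy to simulate. The sketch below (hypothetical pond and net sizes) shows how a truncated measuring instrument caps the largest observation you can ever make, no matter how many samples you take:

```python
import random

random.seed(0)

# A hypothetical pond of 10,000 fish with lengths between 0.5 and 12 inches.
pond = [random.uniform(0.5, 12.0) for _ in range(10_000)]

def net_sample(fish, n, max_catchable):
    """Catch n fish with a net that can only hold fish up to max_catchable inches."""
    catchable = [length for length in fish if length <= max_catchable]
    return random.sample(catchable, n)

catch = net_sample(pond, 100, max_catchable=3.0)
print(max(catch))   # at most 3.0, however large the pond's biggest fish really is
print(max(pond))    # the true maximum, close to 12 inches
```

A hundred fish sounds like a lot of evidence, but because the net pre-filters the sample, the biggest catch tells you about the net, not about the pond.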

How so?

Bostrom: Well, one principle for how to reason when there are these observation selection effects is called the self-sampling assumption, which says roughly that you should think of yourself as if you were a randomly selected observer from some larger reference class of observers. This assumption has a particular application to thinking about the future through the doomsday argument, which attempts to show that we have systematically underestimated the probability that the human species will perish relatively soon. The basic idea involves comparing two different hypotheses about how long the human species will last in terms of how many total people have existed and will come to exist. To pick an easy example, imagine that one hypothesis is that a total of 200 billion humans will have ever existed at the end of time, and the other hypothesis is that 200 trillion humans will have ever existed.

Let's say that initially you think that each of these hypotheses is equally likely, you then have to take into account the self-sampling assumption and your own birth rank, your position in the sequence of people who have lived and who will ever live. We estimate currently that there have, to date, been 100 billion humans. Taking that into account, you then get a probability shift in favor of the smaller hypothesis, the hypothesis that only 200 billion humans will ever have existed. That's because you have to reason that if you are a random sample of all the people who will ever have existed, the chance that you will come up with a birth rank of 100 billion is much larger if there are only 200 billion in total than if there are 200 trillion in total. If there are going to be 200 billion total human beings, then as the 100 billionth of those human beings, I am somewhere in the middle, which is not so surprising. But if there are going to be 200 trillion people eventually, then you might think that it's sort of surprising that you're among the earliest 0.05% of the people who will ever exist. So you can see how reasoning with an observation selection effect can have these surprising and counterintuitive results. Now I want to emphasize that I'm not at all sure this kind of argument is valid; there are some deep methodological questions about this argument that haven't been resolved, questions that I have written a lot about.
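The probability shift Bostrom describes is a one-line Bayesian update. Here is a sketch under the self-sampling assumption, taking the likelihood of any given birth rank to be uniform over everyone who will ever live:

```python
# Doomsday-argument update: prior 50/50 over two totals; under the
# self-sampling assumption, the likelihood of birth rank r given total N is 1/N.
r = 100e9                       # roughly 100 billion humans born to date
N_small, N_large = 200e9, 200e12
assert r <= min(N_small, N_large)   # our rank must be consistent with both hypotheses

prior = 0.5
like_small, like_large = 1 / N_small, 1 / N_large

posterior_small = prior * like_small / (prior * like_small + prior * like_large)
print(posterior_small)          # about 0.999: a 1000-to-1 shift toward the smaller total
```

The entire shift comes from the likelihood ratio N_large / N_small = 1000: being the 100-billionth human is a thousand times more probable if only 200 billion will ever exist.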

See I had understood observation selection effects in this context to work somewhat differently. I had thought that it had more to do with trying to observe the kinds of events that might cause extinction level events, things that by their nature would not be the sort of things that you could have observed before, because you'd cease to exist after the initial observation. Is there a line of thinking to that effect?

Bostrom: Well, there's another line of thinking that's very similar to what you're describing that speaks to how much weight we should give to our track record of survival. Human beings have been around for roughly a hundred thousand years on this planet, so how much should that count in determining whether we're going to be around another hundred thousand years? Now there are a number of different factors that come into that discussion, the most important of which is whether there are going to be new kinds of risks that haven't existed to this point in human history---in particular risks of our own making, new technologies that we might develop this century, those that might give us the means to create new kinds of weapons or new kinds of accidents. The fact that we've been around for a hundred thousand years wouldn't give us much confidence with respect to those risks.

But, to the extent that one were focusing on risks from nature, from asteroid strikes or risks from, say, vacuum decay in space itself, or something like that, one might ask what we can infer from this long track record of survival. And one might think that any species anywhere will think of themselves as having survived up to the current time because of this observation selection effect. You don't observe yourself after you've gone extinct, and so that complicates the analysis for certain kinds of risks.

A few years ago I wrote a paper together with a physicist at MIT named Max Tegmark, where we looked at particular risks like vacuum decay, which is a hypothetical phenomenon where space decays into a lower energy state, which would then cause a bubble propagating at the speed of light that would destroy all structures in its path, and would cause a catastrophe that no observer could ever see because it would come at you at the speed of light, without warning. We were noting that it's somewhat problematic to apply our observations to develop a probability for something like that, given this observation selection effect. But we found an indirect way of looking at evidence having to do with the formation date of our planet, comparing it to the formation dates of other Earth-like planets, and then using that as an indirect way of putting a bound on that kind of risk. So that's another way in which observation selection effects become important when you're trying to estimate the odds of humanity having a long future.

Image

Nick Bostrom is the director of the Future of Humanity Institute at Oxford.

One possible strategic response to human-created risks is the slowing or halting of our technological evolution, but you have been a critic of that view, arguing that the permanent failure to develop advanced technology would itself constitute an existential risk. Why is that?

Bostrom: Well, again I think the definition of an existential risk goes beyond just extinction, in that it also includes the permanent destruction of our potential for desirable future development. Our permanent failure to develop the sort of technologies that would fundamentally improve the quality of human life would count as an existential catastrophe. I think there are vastly better ways of being than we humans can currently reach and experience. We have fundamental biological limitations, which limit the kinds of values that we can instantiate in our life---our lifespans are limited, our cognitive abilities are limited, our emotional constitution is such that even under very good conditions we might not be completely happy. And even at the more mundane level, the world today contains a lot of avoidable misery and suffering and poverty and disease, and I think the world could be a lot better, both in the transhuman way, but also in this more economic way. The failure to ever realize those much better modes of being would count as an existential risk if it were permanent.

Another reason I haven't emphasized or advocated the retardation of technological progress as a means of mitigating existential risk is that it's a very hard lever to pull. There are so many strong forces pushing for scientific and technological progress in so many different domains---there are economic pressures, there is curiosity, there are all kinds of institutions and individuals that are invested in technology, so shutting it down is a very hard thing to do.

What technology, or potential technology, worries you the most?

Bostrom: Well, I can mention a few. In the nearer term I think various developments in biotechnology and synthetic biology are quite disconcerting. We are gaining the ability to create designer pathogens, and the blueprints of various disease organisms are in the public domain---you can download the gene sequence for smallpox or the 1918 flu virus from the Internet. So far the ordinary person will only have a digital representation of it on their computer screen, but we're also developing better and better DNA synthesis machines, which can take one of these digital blueprints as input and print out the actual DNA or RNA string. Soon they will become powerful enough that they can actually print out these kinds of viruses. So already there you have a kind of predictable risk, and then once you can start modifying these organisms in certain ways, there is a whole additional frontier of danger that you can foresee.

In the longer run, I think artificial intelligence---once it gains human and then superhuman capabilities---will present us with a major risk area. There are also different kinds of population control that worry me, things like surveillance and psychological manipulation pharmaceuticals.

In one of your papers on this topic you note that experts have estimated our total existential risk for this century to be somewhere around 10-20%. I know I can't be alone in thinking that is high. What's driving that?

Bostrom: I think what's driving it is the sense that humans are developing these very potent capabilities---we are doing unprecedented things, and there is a risk that something could go wrong. Even with nuclear weapons, if you rewind the tape, it turned out that in order to make a nuclear weapon you had to have very rare raw materials like highly enriched uranium or plutonium, which are very difficult to get. But suppose there had been some technique that allowed you to make a nuclear weapon by baking sand in a microwave oven or something like that. Where would we be now? Presumably, once that discovery had been made, civilization would have been doomed.

Each time we make one of these new discoveries we are putting our hand into a big urn of balls and pulling up a new ball---so far we've pulled up white balls and grey balls, but maybe next time we will pull out a black ball, a discovery that spells disaster. At the moment we have no good way of putting the ball back into the urn if we don't like it. Once a discovery has been published there is no way of un-publishing it.

Even with nuclear weapons there were close calls. According to some people we came quite close to all-out nuclear war, and that was only in the first few decades of having the new technology; again, it's a technology that only a few large states had, and that requires a lot of resources to control---individuals can't really have a nuclear arsenal.

Image
The influenza virus, as viewed through an electron microscope.

Can you explain the simulation argument, and how it presents a very particular existential risk?

Bostrom: The simulation argument addresses whether we are in fact living in a simulation as opposed to some basement level physical reality. It tries to show that at least one of three propositions is true, but it doesn't tell us which one. Those three are:

1) Almost all civilizations like ours go extinct before reaching technological maturity.

2) Almost all technologically mature civilizations lose interest in creating ancestor simulations: computer simulations detailed enough that the simulated minds within them would be conscious.

3) We're almost certainly living in a computer simulation.

The full argument requires sophisticated probabilistic reasoning, but the basic argument is fairly easy to grasp without resorting to mathematics. Suppose that the first proposition is false, which would mean that some significant portion of civilizations at our stage eventually reach technological maturity. Suppose that the second proposition is also false, which would mean that some significant fraction of those (technologically mature) civilizations retain an interest in using some non-negligible fraction of their resources for the purpose of creating these ancestor simulations. You can then show that it would be possible for a technologically mature civilization to create astronomical numbers of these simulations. So if this significant fraction of civilizations made it through to this stage where they decided to use their capabilities to create these ancestor simulations, then there would be many more simulations created than there are original histories, meaning that almost all observers with our types of experiences would be living in simulations. Going back to the observation selection effect, if almost all kinds of observers with our kinds of experiences are living in simulations, then we should think that we are living in a simulation, that we are one of the typical observers, rather than one of the rare, exceptional observers living in basement-level physical reality.
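The counting step in that argument can be sketched directly. The parameters f and s below are free variables for illustration, not values Bostrom endorses:

```python
# If a fraction f of civilizations reach maturity and each runs s ancestor
# simulations, each original history is outnumbered by f * s simulated ones,
# so the fraction of all observers who are simulated is:
def simulated_fraction(f, s):
    return (f * s) / (f * s + 1)

# Even with a pessimistic f, a large s makes simulated observers dominate.
print(simulated_fraction(f=0.01, s=1_000_000))   # about 0.9999
print(simulated_fraction(f=0.0, s=1_000_000))    # 0.0: proposition 1 or 2 holds
```

The fraction only stays low if f * s is small, which is exactly the content of propositions 1 and 2: either almost no civilizations make it, or almost none of those that do run simulations.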

The connection to existential risk is twofold. First, the first of those three possibilities, that almost all civilizations like ours go extinct before reaching technological maturity, obviously bears directly on how much existential risk we face. If proposition 1 is true then the obvious implication is that we will succumb to an existential catastrophe before reaching technological maturity. The other relationship with existential risk has to do with proposition 3: if we are living in a computer simulation then there are certain exotic ways in which we might experience an existential catastrophe which we wouldn't fear if we were living in basement-level physical reality. The simulation could be shut off, for instance. Or there might be other kinds of interventions in our simulated reality.

Now that does seem to assume that a technologically mature civilization would have an interest in creating these simulations in the first place. To say that these civilizations might "lose interest" implies some interest to begin with.

Bostrom: Right now there are certainly a lot of people that, if they could, would be very happy to do this for all kinds of reasons---people might do it as a sort of scientific study, they might do it for entertainment, for art. Already you have people building these virtual worlds in computer games, and the more realistic they can make them the happier they are. You could have people pursuing virtual historical tourism, or people who want to do this just because it could be done. So I think it's safe to say that people today, had they the capabilities, would do it, but perhaps with a certain level of technological maturity people may lose interest in this for one reason or another.

Your work reminds me a little bit of the film 'Children of Men,' which depicted a very particular existential risk: species-wide infertility. What are some of the more novel treatments you've seen of this subject in mainstream culture?

Bostrom: Well, the Hollywood renditions of existential risk scenarios are usually quite bad. For instance, the artificial intelligence risk is usually represented by an invasion of a robot army that is fought off by some muscular human hero wielding a machine gun or something like that. If we are going to go extinct because of artificial intelligence, it's not going to be because there's this battle between humans and robots with laser eyes. A lot of the stories you see in fiction or in films are subject to the good story bias; there are constraints on what makes for a good story. Usually there has to be a protagonist and the thing you're battling has to be evil, and there are going to be ups and downs, and the humans prevail in the end. So there's a filter for the scenarios that you're going to see in media representations.

Aldous Huxley's Brave New World is interesting in that it created a vivid depiction of a scenario in which humans have been biologically and socially engineered to fit into a dystopian social structure, and it shows how that could be very bad. But on the whole I think the general point I would make is that there isn't a lot of good literature on existential risk, and that one needs to think of these things not in terms of vivid scenarios, but rather in more abstract terms.

Last week I interviewed Cary Fowler with the Svalbard Global Seed Vault. His project is a technology that might be interpreted as looking to limit existential risk. Are there other technological (as opposed to social or political) solutions that you see on the horizon?

Bostrom: Well there are things that one can do, some that would apply to particular risks and others that would apply to a broader spectrum of risk. With particular risks, for instance, one could invest in technologies to hasten the time it takes to develop a new vaccine, which would also be very valuable to have for other reasons unrelated to existential risk.

With regard to existential risk stemming from artificial intelligence, there is some work that we are doing now to try and think about different ways of solving the control problem. If one day you have the ability to create a machine intelligence that is greater than human intelligence, how would you control it, how would you make sure it was human-friendly and safe? There is work that can be done there.

With asteroids there has been this Spaceguard project that maps out different asteroids and their trajectories, that project is certainly motivated by concerns about existential risks, and it costs only a couple of million dollars per year, with most of the funding coming from NASA.

Then there are more general-purpose things you can do. You could imagine building some refuge, some bunker with a very large supply of food, where humans could survive for a decade or several decades if there were a large impact of some kind. It would be a lot cheaper and easier to do that on Earth than it would be to build a space colony, which some people have proposed.

But to me the most important thing to do is more analysis, specifically analysis to identify the biggest existential risks and the types of interventions that would be most likely to mitigate those risks.

[Image: A telescope used to track asteroids at the Spaceguard Centre in the United Kingdom.]

I noticed that you define an existential risk as potentially bringing about the premature extinction of Earth-originating intelligent life. I wondered what you mean by premature? What would count as a mature extinction?

Bostrom: Well, you might think that an extinction occurring at the time of the heat death of the universe would be in some sense mature. There might be fundamental physical limits to how long information processing can continue in this universe of ours, and if we reached that level there would be extinction, but it would be the best possible scenario that could have been achieved. I wouldn't count that as an existential catastrophe, rather it would be a kind of success scenario. So it's not necessary to survive infinitely long, which after all might be physically impossible, in order to have successfully avoided existential risk.

In considering the long-term development of humanity, do you put much stock in specific schemes like the Kardashev Scale, which plots the advancement of a civilization according to its ability to harness energy, specifically the energy of its planet, its star, and then finally the galaxy? Might there be more to human flourishing than just increasing mastery of energy sources?

Bostrom: Certainly there would be more to human flourishing. In fact I don't even think that particular scale is very useful. There is a discontinuity between the stage where we are now, where we are harnessing a lot of the energy resources of our home planet, and a stage where we can harness the energy of some increasing fraction of the universe, like a galaxy. There is no particular reason to think that we would linger at some intermediate stage where we harness the energy of one star like our sun. By the time we can do that I suspect we'll be able to engage in large-scale space colonization, to spread into the galaxy and then beyond, so I don't think harnessing the single star is a relevant step on the ladder.

If I wanted some sort of scheme that laid out the stages of civilization, the period before machine superintelligence and the period after machine superintelligence would be a more relevant dichotomy. When you look at what's valuable or interesting in examining these stages, it's going to be what is done with these future resources and technologies, as opposed to their structure. It's possible that the long-term future of humanity, if things go well, would from the outside look very simple. You might have Earth at the center, and then you might have a growing sphere of technological infrastructure that expands in all directions at some significant fraction of the speed of light, occupying larger and larger volumes of the universe---first in our galaxy, and then beyond as far as is physically possible. And then all that ever happens is just this continued increase in the spherical volume of matter colonized by human descendants, a growing bubble of infrastructure. Everything would then depend on what was happening inside this infrastructure, what kinds of lives people were leading there, what kinds of experiences people were having. You couldn't infer that from the large-scale structure, so you'd have to sort of zoom in and see what kind of information processing occurred within this infrastructure.

It's hard to know what that might look like, because our human experience might be just a small little crumb of what's possible. If you think of all the different modes of being, different kinds of feeling and experiencing, different ways of thinking and relating, it might be that human nature constrains us to a very narrow little corner of the space of possible modes of being. If we think of the space of possible modes of being as a large cathedral, then humanity in its current stage might be like a little cowering infant sitting in the corner of that cathedral having only the most limited sense of what is possible.
"Huey Long once said, “Fascism will come to America in the name of anti-fascism.” I'm afraid, based on my own experience, that fascism will come to America in the name of national security."
-Jim Garrison 1967

Re: Human race 'will be extinct within 100 years'

Postby stillrobertpaulsen » Fri Feb 27, 2015 7:39 pm

Old article, but one particular paragraph stood out as to why the trajectory we are currently on will remain uninterrupted, Keystone or no Keystone:

We are deluding ourselves: The apocalypse is coming — and technology can’t save us

If we don’t manage to undertake these initiatives in the very near future, our models will only become more inaccurate — and far too optimistic. Even worse, the massively business-friendly political climate that dominates the developed world makes effecting any immediate, substantive changes to the way our economies operate functionally impossible. And even in the rare instances where wealthy nations have attempted to move away from fossil fuels, developing nations have simply picked up the slack, accounting for the vast majority of carbon emission growth throughout the last decade. In fact, a recent forecast projects that the developing world will increase their total emissions by 50 percent in the next twenty years — a far cry from the carbon draw-down needed if we want any hope of avoiding climate change’s most perilous impacts.

Re: Human race 'will be extinct within 100 years'

Postby seemslikeadream » Fri Feb 27, 2015 8:16 pm

Glacial Melting In Antarctica Makes Continent The 'Ground Zero Of Global Climate Change'
AP | By Luis Andres Henao and Seth Borenstein
Posted: 02/27/2015 12:08 am EST

CAPE LEGOUPIL, Antarctica (AP) — From the ground in this extreme northern part of Antarctica, spectacularly white and blinding ice seems to extend forever. What can't be seen is the battle raging thousands of feet (hundreds of meters) below to re-shape Earth.

Water is eating away at the Antarctic ice, melting it where it hits the oceans. As the ice sheets slowly thaw, water pours into the sea — 130 billion tons of ice (118 billion metric tons) per year for the past decade, according to NASA satellite calculations. That's the weight of more than 356,000 Empire State Buildings, enough ice melt to fill more than 1.3 million Olympic swimming pools. And the melting is accelerating.
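The Empire State Building equivalence roughly checks out, assuming a commonly cited building mass of about 365,000 tons (an assumption; the article does not state the figure it used):

```python
# Sanity check of the AP's ice-melt comparison.
# Assumes the Empire State Building weighs ~365,000 tons
# (a commonly cited figure; the article does not state its assumption).
ice_melt_tons = 130e9        # tons of ice lost per year (NASA, per the article)
empire_state_tons = 365_000  # assumed building mass

buildings = ice_melt_tons / empire_state_tons
print(round(buildings))  # ≈ 356,000, matching the article's "more than 356,000"
```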


In the worst case scenario, Antarctica's melt could push sea levels up 10 feet (3 meters) worldwide in a century or two, recurving heavily populated coastlines.

Parts of Antarctica are melting so rapidly it has become "ground zero of global climate change without a doubt," said Harvard geophysicist Jerry Mitrovica.

Here on the Antarctic peninsula, where the continent is warming the fastest because the land sticks out in the warmer ocean, 49 billion tons of ice (nearly 45 billion metric tons) are lost each year, according to NASA. The water warms from below, causing the ice to retreat on to land, and then the warmer air takes over. Temperatures rose 5.4 degrees Fahrenheit (3 degrees Celsius) in the last half century, much faster than Earth's average, said Ricardo Jana, a glaciologist for the Chilean Antarctic Institute.

As chinstrap penguins waddled behind him, Peter Convey of the British Antarctic Survey reflected on changes he could see on Robert Island, a small-scale example and perhaps an early warning signal of what's happening to the peninsula and the continent as a whole.

"I was last here 10 years ago," Convey said during a rare sunny day on the island, with temperatures just above freezing. "And if you compare what I saw back then to now, the basic difference due to warming is that the permanent patches of snow and ice are smaller. They're still there behind me, but they're smaller than they were."

Robert Island hits all the senses: the stomach-turning smell of penguin poop; soft moss that invites the rare visitor to lie down, as if on a water bed; brown mud, akin to stepping in gooey chocolate. Patches of the moss, which alternates from fluorescent green to rust red, have grown large enough to be football fields. Though 97 percent of the Antarctic Peninsula is still covered with ice, entire valleys are now free of it, ice is thinner elsewhere and glaciers have retreated, Convey said.

Dressed in a big red parka and sky blue hat, plant biologist Angelica Casanova has to take her gloves off to collect samples, leaving her hands bluish purple from the cold. Casanova says she can't help but notice the changes since she began coming to the island in 1995. Increasingly, plants are taking root in the earth and stone deposited by retreating glaciers, she says.

"It's interesting because the vegetation in some way responds positively. It grows more," she said, a few steps from a sleeping Weddell seal. "What is regrettable is that all the scientific information that we're seeing says there's been a lot of glacier retreat and that worries us."

Just last month, scientists noticed in satellite images that a giant crack in an ice shelf on the peninsula called Larsen C had grown by about 12 miles (20 kilometers) in 2014. Ominously, the split broke through a type of ice band that usually stops such cracks. If it keeps going, it could cause the breaking off of a giant iceberg somewhere between the size of Rhode Island and Delaware, about 1,700 to 2,500 square miles (4,600 to 6,400 square kilometers), said British Antarctic Survey scientist Paul Holland. And there's a small chance it could cause the entire Scotland-sized Larsen C ice shelf to collapse like its sister shelf, Larsen B, did in a dramatic way in 2002.

A few years back, scientists figured Antarctica as a whole was in balance, neither gaining nor losing ice. Experts worried more about Greenland; it was easier to get to and more noticeable, but once they got a better look at the bottom of the world, the focus of their fears shifted. Now scientists in two different studies use the words "irreversible" and "unstoppable" to talk about the melting in West Antarctica. Ice is gaining in East Antarctica, where the air and water are cooler, but not nearly as much as it is melting to the west.

"Before Antarctica was much of a wild card," said University of Washington ice scientist Ian Joughin. "Now I would say it's less of a wild card and more scary than we thought before."

Over at NASA, ice scientist Eric Rignot said the melting "is going way faster than anyone had thought. It's kind of a red flag."

What's happening is simple physics. Warm water eats away at the ice from underneath. Then more ice is exposed to the water, and it too melts. Finally, the ice above the water collapses into the water and melts.

Climate change has shifted the wind pattern around the continent, pushing warmer water farther north against and below the western ice sheet and the peninsula. The warm, more northerly water replaces the cooler water that had been there. It's only a couple degrees Fahrenheit warmer than the water that used to be there, but that makes a huge difference in melting, scientists said.

The world's fate hangs on the question of how fast the ice melts.

At its current rate, the rise of the world's oceans from Antarctica's ice melt would be barely noticeable, about one-third of a millimeter a year. The oceans are that vast.
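That figure is straightforward to verify: spreading 118 billion metric tons of meltwater over the ocean surface (about 3.6 × 10^8 km², a standard value assumed here, not given in the article) yields roughly a third of a millimeter:

```python
# Verify the "one-third of a millimeter a year" figure.
# Assumes ocean surface area of ~3.6e8 km^2 (standard value, not in the article).
melt_kg = 118e9 * 1000       # 118 billion metric tons per year, in kg
ocean_area_m2 = 3.6e8 * 1e6  # km^2 -> m^2
water_density = 1000.0       # kg per m^3 of meltwater

rise_m_per_year = melt_kg / (water_density * ocean_area_m2)
print(rise_m_per_year * 1000)  # ≈ 0.33 mm per year
```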

But if all the West Antarctic ice sheet that's connected to water melts unstoppably, as several experts predict, there will not be time to prepare. Scientists estimate it will take anywhere from 200 to 1,000 years to melt enough ice to raise seas by 10 feet, maybe only 100 years in a worst case scenario. If that plays out, developed coastal cities such as New York and Guangzhou could face up to $1 trillion a year in flood damage within a few decades and countless other population centers will be vulnerable.

"Changing the climate of the Earth or thinning glaciers is fine as long as you don't do it too fast. And right now we are doing it as fast as we can. It's not good," said Rignot, of NASA. "We have to stop it, or we have to slow it down as best we can."

___

Associated Press writer Luis Andres Henao reported this story from various locations in Antarctica and Seth Borenstein reported from Washington.
Mazars and Deutsche Bank could have ended this nightmare before it started.
They could still get him out of office.
But instead, they want mass death.
Don’t forget that.

Re: Human race 'will be extinct within 100 years'

Postby Nordic » Fri Feb 27, 2015 9:00 pm

What pisses me off is that it's not all about climate change and emissions.

Even WITHOUT climate change we were hell-bent on a path to mass extinction. Hell it's already started!

It's almost like climate change is a goddamn red herring!

As far as people thinking we could never go extinct, they should all visit the Page Museum at the LaBrea Tar Pits here in LA. See all the species that actually overlapped with early man's existence that are now extinct.

Giant ground sloths, saber toothed tigers, mastodons, just to name a few of the more well - known ones. They used to be abundant. Now they're gone. Except for their Tar-stained skeletons on display.

We should just get a jump on it and add some Homo sapiens sapiens skeletons.
"He who wounds the ecosphere literally wounds God" -- Philip K. Dick

Re: Human race 'will be extinct within 100 years'

Postby Elvis » Fri Feb 27, 2015 9:35 pm

Box just returned from a trip to Greenland. Right now, the ice there is … black


What's making the ice black?

They don't seem to mention the cause; do we assume it's carbon falling out of the sky? Has anyone analyzed whatever it might be?
“The purpose of studying economics is not to acquire a set of ready-made answers to economic questions, but to learn how to avoid being deceived by economists.” ― Joan Robinson

Re: Human race 'will be extinct within 100 years'

Postby tron » Sat Feb 28, 2015 6:29 am

volcanic ash?

Re: Human race 'will be extinct within 100 years'

Postby Iamwhomiam » Sat Feb 28, 2015 10:57 pm

Elvis » Fri Feb 27, 2015 9:35 pm wrote:
Box just returned from a trip to Greenland. Right now, the ice there is … black


What's making the ice black?

They don't seem to mention the cause; do we assume it's carbon falling out of the sky? Has anyone analyzed whatever it might be?

It is just that: carbon black, common soot.

Start thinking of our skies as landfills for industrial emissions. We'd all be screaming if a factory or a landfill was leaching poisons into our water supply, but nary a peep do we hear when they poison our air. Thousands of people die every day from obstructive lung diseases, and still we continue increasing the quantities of toxic pollutants entering our atmosphere by millions of tons each year.

Re: Human race 'will be extinct within 100 years'

Postby stillrobertpaulsen » Wed Mar 04, 2015 6:39 pm

Just to give this dark thread a little levity:

Scientists determine the nation’s safest places to ride out a zombie apocalypse

By Terrence McCoy March 4 at 3:21 AM

The question was not an uncommon one. But the pursuit of an answer was.

Years ago, Cornell University researcher Alex Alemi was paging through some literature on zombies when the wheels started turning. What would happen, he wondered, if there was a real zombie outbreak? He lived in Ithaca, N.Y. How long would it remain secure? Tucked away in the hills of central New York, would it hold out longer than the region’s mightiest cities?

To be sure, in a country such as ours, where zombies maintain an entrenched position in the American zeitgeist, Alemi was not the first to express such curiosity. The zombie genre constantly pulses with new entries from Hollywood blockbusters to best-selling books to university courses at Michigan State University. But Alemi, who had just finished Max Brooks’s zombie novel “World War Z,” had a special skill that set him apart: He’s a statistician.

Since a zombie outbreak would, he reasoned, be like an outbreak of other viral diseases, he began to build a statistical model to determine what a “realistic” outbreak would look like. He and a team of other Cornell researchers will this week present their findings at the March meeting of the American Physical Society. Their work couples rigorous scientific methodology with the fantastical. Which, as Alemi pointed out in an interview with The Washington Post, isn’t that different from how researchers approach hypothetical outbreaks of real diseases.

But there is one snag, he said. “Zombies are unique and very different than other diseases in that victims of other diseases either get better or succumb to the disease,” Alemi, a Cornell PhD candidate in physics, told The Post. “But zombies are the undead. They don’t get better. And the only way to stop them is for a human to kill the zombie. With other diseases, no matter how many infections you model, the disease is not going to infect every single person. But in the zombie model, you really can turn every single person into a zombie.”

His interactive model, in which the user can design his or her own zombie outbreak anywhere in the United States with a click of a button, has its limits. For one, the only transportation the zombies have at their disposal is their own feet. Unlike Ebola, the models aren’t concerned with zombies using mass transit to quickly spread the virus. “Zombies do not fly airplanes,” as the research paper put it.

The model has several levers to dictate how quickly the outbreak moves. One is the “bite-to-kill” ratio, which measures how often a person would kill a zombie vs. how often a zombie would infect a human. Also, population centers are fixed — meaning if you live in Chicago when an outbreak strikes, you can’t escape. “Transportation would likely break down in an outbreak,” Alemi said. Even with those confines, Alemi said, “modeling zombies takes you through a lot of the techniques used to model real diseases, albeit in a fun context.”
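The article doesn't give the team's equations, and their actual model is spatial and stochastic, simulating individual interactions across the U.S. Still, the ingredients it names (susceptible humans, zombies, and the bite-to-kill lever) can be sketched as a minimal well-mixed SZR compartment model, with all rates purely illustrative:

```python
# Minimal SZR (susceptible-zombie-removed) sketch of the kind of
# compartment model the article describes. All parameters are
# illustrative; the researchers' real model is spatial and stochastic.

def simulate(pop=1_000_000, zombies=1, bite_to_kill=0.8, steps=100):
    """Step a well-mixed SZR model. bite_to_kill is the chance an
    encounter turns the human rather than destroys the zombie."""
    s, z, removed = float(pop), float(zombies), 0.0
    contact = 1e-6  # per-pair encounter rate per step (illustrative)
    for _ in range(steps):
        encounters = contact * s * z
        bites = min(encounters * bite_to_kill, s)        # humans turned
        kills = min(encounters * (1 - bite_to_kill), z)  # zombies destroyed
        s -= bites
        z += bites - kills
        removed += kills
    return s, z, removed

s, z, r = simulate()
print(f"survivors: {s:.0f}, zombies: {z:.0f}, destroyed zombies: {r:.0f}")
```

Raising `bite_to_kill` above 0.5 tips each encounter in the zombies' favor, which is why this kind of model, unlike a conventional epidemic, can in principle convert the entire population.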

A “realistic” zombie outbreak, as Alemi calls it, both conforms to popular interpretations — and breaks them. For instance, densely populated regions are just about the worst place to be, a fact assumed in any number of zombie flicks. But unlike the movies, which often depict diffuse saturation and numerous locations simultaneously affected, a true outbreak wouldn’t work like that. It would take hours, days, months and even years to spread into every underpopulated nook and cranny of the United States.

“New York City would fall in a matter of days, but Ithaca, where I am — it would take weeks for the zombies to make their way up here,” Alemi said. “It would be a situation where you’re watching chaos on television, but where you are everything would remain unchanged.”

Indeed, watching the virus seep across the United States is like watching a flame seek out its next source of oxygen. The spots with heavy populations are gobbled up at a dizzying rate, but the virus slows in underpopulated regions, leaving them more or less protected. If the disease were to begin in the heart of New York City, other big cities such as Boston and Washington, D.C., would be gone within days, if not hours. But in more isolated places, like the northern reaches of Vermont and New Hampshire, there wouldn’t be a zombie in sight.

The key to surviving the zombie apocalypse — though Alemi contends it would eventually kill us all — would be to live as far away from a city as possible. For example, he said, the underpopulated region between Los Angeles and San Francisco would be just as susceptible as those metropolises, for the simple reason that it sits in close proximity to both.

It’s not enough to live in a sparsely populated area. Survivors would likely inhabit regions like central Nevada or the Colorado Rockies, far removed from any other city. “It’s bad to be near any big city,” Alemi said. “Just look at the population map. First, you’d benefit from the fact that it would be highly unlikely for the zombie outbreak to begin where you are, and then it would take a very long time for any zombies to get out there.”

But don’t move yet! A rush to underpopulated regions would, in fact, only make them vulnerable. Plus there’s the fact that, you know, zombies are make-believe.

But hey, always good to be prepared. Because even in the American hinterlands, it would be “on the order of years,” Alemi said, before the first zombie arrived.


Re: Human race 'will be extinct within 100 years'

Postby stillrobertpaulsen » Wed Apr 15, 2015 5:29 pm

I think the observations about population growth are spot-on.

Saturday, Mar 14, 2015 01:00 PM PDT

When humans go extinct: How life will evolve after we’re gone

The sixth extinction could wipe out up to half of Earth's species. Michael Tennesen tells us what might happen next
Lindsay Abrams

The sixth mass extinction is nearly upon us. Species on Earth are dying out at a rate one thousand times greater than they were before humans began altering the environment. By the end of this century, scientists warn, anywhere from 20 to 50 percent of the species on Earth could be lost forever.

And among those who might not make it out the other side, says science writer Michael Tennesen, are humans.

It’s to be expected: No species lasts forever, and in our relatively short existence, humanity has done an impressively good job of undermining the forces necessary for our survival. But while we’re already taking a number of species out with us, Tennesen argues, nature is resilient: the end of man won’t necessarily mean the end of life itself. “Plants, animals, and microbes will survive, adapt, diversify, and proliferate,” he writes of life after man. “New plants will evolve to vanquish our monocultures of corn, wheat, and rice. With far fewer animals around, those species that survive the bottleneck of extinction will move into newly abandoned spaces. With little competition, they will thrive and rapidly evolve.”

We know that, he adds, because this sort of thing has happened in the past.

In “The Next Species: The Future of Evolution in the Aftermath of Man,” Tennesen looks to the previous five extinctions for clues as to what we can expect from the sixth. He spoke with Salon about how recognizing our place in nature might help us last just a little bit longer, and how an eventual Earth without man, at least in our current incarnation, won’t necessarily be such a terrible place. Our conversation, below, has been lightly edited for length and clarity:

A lot of us look at these studies about pollution and climate change and extinction on a very day-by-day, headline basis. What was the value for you of stepping back and taking a more pulled-back, planetary perspective on these issues?

I was influenced by a paper that Anthony Barnosky from the University of California at Berkeley wrote, about his idea that we are entering a mass extinction event. People who study life on Earth think that extinction has a dual side: it could be a catastrophe or it could be an opportunity. The comet that fell out of the sky at the end of the Cretaceous period knocked out the dinosaurs, but made way for mammals and man.

So I’m trying to look at what can happen next. And to get an idea of what can happen next, I kind of had to pull back and look at the history of life on Earth with the idea: how does life recover from catastrophe? What things can you see in both events that might possibly be repeated in the future? I wanted to look at the whole concept. There was a book by Alan Weisman, “The World Without Us,” where he talked about what it would be like tomorrow if man disappeared and how long it would take for man’s infrastructure to come down, for New York to fall. I just wanted to look at it from more of a reality standpoint: What would the biology be like in such an event?

When you’re looking back at some of these lessons we can learn from past mass extinctions, what are some of the most important things you came across, that we should be paying attention to?

If you look at the past, the driver of four out of the five mass extinctions has been carbon dioxide. I went to Guadalupe Mountains National Park and took a hike with the national park biologist Jonena Hearst to Capitan Reef, which was just this explosion of life that existed back in the Permian period, 250 million years ago, just before the Permian extinction. It showed just how susceptible life is to chemicals in the environment, and the litany of things that was going on during the Permian extinction, which was the greatest extinction we’ve ever had: 90 percent of life was knocked out of the ocean; 70 to 75 percent on land. The high CO2 content and greenhouse gases and other problems — sulfur dioxide release, major changes in the ocean currents — these are some of the things we’re dealing with now. I don’t know if we’re going to be heading into that massive of an event, but there are lessons there. A lot of people want to go, “Well, what’s CO2? What’s the big deal?” It’s 400 parts per million. That’s a lot.

As you said, there is sort of a more optimistic way of looking at mass extinction, because there are some positive potential outcomes…

In an extinction event, you’ve got a new playing board. I went up to Mt. St. Helens and looked at the land around that volcano. They’ve actually separated a portion of the volcanic area as a natural experiment to see how life would come back. Nature actually does a pretty fabulous job pretty quickly.

I also looked at the aftermath of the eruption of Krakatoa — we’re talking the late 1800s; the whole island was almost leveled and it knocked out islands all around it, just a massive explosion. And yet today, not only have plants returned to Krakatoa and the islands that remain in that area, but so has wildlife. All in about 150 years; that’s pretty amazing.

There would be a resilience to nature if man could just be a part of nature and the natural environment, rather than trying to be the dominant force on Earth. I guess my take-home message of the whole thing is that we can’t start thinking of ourselves as the be-all and end-all of natural history. We’re just a dot on a continuum that’s been ongoing for 600 million years. We really need to get a better perspective of ourselves. We are not the most important thing on the planet. We won’t last forever. Nobody I talked to thought we would. You, I and Homo sapiens have a limited life span, but if we could slow down and pull back a bit and start acting a little bit more like a part of life on Earth rather than the whole reason for its existence, we might just last longer and enjoy what we have left.

I wonder if that could be a different way of framing climate change, and other problems humans are causing. We like to say that we’re destroying the planet, but if nature is resilient, it’s more that we’re destroying ourselves. Earth is more likely to recover than we are.

We need Earth. We need nature. New York needs the microbes in the soil and the roots of plants and trees in the Catskill mountains to clean its water. The eastern coast of central America needs the coral reefs, the mangroves and the salt grasses to deaden the force of the hurricanes and the large waves that come in. Nature is really important for insect control: the birds and the bats have a lot to do with controlling insects and they’re a really important part of pollination. Nature actually plays a role in the creation of oxygen. We get a majority of our oxygen from the rainforest and the plankton in the ocean. There are a lot of really important things that nature does for us that we don’t respect.

And the reasons why we should respect them are, in a lot of ways, selfish.

Yeah, we’re looking after our own health if we look after the health of nature.

You write about all these things that could potentially cause man to go extinct. Is there any, from what you’ve seen, that’s most likely to hit us first?

People talk about the principal driver being climate change, but I really think it’s population growth. I mean, in 1800 we had a billion people, and right now we have 7 billion — only 200 years later. We’re expecting 9 billion by mid-century and 10 or 11 billion by the end of the century. This has just happened in our lifetimes: the big growth was after World War II in the United States. It’s the driving force, but we’re not looking at it, and I really believe that climate change would not be such a horrible problem right now if we only had 1 billion people on Earth. It’s something that we don’t want to look at. That’s even more taboo than climate change.

The argument goes that that’s the underlying factor driving climate change, and also driving the emergence of infectious disease, right?

Well, on infectious disease: Rick Ostfeld at the Cary Institute of Ecosystem Studies has been studying how Lyme disease emerges in New England and the Northeast. He points out that by eliminating species, we’re actually producing an environment that is conducive to disease. An environment with a multiplicity of species has different carriers of the disease — some, like rats and chipmunks, are good carriers, while others, like possums, are not. Essentially, by having the full complement of species, we dilute the transmission of disease. That’s another selfish reason to protect the rest of nature.

I was really intrigued by the part of the book where you talk about different scenarios for mankind’s future — you know, if we don’t go extinct, some other things that might happen. Did anything strike you as the most likely, or perhaps most enticing, scenario?

I had a variety of scenarios in the book. And you know, I’m kind of a nature lover, so the path that goes on after man actually sounds kind of good to me. There are a couple of tar deposits under Los Angeles that preserve little snapshots of what life was like 30 to 40 thousand years ago. People talk about what life was like before the coming of the Europeans, and I started running into more people who liked to look at life before the arrival of man. It’s really exciting to me, that L.A. could be a basin of mastodons and camels and saber-toothed tigers and things like that.

Then there’s the Palmyra Atoll, which exists between Hawaii and Australia, so it’s way out in the middle of the ocean, and it’s unoccupied. Today the Nature Conservancy actually protects it. It’s remarkable. If you run around with science journalists, everyone wants to go there, just to see what life would be like without the influence of man. I’d like to see that.

There’s also the idea of what life could be like if man continued. What would the next species of humans be? Genetic manipulation could produce a race of super-humans that would basically want to isolate themselves from us. That’s a possibility. Then there are more fanciful things. I visited Oxford professor Nick Bostrom — he had the idea that at some point, man is going to start uploading his mind into a computer. The idea is that as you reach the end of life, there’s a way to extend it: you could live on as either a robot or as an avatar in a virtual world. Second Life, where huge numbers of people spend 10 or 20 hours a week in virtual lives, is an example of that already. James Barrat, in “Our Final Invention,” talked about what could happen if A.I. could replicate itself or learned how to advance its intelligence on its own, and how soon it would be before it passed us.

Another big possibility is if we go to Mars. If we were to go to another planet — and I chose Mars because it’s the most likely — living there for a period of time would be conducive to developing into another species. You have one-third of the gravity you have on Earth, plus you have to make your own oxygen. There’s high UV radiation, and this can cause disruptions to the genetic system. These are all good ways to become another species. Within a few generations of life on Mars, you could have, by definition, another species of man, one that might come down and visit us again. And whenever we have the next species, part of Darwin’s idea is that one species develops out of the previous one and then outcompetes it.

You seem to feel that the most optimistic futures don’t include man. Can you see any role for humans as we are now — perhaps if there were just fewer of us?

Yeah, that would help. Toward the end of the book I talk about the fact that there have been some examples of that, some major changes. We did get rid of slavery. That was a major thing; we needed a major change of thought. Women’s suffrage was a major change in the way society runs. Today, our growing acceptance of homosexuality could also be another change in society. So we could have a major change in society, but what it requires of us is to pull back from the dinner table, essentially. There can’t be as many of us. Maybe we could have 1 billion, and I don’t know how that happens.

That’s the next question.

I think if we looked at it more often and if we studied it, if that entered the conversation, that would be a really good thing. And there’s promise in that. If we start really addressing population, whatever that means, and we actually start discussions of it, that would be a good sign.
"Huey Long once said, “Fascism will come to America in the name of anti-fascism.” I'm afraid, based on my own experience, that fascism will come to America in the name of national security."
-Jim Garrison 1967
User avatar
stillrobertpaulsen
 
Posts: 2414
Joined: Wed Jan 14, 2009 2:43 pm
Location: California
Blog: View Blog (37)

Re: Human race 'will be extinct within 100 years'

Postby stillrobertpaulsen » Fri Jun 19, 2015 6:17 pm

A child born today may live to see humanity’s end, unless…
By David Auerbach
June 18, 2015

A couple hugs while standing on a hilly area overlooking Cairo on a dusty and hazy day where temperatures reached 114 Fahrenheit, May 27, 2015. REUTERS/Asmaa Waguih

Humans will be extinct in 100 years because the planet will be uninhabitable, according to Australian microbiologist Frank Fenner, one of the leaders of the effort to eradicate smallpox in the 1970s. He blames overcrowding, denuded resources and climate change.

Fenner’s prediction is not a sure bet, but he is correct that there is no way emissions reductions will be enough to save us from our trend toward doom. And there doesn’t seem to be any big global rush to reduce emissions, anyway. When the G7 called on Monday for all countries to reduce carbon emissions to zero in the next 85 years, the scientific reaction was unanimous: That’s far too late.

And no possible treaty that emerges from the current United Nations Framework Convention on Climate Change in Bonn, Germany, in preparation for November’s United Nations climate conference in Paris, will be sufficient. At this point, lowering emissions is just half the story — the easy half. The harder half will be an aggressive effort to find the technologies needed to reverse the climate apocalypse that has already begun.

For years now, we have heard that we are at a tipping point. Al Gore warned us in An Inconvenient Truth that immediate action was required if we were to prevent global warming. In 2007, Sir David King, former chief scientific advisor to the British government, declared, “Avoiding dangerous climate change is impossible – dangerous climate change is already here. The question is, can we avoid catastrophic climate change?” In the years since, emissions have risen, as have global temperatures. Only two conclusions can be drawn: Either these old warnings were alarmist, or we are already in far bigger trouble than the U.N. claims. Unfortunately, the latter seems to be the case.

Lowering emissions and moving to cleaner energy sources is a necessary step to prevent catastrophic temperature rises. The general target is to keep global temperatures from rising more than 2 degrees Celsius. Higher increases — like the 5C increase currently projected by 2100 — run the risk of widespread flooding, famine, drought, sea-level rise, mass extinction and, worse, the potential of passing a tipping point (frequently set at 6C) that could render much of the planet uninhabitable and wipe out most species. Even the 2C figure predicts more than a meter’s rise in sea levels by 2100, enough to displace millions. It is no wonder that the Pentagon calls climate change a serious “threat multiplier” and is considering its potential disruptive impact across all its planning.

This is where the U.N. talks fall short — by a mile. The targets proffered by the United States (a 26 percent to 28 percent decrease from 2005 levels by 2025), the European Union (a 40 percent decrease from 1990 levels by 2030) and China (an unspecified emissions peak by 2030) are nowhere near enough to keep us under the 2C target. In 2012, journalist Bill McKibben, in a feature for Rolling Stone, explained much of the math behind the current thinking on global warming. He concluded that the United Nations’ figures were definitely on the rosy side. In particular, McKibben noted that the temperature has already increased 0.8C, and even if we were to stop all carbon-dioxide emissions today, it would increase another 0.8C simply due to the existing carbon dioxide in the atmosphere. That leaves only a 0.4C buffer before hitting 2C. Even assuming the Paris conference implements everything that’s promised, we will be on track to use up the remaining “carbon budget” — the amount of carbon we can emit without blowing past the 2C threshold — within two to three decades, not even at mid-century.
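McKibben’s arithmetic above can be checked in a few lines. This is just a back-of-envelope sketch using the article’s round figures (2C ceiling, 0.8C observed, 0.8C committed), not a climate model:

```python
# The article's round numbers, degrees Celsius.
target = 2.0     # widely cited warming ceiling
observed = 0.8   # warming already measured
committed = 0.8  # further warming locked in by CO2 already in the atmosphere

buffer = target - observed - committed
print(f"buffer before 2C: {buffer:.1f} C")  # prints "buffer before 2C: 0.4 C"
```

That 0.4C buffer is what makes the two-to-three-decade carbon-budget estimate plausible even if every pledged reduction is delivered.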

These emissions-reduction frameworks, it is safe to say, are simply insufficient. By themselves, they only offer a small chance of preventing the earth from becoming mostly uninhabitable — for humans at least — over the next few centuries. For the talks to be more than just a placebo, they need to encompass aggressive plans for climate mitigation, with the assumption that current wishful targets won’t be met.

Apart from coordination to cope with climate-driven crises and associated instability, climate-change leadership needs to encourage and fund the development of technologies to reverse what we are unable to stop doing to our planet. Many of these technologies fall under the rubric of “carbon sequestration” — safely storing carbon rather than emitting it. Riskier strategies, like injecting sulfates into the air to reflect more of the sun’s heat into space and ocean iron fertilization to grow algae to suck in carbon, run a high risk of unintended consequences. Better and safer solutions to reduce CO2 concentrations in the atmosphere don’t yet exist; we need to discover them and regulate them, to avoid the chaos of what economists Gernot Wagner and Martin L. Weitzman term “rogue geoengineering” in their book Climate Shock.

None of these approaches are substitutes for emissions reductions. Achieving a carbon-neutral society is a necessary long-term goal regardless of other technological fixes. Technology could buy us the time to get there without our planet burning up. Ultimately, we need a Cold War-level of investment in research into new technologies to mitigate the coming effects of global warming. Without it, the U.N.’s work is a nice gesture, but hardly a meaningful one.

Re: Human race 'will be extinct within 100 years'

Postby FourthBase » Fri Jun 19, 2015 7:28 pm

Bostrom: Well, again I think the definition of an existential risk goes beyond just extinction, in that it also includes the permanent destruction of our potential for desirable future development. Our permanent failure to develop the sort of technologies that would fundamentally improve the quality of human life would count as an existential catastrophe. I think there are vastly better ways of being than we humans can currently reach and experience. We have fundamental biological limitations, which limit the kinds of values that we can instantiate in our life---our lifespans are limited, our cognitive abilities are limited, our emotional constitution is such that even under very good conditions we might not be completely happy. And even at the more mundane level, the world today contains a lot of avoidable misery and suffering and poverty and disease, and I think the world could be a lot better, both in the transhuman way, but also in this more economic way. The failure to ever realize those much better modes of being would count as an existential risk if it were permanent.


That whole interview is the most insidiously evil thing I've ever read, but that part especially takes the cake. Bostrom is the single most dangerous thing in the world. Not nukes, not asteroids, not climate change. Him.
“Joy is a current of energy in your body, like chlorophyll or sunlight,
that fills you up and makes you naturally want to do your best.” - Bill Russell
User avatar
FourthBase
 
Posts: 7057
Joined: Thu May 05, 2005 4:41 pm
Blog: View Blog (0)

Re: Human race 'will be extinct within 100 years'

Postby stillrobertpaulsen » Fri Jun 19, 2015 8:35 pm

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Fuck! I've been rambling on in the Transhuman vs. Anarcho-Primitives thread pondering the 'what-ifs' while in the back of my head going, "I know I posted some really creepy transhuman interview somewhere." Thanks for pointing this out. The way Bostrom goes on, he sees us as neanderthals deserving of extinction in the emergence of transhuman superiority. Talk about taking eugenics to another level!

Re: Human race 'will be extinct within 100 years'

Postby FourthBase » Sat Jun 20, 2015 2:25 am

stillrobertpaulsen » 19 Jun 2015 19:35 wrote:^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Fuck! I've been rambling on in the Transhuman vs. Anarcho-Primitives thread pondering the 'what-ifs' while in the back of my head going, "I know I posted some really creepy transhuman interview somewhere." Thanks for pointing this out. The way Bostrom goes on, he sees us as neanderthals deserving of extinction in the emergence of transhuman superiority. Talk about taking eugenics to another level!


It's like a bingo card of so many of the worst ideas ever. Eugenics, genocide, cult suicide, Orwellian doublespeak, Huxleyian utopianism, hatred of Nature, extinction of an animal species, us...picture Bostrom in a Nazi uniform singing "It's a Bingo!" And really, how chilling is it to read someone openly, cheerfully plan for such a Final Solution? Of the human problem. Has anyone called him a speciesist yet? A self-hating Human? A robotic colonialist? He's one of the most respected intellectuals in the world and he's a fucking monster. He promotes panic over other existential threats merely to extort us into considering transhumanism, which is like a pyromaniac recounting to you in great detail all the ways your home could be destroyed by an earthquake or a hurricane or a termite colony in the hopes that you'll let him prevent all that by setting your house on fire. Bostrom is a psychopath and, unfortunately, also a genius. Hitler with an elite IQ. His arguments need to be the thing set on fire and burned to ash. From any and every direction, from the right and the left and anywhere. Who are the top anti-transhumanists in the world? The most notorious ones are fundies -- they have limited, shrinking sway over the unreasonable, that's it. I'm talking the secular, respectable world of thinkers, writers, activists. On the right, Fukuyama? (Or is he now in-between?) On the left...who? Uh...Norman Ball?

Re: Human race 'will be extinct within 100 years'

Postby coffin_dodger » Sat Jun 20, 2015 9:03 am

Bostrom: Well, again I think the definition of an existential risk goes beyond just extinction, in that it also includes the permanent destruction of our potential for desirable future development. Our permanent failure to develop the sort of technologies that would fundamentally improve the quality of human life would count as an existential catastrophe. I think there are vastly better ways of being than we humans can currently reach and experience. We have fundamental biological limitations, which limit the kinds of values that we can instantiate in our life---our lifespans are limited, our cognitive abilities are limited, our emotional constitution is such that even under very good conditions we might not be completely happy. And even at the more mundane level, the world today contains a lot of avoidable misery and suffering and poverty and disease, and I think the world could be a lot better, both in the transhuman way, but also in this more economic way. The failure to ever realize those much better modes of being would count as an existential risk if it were permanent.


In his Machine-God fantasies, he's completely overlooking biological evolution of consciousness.
User avatar
coffin_dodger
 
Posts: 2216
Joined: Thu Jun 09, 2011 6:05 am
Location: UK
Blog: View Blog (14)
