Tuesday, March 6, 2012

We underestimate our chances of extinction...but it doesn't have to be that way




While reading The Atlantic today, I came across an interview with transhumanist Nick Bostrom.

Nick Bostrom is a professor of philosophy and the director of the Future of Humanity Institute at Oxford University. The title of the interview was, ominously enough, “We’re Underestimating the Risk of Human Extinction.” If you’ve been following my blog for any length of time, you no doubt know that I agree with him. And yes, by “extinction” I do mean every last human being. It can happen. However, Bostrom does not see the threat of annihilation coming so much from the Yellowstone supervolcano (a big fear of mine) or from environmental disasters brought on by global warming (ditto). Instead, he foresees the danger coming directly from our own hands.

That may seem odd coming from a transhumanist, one who ostensibly believes in the betterment of the human condition through technology. So what does Bostrom fear? A few examples he cites in the interview are machine intelligence and nanotechnology, both of which could give rise to especially deadly weapons systems. The same goes for developments in synthetic biology. He also mentions the risk of “designer pathogens” made possible by advances in genetic technology and by readily available information on DNA and virus sequences.

Aside from his laudable urging that we shed our collective hubris in thinking we could never go extinct, Bostrom makes two very important points. First, the general public shouldn’t confuse likely artificial intelligence scenarios with Hollywood stories. As he says:

“For instance, the artificial intelligence risk is usually represented by an invasion of a robot army that is fought off by some muscular human hero wielding a machine gun or something like that. If we are going to go extinct because of artificial intelligence, it's not going to be because there's this battle between humans and robots with laser eyes.”

Cool as a few of those films are, that’s just not how it’s likely to go down if it does indeed happen.

Second, though there is risk associated with technological advancement, Bostrom by no means advocates against technological development:

“Our permanent failure to develop the sort of technologies that would fundamentally improve the quality of human life would count as an existential catastrophe. I think there are vastly better ways of being than we humans can currently reach and experience. We have fundamental biological limitations, which limit the kinds of values that we can instantiate in our life---our lifespans are limited, our cognitive abilities are limited, our emotional constitution is such that even under very good conditions we might not be completely happy. And even at the more mundane level, the world today contains a lot of avoidable misery and suffering and poverty and disease, and I think the world could be a lot better, both in the transhuman way, but also in this more economic way. The failure to ever realize those much better modes of being would count as an existential risk if it were permanent.”

Indeed. Human beings are fundamentally weak and squishy things. There are technologies we can develop, such as cybernetics, that will help mitigate the biological and environmental challenges we face, and this can all be achieved without losing the essential attributes of humanity. Transhumanism is not to be feared. Technology is not to be feared. The best way to prevent the scenarios Bostrom describes is to take control of our own technological development, take control of our own biology, and ultimately understand the convergence of the two.

The interview also touches on the idea that our world is a computer simulation. I’ll leave you to read that part for yourself, as I’ve run short on time and space.


Follow me on Twitter: @Jntweets
