
January 13, 2014 by Max Tegmark
Exactly three years ago,
on January 13, 2011, we humans were dethroned by a computer on the quiz show Jeopardy!. A
year later, a computer was licensed to drive cars in Nevada after being judged safer than a human.
What’s next? Will computers
eventually beat us at all tasks, developing superhuman intelligence?
I have little doubt that
this can happen: our brains are a bunch of particles obeying the laws
of physics, and there’s no physical law precluding particles from being
arranged in ways that can perform even more advanced computations.
Risks vs. rewards of the Singularity
But will it happen
anytime soon? Many experts are skeptical, while others such as Ray Kurzweil
predict it will happen by 2045.
What I think is quite
clear is that if it happens, the effects will be explosive: as Irving Good
realized in 1965, machines with superhuman intelligence could rapidly
design even better machines. Vernor Vinge called the resulting intelligence explosion “The
Singularity,” arguing that it was a point beyond which it was impossible
for us to make reliable predictions.
After this, life on Earth
would never be the same. Whoever or whatever controls this technology would
rapidly become the world’s wealthiest and most powerful, outsmarting
all financial markets, out-inventing and out-patenting all human
researchers, and out-manipulating all human leaders. Even if we
humans nominally merge with such machines, we might have no guarantees
whatsoever about the ultimate outcome, making it feel less like a
merger and more like a hostile corporate takeover.
In summary, will there be a
Singularity within our lifetime? And is this something we should work for or
against? On one hand, it could potentially solve most of our problems,
even mortality. It could also open up space, the final frontier: unshackled by
the limitations of our human bodies, such advanced life could rise up and
eventually make much of our observable universe come alive.
On the other hand, it could
destroy life as we know it and everything we care about — there are ample
doomsday scenarios that look nothing like the Terminator movies, but are far
more terrifying.
Other existential risks for spaceship Earth
I think it’s fair to say that
we’re nowhere near consensus on either of these two questions, but that doesn’t
mean it’s rational for us to do nothing about the issue. It could be the
best or worst thing ever to happen to humanity, so if there’s even a 1% chance
that there will be a Singularity in our lifetime, I think a reasonable
precaution would be to spend at least 1% of our GDP studying the issue and
deciding what to do about it. Yet we largely ignore it (a rare exception
being intelligence.org).
Moreover, this is far from the
only existential risk that we’re curiously complacent about, which is why I
decided to dedicate the last part of my new book Our Mathematical Universe (http://mathematicaluniverse.org) to this very topic.
As our “spaceship Earth”
blazes through cold and barren space, it both sustains and protects us. It’s
stocked with major but limited supplies of water, food and fuel. Its
atmosphere keeps us warm and shielded from the Sun’s harmful ultraviolet rays,
and its magnetic field shelters us from lethal cosmic rays. Surely any
responsible spaceship captain would make it a top priority to safeguard its
future existence by avoiding asteroid collisions, on-board explosions,
epidemics, overheating, ultraviolet shield destruction, and premature depletion
of supplies?
Why are we so reckless?
Yet our spaceship crew
hasn’t made any of these issues a top priority, devoting (by my
estimate) less than a millionth of its resources to them. In fact,
our spaceship doesn’t even have a captain!
Why are we so reckless? A
common argument is that we can’t afford to take precautions, and that because it
hasn’t been scientifically proven that any of these disaster scenarios will in
fact occur, it would be irresponsible to devote resources toward their
prevention.
To see the logical flaw in
this argument, imagine that you’re buying a stroller for a friend’s baby, and a
salesman tells you about a robust and well-tested $49.99 model that’s been sold
for over a decade without any reported safety problems.
“But we also
have this other model for just $39.99!” he says. “I know there have been news reports of it collapsing
and crushing the child, but there’s really no solid evidence, and nobody has
been able to prove in court that any of the deaths were caused by a design
flaw. And why spend 20% more money just because of some risks that aren’t
proven?”
If we’d happily spend an extra
20% to safeguard the life of one child, we should logically do the
same when the lives of all children are at stake: not only those
living now, but all future generations during the millions and potentially
billions of future years that our cosmos has granted us.
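The reasoning here is an expected-value comparison. A minimal sketch of it, using illustrative symbols that are not from the article: let p be the probability of a Singularity-scale catastrophe within our lifetime, V the value at stake, and C the cost of the precaution. Then

\[
  \underbrace{p\,V}_{\text{expected value protected}} \;\gg\; \underbrace{C}_{\text{cost of the precaution}}
  \qquad \text{whenever } p \ge 0.01 \text{ and } V \gg 100\,C,
\]

which is the stroller logic scaled up: with all present and future generations on one side of the inequality, even a 1% probability and a 1%-of-GDP study cost satisfy it comfortably.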
It’s not that we can’t afford
to safeguard our future: we can’t afford not to. What
we should really be worried about is that we’re not more
worried.
Max Tegmark, PhD, is a professor of physics at MIT. His
new book, Our Mathematical Universe: My Quest for the Ultimate Nature of
Reality, was published January 7, 2014.