In absolute terms, humanity hasn’t been around very long at all. From our own perspective, though, evolution seems to be taking an eternity. As a species, we remain profoundly stupid.
We haven’t learned to share, or to work toward our common interests. We befoul our own nest. We continue to develop weapons that threaten annihilation. For every step forward made by an Einstein, a Beethoven, or a Tolstoy, the species as a whole struggles to follow. We lurch toward progress, then rapidly retreat again—witness the 2016 U.S. election.
Do we need somebody—or something—smarter to step in and take charge? AI may fit the bill, especially artificial intelligence of the “superintelligence” variety discussed in Oxford philosopher Nick Bostrom’s thought-provoking book of the same name.
But of course, as with all things human, the answer is not so straightforward. You may have read that scientific and tech luminaries such as Stephen Hawking and Elon Musk have sounded warnings about the potential dangers of artificial intelligence. Indeed, Musk calls AI an “existential threat” to human civilization, and in response he co-founded OpenAI, a non-profit, open-source AI research company, to foster collaboration in developing “friendly AI.”
Bostrom sounds an alarm in Superintelligence as well. The concern is that continued AI research and development might lead to an “intelligence explosion,” creating an entity or entities so much smarter than us that we would become redundant and dispensable. Bostrom coined the term “Singleton” to designate such an all-controlling superintelligence. A “bad” Singleton would be the end of us.
However, a vein of optimism runs through Superintelligence, too. Bostrom believes, or would like to believe, that humanity has a potential “cosmic endowment” that could be realized through a benign superintelligence. He acknowledges that the odds would seem to be against this, likening humanity facing superintelligence to a child holding an undetonated bomb. The core problem is one of control: how do we create a superintelligence that will not jettison humanity but rather work to enhance it?
We must, Bostrom says, “hold on to our humanity … maintain our groundedness, common sense, and good-humored decency even in the teeth of this most unnatural and inhuman problem. We need to bring all our human resourcefulness to bear on its solution.” This is, Bostrom maintains, “the essential task of our age.”
At a moment in history when bellicosity and benightedness are ascendant, this is a very tall order indeed. Yet contemplating Bostrom’s suggested cosmic endowment is a worthwhile exercise in staving off despair. One must hope there remain enough intelligent and altruistic people at work in the field of AI (and in every other important field) to make envisioning a better future viable.