ChatGPT Looks at the Coming Year

Last year around this time we offered a quasi-optimistic take on 2022. This time, as a corrective, we turned to two outside sources for guidance on what 2023 (and beyond) might look like: Cormac McCarthy and OpenAI.

McCarthy’s year-end pair of novels, The Passenger and Stella Maris, were the literary event of the year for us, George Saunders notwithstanding. Quite dark, though. To counter McCarthy’s (and our own) innate pessimism, we enlisted the help of OpenAI’s ChatGPT and DALL·E 2.

Despite his bleak outlook (or perhaps because of it), McCarthy is eminently quotable. Here is a line from The Passenger that we used to generate OpenAI’s year-end outlook: “The horrors of the past lose their edge, and in the doing they blind us to a world careening toward a darkness beyond the bitterest speculation.”

A dark future. Image: DALL·E 2.

A Dark Future?

It is difficult to predict the future with certainty, but it is possible to imagine a future in which the horrors of the past are remembered and acknowledged, but no longer have the power to haunt and dominate the present. This could be a world in which people have learned from the mistakes and atrocities of the past, and have made a concerted effort to prevent them from happening again.

In this future, the horrors of the past may have lost their edge because they have been widely recognized, acknowledged, and addressed. People may have developed a deeper understanding of the causes of these horrors, and may have implemented measures to prevent them from occurring in the future.

However, it is also possible that this focus on the past could have the unintended consequence of blinding people to the present and future. People may become so focused on preventing past horrors from repeating themselves that they overlook other pressing issues and problems that are unfolding in the present.

Furthermore, it is important to recognize that the future is not predetermined, and that the actions and choices of individuals and societies can shape the direction in which the world moves. If people are not careful and vigilant, it is possible that the horrors of the past could be repeated, or that new horrors could emerge. It is crucial that people stay aware and engaged in the present and the future, in order to prevent darkness and suffering from taking hold.

Happy New Year, everyone.

Can AI Save Us?

In absolute terms, humanity hasn’t been around very long at all. From our own perspective, though, evolution seems to be taking an eternity. As a species, we remain profoundly stupid.

We haven’t learned to share, or to work toward our common interests. We befoul our own nest. We continue to develop weapons that threaten annihilation. For every small step forward made by an Einstein, a Beethoven, or a Tolstoy, the species as a whole has trouble following. We lurch toward progress, then rapidly retreat again—witness the 2016 U.S. election.

Superintelligence, by Nick Bostrom (Oxford University Press, 2014). Is it the answer, or the end?

Do we need somebody—or something—smarter to step in and take charge? AI may fit the bill, especially artificial intelligence of the “superintelligence” variety discussed in Oxford philosopher Nick Bostrom’s thought-provoking book of the same name.

But of course, as with all things human, the answer is not so straightforward. You may have read that scientific and tech luminaries such as Stephen Hawking and Elon Musk have sounded warnings about the potential dangers of artificial intelligence. Indeed, Musk calls AI an “existential threat” to human civilization and, as a result, co-founded OpenAI, a non-profit, open-source AI research company, to foster collaboration in developing “friendly AI.”

Bostrom sounds an alarm in Superintelligence, as well. The concern is that research into and continued development of AI might lead to an “intelligence explosion” that would create an entity or entities so much smarter than us that we would become redundant and dispensable. Bostrom has coined the term “Singleton” to designate such an all-controlling superintelligence. A “bad” Singleton would be the end of us.

However, a vein of optimism runs through Superintelligence, too. Bostrom believes, or would like to believe, that humanity has a potential “cosmic endowment” that could be realized through a benign superintelligence. He acknowledges that the odds would seem to be against this, likening humanity, confronted with superintelligence, to a child holding an undetonated bomb. The core problem is one of control: how do we create a superintelligence that will not jettison humanity but rather work to enhance it?

We must, Bostrom says, “hold on to our humanity … maintain our groundedness, common sense, and good-humored decency even in the teeth of this most unnatural and inhuman problem. We need to bring all our human resourcefulness to bear on its solution.” This is, Bostrom maintains, “the essential task of our age.”

At a moment in history when bellicosity and benightedness are ascendant, this is a very tall order indeed. Yet contemplating Bostrom’s suggested cosmic endowment is a worthwhile exercise in staving off despair. One must hope there remain enough intelligent and altruistic people at work in the field of AI (and in every other important field) to make envisioning a better future viable.