Hello again! It’s been a while—since June, I think—since I wrote a piece here. I’m aiming to do this more regularly, so let me know if there’s a topic you’d like me to explore. I’m grateful to my friend Alex Fogassy for talking to me about many of the topics I explore in this essay. If you’d like to hear part one of my podcast episode with him, you can do so below. And as always, if you like what you read, please subscribe or share with a friend!
Several years ago, Sally and I were flying out of Austin-Bergstrom International Airport and were short on time for our flight to Colorado. We glanced at our watches, calculated the walking distance to our departure gate, did the simple math, and concluded that we had time for a very quick cup of coffee. To our chagrin, there was no café in immediate sight, but our eyes landed on a robo-barista: a peculiar unmanned contraption the size of a Mini Cooper, whose opaque walls obscured an elaborate system of tubes and filters that promised to serve up the perfect cup o’ joe.
We typed in our selections (medium whole-milk lattes, if I recall correctly) and listened to the near-silent whirring of the innards while wondering if this barista’s mechanical arms ever spilled the milk, or overpoured the cup’s edge, or created beautiful foam art. Unsurprisingly, it did none of those things. It served up two perfectly serviceable yet wholly uninspiring cups of coffee. I wouldn’t do it again. And that particular robo-barista couldn’t anyway: its parent organization was bought out by another tech-forward coffee company, and our java friend (perhaps we’ll call him Javis) was replaced by yet another faceless, inhuman brewing machine.
While many today talk of the existential threat of AI and the end of human civilization arriving in a violent collision with sentient machines, that threat is overblown. This, instead, is the future that awaits us with AI unleashed: a sad brew from Javis, with no conversation with a barista, no spilled splash of milk on the counter, no too-long wait in line with our fellow travelers. Not nuclear armageddon (probably), but death by boredom.
The Austin robo-barista experience was one of my earliest direct encounters with what we might call “the automation revolution,” and it crystallized for me why we should ask hard questions about the displacement effects of automation in the workforce: why would Ford, for example, ask a person to attach car hoods on a conveyor belt when a robot can do the job three times as fast, round the clock, with a lower error rate, and at a cost of roughly two annual salaries over a five-year horizon?
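To see why that math is so lopsided, here is a back-of-the-envelope sketch. Every figure below is a made-up illustration of the logic, not Ford’s actual cost data:

```python
# Back-of-the-envelope automation math. All figures are hypothetical.
worker_salary = 60_000            # annual salary, USD
worker_units_per_year = 20_000    # hoods attached per year, one shift
robot_cost = 120_000              # upfront price: about two annual salaries
robot_maintenance = 10_000        # annual upkeep
years = 5

# The robot runs three shifts a day at three times the hourly rate.
robot_units_per_year = worker_units_per_year * 3 * 3

human_total = worker_salary * years
robot_total = robot_cost + robot_maintenance * years

print(f"Human: ${human_total:,} for {worker_units_per_year * years:,} hoods")
print(f"Robot: ${robot_total:,} for {robot_units_per_year * years:,} hoods")
```

Under even these crude assumptions, the robot produces nine times the output for roughly half the cost. You can quibble with any individual number; the conclusion is hard to escape.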
These fears, voiced for decades by labor union leaders, academics, and policymakers alike, were proven true. A 2017 study from MIT and Boston University researchers estimated that 400,000 people lost their jobs to automation between 1990 and 2007 (this is called the displacement effect). Until relatively recently, this concern about displacement was limited to automation, hence the focus on the manufacturing sector. But artificial intelligence holds formidable potential in precisely the things that simple automation cannot do: who would ever outsource their legal defense to an assembly-line robot or an iterative spreadsheet macro?
The Biggest Thing Since Fire?
Enter ChatGPT: On November 30, 2022, a company called OpenAI unveiled a customer-facing version of its latest “GPT” software, announcing in rather understated fashion, “We’ve trained a model called ChatGPT which interacts in a conversational way.” What this meant wasn’t entirely clear at the time of the press release, but since OpenAI opened up their experiment for the world to try, our most prominent minds and voices quickly started wrestling with the implications.
Larry Summers somehow kept a straight face while he told Bloomberg, “Just as the printing press or electricity was a huge change in general purpose technology, this could be the most important general purpose technology since the wheel or fire.” (He went on to suggest, confusingly, that he hopes ChatGPT can bring Republicans and Democrats, or China and the West, together.) Elon Musk, more soberly, labeled ChatGPT “scary good,” adding that we “are not that far from dangerously strong AI.” Generally speaking, I share Musk’s concern about the dangers of narrow artificial intelligence (more on that below). More acutely, however, I’m concerned by its sheer banality.
When you first use it, ChatGPT seems like magic. Take, for example, a prompt I gave it asking it to reference Neil Postman’s Amusing Ourselves to Death while arguing that AI will make the world boring. ChatGPT took about six seconds to render this answer:
Neat party trick! (We’ll come back to Postman later.) But it doesn’t take long to find the limits of this technology. It isn’t close to passing the Turing Test (though, to be fair, it doesn’t appear to be attempting it either):
So what is this technology, exactly? While many first heard of GPT technology in November when OpenAI announced their project, by then the technology had been in development and publicly discussed for years. “GPT” stands for “Generative Pre-trained Transformer,” which is fancy language for a language model that can generate an output based on a given prompt. In function, ChatGPT (and the entire GPT class of language models) belongs to the category known as “generative AI.” DALL-E 2, another OpenAI product, uses this same generative capacity to create images from textual prompts (scroll to the first image in this article to see an example of DALL-E 2 in action).
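If you’re curious what “generate an output based on a given prompt” looks like in practice, here is a minimal sketch using OpenAI’s Python package as it worked around ChatGPT’s launch (the pre-1.0 ChatCompletion interface). The API key is a placeholder, and the prompt is just for illustration:

```python
import openai  # pip install openai (pre-1.0 interface)

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

# Send a single conversational prompt to the model behind ChatGPT.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "In two sentences, argue that AI will make the world boring."},
    ],
)

# The reply comes back as a chat message; print its text.
print(response.choices[0].message.content)
```

That’s the whole trick: text in, statistically plausible text out. Everything else is scale.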
It’s a remarkable step forward for computing and language models, and will likely set the agenda for Silicon Valley investment for the next five years. The impact of ChatGPT’s beta release so far this year is evident on job boards all across the tech industry, where career experience in “generative AI” is suddenly in high demand. Google, so rarely caught flat-footed in its mission to index all of the world’s information, is scrambling to release its own competing technology called “Bard,” built on the LaMDA model that a Google engineer well-meaningly but erroneously labeled “sentient” last year (he was quickly canned as Google did damage control).
Hierarchically, these models (and others like them) can best be described as “narrow AI,” not because they have a very limited generative repertoire (they don’t necessarily, and ChatGPT has indexed more or less the sum total of the public-facing internet), but because they have a very specific function. ChatGPT, for example, is designed to provide text responses to your text inputs. It can interpret images (and is very good at that now with the recent release of GPT-4); it can tell you which movie won Best Picture at the 1986 Academy Awards and offer a milquetoast critical synopsis of the film; it can scour the web for blogs on futurism and assemble a plausible scenario for the future of humanity.
But there is a lot that it can’t do. It cannot generate new insights or interpretations of Caravaggio’s Calling of St. Matthew; it cannot tell you something about Out of Africa that you can’t already find online; it cannot assemble its insights into a unifying theory of reality. It’s very good at what it does, but it has guardrails that prevent it from doing more than that. Moreover, these guardrails are not artificial constraints imposed by its human trainers, as if the AI model could turn into Skynet if only it were set free. Rather, they are what we might call ontological guardrails. ChatGPT’s nature is that of a language model; it cannot be more than that. It is an incredibly advanced search engine. Its knowledge base is so expansive that it can surprise us with the depth of its insights (it has the sum total of the internet to thank for that), but at the end of the day, a search engine is all it is.
It is worth pointing out that many contemporary discussions of AI conflate “narrow AI” with “Artificial General Intelligence” (AGI). The latter refers to a synthetic or manufactured computational system (even if consciousness emerges from it) that does not simply do what it is trained to do, but can synthesize learning across domains to assemble new heuristic models that explain the world and then drive action. AGI is a categorically different technology from narrow AI, and it is strictly theoretical; researchers disagree on whether it is even achievable. In fact, I would suggest that, taxonomically and analogically speaking, ChatGPT is to AGI what an angiosperm (a highly advanced plant) is to an elephant: they are in different kingdoms entirely.
AGI also entails vastly different implications that I will mostly leave unaddressed here. (But as a brief aside: Many researchers suggest that if AGI becomes a reality, it will lead to an “intelligence explosion” and eventually a superintelligence that will likely destroy all human life. Some even suggest that we are all living, at this very moment, inside the mind of a superintelligence. This point of view bears striking parallels to the perspectives of classical theism, but somehow that’s gone over their heads.)
“Just too damn boring.”
So as a search engine, ChatGPT and other generative AI technologies do not portend the end of the world via thought control, omniscience, and ubiquitous monitoring a la Skynet. But narrow AI can still pose existential risks to humanity. Imagine, for example, a narrow AI that controls the energy flow of the power grid, or that monitors the sky for missile threats and calibrates nuclear responses, or that controls logistics schedules and routes for 75% of the world’s food. These things can, and perhaps will, go horribly wrong. But I’m worried less about those outcomes and more about the one that is already happening. That is to say: narrow AI is destroying our labor and our art.
First, labor: Someone asked me the other day if I plan on using ChatGPT to help me with my book manuscript. I responded, quite simply, “Not a chance. There is glory in the struggle!” My point was not that I always enjoy the sometimes Sisyphean task of writing, but rather that by doing it, I get better at it, and, even more crucially, I become a better person. Doing something that is hard makes us better people. Why? Because it teaches us discipline, self-reliance, interdependence (when we work together), and mental and physical strength, and it gives us enormous satisfaction and a sense of purpose. The alternative, if you’ll pardon my French, is just so damn boring.
A simple thought experiment illustrates this well. Imagine that you are climbing a mountain. About a third of the way up, you feel tired and realize how unprepared you were for the climb. There’s a gondola heading up the mountain. It would be a short ride, and you know the view will be the same regardless of how you get there. Should you get in the car? And if you do, will you be more or less happy than if you had hiked the rest of the way? ChatGPT is the gondola; the hiking trail is your local library. It’s taking a course at your community college. It’s reading a whole book and not just a synopsis. It’s going to your doctor and talking to her face to face! And these benefits are not exclusively yours: by engaging in all of these activities, you support the employment of your librarians, your faculty, your book publishers, and your doctor.
Of course, there is merit to the argument that ChatGPT can eliminate some of our most tedious tasks. One can imagine, for example, prompting ChatGPT to “populate each cell of this spreadsheet column with the city corresponding to the zip code in this column,” thereby turning a data-entry task into a 30-second instruction (see the sketch after this paragraph). This can be a good use of assistive technology! But the key is to be discerning. ChatGPT is not the answer for everything, not least because it does not have the answer for everything. ChatGPT will put people out of jobs if we let it (even OpenAI’s CEO admits this). But the argument against ChatGPT’s proliferation goes further than economics. It goes to the heart of beauty itself.
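Before we get there, a brief aside: here is roughly the chore that zip-code prompt replaces, written out by hand. This is a minimal sketch in Python with pandas; the file name, column names, and the tiny zip-to-city table are all hypothetical stand-ins:

```python
import pandas as pd

# Hypothetical lookup; a real one would cover every ZIP code.
ZIP_TO_CITY = {
    "78701": "Austin",
    "80202": "Denver",
    "10001": "New York",
}

# Read the spreadsheet, map each ZIP code to its city, and save the result.
df = pd.read_csv("contacts.csv", dtype={"zip": str})  # hypothetical file
df["city"] = df["zip"].map(ZIP_TO_CITY)
df.to_csv("contacts_with_cities.csv", index=False)
```

Tedium like this is exactly what assistive technology should absorb. The trouble starts when we hand it everything else too.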
Synthesizing Ourselves to Death
Hanging above my desk at home is a print of Edward Hopper’s “Nighthawks.” Something about that painting has spoken to me since I first saw it. It’s an arresting feeling that I can’t fully describe or understand, but it’s there.1 If you’ve experienced this in front of a piece of art, you’ll know what I mean. The via pulchritudinis, the “way of beauty,” is captivating, and it is distinctly human.
Generative AI simply can’t do this for us. Its outputs are frequently garish, dysmorphic, and sometimes downright unsettling. “But,” you may respond, “just give the algorithm time to get better!” And it very well may become adept at creating pictures that consistently stimulate a cocktail of pleasurable chemicals in our brains. But there is no soul behind the brush. There is, in fact, no brush at all, but an algorithmic arrangement of pixels synthetically averaged and approximated from millions of data points in the AI’s training dataset. That isn’t art.
The same can be said for music, where early attempts at AI-generated music would constitute cruel and unusual punishment if played to an unwilling audience. Perhaps Spotify will one day generate for us real-time, custom-created songs that sound exactly like our favorited selections. But if that day comes, it won’t bring me any more joy. When I listen to Louis Armstrong play the trumpet, or Yo-Yo Ma play the cello, I’m in awe of my fellow humans’ talent and the tens of thousands of hours they spent perfecting their craft. I’ve never been in awe of a computer like that. I don’t see how I ever could be.
When Neil Postman wrote Amusing Ourselves to Death in 1985, many dismissed him as a fanatical Luddite. “Mr. Postman is in an edifying tradition of medium-bashers,” wrote the New York Times. Another reviewer in the same publication called him “apocalyptic.” But I wonder whether, given the opportunity, those reviewers would reverse those charges today. Here’s Postman, writing in 1985:
When [1984] came and [Orwell’s] prophecy didn’t, thoughtful Americans sang softly in praise of themselves . . . But we had forgotten that alongside Orwell’s dark vision, there was another—slightly older, slightly less well known, equally chilling: Aldous Huxley’s Brave New World . . . Orwell warns that we will be overcome by an externally imposed oppression. But in Huxley’s vision, no Big Brother is required to deprive people of their autonomy, maturity and history. As he saw it, people will come to love their oppression, to adore the technologies that undo their capacities to think.

What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism. Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance.
Postman’s analysis of Brave New World (one of the greatest novels in history, in my opinion) is spot-on, and bears more than a passing resemblance to our own age. Even before the dawn of ChatGPT, we had stopped reading books. In 2016, Americans spent less than twenty minutes a day reading for personal interest, but two and a half hours a day watching TV. Importantly, this study did not collect information on smartphone use, a number that mobile research firm data.ai (formerly App Annie) pegged at a staggering 4.8 hours last year. Our children spend over 90 minutes per day on the Chinese spyware app TikTok, which is destroying their attention spans and in all likelihood driving an increase in ADHD.2 And now ChatGPT enters the mix, making it even easier for us to amuse ourselves to death.
Where do we go from here?
I suppose this has all been a bit bleak, which is to say I’ve been long on problems but short on solutions. But I will offer two brief ideas on what we can and should do next.
First, say “no” when technology promises to simplify but not enrich your life. I’m thinking of that TurboTax commercial that ran during the Super Bowl, in which we see a mountain climber summiting an ice-covered peak, only to realize that we are actually watching two young women watch a movie about an ice climber. The tagline, from TurboTax: “Let us do your taxes, so that you can do…not taxes.” The takeaway is that TurboTax can do our hard work for us so that we can binge-watch Netflix. That’s pathetic, and we should reject it. If “but I only want to do fun things” is our excuse for avoiding hard things, that is precisely the cue to do the hard thing instead.
Second, invest in technologies and companies that focus on “human-first” development. I’m not a Luddite. I work in the tech industry and I’ve followed tech blogs daily since I was in middle school. I was an early Gmail adopter back when it looked like this. Technology can and should be used for good, but it should always serve the end (telos) of the human being rather than itself. That is, it is not an intrinsic good (a good in and of itself, like a human being) but rather an instrumental good (a good which is good to the extent that it is instrumentalized for a right end). We need to encourage companies to iterate and develop solutions that help us live more fully human lives rather than conform our own lives to the boring computations of the machine.
I recently read Michael Crichton’s delightful Jurassic Park. In Spielberg’s film adaptation, Dr. Ian Malcolm (played by Jeff Goldblum) solemnly intones to philanthropist and biotech titan John Hammond, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” Mercifully, our scientists’ mistakes are unlikely to end with us fleeing a Tyrannosaurus rex, but at least that would be more exciting than the banal existence we face with generative AI systems thinking, communicating, and creating on our behalf.
1. If you’re interested in a great analysis of the painting, check out NerdWriter’s fantastic video on it.
2. For a highly engaging exploration of some of the systemic factors driving the trends outlined in this paragraph, I highly recommend Johann Hari’s Stolen Focus: Why You Can’t Pay Attention—and How to Think Deeply Again.