Dispatches from the Empire


When I tell people that I’ve lost several clients to ChatGPT and its ilk over the last few weeks, they assume my anxiety over AI stems from losing my livelihood. It does not. I’m fortunate to live a life that requires very little financial maintenance, as I have no debt. While I can’t afford any big purchases (should the house need a new roof, or should something happen to my car, I’d be in some trouble), for right now, in this moment, I’m fine. I can afford groceries. I can afford dog food and vet visits. My financial life is already quite lean, and if I need to trim a little more fat, that’s possible.

My anxiety comes from the larger implications of AI. These implications are very difficult to talk about with people outside the tech world, mostly because I am without useful analogies. Someone recently told me AI is a tool. Well, yes, but it’s also not. It’s less a hammer and more a, well, a hammer that learns to become every other tool. A hammer that then teaches itself language. All languages. And writes code. And can run that code. Someone else told me “it’s just a computer program.” Well, yes, but computer programs have to be written by a human. We can look at their code and analyze it. We can understand how they work. AI doesn’t work that way. These Large Language Models (LLMs) are just code, yes, but the models themselves are opaque: their behavior emerges from billions of learned parameters that no person wrote and no person can read. We do not understand how they know what they know. They literally teach themselves.

Long-term, this means these LLMs can slip out of our control. It takes vast amounts of compute power (think very large server farms) to run the biggest of these models, but should an LLM get loose, what’s to stop it from spreading? The internet was designed quite intentionally to be decentralized, with no central hub that can shut it down. So should one of these LLMs begin to spread, how do we “pull the plug”?

But as technology progresses, it takes less and less compute power to run these models. Some, like the open-source LLaMA model released by Facebook, can already be run locally on a single home computer. Once these models proliferate, each running on a single machine, containing them becomes impossible.
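To make concrete just how low the bar already is: here is a minimal sketch, assuming the Hugging Face transformers library and a small open-weight checkpoint (the model name below is a placeholder; any open checkpoint you can download works the same way), of roughly what it takes to run one of these models on an ordinary computer.

```python
# A minimal sketch: running an open-weight language model locally.
# Assumes the Hugging Face `transformers` library is installed and
# that the checkpoint below (a placeholder; substitute any open
# model you have access to) has been downloaded.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "openlm-research/open_llama_3b"  # placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Generation happens entirely on this machine: no server farm,
# no account, no central switch anyone could throw to stop it.
inputs = tokenizer("The internet was designed to be", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

That’s the whole thing: a dozen lines, no data center required. Once the weights are on a hard drive, no one can recall them.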

The dangers of high-powered LLMs are impossible to exaggerate. Human society is based on trust. We (generally) trust the newspapers we read, the websites we visit, the pictures we see. We trust that the music we listen to was created by the musicians whose voices we hear. But all of this goes out the window with the present capabilities of AI. Photorealistic images can be generated in seconds. Videos can be faked. Our voices can be made to say anything. How on earth does society survive this?

When we can’t trust anything we see, read, or hear, what happens to civilization?

This is happening now. Current AI is already capable of all these things. As these LLMs grow, they get ever better at generating images, sound, and video that are impossible to distinguish from the real thing.

In a recent video I linked to (and one I think to be essential viewing), The A.I. Dilemma, Tristan Harris said that 2024 will be “the last human election” in America. Election Day 2024 is still 18 months away, and I think Tristan might be wrong in his presumption, not because he overstates the danger but because his timeline is too generous. The amount of fake information, fake articles, fake photos, and fake videos will expand exponentially in mere months. When anyone can create a sex tape of anyone else, when anyone can use AI to generate photos and videos of our politicians doing and saying unspeakable things, what happens to our political system? Why wait until 2028? The coming election may not be a human one at all.

If we thought the despair caused by social media was bad, if we thought it was hard losing relatives to the Fox News or MSNBC echo chambers, we ain’t seen nothing yet.

And here’s where I struggle: I don’t want to fill people with anxiety. I don’t want to be the friend no one invites out because he’s always talking about the end of the world. But if we don’t talk about these things now, if we don’t understand how they work and their implications, we’re liable to be taken by surprise, and I’m afraid we as humans don’t have that luxury.

When people compare AI to the invention of fire, the wheel, or the atom bomb, they’re not wrong. The implications of AI are just as profound as those of all three, and that profundity is very difficult for us to grasp. But we need to try; we need to use our imaginations now so reality won’t surprise us.

I’m very anxious. The last thing I want is for others to feel anxious. But anxiety serves a purpose: it is our mind telling us to get prepared. Too often, that response has been hijacked by social media and 24-hour cable news, permeating our lives with anxiety. What I find so troubling is that now, when we might actually need to feel some anxiety, many of us are so burnt out, so accustomed to feeling anxious, that we simply can’t live with any more of it. We numb ourselves to the world and to the very real dangers we face.

I suppose that’s my goal now: to make sure that we are not numb to the implications of our current moment. We need to be ready; we need to be informed.

In a recent letter to a friend, I wrote:

I have a creeping feeling that this isn’t the future I imagined or hoped for. My life — my little life — is good. It’s full of meaning and love. But the world? Some nights I can barely sleep, I’m so filled with anxiety for it. For us. For all living things.