When I tell people that I’ve lost several clients to ChatGPT and its ilk over the last few weeks, they think my anxiety over AI stems from the threat to my job. It does not. I’m fortunate to live a life that requires very little financial maintenance, as I have no debt. While I can’t afford any big purchases — should the house need a new roof or something happen to my car, I’m in some trouble — for right now, in this moment, I’m fine. I can afford groceries. I can afford dog food and vet visits. My financial life is already quite lean, and if I need to trim a little more fat, that’s possible.
My anxiety comes from the larger implications of AI. These implications are very difficult to talk about with other people outside the tech world, mostly because I am without useful analogies. Someone recently told me AI is a tool. Well, yes, but it’s also not. It’s less a hammer and more a, well, a hammer that learns to become every other tool. A hammer that then teaches itself language. All languages. And writes code. And can run that code. Someone else told me "it's just a computer program." Well, yes, but computer programs have to be written by a human. We can look at their code and analyze it. We can understand how it works. AI doesn't work that way. These Large Language Models (LLMs) are just code, yes, but the models themselves are opaque. We do not understand how they know what they know. They literally teach themselves.
Long-term, this means these LLMs could slip out of our control. It takes vast amounts of compute power (think very large server farms) to run these models, but should one escape, what's to stop it from spreading? The internet was designed quite intentionally to be decentralized, without any central hub that can shut it down. So should one of these LLMs decide to spread, how would we "pull the plug"?
But as technology progresses, it takes less and less compute power to run these models. Some, like the open-source model released by Facebook, can be run locally on a single home computer. Once these models proliferate, each running on a single machine, containing them becomes impossible.
The dangers of these high-powered LLMs are impossible to exaggerate. Human society is based on trust. We (generally) trust the newspapers, the websites we visit, the pictures we see. We trust that the music we listen to was created by the musicians whose voices we hear. But all of this goes out the window with the present capabilities of AI. Photo-quality images can be generated in seconds. Videos can be faked. Our voices can be made to say anything. How on earth does society survive this?
When we can't trust anything we see, read, or hear, what happens to civilization?
This is happening now. Current AI has the capability to do all these things. As these LLMs grow, they get ever better at generating images, sound, and video that are impossible to distinguish from the real thing.
In a recent video I linked to (one I consider essential viewing), The A.I. Dilemma, Tristan Harris said that 2024 will be “the last human election” in America. Election Day 2024 is still 18 months away, and I think Tristan might be wrong only in his timing. The amount of fake information (fake articles, photos, videos) will expand exponentially in mere months. When anyone can create a sex tape of anyone else, when anyone can use AI to generate photos and videos of our politicians doing and saying unspeakable things, what happens to our political system? Why wait until 2028?
If we thought the despair caused by social media was bad, if we thought it was hard losing relatives to Fox News or the MSNBC echo chambers, we ain't seen nothing yet.
And here’s where I struggle: I don’t want to fill people with anxiety. I don’t want to be the friend no one invites out because he’s always talking about the end of the world. But if we don’t talk about these things now, if we don’t understand how they work and their implications, we’re liable to be taken by surprise, and I’m afraid we as humans don’t have that luxury.
When people compare AI to the invention of fire, the wheel, or the atom bomb, they’re not wrong. The implications of AI are just as profound as all three, which is very difficult for us to understand. But we need to try, we need to use our imaginations now so reality won’t surprise us.
I’m very anxious. The last thing I want is for others to feel anxious. But anxiety serves a purpose. It is our mind telling us to get prepared. Too often, that reaction has been hijacked by social media and 24-hour cable news, permeating our lives with anxiety. What I find so troubling is that now, when we might genuinely need to feel some anxiety, many of us are too burnt out, too accustomed to feeling anxious, to live with any more of it. We numb ourselves to the world and to the very real dangers we face.
I suppose that’s my goal now, to be sure that we are not numb to the implications of our current moment. We need to be ready; we need to be informed.
In a recent letter to a friend, I wrote:
I have a creeping feeling that this isn’t the future I imagined or hoped for. My life — my little life — is good. It’s full of meaning and love. But the world? Some nights I can barely sleep I’m so filled with anxiety for it. For us. For all living things.
The spread of misinformation is only Hinton’s most immediate concern. On a longer timeline, he’s worried that AI will eliminate rote jobs, and possibly humanity itself, as AI begins to write and run its own code.
“Their position is that because Mr. Musk is famous and might be more of a target for deep fakes, his public statements are immune,” wrote Santa Clara County Superior Court Judge Evette D. Pennypacker. “In other words, Mr. Musk, and others in his position, can simply say whatever they like in the public domain, then hide behind the potential for their recorded statements being a deep fake to avoid taking ownership of what they did actually say and do.”
When I go for a walk in the forest, sometimes I see small sections of the forest where one species has taken over. A rust fungus. Caterpillars. Barberry bushes. When one species grows unchecked, the balance of the ecosystem is thrown off, and it collapses.
I am astounded at how persistent the belief is among us humans that we are an exception. That we know better. That it won’t happen to us.
When you learn to recognize this cognitive dissonance, you begin to see it everywhere. A friend raising two young boys, hoping they will grow up to play pro baseball. (Statistically, they won’t, and they will have a childhood devoid of dreams of any other possibilities.) Several friends, all vegetarian, all concerned about the environment, yet planning to have children. (And not adopt.) Me, thinking that I can change human nature merely by berating people with facts born of the scientific method.
Climate change. AI. Guns. Political violence. Social media. Our brains have not evolved to handle much of the world in which we find ourselves. We are not as evolved as we think we are.
I have lost faith in leaders, in the political parties, in many institutions. I work in the media and I see people I know — people I respect — succumbing to partisanship over logic and reason.
In my 20s, I thought I could change the world. I was always one epiphany away from a paradigm shift.
But now, as I approach 40, I don’t look to the future with excitement as I once did. I now feel a vague-yet-persistent anxiety about what’s to come. I look around my little town and see what the technological changes of the last 30 years have actually done for people here. Sure, everyone has a phone and access to the world of information, but they’re addicted to social media. Opioid addiction is rampant. Many people lost their jobs when manufacturing was sent overseas.
Change is inevitable, I know. But don’t we tell ourselves that we are a compassionate culture? Aren’t the foundational myths of this culture — Christianity chief among them — based on compassion? Then why have so many people here been left to suffer in poverty and addiction? Why can’t things change while we still hold grace and mercy for others, and for ourselves, when they do?
My dreams to change the world have long since evaporated. I don’t believe we can change human nature, even through education. Most people will always remain animated by their insecurities. Now, my only goal is to stay out of the way, to find a quiet corner somewhere and watch the future happen around me.
GPT-4 (came up with)/(retrieved from its database) some excellent questions for this purpose and identified human or AI correctly in both cases. Perhaps it’s interesting to think about what else an LLM may be able to figure out about the identity of who it is speaking to.
Everything you write or post online — text, images, video — is being used to train these large language models.
My writing style can be effectively copied and used to convince others they are reading something I’ve written, when in fact it’s pure AI.
You will not be able to (and perhaps already can’t) trust anything you read as ‘real.’
I mean, this is both insane…and also kinda reasonable?
I don’t love the idea of the state getting between us and the internet. But social media is absolutely designed to be addictive. The state steps in between young people and cigarettes, young people and alcohol, young people and drugs. Is social media any different? And haven’t we proven that social media in its current form is more destructive to mental health than most of those things?
I think the philosophical implications of these LLMs are among the most troubling. Think about your aunt or neighbor who posts on Facebook. Do they have the ability to understand how AI works? Can they tell the difference between an LLM and a person? Can they tell the difference between sentience and a predictive text model?
I don’t think I’m being cynical when I say no. So what happens when these AIs become such a presence in the world that they take on what looks to many humans like god-like intelligence? Do we have people taking these LLMs to be sentient, or perhaps hearing them as the voices of god? Do we have people advocating for their right to exist, to “live”?
And most unnervingly, once sentience and predictive text models like GPT-4 become so muddled, do we lose our sense of what’s human and what isn’t?
It’s likely I’m going to spend the next few days thinking out loud about the recent AI announcements from OpenAI, Microsoft, Facebook. I barely slept last night, as only yesterday did the ramifications of this week really begin to set in.
Bluntly, I think my job as a copy editor is largely gone, or could be within twelve months. This realization only hit me tonight as I was talking to my parents — why it hadn’t occurred to me months ago as I tooled around with ChatGPT 3.5, I don’t know. But thinking back, the very first thing I asked ChatGPT was to write a New York Times article about the destruction of the moon. And it did. It wrote it better than many of my editing clients.
I think I am obsolete.
I read that the new GPT-4 can ingest images, too. Meaning you can sketch a website on a piece of paper, snap a photo, and upload it. Tell it to write a website that looks like your sketch…and it does. In seconds. Goodbye, web designers.
I read it now scores in the 90th percentile on the bar exam. Goodbye, lawyers.
The way I think about the internet has completely shifted in the last 24 hours. No longer is it a tool for communication between humans, but rather the amniotic fluid of these Large Language Models. And their fuel. They ingest everything on the internet — and ‘learn’ from it. That LiveJournal I kept in high school was food for these things. The purpose of the internet is now something else.
What happens when millions of people like me lose their jobs? What happened when millions of Americans had their jobs shipped overseas in the wake of NAFTA? I grew up in rural America — I spend a lot of time in towns long since hollowed out as industry moved to cheaper markets. Sure, we saved a buck, but the cost was the livelihoods of thousands of people, of their purpose. Humans are many things, and as cliche and unoriginal and obvious as it is, a good job, one that provides shelter and food for a family, is enough to give most people a sense of purpose. What happened when those jobs left? Over the last 30 years, hopelessness and drugs moved in, suicides increased, small towns withered, and populism flourished.
The technological progress of the next five years is going to make the progress of the last thirty seem glacial.
I’m already exhausted by the potential instability.
What happens when children are raised with these LLMs? We thought Google was bad… Who will need to learn anything if we can just ask the LLM? Who will need to learn to code? Who will need to learn to write?
Genetics data has revealed that the popular understanding of race, developed during a time when white supremacy was widely accepted, simply doesn’t make any sense. In the popular view, for instance, “Black” represents a single, homogenous group. But genomic data makes clear that populations in Sub-Saharan Africa are the most genetically diverse on Earth.
A few weeks ago, my only friend in this town died. Fifty-one years my senior, he was my neighbor across the street.
Four years ago, when we first met, I was wary. He would come to my fence as I was gardening, talking about immigrants coming over the southern border or something else he had seen on Fox News — but as our relationship matured and I learned to steer the conversation away from political issues (or, if I’m honest, indulge him a bit), we struck up a friendship.
Last summer, when his Android phone quit working, he walked over to my fence and asked me about “those iPhones you keep telling me about.” We bought a used SE on eBay. Within a few months, he had upgraded to a brand new iPhone, an Apple TV, an Apple Watch. He took to technology — well-designed, thoughtful technology — in a way I had never seen in someone his age. He loved learning about the capabilities of this incredible tool that fit in his pocket.
Long before he moved to this small town, he was a globe-trotter. He was born in Brooklyn in the 30s, became an Airborne Ranger in the Korean War, and went on to work at IBM, American Satellite, and other long-diminished-yet-bedrock tech companies. He told stories of setting up satellite uplinks in Alaska, of living in Rome, of business meetings with executives all over the world. He moved often — Missouri, Virginia, California, Italy, Minnesota — before settling in this small town in 2002. The tumult of 9/11 on the east coast caused him to re-assess where he wanted to be, and for some reason, he chose this tiny town.
Seventeen years later, I would move here, into a 130-year-old home across the street from his. We got to know each other over the next four years. I painted his garage as he told stories. I would help him with his new iPhone, or try my best to help him with his old Android. I mowed his lawn, shoveled his snow. Initially, he asked how much my services would cost, and when I told him to knock it off — he was a neighbor, after all — he took to me. I don’t think he was accustomed to people being decent without a price of some kind. It wasn’t long after his new phone that he’d start calling 2–3 times a day, asking about this or that, how to use the Find My app to share his location with his niece, or just to ask where I was hiking that day. Once, I FaceTimed him from the top of a mountain not far from our houses and he was amazed. Just that morning, I had been in his living room helping him with something or other, and now I was on a mountaintop? And we were videochatting? He relished those moments.
On a very snowy night a few weeks ago, I walked across the street to shovel his back porch. He heard the shovel on cement and cracked the back door. His voice sounding weak, he asked me to come inside. “I’ve got a question for you.” I walked in a few minutes later to him sitting on his couch. His hair was disheveled, his voice thin. He was clearly not feeling well. He had been vomiting for nearly 24 hours and asked if these were symptoms of covid. “I don’t think so,” I replied, “but I have some tests across the street.” I walked across the street, grabbed some covid tests and Thera-Flu, and walked back to his place. He didn’t want to take a test yet, so I put them on the counter. I asked if he wanted anything, if I could take him to the hospital, told him that I was worried about dehydration. He insisted on staying put, but if in the morning he wasn’t feeling better, he’d let me drive him to the hospital.
“Call me if you need anything. I mean that. Anything,” I told him as I got ready to leave.
“Thanks, buddy,” he said. He thanked me that way often, but this time his voice sounded different. Resigned. I heard both gratitude and finality. I walked across the street and messaged a friend of mine, telling her I was unsure he would survive the night.