AI plus fMRI yields the ability to recognize what the mind is hearing.
Today, researchers announced a new bit of mind reading that’s impressive in its scope. By combining fMRI brain imaging with a system that’s somewhat like the predictive text of cell phones, they’ve worked out the gist of the sentences a person is hearing in near real time. While the system doesn’t get the exact words right and makes a fair number of mistakes, it’s also flexible enough that it can reconstruct an imaginary monologue that goes on entirely within someone’s head.
The hardest part of the AI revolution will be the discovery of empirical evidence that free will is a myth.
When I tell people that I’ve lost several clients to ChatGPT and its ilk over the last few weeks, they think my anxiety over AI stems from ostensibly losing my job. It does not. I’m fortunate to live a life that requires very little financial maintenance, as I have no debt. While I can’t afford any big purchases — should the house need a new roof or should something happen to my car, I’m in some trouble — for right now, in this moment, I’m fine. I can afford groceries. I can afford dog food and vet visits. My financial life is already quite lean, and if I need to trim a little more fat, that’s possible.
My anxiety comes from the larger implications of AI. These implications are very difficult to talk about with people outside the tech world, mostly because I lack useful analogies. Someone recently told me AI is a tool. Well, yes, but it’s also not. It’s less a hammer and more a, well, a hammer that learns to become every other tool. A hammer that then teaches itself language. All languages. And writes code. And can run that code. Someone else told me "it's just a computer program." Well, yes, but computer programs have to be written by a human. We can look at their code and analyze it. We can understand how it works. AI doesn't work that way. These Large Language Models (LLMs) are just code, yes, but the models themselves are opaque. We do not understand how they know what they know. They literally teach themselves.
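To make the "predictive text" idea concrete — and this is a toy sketch, nothing like a real LLM — consider the simplest possible next-word model. Notice that no programmer writes the rules: the model's behavior is entirely a table learned from whatever text it is fed.

```python
from collections import defaultdict, Counter

# Toy "predictive text": a bigram model. Its behavior comes from
# learned counts, not from rules a human wrote into the code.
def train(text):
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1  # count how often `nxt` follows `prev`
    return model

def predict(model, word):
    """Return the word most often seen after `word` in training."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the dog chased the cat and the cat chased the mouse"
model = train(corpus)
print(predict(model, "the"))  # prints "cat" — it followed "the" most often
```

In this toy you can still open the table and read off exactly why it predicts what it predicts. A real LLM replaces that table with billions of numerical weights, and that is the opacity I mean: the code is inspectable, but what the weights collectively "know" is not.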
Long-term, this means that these LLMs can get out of our control. While it takes vast amounts of compute power (think very large server farms) to run these models, should one of them get out of our control, what's to stop it from spreading? The internet was designed quite intentionally to be decentralized — without any central hub that can shut it down. So should one of these LLMs decide to spread, how can we "pull the plug" to shut it down?
But as technology progresses, it takes less and less compute power to run these models. Some, like the open-source model released by Facebook, can be run locally on a single home computer. Once these models proliferate, each running on a single machine, containing them becomes impossible.
The dangers of high-powered AI LLMs are impossible to exaggerate. Human society is based on trust. We (generally) trust the newspapers, the websites we visit, the pictures we see. We trust the music we listen to was created by the musicians whose voices we hear. But all of this goes out the window with the present capabilities of AI. Photo-quality images can be generated in seconds. Videos can be faked. Our voices can be made to say anything. How on earth does society survive this?
When we can't trust anything we see, read, or hear, what happens to civilization?
This is happening now. Current AI can already do all these things. As these LLMs grow, they get ever better at generating images, sound, and video that are impossible to identify as fake.
In a recent video I linked to (one I think is essential viewing), The A.I. Dilemma, Tristan Harris said that 2024 will be “the last human election” in America. Election Day 2024 is still 18 months away, and I think Tristan might be wrong in his presumption. The flood of fake information (fake articles, photos, videos) will expand exponentially in mere months. When anyone can create a sex tape of anyone else, when anyone can use AI to generate photos and videos of our politicians doing and saying unspeakable things, what happens to our political system? Why wait until 2028?
If we thought the despair caused by social media was bad, if we thought it was hard losing relatives to Fox News or the MSNBC echo chambers, we ain't seen nothing yet.
And here’s where I struggle: I don’t want to fill people with anxiety. I don’t want to be the friend no one invites out because he’s always talking about the end of the world. But if we don’t talk about these things now, if we don’t understand how they work and their implications, we’re liable to be taken by surprise, and I’m afraid we as humans don’t have that luxury.
When people compare AI to the invention of fire, the wheel, or the atom bomb, they’re not wrong. The implications of AI are just as profound as all three, which is very difficult for us to understand. But we need to try, we need to use our imaginations now so reality won’t surprise us.
I’m very anxious. The last thing I want is for others to feel anxious. But anxiety serves a purpose. It is our mind telling us to get prepared. Too often, that reaction has been hijacked by social media and 24-hour cable news, permeating our lives with anxiety. What I find so troubling is that now, when we might actually need to feel some anxiety, many of us are so burnt out, so accustomed to feeling anxious, that we simply can’t live with it anymore. We numb ourselves to the world and to the very real dangers we face.
I suppose that’s my goal now, to be sure that we are not numb to the implications of our current moment. We need to be ready; we need to be informed.
In a recent letter to a friend, I wrote:
I have a creeping feeling that this isn’t the future I imagined or hoped for. My life — my little life — is good. It’s full of meaning and love. But the world? Some nights I can barely sleep I’m so filled with anxiety for it. For us. For all living things.
Tesla lawyers claim Elon Musk’s past statements about self-driving safety could just be deepfakes.
“Their position is that because Mr. Musk is famous and might be more of a target for deep fakes, his public statements are immune,” wrote Santa Clara County Superior Court Judge Evette D. Pennypacker. “In other words, Mr. Musk, and others in his position, can simply say whatever they like in the public domain, then hide behind the potential for their recorded statements being a deep fake to avoid taking ownership of what they did actually say and do.”
The swagged-out pope is an AI fake — and an early glimpse of a new reality.
I’m incredibly nervous about the future.
Not only will this enable the spread of outright lies (that humans cannot discern), but more frighteningly, it will encourage plausible deniability.
Shameful, distasteful, or immoral behavior can merely be attributed to AI. Celebrities, state actors, politicians…
The human brain will not be capable of handling the next stage of AI development.
The reverse Turing test.
GPT-4 came up with (or retrieved from its training data) some excellent questions for this purpose, and it correctly identified human or AI in both cases. Perhaps it’s interesting to think about what else an LLM may be able to figure out about the identity of whoever it is speaking to.
Everything you write or post online — text, images, video — is being used to train these large language models.
My writing style can be effectively copied and used to convince others they are reading something I’ve written, when in fact it’s pure AI.
You will not be able to (and perhaps already can’t) trust anything you read as ‘real.’
I think the philosophical implications of these LLMs are among the most troubling. Think about your aunt or neighbor who posts on Facebook. Do they have the ability to understand how AI works? Can they tell the difference between an LLM and a person? Can they tell the difference between sentience and a predictive text model?
I don’t think I’m being cynical when I say no. So what happens when these AIs become such a presence in the world that they take on what looks, to many humans, like god-like intelligence? Do we have people taking these LLMs as sentient, or perhaps as the voices of god? Do we have people advocating for their right to exist, to “live”?
And most unnervingly, once the line between sentience and predictive text models like GPT-4 becomes so muddled, do we lose our sense of what’s human and what isn’t?
It’s likely I’m going to spend the next few days thinking out loud about the recent AI announcements from OpenAI, Microsoft, and Facebook. I barely slept last night, as only yesterday did the ramifications of this week really begin to set in.
Bluntly, I think my job as a copy editor is largely gone, or could be within twelve months. This realization only hit me tonight as I was talking to my parents — why it hadn’t occurred to me months ago as I tooled around with ChatGPT 3.5, I don’t know. But thinking back, the very first thing I asked ChatGPT was to write a New York Times article about the destruction of the moon. And it did. It wrote it better than many of my editing clients.
I think I am obsolete.
I read that the new GPT-4 can ingest images, too. Meaning you can sketch a website on a piece of paper. Snap a photo of it, then upload it. Tell it to write a website that looks like your sketch…and it does. In seconds. Goodbye, web designers.
I read it now scores in the 90th percentile on the bar exam. Goodbye, lawyers.
The way I think about the internet has completely shifted in the last 24 hours. No longer is it a tool for communication between humans, but rather the amniotic fluid of these Large Language Models. And their fuel. They ingest everything on the internet — and ‘learn’ from it. That LiveJournal I kept in high school was food for these things. The purpose of the internet is now something else.
What happens when millions of people like me lose their jobs? What happened when millions of Americans had their jobs shipped overseas in the wake of NAFTA? I grew up in rural America — I spend a lot of time in towns long since hollowed out as industry moved to cheaper markets. Sure, we saved a buck, but the cost was the livelihoods of thousands of people, of their purpose. Humans are many things, and as clichéd and unoriginal and obvious as it is, a good job is enough to give most people a sense of purpose: it provides shelter and food for their families. What happened when those jobs left? Over the last 30 years, hopelessness and drugs moved in, suicides started increasing, small towns withered, and populism flourished.
The technological progress of the next 5 years is going to make the progress of the last thirty seem glacial.
I’m already exhausted by the potential instability.
What happens when children are raised with these LLMs? We thought Google was bad… Who will need to learn anything if we could just ask the LLM? Who will need to learn to code? Who will need to learn to write?
Given the level of most people’s technical sophistication, how on earth do we talk about the implications of these new AI language models?