OpenAI confirms that AI writing detectors don’t work
In a section of the FAQ titled “Do AI detectors work?”, OpenAI writes, “In short, no. While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content.”
How I love summer, for I am far less online and far less anxious about the rise of AI when I’m able to go outside.
‘The Machines We Have Now Are Not Conscious’
What we have with these LLMs isn’t low-level intelligence but rather high-level applied statistics that creates the powerful illusion of low-level intelligence.
I suspect that in short order, our collective consciousnesses might prove indistinguishable from “high-level applied statistics.”
Maybe we are just meat machines with anxiety.
A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn
Some skeptics argue that A.I. technology is still too immature to pose an existential threat. When it comes to today’s A.I. systems, they worry more about short-term problems, such as biased and incorrect responses, than longer-term dangers.
But others have argued that A.I. is improving so rapidly that it has already surpassed human-level performance in some areas, and it will soon surpass it in others. They say the technology has shown signs of advanced capabilities and understanding, giving rise to fears that “artificial general intelligence,” or A.G.I., a type of artificial intelligence that can match or exceed human-level performance at a wide variety of tasks, may not be far off.
This just isn’t a path humanity needs to go down. What is it with us humans? Why can’t we stop? What motivates us to do this shit?
Maybe you think our self-destruction isn’t inevitable, but deep in my gut, that feels naive and ignorant of human nature.
Is there a word for the feeling of being deeply ashamed of my species, yet complicit in some of our worst behaviors? That shame, that fear of what feels inevitable, undergirds my entire life and has since I was an adolescent. I describe it as the awareness we’re all tethered together and collectively running toward a cliff, yet most everyone seems not to see the edge. A few of us are trying to slow down — we see what’s coming — but we can’t stop the lot of us.
I want us to slow down. I want to not wake up each morning with this itch behind my eyes, this breathlessness in my gut, this primal suspicion that we’re all fucking ourselves.
Again and again, the phrase that comes to mind is “it doesn’t have to be this way.” And yet it feels inevitable.
Make it make sense.
Poll: 61% of Americans say AI threatens humanity’s future
The poll also revealed a political divide in perceptions of AI, with 70 percent of Donald Trump voters expressing greater concern about AI versus 60 percent of Joe Biden voters. Regarding religious beliefs, evangelical Christians were more likely to “strongly agree” that AI poses risks to human civilization, at 32 percent, compared to 24 percent of non-evangelical Christians.
Strange bedfellows.
Paul Graham on Twitter
Observation suggests that people are switching to using ChatGPT to write things for them with almost indecent haste. Most people hate to write as much as they hate math. Way more than admit it. Within a year the median piece of writing could be by AI.
I warn you now, this is going to have unfortunate consequences, just as switching to living in suburbia and driving everywhere did. When you lose the ability to write, you also lose some of your ability to think.
Google CEO Sundar Pichai warns society to brace for impact of A.I. acceleration
When asked whether society is prepared for AI technology like Bard, Pichai answered, “On one hand, I feel no, because the pace at which we can think and adapt as societal institutions, compared to the pace at which the technology is evolving, there seems to be a mismatch.”
There’s an annoying tendency for internet journalism to be hyperbolic, but here I think it’s appropriate. “Brace for impact.”
Wendy’s, Google Train Next-Generation Order Taker: an AI Chatbot
The application has also been programmed to upsell customers, offering larger sizes, Frosties or daily specials. Once the chatbot takes an order, it appears on a screen for line cooks. From there, prepared meals are relayed to the pickup window and handed off to drivers by a worker.
OpenAI contractors make $15 to train ChatGPT
The work is defined by its unsteady, on-demand nature, with people employed by written contracts either directly by a company or through a third-party vendor that specializes in temp work or outsourcing. Benefits such as health insurance are rare or nonexistent — which translates to lower costs for tech companies — and the work is usually anonymous, with all the credit going to tech startup executives and researchers.
Rethinking Authenticity in the Era of Generative AI
If it looks like a duck, walks like a duck, and quacks like a duck, everyone will need to consider that it may not have actually hatched from an egg.
Also, it’ll be important for everyone to get up to speed on what these new generative AI tools really can and can’t do. I think this will involve ensuring that people learn about AI in schools and in the workplace, and having open conversations about how creative processes will change with AI being broadly available.
Google shared AI knowledge with the world — until ChatGPT caught up
Pichai and other executives have increasingly begun talking about the prospect of AI tech matching or exceeding human intelligence, a concept known as artificial general intelligence, or AGI. The once fringe term, associated with the idea that AI poses an existential risk to humanity, is central to OpenAI’s mission and had been embraced by DeepMind, but was avoided by Google’s top brass.
AI plus MRI yields the ability to recognize what the mind is hearing.
Today, researchers announced a new bit of mind reading that’s impressive in its scope. By combining fMRI brain imaging with a system that’s somewhat like the predictive text of cell phones, they’ve worked out the gist of the sentences a person is hearing in near real time. While the system doesn’t get the exact words right and makes a fair number of mistakes, it’s also flexible enough that it can reconstruct an imaginary monologue that goes on entirely within someone’s head.
The hardest part of the AI revolution will be the discovery of empirical evidence that free will is a myth.
When I tell people that I’ve lost several clients to ChatGPT and its ilk over the last few weeks, they think my anxiety over AI stems from ostensibly losing my job. It does not. I’m fortunate to live a life that requires very little financial maintenance, as I have no debt. While I can’t afford any big purchases — should the house need a new roof or should something happen to my car, I’m in some trouble — for right now, in this moment, I’m fine. I can afford groceries. I can afford dog food and vet visits. My financial life is already quite lean, and if I need to trim a little more fat, that’s possible.
My anxiety comes from the larger implications of AI. These implications are very difficult to talk about with people outside the tech world, mostly because I am without useful analogies. Someone recently told me AI is a tool. Well, yes, but it’s also not. It’s less a hammer and more a, well, a hammer that learns to become every other tool. A hammer that then teaches itself language. All languages. And writes code. And can run that code. Someone else told me “it’s just a computer program.” Well, yes, but computer programs have to be written by a human. We can read their code and analyze it. We can understand how they work. AI doesn’t work that way. These large language models (LLMs) are just code, yes, but the models themselves are opaque. We do not understand how they know what they know. They literally teach themselves.
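To make that difference concrete, here’s a minimal sketch in Python (assuming the Hugging Face transformers library, with the small, openly released GPT-2 model standing in purely for illustration). A traditional program is rules a human wrote and can read; a language model is an enormous pile of learned numbers that no one wrote.

```python
# A traditional program: every rule was written by a human and can be read by one.
def is_even(n: int) -> bool:
    return n % 2 == 0  # we can point to exactly why it returns any answer

# A language model: no rules to read, only learned parameters.
# Assumes: pip install transformers torch; "gpt2" is a small open model used for illustration.
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
total = sum(p.numel() for p in model.parameters())
print(f"{total:,} learned parameters")  # roughly 124 million floating-point numbers

# "Reading the code" of the model means staring at grids of floats like this one,
# which tell us almost nothing about why it answers the way it does.
print(model.transformer.h[0].attn.c_attn.weight[:2, :5])
```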
Long-term, this means that these LLMs can get out of our control. While it takes vast amounts of compute power (think very large server farms) to run these models, should an LLM get out of our control, what’s to stop it from spreading? The internet was designed quite intentionally to be decentralized — without any central hub that can shut it down. So should one of these LLMs decide to spread, how could we “pull the plug”?
But as technology progresses, it takes less and less compute power to run these models. Some, like the open-source model released by Facebook, can be run locally on a single home computer. Once these models proliferate, each running on a single machine, containing them becomes impossible.
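To give a sense of how low the bar already is, here’s a minimal sketch (again assuming the transformers library; “distilgpt2” is a small open model standing in for any locally runnable one). This is all it takes to run a language model with no server farm and no central service anyone could switch off:

```python
# A minimal sketch of running an openly released language model on one machine.
# Assumes: pip install transformers torch; "distilgpt2" is a small open model,
# a stand-in here for any model whose weights you can download.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")  # downloads once, then runs locally
result = generator("The internet was designed to be", max_new_tokens=30)
print(result[0]["generated_text"])
# No data center required: just a laptop and a file of weights.
```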
The dangers of high-powered LLMs are impossible to exaggerate. Human society is based on trust. We (generally) trust the newspapers we read, the websites we visit, the pictures we see. We trust that the music we listen to was created by the musicians whose voices we hear. But all of this goes out the window with the present capabilities of AI. Photo-quality images can be generated in seconds. Videos can be faked. Our voices can be made to say anything. How on earth does society survive this?
When we can't trust anything we see, read, or hear, what happens to civilization?
This is happening now. Current AI can already do all of these things. As these LLMs grow, they get ever better at generating images, sound, and video that are impossible to distinguish from the real thing.
In a recent video I linked to (and one I consider essential viewing), The A.I. Dilemma, Tristan Harris said that 2024 will be “the last human election” in America. Election Day 2024 is still 18 months away, and I think Tristan might be wrong only about the timing. The flood of fake information, fake articles, photos, and videos will expand exponentially in mere months, well before we vote. When anyone can create a sex tape of anyone else, when anyone can use AI to generate photos and videos of our politicians doing and saying unspeakable things, what happens to our political system? Why wait until 2028?
If we thought the despair caused by social media was bad, if we thought it was hard losing relatives to Fox News or the MSNBC echo chambers, we ain't seen nothing yet.
And here’s where I struggle: I don’t want to fill people with anxiety. I don’t want to be the friend no one invites out because he’s always talking about the end of the world. But if we don’t talk about these things now, if we don’t understand how they work and their implications, we’re liable to be taken by surprise, and I’m afraid we as humans don’t have that luxury.
When people compare AI to the invention of fire, the wheel, or the atom bomb, they’re not wrong. The implications of AI are just as profound as those of all three, and that scale is very difficult for us to grasp. But we need to try; we need to use our imaginations now so reality won’t surprise us.
I’m very anxious. The last thing I want is for others to feel anxious. But anxiety serves a purpose: it is our minds telling us to prepare. Too often, that reaction has been hijacked by social media and 24-hour cable news, permeating our lives with anxiety. What I find so troubling is that now, when we might actually need to feel some anxiety, many of us are so burnt out, so accustomed to feeling anxious, that we simply can’t live with any more of it. We numb ourselves to the world and to the very real dangers we face.
I suppose that’s my goal now: to make sure we are not numb to the implications of our current moment. We need to be ready; we need to be informed.
In a recent letter to a friend, I wrote:
I have a creeping feeling that this isn’t the future I imagined or hoped for. My life — my little life — is good. It’s full of meaning and love. But the world? Some nights I can barely sleep, I’m so filled with anxiety for it. For us. For all living things.