‘Godfather of AI’ quits Google with regrets and fears about his life’s work.
The spread of misinformation is only Hinton’s immediate concern. On a longer timeline he’s worried that AI will eliminate rote jobs, and possibly humanity itself as AI begins to write and run its own code.
The A.I. Dilemma
50% of AI researchers think there’s a 10% or greater chance that AI will cause the extinction of the human race.
Tesla lawyers claim Elon Musk’s past statements about self-driving safety could just be deepfakes.
“Their position is that because Mr. Musk is famous and might be more of a target for deep fakes, his public statements are immune,” wrote Santa Clara County Superior Court Judge Evette D. Pennypacker. “In other words, Mr. Musk, and others in his position, can simply say whatever they like in the public domain, then hide behind the potential for their recorded statements being a deep fake to avoid taking ownership of what they did actually say and do.”
The swagged-out pope is an AI fake — and an early glimpse of a new reality.
I’m incredibly nervous about the future.
Not only will this enable the spread of outright lies (that humans cannot discern), but more frighteningly, it will encourage plausible deniability.
Shameful, distasteful, or immoral behavior can merely be attributed to AI. Celebrities, state actors, politicians…
The human brain will not be capable of handling the next stage of AI development.
The reverse Turing test.
GPT-4 (came up with)/(retrieved from its training data) some excellent questions for this purpose and correctly identified human or AI in both cases. It's interesting to think about what else an LLM might be able to figure out about the identity of whoever it's speaking to.
Everything you write or post online — text, images, video — is being used to train these large language models.
My writing style can be effectively copied and used to convince others they are reading something I’ve written, when in fact it’s pure AI.
You will not be able to (and perhaps already can’t) trust anything you read as ‘real.’
I think the philosophical implications of these LLMs are among the most troubling. Think about your aunt or neighbor who posts on Facebook. Do they have the ability to understand how AI works? Can they tell the difference between an LLM and a person? Can they tell the difference between sentience and a predictive text model?
I don’t think I’m being cynical when I say no. So what happens when these AIs become such a presence in the world that they take on what looks to many humans like god-like intelligence? Do we have people regarding these LLMs as sentient, or even as the voices of god? Do we have people advocating for their right to exist, to “live”?
And most unnervingly, once the line between sentience and predictive text models like GPT-4 becomes so muddled, do we lose our sense of what’s human and what isn’t?
It’s likely I’m going to spend the next few days thinking out loud about the recent AI announcements from OpenAI, Microsoft, and Facebook. I barely slept last night, as only yesterday did the ramifications of this week really begin to set in.
Bluntly, I think my job as a copy editor is largely gone, or could be within twelve months. This realization only hit me tonight as I was talking to my parents — why it hadn’t occurred to me months ago as I tooled around with GPT-3.5, I don’t know. But thinking back, the very first thing I asked ChatGPT was to write a New York Times article about the destruction of the moon. And it did. It wrote it better than many of my editing clients could have.
I think I am obsolete.
I read that the new GPT-4 can ingest images, too. Meaning you can sketch a website on a piece of paper. Snap a photo of it, then upload. Tell it to build a website that looks like your sketch… and it does. In seconds. Goodbye, web designers.
I read it now scores in the 90th percentile on the bar exam. Goodbye, lawyers.
The way I think about the internet has completely shifted in the last 24 hours. No longer is it a tool for communication between humans, but rather the amniotic fluid of these large language models. And their fuel. They ingest everything on the internet — and ‘learn’ from it. That LiveJournal I kept in high school was food for these things. The purpose of the internet is now something else.
What happens when millions of people like me lose their jobs? What happened when millions of Americans had their jobs shipped overseas in the wake of NAFTA? I grew up in rural America — I spend a lot of time in towns long since hollowed out as industry moved to cheaper markets. Sure, we saved a buck, but the cost was the livelihoods of thousands of people, their sense of purpose. Humans are many things, and as cliché and unoriginal and obvious as it is, a good job is enough for most people to feel fulfilled in their purpose while providing shelter and food for their families. What happened when those jobs left? Over the last 30 years, hopelessness and drugs moved in, suicides increased, small towns withered, and populism flourished.
The technological progress of the next 5 years is going to make the progress of the last thirty seem glacial.
I’m already exhausted by the potential instability.
What happens when children are raised with these LLMs? We thought Google was bad… Who will need to learn anything if we can just ask the LLM? Who will need to learn to code? Who will need to learn to write?
Given the level of most people’s technical sophistication, how on earth do we talk about the implications of these new AI language models?