Dispatches from the Empire


#

What we’ve learned about the robot apocalypse from the OpenAI debacle

The argument is not that AI will become conscious or that it will decide it hates humanity. It is that AI will become extraordinarily competent but literal: give it a task, and it will fulfill exactly that task. Just as teachers start teaching to the test when schools are judged on how many children reach a certain grade, an AI will optimize whatever metric we tell it to optimize. If we are dealing with something vastly more powerful than human minds, the argument goes, that could have very bad consequences.

#

Generative AI like Midjourney creates images full of stereotypes

A new Rest of World analysis shows that generative AI systems have tendencies toward bias, stereotypes, and reductionism when it comes to national identities, too. 

Of course! Computers are all about broad data sets, not specific outliers.

This isn’t just AI, either. It’s in the algorithms behind Facebook and TikTok and YouTube, etc. We humans create these algorithms in our own image. Why do most YouTube “celebrities” look so similar? Why are so many female TikTok “stars” facsimiles of the Kardashians, themselves facsimiles of a standard of beauty now twenty years old?

These algorithms are built on millions of clicks, taps, scrolls, and hours watched. They’re extremely efficient at doing what old-school media has always done: flatten culture. After all, who were John Wayne and Frank Sinatra if not the embodiment — and perpetuation — of stereotypes?

What’s unnerving about social media and AI is that this flattening happens at terrific speed, which wasn’t possible in our analog culture.

Humans are not built for speed. We might be addicted to it, but our brains didn’t evolve to handle it.

The future looks terrifically unsettling.

#

At UK Summit, Global Leaders Warn AI Could Cause ‘Catastrophic’ Harm

On Wednesday morning, the British government released a document called “The Bletchley Declaration,” signed by representatives from the 28 countries attending the event, including the U.S. and China, which warned of the dangers posed by the most advanced “frontier” A.I. systems. “There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these A.I. models,” the declaration said.

“Many risks arising from A.I. are inherently international in nature, and so are best addressed through international cooperation. We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible A.I.”

The document fell short, however, of setting specific policy goals. A second meeting is scheduled to be held in six months in South Korea and a third in France in a year.

Governments have scrambled to address the risks posed by the fast-evolving technology since last year’s release of ChatGPT, a humanlike chatbot that demonstrated how the latest models are advancing in powerful and unpredictable ways.

#

How Israeli Civilians Are Using A.I. to Help Identify Victims

Incredible.

I was tempted to use the cliché “AI is an incredible tool,” but then I remembered that while yes, perhaps in its current iteration AI is a tool, it’s also something else.

Intelligence is such a nebulous thing — I rarely hear it called a “tool.” Aspects of intelligence, like critical thinking? Sure, that can be a tool. A piece of the puzzle. A component of the whole.

But AI, or perhaps AGI (artificial general intelligence, loosely defined as when machines are able to think on their own without human intervention), is meant to be a component and a whole. A tool we use…but also a tool that will one day think critically for itself. Without humans.

Remember, while the AI of today is easily explainable with metaphor, the AI of tomorrow is not.

#

AI reads text from ancient Herculaneum scroll for the first time

A 21-year-old computer-science student has won a global contest to read the first text inside a carbonized scroll from the ancient Roman city of Herculaneum, which had been unreadable since a volcanic eruption in AD 79 — the same one that buried nearby Pompeii. The breakthrough could open up hundreds of texts from the only intact library to survive from Greco-Roman antiquity.

Luke Farritor, who is at the University of Nebraska–Lincoln, developed a machine-learning algorithm that has detected Greek letters on several lines of the rolled-up papyrus, including πορϕυρας (porphyras), meaning ‘purple’. Farritor used subtle, small-scale differences in surface texture to train his neural network and highlight the ink.

#

OpenAI confirms that AI writing detectors don’t work

In a section of the FAQ titled “Do AI detectors work?”, OpenAI writes, “In short, no. While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content.”

How I love summer, for I am far less online and far less anxious about the rise of AI when I’m able to go outside.

#

Fran Drescher: “We are all going to be in jeopardy of being replaced by machines”

Not a headline I had on my bingo card.

#

Anthropic’s Claude Is Competing With ChatGPT. Even Its Builders Fear AI.

One Anthropic worker told me he routinely had trouble falling asleep because he was so worried about A.I. Another predicted, between bites of his lunch, that there was a 20 percent chance that a rogue A.I. would destroy humanity within the next decade.

#

Meta (aka Facebook) says its new speech-generating AI model is too dangerous for public release

#

OpenAI is now allowing its bot to interact with the live internet. This will make it more useful—and more problematic.

Adding plug-ins closes an air gap that has so far prevented large language models from taking actions on a person’s behalf. “We know that the models can be jailbroken, and now we’re hooking them up to the internet so that they can potentially take actions,” says Dan Hendrycks, an A.I. safety researcher. “That isn’t to say that by its own volition ChatGPT is going to build bombs or something, but it makes it a lot easier to do these sorts of things.”

#

Putin deep fake video is broadcast in parts of Russia

The broadcast, which also claimed there was an ongoing Ukrainian incursion into Russia, was aired in Belgorod, Voronezh, and Rostov, cities in close proximity to Ukraine’s border.

Buckle up.

#

‘The Machines We Have Now Are Not Conscious’

What we have with these LLMs isn’t low-level intelligence but rather high-level applied statistics that creates the powerful illusion of low-level intelligence.

I predict that in short order, our collective consciousness might become indistinguishable from “high-level applied statistics.”

Maybe we are just meat machines with anxiety.

#

Thought Cloning: Learning to Think while Acting by Imitating Human Thinking

Whoa.

#

A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn

Some skeptics argue that A.I. technology is still too immature to pose an existential threat. When it comes to today’s A.I. systems, they worry more about short-term problems, such as biased and incorrect responses, than longer-term dangers.

But others have argued that A.I. is improving so rapidly that it has already surpassed human-level performance in some areas, and it will soon surpass it in others. They say the technology has shown signs of advanced capabilities and understanding, giving rise to fears that “artificial general intelligence,” or A.G.I., a type of artificial intelligence that can match or exceed human-level performance at a wide variety of tasks, may not be far off.

This just isn’t a path humanity needs to go down. What is it with us humans? Why can’t we stop? What motivates us to do this shit?

Maybe you think our self-destruction isn’t inevitable, but deep in my gut, that feels naive and ignorant of human nature.

Is there a word for the feeling of being deeply ashamed of my species, yet complicit in some of our worst behaviors? That shame, that fear of what feels inevitable, undergirds my entire life and has since I was an adolescent. I describe it as the awareness we’re all tethered together and collectively running toward a cliff, yet most everyone seems not to see the edge. A few of us are trying to slow down — we see what’s coming — but we can’t stop the lot of us.

I want us to slow down. I want to not wake up each morning with this itch behind my eyes, this breathlessness in my gut, this primal suspicion that we’re all fucking ourselves.

Again and again, the phrase that comes to mind is “it doesn’t have to be this way.” And yet it feels inevitable.

Make it make sense.

#

Poll: 61% of Americans say AI threatens humanity’s future

The poll also revealed a political divide in perceptions of AI, with 70 percent of Donald Trump voters expressing greater concern about AI versus 60 percent of Joe Biden voters. Regarding religious beliefs, evangelical Christians were more likely to “strongly agree” that AI poses risks to human civilization, at 32 percent, compared to 24 percent of non-evangelical Christians.

Strange bedfellows.

#

Google’s new Magic Editor pushes us toward AI-perfected fakery

#

Boring Report - Using AI to Desensationalize the News

#

Paul Graham on Twitter

Observation suggests that people are switching to using ChatGPT to write things for them with almost indecent haste. Most people hate to write as much as they hate math. Way more than admit it. Within a year the median piece of writing could be by AI.

I warn you now, this is going to have unfortunate consequences, just as switching to living in suburbia and driving everywhere did. When you lose the ability to write, you also lose some of your ability to think.

#

Google CEO Sundar Pichai warns society to brace for impact of A.I. acceleration

When asked whether society is prepared for AI technology like Bard, Pichai answered, “On one hand, I feel no, because the pace at which we can think and adapt as societal institutions, compared to the pace at which the technology is evolving, there seems to be a mismatch.”

There’s an annoying tendency for internet journalism to be hyperbolic, but here I think it’s appropriate. “Brace for impact.”

#

Wendy’s, Google Train Next-Generation Order Taker: an AI Chatbot

The application has also been programmed to upsell customers, offering larger sizes, Frosties or daily specials. Once the chatbot takes an order, it appears on a screen for line cooks. From there, prepared meals are relayed to the pickup window and handed off to drivers by a worker.

#

China detains man for using ChatGPT to spread fake news in first known case

Gansu police accused Hong of committing a “major crime,” saying that the suspect admitted to prompting ChatGPT to generate a made-up story based on trending social media posts in China over the last few years.

#

OpenAI contractors make $15 to train ChatGPT

The work is defined by its unsteady, on-demand nature, with people employed by written contracts either directly by a company or through a third-party vendor that specializes in temp work or outsourcing. Benefits such as health insurance are rare or nonexistent — which translates to lower costs for tech companies — and the work is usually anonymous, with all the credit going to tech startup executives and researchers.

#

How Could AI Change War? U.S. Defense Experts Warn About New Tech

“If we stop, guess who’s not going to stop: potential adversaries overseas,” the Pentagon’s chief information officer, John Sherman, said on Wednesday. “We’ve got to keep moving.”

A cliff? What cliff…

Faster, faster!

#

Rethinking Authenticity in the Era of Generative AI

If it looks like a duck, walks like a duck, and quacks like a duck, everyone will need to consider that it may not have actually hatched from an egg.

Also, it’ll be important for everyone to get up to speed on what these new generative AI tools really can and can’t do. I think this will involve ensuring that people learn about AI in schools and in the workplace, and having open conversations about how creative processes will change with AI being broadly available.

#

Google shared AI knowledge with the world — until ChatGPT caught up

Pichai and other executives have increasingly begun talking about the prospect of AI tech matching or exceeding human intelligence, a concept known as artificial general intelligence, or AGI. The once fringe term, associated with the idea that AI poses an existential risk to humanity, is central to OpenAI’s mission and had been embraced by DeepMind, but was avoided by Google’s top brass.