Dispatches from the Empire

Grindr’s Plan to Squeeze Its Users

Grindr plans to boost revenue by monetizing the app more aggressively, putting previously free features behind a paywall, and rolling out new in-app purchases, employees say. The company is currently working on an AI chatbot that can engage in sexually explicit conversations with users, Platformer has learned. According to employees with knowledge of the project, the bot may train in part on private chats with other human users, pending their consent.

I remember the very early days of Grindr. I had one of the few smartphones in my part of the state, and the nearest fellow user was nearly 250 miles away. Chatting with other gay men was fun and refreshing.

Much has changed in the intervening 15 years. Dating (or hookup) apps have become vast wastelands of algorithmic sameness. People on these apps look, act, and talk in eerily similar ways, not unlike how every young person now dresses like an "influencer." (I refuse to use that word without quotation marks.)

These apps gave us corrosion sold as connection. I'm reminded of David Foster Wallace's thoughts on entertainment, about always wondering what's on the other channel, wondering if there's something better to be watching. Shopping around (because that's precisely what these apps are: shopping) is so damn easy.

Contentment is hard when you think there's always something better just around the corner.

AI Digest

Visual explainers on AI progress and its risks.

Report: Israel used AI tool called Lavender to choose targets in Gaza

The system had a 90 percent accuracy rate, sources said, meaning that about 10 percent of the people identified as Hamas operatives weren’t members of Hamas’ military wing at all. Some of the people Lavender flagged as targets just happened to have names or nicknames identical to those of known Hamas operatives; others were Hamas operatives’ relatives or people who used phones that had once belonged to a Hamas militant. “Mistakes were treated statistically,” a source who used Lavender told +972. “Because of the scope and magnitude, the protocol was that even if you don’t know for sure that the machine is right, you know statistically that it’s fine. So you go for it.”

Emphasis mine.

Welcome to the future.

The jobs being replaced by AI - an analysis of 5M freelancing jobs

The 3 categories with the largest declines were writing, translation and customer service jobs. The # of writing jobs declined 33%, translation jobs declined 19%, and customer service jobs declined 16%.

Too bad, too, because whoever wrote this article could have used an editor.

This article tracks with my experience in the field. I’m a freelance editor — print, audio, some video. My work has never felt so fraught; I’ve never felt so undervalued. My work can be done by a computer!

I suddenly wonder what so many people have felt in the thirty years since, say, NAFTA. To have your job swept out from under you and automated, or sent abroad to be done for lower pay… I was all of eight when NAFTA went into effect, and I’ve never known what America was like beforehand. Yet I see the husks of mills and factories everywhere I go. (In fact, I gravitate to them, a moth to a flame.) I’ve never really felt what it must’ve been like to live through that transition.

Well, now I’m feeling it. It sucks. The insecurity is profound.

When I tell people of my predicament, there’s little sympathy from my fellow millennials, many of whom have never had the freedom that comes from work-from-your-computer self-employment. There’s a strong sense of something bordering on schadenfreude, that my luck finally ran out.

And I fear they’re right. I’m almost 40. I haven’t had a boss in fifteen years. I set my own schedule. My work has paid well, sure, and I’m fortunate to have assets that, if it becomes necessary, I can sell to survive. But what skills do I have? Put another way, what skills do I have that won’t be automated away by AI in the coming years? Most of what I know how to do I’ve done via a computer, and any work done on a computer is liable to be AI’d away.

Thankfully (or so I’m telling myself), this comes at a time when I’ve never been so dissatisfied with my work. People hardly read, and I no longer feel they care to learn to write. Nor am I so sure that good journalism matters in the era of find-whatever-facts-you-want social media. I once was so certain that my work in journalism, however limited in scope, was good and just and righteous. That certainty is now gone, and I’m left adrift.

Not only have I lost my faith in what once felt like a calling, I’ve not yet felt another. It’s a dark, uncertain space.

The Expanding Dark Forest and Generative AI

Air Canada must honor refund policy invented by airline’s chatbot:

On the day Jake Moffatt's grandmother died, Moffatt immediately visited Air Canada's website to book a flight from Vancouver to Toronto. Unsure of how Air Canada's bereavement rates worked, Moffatt asked Air Canada's chatbot to explain.

The chatbot provided inaccurate information, encouraging Moffatt to book a flight immediately and then request a refund within 90 days. In reality, Air Canada's policy explicitly stated that the airline will not provide refunds for bereavement travel after the flight is booked. Moffatt dutifully attempted to follow the chatbot's advice and request a refund but was shocked that the request was rejected.

According to Air Canada, Moffatt never should have trusted the chatbot and the airline should not be liable for the chatbot's misleading information because Air Canada essentially argued that "the chatbot is a separate legal entity that is responsible for its own actions," a court order said.

Here’s OpenAI’s big plan to combat election misinformation

Yesterday TikTok presented me with what appeared to be a deepfake of Timothée Chalamet sitting in Leonardo DiCaprio’s lap and yes, I did immediately think “if this stupid video is that good imagine how bad the election misinformation will be.” OpenAI has, by necessity, been thinking about the same thing and today updated its policies to begin to address the issue.

Ah, the internet.

Adobe’s latest Premiere Pro update automatically cleans up trashy audio

These updates aren’t intended to automate audio editing entirely, but to optimize the existing process so that editors have more time to work on other projects. “As Premiere Pro becomes the first choice for more and more professional editors, we’re seeing editors being asked to do a lot more than just cut picture. At some level, most editors have to do some amount of color work, of audio work, even titling and basic effects,” said Paul Saccone, senior director for Adobe Pro Video, to The Verge. 

“Sure, there are still specialists you can hand off to depending on the project size, but the more we can enable customers to make this sort of work easier and more intuitive inside Premiere Pro, the more successful they’re going to be in their other creative endeavors.”

Oof. This one’s going to hurt. Most of my audio clients prefer Premiere (I’m a Logic Pro guy) and Adobe is using AI to automate away many of the tasks that take up the bulk of my time.

E.U. Agrees on AI Act, Landmark Regulation for Artificial Intelligence

European Union policymakers agreed on Friday to a sweeping new law to regulate artificial intelligence, one of the world’s first comprehensive attempts to limit the use of a rapidly evolving technology that has wide-ranging societal and economic implications.

The law, called the A.I. Act, sets a new global benchmark for countries seeking to harness the potential benefits of the technology, while trying to protect against its possible risks, like automating jobs, spreading misinformation online and endangering national security. The law still needs to go through a few final steps for approval, but the political agreement means its key outlines have been set.

European policymakers focused on A.I.’s riskiest uses by companies and governments, including those for law enforcement and the operation of crucial services like water and energy. Makers of the largest general-purpose A.I. systems, like those powering the ChatGPT chatbot, would face new transparency requirements. Chatbots and software that creates manipulated images such as “deepfakes” would have to make clear that what people were seeing was generated by A.I., according to E.U. officials and earlier drafts of the law.

Very curious to see how this holds up.

Notable that any and all meaningful regulation of the tech industry is coming from Europe.

How Elon Musk and Larry Page’s AI Debate Led to OpenAI and an Industry Boom

At the heart of this competition is a brain-stretching paradox. The people who say they are most worried about A.I. are among the most determined to create it and enjoy its riches. They have justified their ambition with their strong belief that they alone can keep A.I. from endangering Earth.

I do not want to become one with a computer.

Nor do I want to live without them.

Yet having watched the wave of social media crash over the culture these last twenty years, I know I’m powerless to stop what’s coming. Our neurology will dictate what’s next, and just as it did with social media, most people will be swept away.

Your attention is everything — it’s all you have.

Remind yourself of this every day.

LLM Visualization

Visualize how ChatGPT and other large language models (LLMs) work.

Complicated, perhaps, but also astonishingly simple and, in hindsight, obvious.
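
For those curious about the “simple” part, here is a toy sketch of the loop every one of these models runs. It is hypothetical and illustrative only: the hard-coded probability table below stands in for a trained transformer with billions of parameters. The loop itself is the whole trick: predict a distribution over next tokens, sample one, append it, repeat.

```python
import random

def next_token_probs(context: list[str]) -> dict[str, float]:
    # A real LLM computes these probabilities with stacked attention
    # layers; this tiny hard-coded table is a stand-in for demonstration.
    table = {
        "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
        "cat": {"sat": 0.7, "<end>": 0.3},
        "dog": {"ran": 0.6, "<end>": 0.4},
    }
    return table.get(context[-1], {"<end>": 1.0})

def generate(prompt: list[str], max_tokens: int = 10) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = next_token_probs(tokens)
        # Sample a next token in proportion to its probability,
        # append it, and go around again.
        token = random.choices(list(probs), weights=list(probs.values()))[0]
        if token == "<end>":
            break
        tokens.append(token)
    return tokens

print(" ".join(generate(["the"])))  # e.g. "the cat sat"
```

Everything else in the visualization (the embeddings, the attention heads, the training run) exists to make next_token_probs good.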

Where I agree and disagree with Eliezer

The broader intellectual world seems to wildly overestimate how long it will take AI systems to go from “large impact on the world” to “unrecognizably transformed world.” This is more likely to be years than decades, and there’s a real chance that it’s months. This makes alignment harder and doesn’t seem like something we are collectively prepared for.

The Unsettling Lesson of the OpenAI Mess

I don’t know whether the board was right to fire Altman. It certainly has not made a public case that would justify the decision. But the nonprofit board was at the center of OpenAI’s structure for a reason. It was supposed to be able to push the off button. But there is no off button. The for-profit proved it can just reconstitute itself elsewhere. And don’t forget: There’s still Google’s A.I. division and Meta’s A.I. division and Anthropic and Inflection and many others who’ve built large language models similar to GPT-4 and are yoking them to business models similar to OpenAI’s. Capitalism is itself a kind of artificial intelligence, and it’s far further along than anything the computer scientists have yet coded. In that sense, it copied OpenAI’s code long ago.

…if the capabilities of these systems continue to rise exponentially, as many inside the industry believe they will, then nothing I’ve seen in recent weeks makes me think we’ll be able to shut the systems down if they begin to slip out of our control. There is no off switch.

Exclusive: OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say

Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.

Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.

I really, really, really hope my fears about AI are unfounded.

But we will build it. Humans never don’t build something because it might be dangerous. Nuclear weapons, gain-of-function viral research… AI isn’t any different.

But how can we stop it from happening? We can’t prohibit everyone everywhere from building it. It’s inevitable.

I’m a doomer. I’ve long believed that humans will fuck up what we already have because we can’t learn to be content with it. We will do anything other than the hard work of learning to be content with life, to accept that misery and death are parts of it.

That’s all this is, right? Our abiding fear of death being made manifest?

Ironic, then, if it’s our inability to reconcile with death that causes our extinction.

What we’ve learned about the robot apocalypse from the OpenAI debacle

The argument is not that AI will become conscious or that it will decide it hates humanity. Instead, it is that AI will become extraordinarily competent, but that when you give it a task, it will fulfill exactly that task. Just as when we tell schools that they will be judged on the number of children who get a certain grade and teachers start teaching to the test, the AI will optimize the metric we tell it to optimize. If we are dealing with something vastly more powerful than human minds, the argument goes, that could have very bad consequences.
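
A toy version of that dynamic, mine and not the article’s, fits in a dozen lines of Python: hand a perfectly obedient optimizer a proxy metric and it maximizes the proxy, not the intent.

```python
def proxy_metric(essay: str) -> float:
    """What we measure: length in words. What we actually wanted,
    thoughtful writing, is hard to measure, so it never enters the code."""
    return len(essay.split())

def optimize(metric, candidates):
    # A perfectly obedient optimizer: it maximizes the metric it is
    # given, with no notion of what the metric was meant to stand for.
    return max(candidates, key=metric)

candidates = [
    "A short, genuinely insightful essay.",
    "word " * 10_000,  # pure padding, maximal score
]
print(optimize(proxy_metric, candidates)[:25])  # the padding wins every time
```

Scale the optimizer up and the gap between the metric and the intent becomes the whole problem.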

Generative AI like Midjourney creates images full of stereotypes

A new Rest of World analysis shows that generative AI systems have tendencies toward bias, stereotypes, and reductionism when it comes to national identities, too. 

Of course! Computers are all about broad data sets, not specific outliers.

This isn’t just AI, either. It’s in the algorithms behind Facebook and TikTok and YouTube, etc. We humans create these algorithms in our own image. Why do most YouTube “celebrities” look so similar? Why are so many female TikTok “stars” facsimiles of the Kardashians, themselves facsimiles of a standard of beauty now twenty years old?

These algorithms are built on millions of clicks, taps, scrolls, and hours watched. They’re extremely efficient at doing what old-school media has always done: flatten culture. After all, who were John Wayne and Frank Sinatra if not the embodiment — and perpetuation — of stereotypes?

What’s unnerving about social media and AI is that this flattening happens at terrific speed, which wasn’t possible in our analog culture.

Humans are not built for speed. We might be addicted to it, but our brains didn’t evolve to handle it.

The future looks terrifically unsettling.

At UK Summit, Global Leaders Warn AI Could Cause ‘Catastrophic’ Harm

On Wednesday morning, [British Prime Minister Rishi Sunak’s] government released a document called “The Bletchley Declaration,” signed by representatives from the 28 countries attending the event, including the U.S. and China, which warned of the dangers posed by the most advanced “frontier” A.I. systems. “There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these A.I. models,” the declaration said.

“Many risks arising from A.I. are inherently international in nature, and so are best addressed through international cooperation. We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible A.I.”

The document fell short, however, of setting specific policy goals. A second meeting is scheduled to be held in six months in South Korea and a third in France in a year.

Governments have scrambled to address the risks posed by the fast-evolving technology since last year’s release of ChatGPT, a humanlike chatbot that demonstrated how the latest models are advancing in powerful and unpredictable ways.

How Israeli Civilians Are Using A.I. to Help Identify Victims

Incredible.

I was tempted to use the cliché “AI is an incredible tool,” but then I remembered that while yes, perhaps in its current iteration AI is a tool, it’s also something else.

Intelligence is such a nebulous thing — I rarely hear it called a “tool.” Aspects of intelligence, like critical thinking? Sure, that can be a tool. A piece of the puzzle. A component of the whole.

But AI, or perhaps AGI (artificial general intelligence, loosely defined as the point at which machines can think on their own, without human intervention), is meant to be both a component and a whole. A tool we use…but also a tool that will one day think critically for itself. Without humans.

Remember, while the AI of today is easily explainable with metaphor, the AI of tomorrow is not.

AI reads text from ancient Herculaneum scroll for the first time

A 21-year-old computer-science student has won a global contest to read the first text inside a carbonized scroll from the ancient Roman city of Herculaneum, which had been unreadable since a volcanic eruption in AD 79 — the same one that buried nearby Pompeii. The breakthrough could open up hundreds of texts from the only intact library to survive from Greco-Roman antiquity.

Luke Farritor, who is at the University of Nebraska–Lincoln, developed a machine-learning algorithm that has detected Greek letters on several lines of the rolled-up papyrus, including πορϕυρας (porphyras), meaning ‘purple’. Farritor used subtle, small-scale differences in surface texture to train his neural network and highlight the ink.
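
To make the approach concrete, here is a hypothetical toy, nothing like Farritor’s actual model: if inked papyrus differs from bare papyrus only in subtle texture statistics, a classifier trained on a few labeled patches can flag ink no human eye can see.

```python
import random

def roughness(patch: list[float]) -> float:
    # Stand-in texture feature: variance of pixel intensities in a patch.
    mean = sum(patch) / len(patch)
    return sum((p - mean) ** 2 for p in patch) / len(patch)

def make_patch(inked: bool) -> list[float]:
    # Synthetic scan data: inked regions are very slightly rougher.
    spread = 0.012 if inked else 0.010
    return [0.5 + random.gauss(0, spread) for _ in range(64)]

# "Train": find the roughness threshold that separates labeled examples.
inked = [roughness(make_patch(True)) for _ in range(500)]
bare = [roughness(make_patch(False)) for _ in range(500)]
threshold = (sum(inked) / len(inked) + sum(bare) / len(bare)) / 2

# "Read": flag unseen patches whose roughness exceeds the threshold.
hits = sum(roughness(make_patch(True)) > threshold for _ in range(100))
print(f"{hits}/100 inked patches detected")
```

The real model works on 3D X-ray scans and is vastly more sophisticated, but as I understand it the shape of the problem is the same: train on the few regions where ink is already known, then sweep the rest of the scroll.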

OpenAI confirms that AI writing detectors don’t work

In a section of the FAQ titled “Do AI detectors work?”, OpenAI writes, “In short, no. While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content.”

How I love summer, for I am far less online and far less anxious about the rise of AI when I’m able to go outside.

Fran Drescher: “We are all going to be in jeopardy of being replaced by machines”

Not a headline I had on my bingo card.

Anthropic’s Claude Is Competing With ChatGPT. Even Its Builders Fear AI.

One Anthropic worker told me he routinely had trouble falling asleep because he was so worried about A.I. Another predicted, between bites of his lunch, that there was a 20 percent chance that a rogue A.I. would destroy humanity within the next decade.

Meta (aka Facebook) says its new speech-generating AI model is too dangerous for public release

OpenAI is now allowing its bot to interact with the live internet. This will make it more useful—and more problematic.

Adding plug-ins closes an air gap that has so far prevented large language models from taking actions on a person’s behalf. “We know that the models can be jailbroken, and now we’re hooking them up to the internet so that they can potentially take actions,” Hendrycks says. “That isn’t to say that by its own volition ChatGPT is going to build bombs or something, but it makes it a lot easier to do these sorts of things.”

Putin deep fake video is broadcast in parts of Russia

The broadcast, which also claimed there was an ongoing Ukrainian incursion into Russia, was aired in Belgorod, Voronezh, and Rostov, cities in close proximity to Ukraine’s border.

Buckle up.