Dispatches from the Empire


Trump Gives CNBC a Rambling Answer on Why He Backtracked on TikTok Ban

“Frankly, there are a lot of people on TikTok that love it,” Mr. Trump said. “There are a lot of young kids on TikTok who will go crazy without it.”

“There’s a lot of good and there’s a lot of bad with TikTok,” he added, “but the thing I don’t like is that without TikTok, you can make Facebook bigger, and I consider Facebook to be an enemy of the people, along with a lot of the media.”

Mr. Trump tried to ban TikTok while in office, pushing its Chinese parent company, ByteDance, to sell the platform to a new owner or face being blocked from American app stores. A House committee advanced legislation last week that would similarly force TikTok to cut ties with ByteDance.

This says everything you need to know about Trump. He’ll say anything that serves him in the moment. He has no impulse control, no ability to think strategically, no long-term plan.

Banning TikTok (i.e. forcing ByteDance, a Chinese company, to sell off TikTok) is the right thing to do. It’s a parasite destroying the ability of people to think critically and deeply. It has decimated the attention spans of our young people, who don’t know a world without social media. TikTok is a cancer.

And so is Facebook. Merely forcing the sale of TikTok to an American company won’t fix the problem. Letting our corporations mine the attention of our young people is better than letting China do it, but not by much.

Start treating all social media like what it is: addictive advertising.

E.U. Agrees on AI Act, Landmark Regulation for Artificial Intelligence

European Union policymakers agreed on Friday to a sweeping new law to regulate artificial intelligence, one of the world’s first comprehensive attempts to limit the use of a rapidly evolving technology that has wide-ranging societal and economic implications.

The law, called the A.I. Act, sets a new global benchmark for countries seeking to harness the potential benefits of the technology, while trying to protect against its possible risks, like automating jobs, spreading misinformation online and endangering national security. The law still needs to go through a few final steps for approval, but the political agreement means its key outlines have been set.

European policymakers focused on A.I.’s riskiest uses by companies and governments, including those for law enforcement and the operation of crucial services like water and energy. Makers of the largest general-purpose A.I. systems, like those powering the ChatGPT chatbot, would face new transparency requirements. Chatbots and software that creates manipulated images such as “deepfakes” would have to make clear that what people were seeing was generated by A.I., according to E.U. officials and earlier drafts of the law.

Very curious to see how this holds up.

Notable that any and all meaningful regulation of the tech industry is coming from Europe.

Elon Musk Doesn’t Understand What ‘Blackmail’ Means

In general, blackmail is a crime where the criminal demands payment from the victim. It does not involve the criminal refusing to give money to the victim for a service they don’t want. Blackmailing somebody “with money,” as Musk put it, is not a thing.

In general, fuck this guy.

The Unsettling Lesson of the OpenAI Mess

I don’t know whether the board was right to fire Altman. It certainly has not made a public case that would justify the decision. But the nonprofit board was at the center of OpenAI’s structure for a reason. It was supposed to be able to push the off button. But there is no off button. The for-profit proved it can just reconstitute itself elsewhere. And don’t forget: There’s still Google’s A.I. division and Meta’s A.I. division and Anthropic and Inflection and many others who’ve built large language models similar to GPT-4 and are yoking them to business models similar to OpenAI’s. Capitalism is itself a kind of artificial intelligence, and it’s far further along than anything the computer scientists have yet coded. In that sense, it copied OpenAI’s code long ago.

…if the capabilities of these systems continue to rise exponentially, as many inside the industry believe they will, then nothing I’ve seen in recent weeks makes me think we’ll be able to shut the systems down if they begin to slip out of our control. There is no off switch.

Exclusive: OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say

Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.

Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.

I really, really, really hope my fears about AI are unfounded.

But we will build it. Humans never don’t build something because it might be dangerous. Nuclear weapons, gain-of-function viral research… AI isn’t any different.

But how can we stop it from happening? We can’t prohibit everyone everywhere from building it. It’s inevitable.

I’m a doomer. I’ve long believed that humans will fuck up what we already have because we can’t learn to be content with it. We will do anything other than the hard work of learning to be content with life, to accept that misery and death are parts of it.

That’s all this is, right? Our abiding fear of death being made manifest?

Ironic, then, if it’s our inability to reconcile with death that causes our extinction.

Apple pushes back against the EU’s Digital Markets Act

Christ, Apple. Have some faith that people use your phones because they’re the best on the market, not because they’re locked into iMessage. 🤦🏼‍♂️

I’m all for security — end-to-end encryption is table stakes at this point, and I won’t use anything without it to meaningfully communicate — but inhibiting innovation solely to protect a marketplace monopoly (in this case, the App Store)? C’mon.

Apple, you have the technical expertise to protect people’s data even if they sideload. It won’t be easy, I know, but come on.

Give people a choice.


On the other hand, my work is tech-adjacent and my passions are obviously tech-y. I work with a lot of people — smart, professional people — who don’t know shit about the phones in their pockets, not least of all how to safeguard them.

If smart people can’t be bothered to protect themselves, no doubt sideloading will invite bad actors (i.e. advertisers) to get people to download software that tracks the hell out of them. More than it already does.

So maybe Apple has a point.

But the least they could do is lower their 30% App Store commission. It’s difficult to take anyone arguing the moral high ground seriously when they’re making such an extreme profit from their position.

Jewish Celebrities and Influencers Confront TikTok Executives in Private Call

“What is happening at TikTok is it is creating the biggest antisemitic movement since the Nazis,” Mr. Cohen, who does not appear to have an official TikTok account, said early in the call. He criticized violent imagery and disinformation on the platform, telling Mr. Presser, “Shame on you,” and claiming that TikTok could “flip a switch” to fix antisemitism on its platform.

How is everyone liking their corporate rule?

TikTok has real power. Facebook (which owns Instagram) has real power. Google has real power.

This is not okay.

I hate these companies, all of which, at heart, are advertising companies. They run social media platforms so they can sell you ads. That’s how they make their money. That’s the whole point.

People want to blame their phones, as I am wont to do at times, but the smartphone is merely a tool. You don’t have to use it for social media.

Fuck.

What on earth are people thinking when they use social media?

Oh right, they’re not thinking — their neurology has been hijacked. They’re addicted.

Children. We let children use TikTok. We’ve let our children become addicts, just like us. How is this okay? Why are we not filled with rage each and every time we see a parent hand over their phone to their child?

Yes, I’m blaming addicts for their addiction, but we’ve let our children become addicts, too.

Let that sink in.

For fuck’s sake.

Hamas Cheerleaders Are All Over Instagram

Instagram has become a particularly active arena for pro-Hamas propaganda. At last count, the hashtag #freepalestine had appeared on over 5.8 million posts, exceeding #standwithisrael’s 220,000 by a factor of more than 20. Similarly, #gazaunderattack has amassed 1.8 million instances, an order of magnitude more than #israelunderattack’s 134,000.

I used to think numbers like this were bullshit. “Likes” and “views” and “engagements” have never felt like salient measurements of, well, anything but the ego of some large social media companies.

Of course I was wrong.


I love when people tell me that advertising “doesn’t work” on them. As if their mind is so strong that it can’t be swayed one way or the other.

In response to their claims, I yell, “_HOT DOG!_”

“What are you thinking about now?” I then ask.

Surprise: they’re thinking about hot dogs.

Advertising really is that simple. Our neurology isn’t that complicated. We like to think we’re exceptions to rules, but rules are rules for a reason.

A few friends who lived through the 1960s and 1970s like to say “advertising is propaganda.” I’m inclined to agree. Of course it is.

Yet if all advertising is mere suggestion, then it makes absolute sense that in capitalism, the money flows to the most persuasive, even if those of us being persuaded don’t fully understand how persuasion works.

Generative AI like Midjourney creates images full of stereotypes

A new Rest of World analysis shows that generative AI systems have tendencies toward bias, stereotypes, and reductionism when it comes to national identities, too. 

Of course! Computers are all about broad data sets, not specific outliers.

This isn’t just AI, either. It’s in the algorithms behind Facebook and TikTok and YouTube, etc. We humans create these algorithms in our own image. Why do most YouTube “celebrities” look so similar? Why are so many female TikTok “stars” facsimiles of the Kardashians, themselves facsimiles of a standard of beauty now twenty years old?

These algorithms are built on millions of clicks, taps, scrolls, and hours watched. They’re extremely efficient at doing what old-school media has always done: flatten culture. After all, who were John Wayne and Frank Sinatra if not the embodiment — and perpetuation — of stereotypes?

What’s unnerving about social media and AI is that this flattening happens at terrific speed, which wasn’t possible in our analog culture.

Humans are not built for speed. We might be addicted to it, but our brains didn’t evolve to handle it.

The future looks terrifically unsettling.

Asian and Middle Eastern users tilt TikTok balance toward Palestinians

TikTok has denied the claims and said in a blog post they were based on “unsound analysis.” The data reviewed by Semafor suggests that the imbalance on the platform is largely outside the U.S. — and may skew heavily toward the Palestinian side because of the app’s popularity in Muslim countries and the fact that it is blocked in India.

The central promise of the internet was, after all, to be a great equalizer. I’m not saying the algorithms of TikTok (a Chinese company) are “fair” (however you define that), but it shouldn’t come as a surprise that, with a global population that’s largely online, America and our interests aren’t always going to be the most popular.

Democratization is great.

Until it’s not.

Unsubscribe From Everything

If, back in 2003, government surveillance had reached a point that many of us felt the need to self-censor, today it’s private citizens who are imposing the censorship regime. Online mobs savage people for making an insensitive remark, communities shun people for asking questions. The desire to speak freely and without fear is driving not only the creation of platforms like Substack, but actual migration patterns. This is what happens when surveillance and social control are pervasive enough: True enemies, like al-Qaida, are replaced by boogeymen like @TrumpDyke, and dubious figments like “disinformation” supplant real threats like terror. The zealous among us begin policing speech so the actual police don’t have to, and the press, the inevitable organ of every authoritarian regime, either turns a blind eye or actively colludes with the government and its partners to smother unsanctioned views.

We lost a lot for choosing not to have a dialogue about government overreach back in 2013, when Snowden revealed the government’s mass surveillance programs. “Study after study has shown that human behavior changes when we know we’re being watched,” he once said. “Under observation, we act less free, which means we are less free.” Maybe you hesitated to do a search on Google, or say something in an email because you thought someone might intercept it. After Snowden, writers admitted to turning down work out of the mere possibility of surveillance. The “war on terror” had a chilling effect on speech, which was bad enough. Fast forward to 2020, and scientists were voluntarily taking themselves out of the lockdown debate. If in 2013, we lost a core American value when we chose not to take up the cause of privacy, in 2020, we lost jobs and lives.

SpaceX Starlink satellites had to make 25,000 collision-avoidance maneuvers in just 6 months — and it will only get worse

Lewis expects that, unless regulators cap the number of satellites in orbit, collisions will soon become a regular part of the space business. Such collisions would lead to rapid growth in the amount of space debris fragments that are completely out of control, which would lead to more and more collisions. The end point of this process might be the Kessler Syndrome, a scenario predicted in the late 1970s by former NASA physicist Donald Kessler. Depicted in the 2013 Oscar-winning movie “Gravity,” the Kessler Syndrome is an unstoppable cascade of collisions that might render parts of the orbital environment completely unusable.

Modernity is untenable.

Meta (aka Facebook) says its new speech-generating AI model is too dangerous for public release

Dark Patterns

A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn

Some skeptics argue that A.I. technology is still too immature to pose an existential threat. When it comes to today’s A.I. systems, they worry more about short-term problems, such as biased and incorrect responses, than longer-term dangers.

But others have argued that A.I. is improving so rapidly that it has already surpassed human-level performance in some areas, and it will soon surpass it in others. They say the technology has shown signs of advanced capabilities and understanding, giving rise to fears that “artificial general intelligence,” or A.G.I., a type of artificial intelligence that can match or exceed human-level performance at a wide variety of tasks, may not be far off.

This just isn’t a path humanity needs to go down. What is it with us humans? Why can’t we stop? What motivates us to do this shit?

Maybe you think our self-destruction isn’t inevitable, but deep in my gut, that feels naive and ignorant of human nature.

Is there a word for the feeling of being deeply ashamed of my species, yet complicit in some of our worst behaviors? That shame, that fear of what feels inevitable, undergirds my entire life and has since I was an adolescent. I describe it as the awareness we’re all tethered together and collectively running toward a cliff, yet most everyone seems not to see the edge. A few of us are trying to slow down — we see what’s coming — but we can’t stop the lot of us.

I want us to slow down. I want to not wake up each morning with this itch behind my eyes, this breathlessness in my gut, this primal suspicion that we’re all fucking ourselves.

Again and again, the phrase that comes to mind is “it doesn’t have to be this way.” And yet it feels inevitable.

Make it make sense.

Driver’s Licenses, Addresses, Photos: Inside How TikTok Shares User Data

Google’s new Magic Editor pushes us toward AI-perfected fakery

OpenAI contractors make $15 to train ChatGPT

The work is defined by its unsteady, on-demand nature, with people employed by written contracts either directly by a company or through a third-party vendor that specializes in temp work or outsourcing. Benefits such as health insurance are rare or nonexistent — which translates to lower costs for tech companies — and the work is usually anonymous, with all the credit going to tech startup executives and researchers.

Google shared AI knowledge with the world — until ChatGPT caught up

Pichai and other executives have increasingly begun talking about the prospect of AI tech matching or exceeding human intelligence, a concept known as artificial general intelligence, or AGI. The once fringe term, associated with the idea that AI poses an existential risk to humanity, is central to OpenAI’s mission and had been embraced by DeepMind, but was avoided by Google’s top brass.

‘Godfather of AI’ quits Google with regrets and fears about his life’s work.

The spread of misinformation is only Hinton’s immediate concern. On a longer timeline he’s worried that AI will eliminate rote jobs, and possibly humanity itself as AI begins to write and run its own code.

The A.I. Dilemma

50% of AI researchers think there’s a 10% or greater chance that AI will cause the extinction of the human race.

Microsoft Now Claims GPT-4 Shows ‘Sparks’ of General Intelligence.

Enshittification.