Dispatches from the Empire


The Nihilism of Trump’s GOP

MAGA is not interested in building anything, in winning a real majority, in constructing an actual future rather than lamenting an invented past. Everything is performative and destructive. It’s all driven by who they are against rather than what they are for. As a Republican Senator told Romney as he settled in, their view is that the first consideration in voting on any bill should always be: “Will this help me win re-election?”

There’s no definitive moment in the collapse of a republic, but that quote comes close. If all you care about is your own grip on power, regard the opposing party as ipso facto illegitimate, and give zero fucks for the system as a whole, a liberal democracy has effectively ceased to exist. A single major party, captured by radicals and nihilists, can do that.

It’s unrecognizable

That morning, there were so many moments I was hopeful. First, that it was just an accident. Next, that everyone could escape the towers if given the chance. Then, that only one tower would fall. Then, that these would be the only casualties. Then, worst of all, that something this profound and dramatic would soften people’s hearts and make them reflect.

I was wrong, over and over again. I don’t know exactly where the line is between optimistic and naive, but these days I feel tremendous compassion and empathy towards the young man who had all those foolish beliefs. I don’t regret hoping that such a horrible day could lead to something better.

But I really did underestimate how power works, and how little it would take to push people from their better angels to their most vicious, vengeful selves.

The small web is beautiful

Summary: I believe that small websites are compelling aesthetically, but are also important to help us resist selling our souls to large tech companies. In this essay I present a vision for the “small web” as well as the small software and architectures that power it. Also, a bonus rant about microservices.

and

Kagi Small Web via mjtsai

Bessel van der Kolk on Trauma, America’s Favorite Diagnosis

The appeal of traumatic literalism is not so much its scientific rigor as its scientific sheen, which seems to promise objective, graspable solutions to our defining political crises. For the past three decades, liberals have insisted that the institutions of American power, while flawed, were in essentially good shape. Those for whom the status quo wasn’t working out were welcome to jockey for inclusion by claiming identity-related injury. For a liberal politics of inclusion founded on claims of injury, what could be more useful than a way to turn that injury into biological trauma, something objective, observable, and measurable in the brain? In their focus on narrative — that is, on recovering and integrating declarative memories — the battle lines of the ’80s and ’90s trauma culture wars were staked out along clear lines. If you were a feminist or an antiwar activist, you invoked trauma; if you were a conservative, you didn’t. But today’s literalization of trauma is politically promiscuous. In fact, rather than treating trauma as an ideological weapon of the left, now the right wants in on it too.

Emphasis mine.

Something to chew on.

Vivek Ramaswamy Is Suddenly Part of Our Political Life

Don’t worry, this article is only a touch about Vivek. Much like the man himself, that part is easily skipped; I encourage you to get right to the takeaway line:

Ron DeSantis was right when he said at the debate that America is a nation in decline and that decline is a choice. He just wasn’t right in the way he meant it. We’re in decline because a spirit of lawlessness, shamelessness and brainlessness has become a leading feature of a conservative movement that was supposed to be a bulwark against all three.

How Samuel R. Delany Reimagined Sci-Fi, Sex, and the City

As we said our goodbyes, it felt like we’d just emerged from one of Delany’s late novels. Their pastoral pornotopias, conjured as though from the homoerotic subtext of “Huckleberry Finn,” had more of a basis in reality than I’d suspected, one hidden by the shopworn map that divides the country into poor rural traditionalists and libertine city folk. Delany hadn’t abandoned science fiction to wallow in pornography, as some contended; he’d stopped imagining faraway worlds to describe queer lives deemed unreal in this one.

The Supreme Court Has Killed Affirmative Action. Mediocre Whites Can Rest Easier. via Kottke

I guess I can take some small solace in knowing that even without affirmative action, there will still be a lot of white rejects out there who will die mad.

What a line.

As I approach 40, I must remind myself that I’m glad I’m no longer young. This country — my home — seems to be tearing itself apart. If all we expect is the worst from each other, haven’t we lost the republic?

Cormac McCarthy, Novelist of a Darker America, Is Dead at 89

His novels, with their eye for the darker side of human nature, remain some of my favorites.

Headed Into the Abyss by Brian T. Watson 📚

I just finished, lying here in my bed, the dogs and cat asleep beside me. Crickets chirp outside my window. In the distance a train’s whistle breaks and rolls over the valley.

More than anything, I prize seeing things clearly. Nothing fills me with that particular and precious joie de vivre — that electric sizzle — quite like close proximity to the truth. But most people don’t like the truth. We’ll do anything to avoid it, if we know it at all. So it’s a rare thrill to read something so transgressive in its honesty, so clear-eyed.

Credit to Brian T. Watson for his courage to accept the inevitable, and then to write it. May his acceptance be an inspiration.

The Dark Mountain Manifesto

Around the world, discontent can be heard. The extremists are grinding their knives and moving in as the machine’s coughing and stuttering exposes the inadequacies of the political oligarchies who claimed to have everything in hand. Old gods are rearing their heads, and old answers: revolution, war, ethnic strife. Politics as we have known it totters, like the machine it was built to sustain. In its place could easily arise something more elemental, with a dark heart.

A lot has changed since I first read this almost 15 years ago, but it has only become more prescient.

TV’s Streaming Model Is Broken. It’s Also Not Going Away.

“If you could bring back the heyday of Brandon Tartikoff–Warren Littlefield NBC with shows like The West Wing, ER, Friends, and Seinfeld — maybe with some nudity and F-bombs — every streamer would be very happy right now.”

America Is Headed Toward Collapse

The long history of human society compiled in our database suggests that America’s current economy is so lucrative for the ruling elites that achieving fundamental reform might require a violent revolution. But we have reason for hope. It is not unprecedented for a ruling class—with adequate pressure from below—to allow for the nonviolent reversal of elite overproduction. But such an outcome requires elites to sacrifice their near-term self-interest for our long-term collective interests. At the moment, they don’t seem prepared to do that.

Poll: 61% of Americans say AI threatens humanity’s future

The poll also revealed a political divide in perceptions of AI, with 70 percent of Donald Trump voters expressing greater concern about AI versus 60 percent of Joe Biden voters. Regarding religious beliefs, evangelical Christians were more likely to “strongly agree” that AI poses risks to human civilization, at 32 percent, compared to 24 percent of non-evangelical Christians.

Strange bedfellows.

Rent control works

Rather than doing the thing we want, neoliberal economists insist we must unleash “markets” to solve the problems, by “creating incentives.” That may sound like a recipe for a small state, but in practice, “creating incentives” often involves building huge bureaucracies to “keep the incentives aligned” (that is, to prevent private firms from ripping off public agencies).

This is how we get “solutions” that fail catastrophically.

Wendy’s, Google Train Next-Generation Order Taker: an AI Chatbot

The application has also been programmed to upsell customers, offering larger sizes, Frosties or daily specials. Once the chatbot takes an order, it appears on a screen for line cooks. From there, prepared meals are relayed to the pickup window and handed off to drivers by a worker.

A group is its own worst enemy

Now, suddenly, when you create software, it isn’t sufficient to think about making it possible to communicate; you have to think about making communication socially successful. In the age of usability, technical design decisions had to be taken to make software easier for a mass audience to use; in the age of social software, design decisions must be taken to make social groups survive and thrive and meet the goals of the group even when they contradict the goals of the individual.

There’s this very complicated moment of a group coming together, where enough individuals, for whatever reason, sort of agree that something worthwhile is happening, and the decision they make at that moment is “This is good and must be protected.” And at that moment, even if it’s subconscious, you start getting group effects. And the effects that we’ve seen come up over and over and over again in online communities.

Of the things you have to accept, the first is that you cannot completely separate technical and social issues. There are two attractive patterns for thinking about the intersection of social and technological issues. One says, “We’ll handle technology over here, we’ll do social issues there. We’ll have separate mailing lists with separate discussion groups, or we’ll have one track here and one track there.” This doesn’t work; you can’t separate the two.

May the Force always be with you via Kottke

In 1977, when Star Wars took the “domestic film rentals” crown from Jaws, Steven Spielberg wrote a congratulatory letter to George Lucas and had it printed full page in Variety—a charming move, tastefully done, that kickstarted a tradition amongst filmmakers and studios that continues to this day.

When I tell people that I’ve lost several clients to ChatGPT and its ilk over the last few weeks, they think my anxiety over AI stems from the prospect of losing my job. It does not. I’m fortunate to live a life that requires very little financial maintenance, as I have no debt. While I can’t afford any big purchases — should the house need a new roof or should something happen to my car, I’m in some trouble — for right now, in this moment, I’m fine. I can afford groceries. I can afford dog food and vet visits. My financial life is already quite lean, and if I need to trim a little more fat, that’s possible.

My anxiety comes from the larger implications of AI. These implications are very difficult to talk about with people outside the tech world, mostly because I lack useful analogies. Someone recently told me AI is a tool. Well, yes, but it’s also not. It’s less a hammer and more a, well, a hammer that learns to become every other tool. A hammer that then teaches itself language. All languages. And writes code. And can run that code. Someone else told me “it’s just a computer program.” Well, yes, but computer programs have to be written by a human. We can look at their code and analyze it. We can understand how they work. AI doesn’t work that way. These large language models (LLMs) are just code, yes, but the models themselves are opaque. We do not understand how they know what they know. They literally teach themselves.

Long-term, this means that these LLMs can get out of our control. While it currently takes vast amounts of compute power (think very large server farms) to run these models, should an LLM get out of our control, what’s to stop it from spreading? The internet was designed quite intentionally to be decentralized — without any central hub that can shut it down. So should one of these LLMs decide to spread, how would we ever “pull the plug”?

But as technology progresses, it takes less and less compute power to run these models. Some, like the open-source model released by Facebook, can already be run locally on a single home computer. Once these models proliferate, each running on a single machine, containing them becomes impossible.

The dangers of powerful LLMs are impossible to exaggerate. Human society is based on trust. We (generally) trust the newspapers we read, the websites we visit, the pictures we see. We trust that the music we listen to was created by the musicians whose voices we hear. But all of this goes out the window with the present capabilities of AI. Photorealistic images can be generated in seconds. Videos can be faked. Our voices can be made to say anything. How on earth does society survive this?

When we can't trust anything we see, read, or hear, what happens to civilization?

This is happening now. Current AI can already do all of these things. As these LLMs grow, they get ever better at generating images, sound, and video that are impossible to distinguish from the real thing.

In a recent video I linked to (and one I consider essential viewing), The A.I. Dilemma, Tristan Harris said that 2024 will be “the last human election” in America. Election Day 2024 is still 18 months away, and I think Tristan might be wrong in presuming we’ll even make it that far. The amount of fake information, fake articles, photos, and videos will expand exponentially in mere months. When anyone can create a sex tape of anyone else, when anyone can use AI to generate photos and videos of our politicians doing and saying unspeakable things, what happens to our political system? Why wait until 2028?

If we thought the despair caused by social media was bad, if we thought it was hard losing relatives to Fox News or the MSNBC echo chambers, we ain't seen nothing yet.

And here’s where I struggle: I don’t want to fill people with anxiety. I don’t want to be the friend no one invites out because he’s always talking about the end of the world. But if we don’t talk about these things now, if we don’t understand how they work and their implications, we’re liable to be taken by surprise, and I’m afraid we as humans don’t have that luxury.

When people compare AI to the invention of fire, the wheel, or the atom bomb, they’re not wrong. The implications of AI are just as profound as all three, a scale that is very difficult for us to grasp. But we need to try; we need to use our imaginations now so reality won’t surprise us.

I’m very anxious. The last thing I want is for others to feel anxious. But anxiety serves a purpose: it is our mind telling us to prepare. Too often, that response has been hijacked by social media and 24-hour cable news, permeating our lives with anxiety. What I find so troubling is that now, when we might actually need to feel some anxiety, many of us are so burnt out, so accustomed to feeling anxious, that we simply can’t bear any more of it. We numb ourselves to the world and to the very real dangers we face.

I suppose that’s my goal now: to make sure we are not numb to the implications of our current moment. We need to be ready; we need to be informed.

In a recent letter to a friend, I wrote:

I have a creeping feeling that this isn’t the future I imagined or hoped for. My life — my little life — is good. It’s full of meaning and love. But the world? Some nights I can barely sleep, I’m so filled with anxiety for it. For us. For all living things.

The A.I. Dilemma

50% of AI researchers think there’s a 10% or greater chance that AI will cause the extinction of the human race.