Dispatches from the Empire


#

Google CEO Sundar Pichai warns society to brace for impact of A.I. acceleration

When asked whether society is prepared for AI technology like Bard, Pichai answered, “On one hand, I feel no, because the pace at which we can think and adapt as societal institutions, compared to the pace at which the technology is evolving, there seems to be a mismatch.”

There’s an annoying tendency for internet journalism to be hyperbolic, but here I think it’s appropriate. “Brace for impact.”

#

Defending against Bluetooth tracker abuse: it’s complicated

Ever wonder how AirTags work?

#

A group is its own worst enemy

Now, suddenly, when you create software, it isn’t sufficient to think about making it possible to communicate; you have to think about making communication socially successful. In the age of usability, technical design decisions had to be taken to make software easier for a mass audience to use; in the age of social software, design decisions must be taken to make social groups survive and thrive and meet the goals of the group even when they contradict the goals of the individual.

There’s this very complicated moment of a group coming together, where enough individuals, for whatever reason, sort of agree that something worthwhile is happening, and the decision they make at that moment is “This is good and must be protected.” And at that moment, even if it’s subconscious, you start getting group effects. And the effects that we’ve seen come up over and over and over again in online communities.

Of the things you have to accept, the first is that you cannot completely separate technical and social issues. There are two attractive patterns for thinking about the intersection of social and technological issues. One says, “We’ll handle technology over here, we’ll do social issues there. We’ll have separate mailing lists with separate discussion groups, or we’ll have one track here and one track there.” This doesn’t work; you can’t separate the two.

#

The A.I. Dilemma

50% of AI researchers think there’s a 10% or greater chance that AI will cause the extinction of the human race.

#

Tesla lawyers claim Elon Musk’s past statements about self-driving safety could just be deepfakes.

“Their position is that because Mr. Musk is famous and might be more of a target for deep fakes, his public statements are immune,” wrote Santa Clara County Superior Court Judge Evette D. Pennypacker. “In other words, Mr. Musk, and others in his position, can simply say whatever they like in the public domain, then hide behind the potential for their recorded statements being a deep fake to avoid taking ownership of what they did actually say and do.”

#

Utah governor signs new laws requiring parental consent for under-18s to use social media.

I mean, this is both insane…and also kinda reasonable?

I don’t love the idea of the state getting between us and the internet. But social media is absolutely designed to be addictive. The state steps in between young people and cigarettes, young people and alcohol, young people and drugs. Is social media any different? And haven’t we proven that social media in its current form is more destructive to mental health than most of those things?

#

You’re pointing the camera the wrong way.

#

Enshittification

Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.

I call this enshittification, and it is a seemingly inevitable consequence arising from the combination of the ease of changing how a platform allocates value, combined with the nature of a “two-sided market,” where a platform sits between buyers and sellers, holding each hostage to the other, raking off an ever-larger share of the value that passes between them.