Dispatches from the Empire


Mrs. Davis - Official Trailer

The second-best show currently on television, right behind Somebody Somewhere.

Rethinking Authenticity in the Era of Generative AI

If it looks like a duck, walks like a duck, and quacks like a duck, everyone will need to consider that it may not have actually hatched from an egg.

Also, it’ll be important for everyone to get up to speed on what these new generative AI tools really can and can’t do. I think this will involve ensuring that people learn about AI in schools and in the workplace, and having open conversations about how creative processes will change with AI being broadly available.

Reservoir: A Series

New York’s reservoirs exemplify the social compact that undergirds ambitious public infrastructures, while the stories of their making emphasize divisions between city and country, wealth and poverty, the potentials and risks inherent in large-scale environmental intervention.

How ‘I Spy’ Books Are Made

Google shared AI knowledge with the world — until ChatGPT caught up

Pichai and other executives have increasingly begun talking about the prospect of AI tech matching or exceeding human intelligence, a concept known as artificial general intelligence, or AGI. The once fringe term, associated with the idea that AI poses an existential risk to humanity, is central to OpenAI’s mission and had been embraced by DeepMind, but was avoided by Google’s top brass.

A Paper That Says Science Should Be Impartial Was Rejected From Major Journals. You Can’t Make This Up.

According to its 29 authors, who are primarily scientists (including two Nobel laureates) in fields as varied as theoretical physics, psychology and pharmacokinetics, ideological concerns are threatening independence and rigor in science, technology, engineering, mathematics and medicine. Though the goal of expanding opportunity for more diverse researchers in the sciences is laudable, the authors write, it should not be pursued at the expense of foundational scientific concepts like objective truth, merit and evidence, which they claim are being jeopardized by efforts to account for differing perspectives.

This encapsulates why the Left, once the bastion of Enlightenment principles, has left me behind in recent years.

It should go without saying — but in today’s polarized world, unfortunately, it doesn’t — that the authors of this paper do not deny the existence of historical racism or sexism or dispute that inequalities of opportunity persist. Nor do they deny that scientists have personal views, which are in turn informed by culture and society. They acknowledge biases and blind spots. Where they depart from the prevailing ideological winds is in arguing that however imperfect, meritocracy is still the most effective way to ensure high quality science and greater equity.

The lack of nuance on the political Left is troubling and has become stunningly common. Here are some of their greatest hits: I’ve been called a “white supremacist” by fellow grad school writers because I edit their work. (In their view, the very act of editing is oppression.) Fellow academics have called me “conservative” because I argue for compassion for everyone — including for white, rural, conservative people. I’ve been called a transphobe because biological sex is real, and I have no compunctions saying so. I’ve been called an “assimilationist” because I’m a gay man who lives in a small rural town surrounded by conservative straight people.

(It’s important to note that the Left hasn’t swung farther left. They’ve swung toward illiberalism, and in that sense, I think they’ve made a swing to the right.)

One needn’t agree with every aspect of the authors’ politics or with all of their solutions. But to ignore or dismiss their research rather than impartially weigh the evidence would be a mistake. We need, in other words, to judge the paper on the merits. That, after all, is how science works.

Logic, reason, the scientific method, the pursuit of objectivity… when and why did these ideals fall from favor?

Pornhub shocks Utah by restricting access over age-verification law.

According to Pornhub, Utah’s law mandating age verification differs from Louisiana’s law in at least one meaningful way. In Louisiana, the state government created a digital wallet that Pornhub could access to securely verify state IDs. Because Utah has no such technology, Axios reported, Pornhub said it had no choice but to make “the difficult decision to completely disable access to our website” in Utah.

🤭

The Disappearing Acts of Haruki Murakami

Snapchat is already testing sponsored links in its My AI chatbot.

The ads will appear between posts as you scroll through the feed, much like they do on TikTok and within Instagram’s Reels. Snapchat is also giving advertisers a way to reserve the first video ad that you see when opening a friend’s story.

For the children!

Bank failures visualized.

AI plus MRI yields the ability to recognize what the mind is hearing.

Today, researchers announced a new bit of mind reading that’s impressive in its scope. By combining fMRI brain imaging with a system that’s somewhat like the predictive text of cell phones, they’ve worked out the gist of the sentences a person is hearing in near real time. While the system doesn’t get the exact words right and makes a fair number of mistakes, it’s also flexible enough that it can reconstruct an imaginary monologue that goes on entirely within someone’s head.
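As I understand the paper, the trick is a loop: a language model proposes likely continuations, an encoding model predicts what brain activity each continuation should evoke, and the decoder keeps whichever candidate best matches the actual scan. A rough sketch of that loop in Python, with both models as hypothetical stand-ins rather than anything from the actual study:

```python
import numpy as np

def propose_continuations(text_so_far):
    """Hypothetical stand-in for a language model suggesting likely
    next phrases, like a phone's predictive text."""
    return ["went to the store", "was very tired", "heard a loud noise"]

def predicted_brain_response(text):
    """Hypothetical stand-in for an encoding model mapping text to the
    fMRI activity it should evoke (learned per subject in the study)."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    return rng.normal(size=128)

def decode_step(text_so_far, observed_fmri):
    """Keep the candidate whose predicted activity best matches the scan."""
    candidates = propose_continuations(text_so_far)
    scores = [predicted_brain_response(c) @ observed_fmri for c in candidates]
    return text_so_far + " " + candidates[int(np.argmax(scores))]

observed = np.random.default_rng(0).normal(size=128)  # stand-in for one scan
print(decode_step("She said she", observed))
```

Run phrase by phrase as new scans arrive, a loop like this recovers the gist of a sentence rather than its exact words, which fits the mistakes described above.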

The hardest part of the AI revolution will be the discovery of empirical evidence that free will is a myth.

Mark Zuckerberg says Meta wants to ‘introduce AI agents to billions of people.’

Zuckerberg said today that generative AI is “literally going to touch every single one of our products.”

Facebook, now with AI. If you thought your aunt was insufferable before, just wait.

Drake’s AI clone is here — and Drake might not be able to stop him.

…the tracks aren’t copying anything concretely protected by the law. Both songs appear to be written by a human who isn’t Drake and fed into voice cloning software, so the compositions are new, original works. An artist’s voice, style, or flow is not protected by copyright.

When I tell people that I’ve lost several clients to ChatGPT and its ilk over the last few weeks, they assume my anxiety over AI stems from losing my job. It does not. I’m fortunate to live a life that requires very little financial maintenance, as I have no debt. While I can’t afford any big purchases — should the house need a new roof or something happen to my car, I’m in some trouble — for right now, in this moment, I’m fine. I can afford groceries. I can afford dog food and vet visits. My financial life is already quite lean, and if I need to trim a little more fat, that’s possible.

My anxiety comes from the larger implications of AI. These implications are very difficult to talk about with people outside the tech world, mostly because I am without useful analogies. Someone recently told me AI is a tool. Well, yes, but it’s also not. It’s less a hammer and more a, well, a hammer that learns to become every other tool. A hammer that then teaches itself language. All languages. And writes code. And can run that code. Someone else told me “it’s just a computer program.” Well, yes, but ordinary computer programs are written by humans. We can read their code, analyze it, and understand how they work. AI doesn’t work that way. These large language models (LLMs) are just code, yes, but the models themselves (the billions of numbers learned in training) are opaque. We do not understand how they know what they know. They literally teach themselves.
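A toy example of where the inscrutability lives. The readable part of a neural network is a few lines; the behavior is in the learned numbers, which nobody wrote and nobody can read. (A sketch in miniature, not any real model:)

```python
import numpy as np

# The "code" of a neural network is tiny and perfectly readable:
def forward(x, w1, w2):
    hidden = np.maximum(0, x @ w1)  # one layer with a ReLU activation
    return hidden @ w2

# The behavior lives here: matrices of numbers set by training, not by
# a programmer. A real LLM has billions of them.
w1 = np.random.randn(4, 8)
w2 = np.random.randn(8, 2)

print(forward(np.ones(4), w1, w2))
# You can read every line of this program and still have no idea why
# the weights produce the outputs they do. Now scale it up a billionfold.
```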

Long-term, this means these LLMs could slip out of our control. It currently takes vast amounts of compute power (think very large server farms) to run these models, but should an LLM get out of our control, what’s to stop it from spreading? The internet was designed quite intentionally to be decentralized — without any central hub that can shut it down. So should one of these LLMs decide to spread, how can we “pull the plug”?

But as technology progresses, it takes less and less compute power to run these models. Some, like the open-source model released by Facebook, can already run locally on a single home computer. Once these models proliferate, each running on a single machine, containing them becomes impossible.
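To give a sense of how low the bar already is: with the Hugging Face transformers library, running a small generative model entirely on your own machine takes a few lines. (I’m using GPT-2 here as a stand-in small model; the larger open models work the same way, just with bigger downloads.)

```python
# pip install transformers torch
from transformers import pipeline

# Downloads a small open model once, then runs it entirely locally:
# no server farm, no API, no one to pull the plug on.
generator = pipeline("text-generation", model="gpt2")

print(generator("The internet was designed to be decentralized, so",
                max_new_tokens=40)[0]["generated_text"])
```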

The dangers of high-powered LLMs are impossible to exaggerate. Human society is based on trust. We (generally) trust the newspapers, the websites we visit, the pictures we see. We trust that the music we listen to was created by the musicians whose voices we hear. But all of this goes out the window with the present capabilities of AI. Photo-quality images can be generated in seconds. Videos can be faked. Our voices can be made to say anything. How on earth does society survive this?

When we can't trust anything we see, read, or hear, what happens to civilization?

This is happening now. Current AI can already do all these things. As these LLMs grow, they get ever better at generating images, sound, and video that are impossible to identify as fake.

In a recent video I linked to (and one I consider essential viewing), The A.I. Dilemma, Tristan Harris said that 2024 will be “the last human election” in America. Election Day 2024 is still 18 months away, and I think Tristan may be giving us too much time. The flood of fake information (fake articles, photos, videos) will expand exponentially in mere months. When anyone can create a sex tape of anyone else, when anyone can use AI to generate photos and videos of our politicians doing and saying unspeakable things, what happens to our political system? Why wait until 2028?

If we thought the despair caused by social media was bad, if we thought it was hard losing relatives to Fox News or the MSNBC echo chambers, we ain't seen nothing yet.

And here’s where I struggle: I don’t want to fill people with anxiety. I don’t want to be the friend no one invites out because he’s always talking about the end of the world. But if we don’t talk about these things now, if we don’t understand how they work and their implications, we’re liable to be taken by surprise, and I’m afraid we as humans don’t have that luxury.

When people compare AI to the invention of fire, the wheel, or the atom bomb, they’re not wrong. The implications of AI are just as profound as all three, on a scale that is very difficult for us to grasp. But we need to try. We need to use our imaginations now so reality won’t surprise us.

I’m very anxious. The last thing I want is for others to feel anxious. But anxiety serves a purpose: it is our mind telling us to get prepared. Too often, that response has been hijacked by social media and 24-hour cable news, permeating our lives with anxiety. What I find so troubling is that now, when we might actually need to feel some anxiety, many of us are so burnt out, so accustomed to feeling anxious, that we simply can’t live with it anymore. We numb ourselves to the world and to the very real dangers we face.

I suppose that’s my goal now, to be sure that we are not numb to the implications of our current moment. We need to be ready; we need to be informed.

In a recent letter to a friend, I wrote:

I have a creeping feeling that this isn’t the future I imagined or hoped for. My life — my little life — is good. It’s full of meaning and love. But the world? Some nights I can barely sleep I’m so filled with anxiety for it. For us. For all living things.

‘Godfather of AI’ quits Google with regrets and fears about his life’s work.

The spread of misinformation is only Hinton’s immediate concern. On a longer timeline he’s worried that AI will eliminate rote jobs, and possibly humanity itself as AI begins to write and run its own code.

The A.I. Dilemma

50% of AI researchers think there’s a 10% or greater chance that AI will cause the extinction of the human race.

Tesla lawyers claim Elon Musk’s past statements about self-driving safety could just be deepfakes.

“Their position is that because Mr. Musk is famous and might be more of a target for deep fakes, his public statements are immune,” wrote Santa Clara County Superior Court Judge Evette D. Pennypacker. “In other words, Mr. Musk, and others in his position, can simply say whatever they like in the public domain, then hide behind the potential for their recorded statements being a deep fake to avoid taking ownership of what they did actually say and do.”

I am anxious.

I am not hopeful for the future.

When I go for a walk in the forest, sometimes I see small sections of the forest where one species has taken over. A rust fungus. Caterpillars. Barberry bushes. When one species grows unchecked, the balance of the ecosystem is thrown off, and it collapses.

I am astounded at how persistent the belief is, among us humans, that we are an exception. That we know better. That it won’t happen to us.

When you learn to recognize this cognitive dissonance, you begin to see it everywhere. A friend raising two young boys, hoping they will grow up to play pro baseball. (Statistically, they won’t, and in the meantime they’ll have a childhood devoid of any other dreams.) Several friends, all vegetarian, all concerned about the environment, yet planning to have children. (And not adopt.) Me, thinking that I can change human nature merely by berating people with facts born of the scientific method.

Climate change. AI. Guns. Political violence. Social media. Our brains have not evolved to handle much of the world in which we find ourselves. We are not as evolved as we think we are.

I have lost faith in leaders, in the political parties, in many institutions. I work in the media and I see people I know — people I respect — succumbing to partisanship over logic and reason.

In my 20s, I thought I could change the world. I was always one epiphany away from a paradigm shift.

But now, as I approach 40, I don’t look to the future with excitement as I once did. I feel a vague-yet-persistent anxiety about what’s to come. I look around my little town and wonder how much the technological changes of the last 30 years have really helped people here. Sure, everyone has a phone and access to the world of information, but they’re addicted to social media. Opioid addiction is rampant. Many people lost their jobs when manufacturing was sent overseas.

Change is inevitable, I know. But don’t we tell ourselves that we are a compassionate culture? Aren’t the foundational myths of this culture — Christianity chief among them — based on compassion? Then why have so many people here been left to suffer in poverty and addiction? Why can’t things change while we also hold grace and mercy for others — and ourselves — when they do?

My dreams to change the world have long since evaporated. I don’t believe we can change human nature, even through education. Most people will always remain animated by their insecurities. Now, my only goal is to stay out of the way, to find a quiet corner somewhere and watch the future happen around me.

The swagged-out pope is an AI fake — and an early glimpse of a new reality.

I’m incredibly nervous about the future.

Not only will this enable the spread of outright lies that humans cannot detect, but more frighteningly, it will encourage plausible deniability.

Shameful, distasteful, or immoral behavior can merely be attributed to AI. Celebrities, state actors, politicians…

The human brain will not be capable of handling the next stage of AI development.

The reverse Turing test.

GPT-4 (came up with)/(retrieved from its database) some excellent questions for this purpose and did identify human or AI correctly in both cases. Perhaps it’s interesting to think about what else an LLM may be able to figure out about whoever it is speaking to.
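The shape of a reverse Turing test is simple enough to sketch. Everything here is hypothetical scaffolding (the llm stub returns canned text where a real model call would go), but the loop is the whole idea:

```python
def llm(prompt):
    """Hypothetical stand-in for a call to GPT-4 or similar; a real
    implementation would send the prompt to a model API."""
    if "ask one revealing question" in prompt:
        return "Describe the smell of rain on hot asphalt, briefly."
    return "human"  # where the model's verdict would go

def reverse_turing_test(get_answer, n_questions=3):
    """The model interrogates an unknown party, then guesses what it is."""
    transcript = []
    for _ in range(n_questions):
        question = llm("You are deciding whether you are talking to a human "
                       "or an AI. Given the transcript so far, ask one "
                       f"revealing question.\n\nTranscript: {transcript}")
        transcript.append((question, get_answer(question)))
    return llm(f"Given this transcript, answer 'human' or 'AI': {transcript}")

print(reverse_turing_test(input))  # let a person type the answers
```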

Everything you write or post online — text, images, video — is being used to train these large language models.

My writing style can be effectively copied and used to convince others they are reading something I’ve written, when in fact it’s pure AI.

You will not be able to (and perhaps already can’t) trust anything you read as ‘real.’

Microsoft Now Claims GPT-4 Shows ‘Sparks’ of General Intelligence.

Utah governor signs new laws requiring parental consent for under-18s to use social media.

I mean, this is both insane…and also kinda reasonable?

I don’t love the idea of the state getting between us and the internet. But social media is absolutely designed to be addictive. The state steps in between young people and cigarettes, young people and alcohol, young people and drugs. Is social media any different? And haven’t we proven that social media in its current form is more destructive to mental health than most of those things?

Was this written by a human or AI? ¯\_(ツ)_/¯

I think the philosophical implications of these LLMs are among the most troubling. Think about your aunt or neighbor who posts on Facebook. Do they have the ability to understand how AI works? Can they tell the difference between an LLM and a person? Can they tell the difference between sentience and a predictive text model?

I don’t think I’m being cynical when I say no. So what happens when these AIs become such a presence in the world that they take on what looks, to many humans, like god-like intelligence? Do we have people taking these LLMs as sentient, or perhaps as the voice of god? Do we have people advocating for their right to exist, to “live”?

And most unnervingly, once the line between sentience and predictive text models like GPT-4 becomes muddled, do we lose our sense of what’s human and what isn’t?
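For whatever it’s worth, “predictive text” isn’t a metaphor. Here is roughly what a phone’s suggestion bar does, shrunk to a few lines of Python; an LLM is the same idea with vastly more context and vastly more parameters (a toy sketch, nothing like GPT-4’s actual implementation):

```python
import random
from collections import defaultdict

# A bigram model: for each word, remember which words tend to follow it.
corpus = ("the model predicts the next word and the next word "
          "after that and so on").split()
following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

def predict_next(word):
    """Pick a plausible next word, like the suggestion bar on a phone."""
    return random.choice(following[word]) if word in following else "..."

word = "the"
for _ in range(8):
    print(word, end=" ")
    word = predict_next(word)
# No understanding anywhere in here, just statistics. The open question
# is whether scale changes that, and whether we could even tell.
```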