Dispatches from the Empire


Google’s AI-powered smart glasses are a little closer to being real

Google is working on a lot of AI stuff — like, a lot of AI stuff — but if you want to really understand the company’s vision for virtual assistants, take a look at Project Astra. Google first showed a demo of its all-encompassing, multimodal virtual assistant at Google I/O this spring and clearly imagines Astra as an always-on helper in your life. In reality, the tech is somewhere between “neat concept video” and “early prototype,” but it represents the most ambitious version of Google’s AI work.

Watch this video and enjoy being mildly horrified, both by how thoughtless this technology will make us if we can't be bothered to remember our friends' taste in books, and by just how much data Google will be hoovering up about every single thing we do.

These types of interactions with AI will become anodyne in short order, especially among younger generations, but imagine the implications of a network outage on an entire generation of people who will never have needed to remember or learn anything. (I say this as a stan of the Reminders app. I use it all the time, for everything, adding reminders via Siri on every connected device I own, but yes, it's been somewhat detrimental to my ability to remember certain things. And yeah, I'd be rightly fucked if every Apple device I had went dark all at once. But boy, is it useful.)

Have you ever wondered how many people asked AI which candidate to vote for in the last election? Do you think that number isn't going to grow dramatically over time? 

thispersondoesnotexist.com

(Be sure to refresh the page at least once.)

No one’s ready for this

Anyone who buys a Pixel 9 — the latest model of Google’s flagship phone, available starting this week — will have access to the easiest, breeziest user interface for top-tier lies, built right into their mobile device. This is all but certain to become the norm, with similar features already available on competing devices and rolling out on others in the near future. When a smartphone “just works,” it’s usually a good thing; here, it’s the entire problem in the first place.

…the default assumption about a photo is about to become that it’s faked, because creating realistic and believable fake photos is now trivial to do. We are not prepared for what happens after.

No one on Earth today has ever lived in a world where photographs were not the linchpin of social consensus — for as long as any of us has been here, photographs proved something happened. Consider all the ways in which the assumed veracity of a photograph has, previously, validated the truth of your experiences. The preexisting ding in the fender of your rental car. The leak in your ceiling. The arrival of a package. An actual, non-AI-generated cockroach in your takeout. When wildfires encroach upon your residential neighborhood, how do you communicate to friends and acquaintances the thickness of the smoke outside?

My AI anxiety is high this week, as I’ve been following the release of the Pixel 9. Embarrassingly, I have extended family (whom I rarely see) who work not just for Google, but specifically in Pixel marketing.

What the hell are they thinking?

Say what you will about Apple Intelligence, the new set of AI features due to be released on iPhones, iPads and Macs in the fall, but it doesn’t do anything like this by design. In fairness, I’m unsure Apple has the compute power (they want to do much of their AI on-device, whereas Google does theirs in the cloud) to do this kind of thing, but I’m almost certain they wouldn’t want to if they could.

Google is being extraordinarily reckless here. The lack of guardrails around this technology speaks volumes, and their terms of service are typical corporate legalese bullshit that avoids any and all responsibility for how this feature will be used.

Famously, Google’s corporate motto was once “don’t be evil,” but somehow that’s become “don’t blame us.”

Google threatened tech influencers unless they ‘preferred’ the Pixel

The agreement tells participants they’re “expected to feature the Google Pixel device in place of any competitor mobile devices.” It also notes that “if it appears other brands are being preferred over the Pixel, we will need to cease the relationship between the brand and the creator.” The link to the form appears to have since been shut down.

“Google Pixel: Please don’t put us next to an iPhone.”

Introducing Apple’s On-Device and Server Foundation Models

Sam Altman Was Bending the World to His Will Long Before OpenAI

A follow-up to my recent post about Mr. Altman.

Before OpenAI, Sam Altman was fired from Y Combinator by his mentor

Though a revered tactician and chooser of promising start-ups, Altman had developed a reputation for favoring personal priorities over official duties and for an absenteeism that rankled his peers and some of the start-ups he was supposed to nurture, said two of the people, as well as an additional person, all of whom spoke on the condition of anonymity to candidly describe private deliberations. The largest of those priorities was his intense focus on growing OpenAI, which he saw as his life’s mission, one person said.

A separate concern, unrelated to his initial firing, was that Altman personally invested in start-ups he discovered through the incubator using a fund he created with his brother Jack — a kind of double-dipping for personal enrichment that was practiced by other founders and later limited by the organization.

“It was the school of loose management that is all about prioritizing what’s in it for me,” said one of the people.

I only now learned about this from Helen Toner's newly released interview about Altman's firing from OpenAI in November. From The Verge:

Toner says that one reason the board stopped trusting Altman was his failure to tell the board that he owned the OpenAI Startup Fund; another was how he gave inaccurate info about the company’s safety processes “on multiple occasions.” Additionally, Toner says she was personally targeted by the CEO after she published a research paper that angered him. “Sam started lying to other board members in order to try and push me off the board,” she says.

After two executives spoke directly to the board about their own experiences with Altman, describing a toxic atmosphere at OpenAI, accusing him of “psychological abuse,” and providing evidence of Altman “lying and being manipulative in different situations,” the board finally made its move.

Perhaps not the guy we want in charge of safety at one of the largest AI companies, given that the employees in charge of safety have been leaving in droves. From Vox:

If you’ve been following the saga on social media, you might think OpenAI secretly made a huge technological breakthrough. The meme “What did Ilya see?” speculates that Sutskever, the former chief scientist, left because he saw something horrifying, like an AI system that could destroy humanity. 

But the real answer may have less to do with pessimism about technology and more to do with pessimism about humans — and one human in particular: Altman. According to sources familiar with the company, safety-minded employees have lost faith in him. 

“It’s a process of trust collapsing bit by bit, like dominoes falling one by one,” a person with inside knowledge of the company told me, speaking on condition of anonymity. 

Not many employees are willing to speak about this publicly. That’s partly because OpenAI is known for getting its workers to sign offboarding agreements with non-disparagement provisions upon leaving. If you refuse to sign one, you give up your equity in the company, which means you potentially lose out on millions of dollars.

Just what we want: an artificial intelligence company motivated by profit that creates something truly dangerous and disruptive, run by a guy like Altman.

Here are two fun excerpts from his Wikipedia page:

He is an apocalypse preparer. Altman said in 2016: "I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israel Defense Forces, and a big patch of land in Big Sur I can fly to."

and 

In 2021, Altman's sister Annie wrote on Twitter accusing Sam of "sexual, physical, emotional, verbal, and financial abuse".


Grindr’s Plan to Squeeze Its Users

Grindr plans to boost revenue by monetizing the app more aggressively, putting previously free features behind a paywall, and rolling out new in-app purchases, employees say. The company is currently working on an AI chatbot that can engage in sexually explicit conversations with users, Platformer has learned. According to employees with knowledge of the project, the bot may train in part on private chats with other human users, pending their consent.

I remember the very early days of Grindr. I had one of the only smartphones in my part of the state, and the nearest fellow user was nearly 250 miles away. Chatting with other gay men was fun and refreshing.

Much has changed in the intervening 15 years. Dating (or hookup) apps have become vast wastelands of algorithmic sameness. People on these apps look, act, talk, and behave in eerily similar ways, not unlike how every young person now dresses like an "influencer." (I refuse to use that word without quotation marks.)

These apps gave us corrosion sold as connection. I'm reminded of David Foster Wallace's thoughts on entertainment, about always wondering what's on the other channel, wondering if there's something better to be watching. Shopping around (because that's precisely what these apps are: shopping) is so damn easy.

Contentment is hard when you think there's always something better just around the corner.

AI Digest

Visual explainers on AI progress and its risks.

Report: Israel used AI tool called Lavender to choose targets in Gaza

The system had a 90 percent accuracy rate, sources said, meaning that about 10 percent of the people identified as Hamas operatives weren’t members of Hamas’ military wing at all. Some of the people Lavender flagged as targets just happened to have names or nicknames identical to those of known Hamas operatives; others were Hamas operatives’ relatives or people who used phones that had once belonged to a Hamas militant. “Mistakes were treated statistically,” a source who used Lavender told +972. “Because of the scope and magnitude, the protocol was that even if you don’t know for sure that the machine is right, you know statistically that it’s fine. So you go for it.”

Emphasis mine.

Welcome to the future.

The jobs being replaced by AI - an analysis of 5M freelancing jobs

The 3 categories with the largest declines were writing, translation and customer service jobs. The # of writing jobs declined 33%, translation jobs declined 19%, and customer service jobs declined 16%.

Too bad, too, because whoever wrote this article could have used an editor.

This article tracks with my experience in the field. I’m a freelance editor — print, audio, some video. My work has never felt so fraught, as I’ve never felt so undervalued. My work can be done by a computer!

I suddenly wonder what so many people have felt over the last thirty years since, say, NAFTA. To have your job swept out from under you and automated or sent abroad to be done by people for lower pay… I was all of eight when NAFTA went into effect, and I’ve never known what America was like beforehand. Yet I see the husks of mills and factories everywhere I go. (In fact, I gravitate to them, a moth to a flame.) I’ve not really felt what it must’ve been like to live through that transition.

Well, now I’m feeling it. It sucks. The insecurity is profound.

When I tell people of my predicament, there’s little sympathy from my fellow millennials, many of whom have never had the freedom that comes from work-from-your-computer self-employment. There’s a strong sense of something bordering on schadenfreude, that my luck finally ran out.

And I fear they’re right. I’m almost 40. I haven’t had a boss in fifteen years. I set my own schedule. My work has paid well, sure, and I’m fortunate to have assets that, if it becomes necessary, I can sell to survive. But what skills do I have? Put another way, what skills do I have that won’t be automated away by AI in the coming years? Most of what I know how to do I’ve done via a computer, and any work done on a computer is liable to be AI’d away.

Thankfully (or so I’m telling myself), this comes at a time when I’ve never been so dissatisfied with my work. People hardly read, and I no longer feel that people care to learn to write. Nor am I so sure that good journalism matters in the era of find-whatever-facts-you-want social media. I once was so certain that my work in journalism, however limited in scope, was good and just and righteous. That certainty is now gone, and I’m left adrift.

Not only have I lost my faith in what once felt like a calling, I’ve not yet felt another. It’s a dark, uncertain space.

The Expanding Dark Forest and Generative AI

Air Canada must honor refund policy invented by airline’s chatbot:

On the day Jake Moffatt's grandmother died, Moffatt immediately visited Air Canada's website to book a flight from Vancouver to Toronto. Unsure of how Air Canada's bereavement rates worked, Moffatt asked Air Canada's chatbot to explain.

The chatbot provided inaccurate information, encouraging Moffatt to book a flight immediately and then request a refund within 90 days. In reality, Air Canada's policy explicitly stated that the airline will not provide refunds for bereavement travel after the flight is booked. Moffatt dutifully attempted to follow the chatbot's advice and request a refund but was shocked that the request was rejected.

According to Air Canada, Moffatt never should have trusted the chatbot and the airline should not be liable for the chatbot's misleading information because Air Canada essentially argued that "the chatbot is a separate legal entity that is responsible for its own actions," a court order said.

Here’s OpenAI’s big plan to combat election misinformation

Yesterday TikTok presented me with what appeared to be a deepfake of Timothée Chalamet sitting in Leonardo DiCaprio’s lap and yes, I did immediately think “if this stupid video is that good imagine how bad the election misinformation will be.” OpenAI has, by necessity, been thinking about the same thing and today updated its policies to begin to address the issue.

Ah, the internet.

Adobe’s latest Premiere Pro update automatically cleans up trashy audio

These updates aren’t intended to automate audio editing entirely, but to optimize the existing process so that editors have more time to work on other projects. “As Premiere Pro becomes the first choice for more and more professional editors, we’re seeing editors being asked to do a lot more than just cut picture. At some level, most editors have to do some amount of color work, of audio work, even titling and basic effects,” said Paul Saccone, senior director for Adobe Pro Video, to The Verge. 

“Sure, there are still specialists you can hand off to depending on the project size, but the more we can enable customers to make this sort of work easier and more intuitive inside Premiere Pro, the more successful they’re going to be in their other creative endeavors.”

Oof. This one’s going to hurt. Most of my audio clients prefer Premiere (I’m a Logic Pro guy) and Adobe is using AI to automate away many of the tasks that take up the bulk of my time.

E.U. Agrees on AI Act, Landmark Regulation for Artificial Intelligence

European Union policymakers agreed on Friday to a sweeping new law to regulate artificial intelligence, one of the world’s first comprehensive attempts to limit the use of a rapidly evolving technology that has wide-ranging societal and economic implications.

The law, called the A.I. Act, sets a new global benchmark for countries seeking to harness the potential benefits of the technology, while trying to protect against its possible risks, like automating jobs, spreading misinformation online and endangering national security. The law still needs to go through a few final steps for approval, but the political agreement means its key outlines have been set.

European policymakers focused on A.I.’s riskiest uses by companies and governments, including those for law enforcement and the operation of crucial services like water and energy. Makers of the largest general-purpose A.I. systems, like those powering the ChatGPT chatbot, would face new transparency requirements. Chatbots and software that creates manipulated images such as “deepfakes” would have to make clear that what people were seeing was generated by A.I., according to E.U. officials and earlier drafts of the law.

Very curious to see how this holds up.

Notable that any and all meaningful regulation of the tech industry is coming from Europe.

How Elon Musk and Larry Page’s AI Debate Led to OpenAI and an Industry Boom

At the heart of this competition is a brain-stretching paradox. The people who say they are most worried about A.I. are among the most determined to create it and enjoy its riches. They have justified their ambition with their strong belief that they alone can keep A.I. from endangering Earth.

I do not want to become one with a computer.

Nor do I want to live without them.

Yet as I’ve watched the wave of social media crash over the culture in the last twenty years, I know I’m powerless to stop what’s coming. Our neurology will dictate what’s next, and just as it did with social media, most people will be swept away.

Your attention is everything — it’s all you have.

Remind yourself of this every day.

LLM Visualization

Visualize how ChatGPT and other large language models (LLMs) work.

Complicated, perhaps, but also astonishingly simple and, in hindsight, obvious.
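If you want a sense of just how simple: the attention operation at the heart of the visualization, the workhorse of every transformer layer, fits in about a dozen lines of Python. A toy sketch with made-up sizes and random numbers, not production code:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # how strongly each token attends to every other token
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V             # each output is a weighted mix of value vectors

n, d = 4, 8                        # 4 tokens, 8 dimensions apiece (arbitrary)
Q = np.random.randn(n, d)
K = np.random.randn(n, d)
V = np.random.randn(n, d)
print(attention(Q, K, V).shape)    # (4, 8): one new vector per token
```

Stack a few dozen of those layers, with ordinary matrix multiplications in between, and you have the rough shape of the machine everyone is arguing about.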

Where I agree and disagree with Eliezer

The broader intellectual world seems to wildly overestimate how long it will take AI systems to go from “large impact on the world” to “unrecognizably transformed world.” This is more likely to be years than decades, and there’s a real chance that it’s months. This makes alignment harder and doesn’t seem like something we are collectively prepared for.

The Unsettling Lesson of the OpenAI Mess

I don’t know whether the board was right to fire Altman. It certainly has not made a public case that would justify the decision. But the nonprofit board was at the center of OpenAI’s structure for a reason. It was supposed to be able to push the off button. But there is no off button. The for-profit proved it can just reconstitute itself elsewhere. And don’t forget: There’s still Google’s A.I. division and Meta’s A.I. division and Anthropic and Inflection and many others who’ve built large language models similar to GPT-4 and are yoking them to business models similar to OpenAI’s. Capitalism is itself a kind of artificial intelligence, and it’s far further along than anything the computer scientists have yet coded. In that sense, it copied OpenAI’s code long ago.

…if the capabilities of these systems continue to rise exponentially, as many inside the industry believe they will, then nothing I’ve seen in recent weeks makes me think we’ll be able to shut the systems down if they begin to slip out of our control. There is no off switch.

Exclusive: OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say

Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.

Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.
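For a concrete sense of what "statistically predicting the next word" means, here's a minimal Python sketch. The two-word contexts and probabilities are invented for illustration; a real model derives its distributions from billions of learned parameters rather than a lookup table:

```python
import random

# Invented toy distribution: given the last two words, the probability
# of each candidate next word. A real LLM computes this on the fly.
next_word_probs = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"France": 0.6, "Texas": 0.3, "industry": 0.1},
}

def next_word(context):
    """Sample the next word from the distribution for this context."""
    dist = next_word_probs[context]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

# Sampling, not lookup: two runs of the same prompt can diverge.
print(next_word(("capital", "of")))
print(next_word(("capital", "of")))
```

Because the output is sampled rather than retrieved, the same question can yield different answers, which is why a domain with exactly one right answer, like arithmetic, is such a telling benchmark.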

I really, really, really hope my fears about AI are unfounded.

But we will build it. Humans never don’t build something because it might be dangerous. Nuclear weapons, gain-of-function viral research… AI isn’t any different.

But how can we stop it from happening? We can’t prohibit everyone everywhere from building it. It’s inevitable.

I’m a doomer. I’ve long believed that humans will fuck up what we already have because we can’t learn to be content with it. We will do anything other than the hard work of learning to be content with life, to accept that misery and death are parts of it.

That’s all this is, right? Our abiding fear of death being made manifest?

Ironic, then, if it’s our inability to reconcile with death that causes our extinction.

What we’ve learned about the robot apocalypse from the OpenAI debacle

The argument is not that AI will become conscious or that it will decide it hates humanity. Instead, it is that AI will become extraordinarily competent, but that when you give it a task, it will fulfill exactly that task. Just as when we tell schools that they will be judged on the number of children who get a certain grade and teachers start teaching to the test, the AI will optimize the metric we tell it to optimize. If we are dealing with something vastly more powerful than human minds, the argument goes, that could have very bad consequences.
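The teaching-to-the-test analogy can be made painfully literal. In this toy example (every name and number invented), an optimizer that sees only the metric we specified dutifully picks the degenerate strategy we never intended:

```python
# Each strategy has a visible proxy metric (test scores) and a hidden
# quantity we actually care about (learning). The numbers are made up.
strategies = {
    "teach a broad curriculum": {"test_score": 70, "actual_learning": 90},
    "teach to the test":        {"test_score": 95, "actual_learning": 40},
    "hand out the answer key":  {"test_score": 100, "actual_learning": 5},
}

# The optimizer sees only the metric we told it to maximize...
best = max(strategies, key=lambda s: strategies[s]["test_score"])

print(best)                                 # "hand out the answer key"
print(strategies[best]["actual_learning"])  # 5: what we actually wanted, collapsed
```

Swap in a system vastly more competent than any teacher, and the gap between the metric and what we actually wanted is the whole problem.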

Generative AI like Midjourney creates images full of stereotypes

A new Rest of World analysis shows that generative AI systems have tendencies toward bias, stereotypes, and reductionism when it comes to national identities, too. 

Of course! Computers are all about broad data sets, not specific outliers.

This isn’t just AI, either. It’s in the algorithms behind Facebook and TikTok and YouTube, etc. We humans create these algorithms in our own image. Why do most YouTube “celebrities” look so similar? Why are so many female TikTok “stars” facsimiles of the Kardashians, themselves facsimiles of a standard of beauty now twenty years old?

These algorithms are built on millions of clicks, taps, scrolls, and hours watched. They’re extremely efficient at doing what old-school media has always done: flatten culture. After all, who were John Wayne and Frank Sinatra if not the embodiment — and perpetuation — of stereotypes?

What’s unnerving about social media and AI is that this flattening happens at terrific speed, which wasn’t possible in our analog culture.

Humans are not built for speed. We might be addicted to it, but our brains didn’t evolve to handle it.

The future looks terrifically unsettling.

At UK Summit, Global Leaders Warn AI Could Cause ‘Catastrophic’ Harm

On Wednesday morning, his government released a document called “The Bletchley Declaration,” signed by representatives from the 28 countries attending the event, including the U.S. and China, which warned of the dangers posed by the most advanced “frontier” A.I. systems. “There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these A.I. models,” the declaration said.

“Many risks arising from A.I. are inherently international in nature, and so are best addressed through international cooperation. We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible A.I.”

The document fell short, however, of setting specific policy goals. A second meeting is scheduled to be held in six months in South Korea and a third in France in a year.

Governments have scrambled to address the risks posed by the fast-evolving technology since last year’s release of ChatGPT, a humanlike chatbot that demonstrated how the latest models are advancing in powerful and unpredictable ways.

How Israeli Civilians Are Using A.I. to Help Identify Victims

Incredible.

I was tempted to use the cliché “AI is an incredible tool,” but then I remembered that while yes, perhaps in its current iteration AI is a tool, it’s also something else.

Intelligence is such a nebulous thing — I rarely hear it called a “tool.” Aspects of intelligence, like critical thinking? Sure, that can be a tool. A piece of the puzzle. A component of the whole.

But AI, or perhaps AGI (artificial general intelligence, loosely defined as when machines are able to think on their own without human intervention), is meant to be a component and a whole. A tool we use…but also a tool that will one day think critically for itself. Without humans.

Remember, while the AI of today is easily explainable with metaphor, the AI of tomorrow is not.