Dispatches from the Empire


The Information: OpenAI shows ‘Strawberry’ to feds, races to launch it

An excerpt from an excerpt:

OpenAI is also using the bigger version of Strawberry to generate data for training Orion, said a person with knowledge of the situation. That kind of AI-generated data is known as “synthetic.” It means that Strawberry could help OpenAI overcome limitations on obtaining enough high-quality data to train new models from real-world data such as text or images pulled from the internet.

Using AI to create data on which to train ever-larger models of AI.

Huh.

Well, now that I know this, yeah, of course this is the next step. The whole of the internet is not nearly large enough (nor does most of it qualify as “high-quality data”) to train the ever-larger models.
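To make that loop concrete, here is a minimal sketch in Python. Every name in it is hypothetical (the toy teacher, the quality filter, the training stub) and nothing reflects OpenAI’s actual pipeline, which isn’t public. The point is only the shape of the thing: a bigger model generates examples, a filter keeps the plausible ones, and the next model trains on whatever survives.

```python
# Hypothetical sketch of a synthetic-data pipeline. None of this is
# OpenAI's code; the names and logic are stand-ins for the idea.

def teacher_generate(prompt: str) -> str:
    """Stand-in for a large 'teacher' model (a Strawberry-like system)."""
    return f"a worked-out answer to: {prompt}"

def passes_quality_bar(prompt: str, answer: str) -> bool:
    """Stand-in for filtering; real filters would be far more involved."""
    return len(answer) > len(prompt)

def train_student(corpus: list[tuple[str, str]]) -> None:
    """Stand-in for training the next model (an Orion-like system)."""
    print(f"training on {len(corpus)} synthetic examples")

# The loop: generate, filter, train.
prompts = [f"problem #{i}" for i in range(1_000)]
synthetic = [(p, teacher_generate(p)) for p in prompts]
corpus = [(p, a) for (p, a) in synthetic if passes_quality_bar(p, a)]
train_student(corpus)
```

The filter is where the hard questions live: whether synthetic output actually clears the “high quality” bar is exactly the open question.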

As the summer stretches on, I’m more in line with Gary Marcus than I’ve ever been. The anxiety I have over artificial general intelligence (AGI) — defined by ChatGPT as “a type of artificial intelligence that has the ability to understand, learn, and apply knowledge across a wide range of tasks at a level comparable to or exceeding that of a human being” — is waning, at least temporarily. I don’t see a path for LLMs in their current iteration to become AGI, at least not in and of themselves.

But will LLMs be enough to fool many, many people into thinking that they are sentient? Of course. They’ve long passed that threshold. And that’s remarkably dangerous in a populace with little to no understanding of how computers work.

There’s a short-sightedness among AI optimists that willfully ignores just how incomprehensible this stuff is to non-tech people — the people I engage with every day, often while giving some form of tech support. Almost no one has a clue what AI or LLMs are, let alone how they work.

Sure, those optimists would say, but most people have no clue how a tractor works, or an alarm system or a steel mill or an elevator, either.

And while that’s true, none of those things are designed to present as human, and therein lies the danger. When computers can present as human beings convincingly enough to fool other human beings — and other computers! — we are in trouble.

We will adapt. Humans adapt. But that doesn’t excuse recklessly incorporating AI into our lives in ever-more complex ways without being sure the general population understands how it works.

What (still) keeps me up at night is not the emergence of AGI, it is the disruption that regular ol’ AI will bring to our lives. Just look at the release of the ‘Reimagine’ feature on the new Google Pixel 9 phones. John is correct to point out that “this technology becoming ubiquitous feels inevitable,” but does it have to be? I’m not saying that it isn’t inevitable; I’m asking why there isn’t more of a conversation around these things.

Here I am, bemoaning the loss of the old ways in the face of an inevitable future. Sure. But at any juncture, humans have the ability to question the ethical implications of new technology and to not be held prisoner by its “inevitability.”

Right?

To use an example the AI industry itself so often uses: nuclear weapons! We seem to have reached consensus as a species that using them is too dangerous…after we tried them out…and used them on other humans twice…and have kept them around as a deterrent…for the last 80 years. (That’s a lot of caveats!) And nukes, unlike AI, were not given to the masses.

While I’m privy to simmering conversations about how AI will change our lives, the nuances of the subject are nowhere near mainstream. And until they are, I’m unsure how ethical it is to deploy this technology.

And now we’ve arrived at the very obvious: of course we’re going to release this technology upon the masses, consequences be damned. This is the very story of humanity! This is how progress and innovation happen.

Up until now, most people would argue this ‘progress’ has been a net positive. I have my doubts.

But my doubts aside, let’s presume that technological progress has been a net good for us all. Who’s to say there’s not some tipping point, some innovation that frays the fabric of society, that hot-wires our neurology so thoroughly that we can’t help but trigger a collapse? Things can be good…until they aren’t. And to assume progress will always tend toward benefit is just as delusional, just as grounded in confirmation bias, as belief in infinite economic growth.

When we won’t be able to agree on the veracity of a photo, of anything printed, of what we saw in a video, what then? When a shared truth can no longer be shared, then what? When we cannot agree on anything, how do we progress as a species?

I’m either a cynic or an optimist — I do not believe any future is inevitable.

Maybe that just makes me delusional.