Anyone who buys a Pixel 9 — the latest model of Google’s flagship phone, available starting this week — will have access to the easiest, breeziest user interface for top-tier lies, built right into their mobile device. This is all but certain to become the norm, with similar features already available on competing devices and rolling out on others in the near future. When a smartphone “just works,” it’s usually a good thing; here, it’s the entire problem.
…the default assumption about a photo is about to become that it’s faked, because creating realistic and believable fake photos is now trivial to do. We are not prepared for what happens after.
No one on Earth today has ever lived in a world where photographs were not the linchpin of social consensus — for as long as any of us has been here, photographs proved something happened. Consider all the ways in which the assumed veracity of a photograph has, previously, validated the truth of your experiences. The preexisting ding in the fender of your rental car. The leak in your ceiling. The arrival of a package. An actual, non-AI-generated cockroach in your takeout. When wildfires encroach upon your residential neighborhood, how do you communicate to friends and acquaintances the thickness of the smoke outside?
My AI anxiety is high this week, as I’ve been following the release of the Pixel 9. Embarrassingly, I have extended family (whom I rarely see) who work not just for Google, but specifically in Pixel marketing.
What the hell are they thinking?
Say what you will about Apple Intelligence, the new set of AI features due to be released on iPhones, iPads and Macs in the fall, but it doesn’t do anything like this by design. In fairness, I’m unsure Apple has the compute power (they want to do much of their AI on-device, whereas Google does theirs in the cloud) to do this kind of thing, but I’m almost certain they wouldn’t want to if they could.
Google is being extraordinarily reckless here. The lack of guardrails around this technology speaks volumes, and its terms of service are typical corporate legalese bullshit that avoids any and all responsibility for how this feature will be used.
Famously, Google’s corporate motto was once “don’t be evil,” but somehow that’s become “don’t blame us.”
The agreement tells participants they’re “expected to feature the Google Pixel device in place of any competitor mobile devices.” It also notes that “if it appears other brands are being preferred over the Pixel, we will need to cease the relationship between the brand and the creator.” The link to the form appears to have since been shut down.
“Google Pixel: Please don’t put us next to an iPhone.”
The podcast itself is an extraordinary performance. At one point, Andreessen concedes that their major problems with President Joe Biden — the ones that led them to support Trump — are what most voters would consider “subsidiary” issues. “It doesn’t have anything to do with the big issues that people care about,” he says. If we take this podcast at face value, we are to believe that these subsidiary issues are the only reason they’ve chosen to endorse and donate to Trump.
These subsidiary issues take precedence for Andreessen and Horowitz over, say, mass deportations and Project 2025’s attempt to end no-fault divorce. We are looking at a simple trade against personal liberty — abortion, the rights of gay and trans people, and possibly democracy itself — in favor of crypto, AI, and a tax policy they like better.
Hackers broke into a cloud platform used by AT&T and downloaded call and text records of “nearly all” of AT&T’s cellular customers over a several-month period, AT&T announced early on Friday.
The worst telecom hack in history. (That we know of.)
“Frankly, there are a lot of people on TikTok that love it,” Mr. Trump said. “There are a lot of young kids on TikTok who will go crazy without it.”
“There’s a lot of good and there’s a lot of bad with TikTok,” he added, “but the thing I don’t like is that without TikTok, you can make Facebook bigger, and I consider Facebook to be an enemy of the people, along with a lot of the media.”
Mr. Trump tried to ban TikTok while in office, pushing its Chinese parent company, ByteDance, to sell the platform to a new owner or face being blocked from American app stores. A House committee advanced legislation last week that would similarly force TikTok to cut ties with ByteDance.
This says everything you need to know about Trump. He’ll say anything that serves him in the moment. He has no impulse control, he has no ability to think strategically, he has no long-term plan.
Banning TikTok (i.e. forcing ByteDance, a Chinese company, to sell off TikTok) is the right thing to do. It’s a parasite destroying the ability of people to think critically and deeply. It has decimated the attention spans of our young people, who don’t know a world without social media. TikTok is a cancer.
And so is Facebook. Merely forcing the sale of TikTok to an American company won’t fix the problem. Letting our corporations mine the attention of our young people is better than letting China do it, but not by much.
Start treating all social media like what it is: addictive advertising.
European Union policymakers agreed on Friday to a sweeping new law to regulate artificial intelligence, one of the world’s first comprehensive attempts to limit the use of a rapidly evolving technology that has wide-ranging societal and economic implications.
The law, called the A.I. Act, sets a new global benchmark for countries seeking to harness the potential benefits of the technology, while trying to protect against its possible risks, like automating jobs, spreading misinformation online and endangering national security. The law still needs to go through a few final steps for approval, but the political agreement means its key outlines have been set.
European policymakers focused on A.I.’s riskiest uses by companies and governments, including those for law enforcement and the operation of crucial services like water and energy. Makers of the largest general-purpose A.I. systems, like those powering the ChatGPT chatbot, would face new transparency requirements. Chatbots and software that creates manipulated images such as “deepfakes” would have to make clear that what people were seeing was generated by A.I., according to E.U. officials and earlier drafts of the law.
Very curious to see how this holds up.
Notable that any and all meaningful regulation of the tech industry is coming from Europe.
In general, blackmail is a crime where the criminal demands payment from the victim. It does not involve the criminal refusing to give money to the victim for a service they don’t want. Blackmailing somebody “with money,” as Musk put it, is not a thing.
I don’t know whether the board was right to fire Altman. It certainly has not made a public case that would justify the decision. But the nonprofit board was at the center of OpenAI’s structure for a reason. It was supposed to be able to push the off button. But there is no off button. The for-profit proved it can just reconstitute itself elsewhere. And don’t forget: There’s still Google’s A.I. division and Meta’s A.I. division and Anthropic and Inflection and many others who’ve built large language models similar to GPT-4 and are yoking them to business models similar to OpenAI’s. Capitalism is itself a kind of artificial intelligence, and it’s far further along than anything the computer scientists have yet coded. In that sense, it copied OpenAI’s code long ago.
…
…if the capabilities of these systems continue to rise exponentially, as many inside the industry believe they will, then nothing I’ve seen in recent weeks makes me think we’ll be able to shut the systems down if they begin to slip out of our control. There is no off switch.
Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.
Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.
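To make that contrast concrete, here’s a toy sketch of my own (not from the article; the probability table is made up, where a real model learns one from mountains of data). A calculator computes the single right answer, while a generative model only samples its statistically favorite one:

```python
import random

# Toy stand-in for a language model: a lookup table of next-token
# probabilities. The numbers are invented, purely to illustrate
# "statistically predicting the next word."
next_token_probs = {
    ("2", "+", "2", "="): {"4": 0.90, "5": 0.06, "22": 0.04},
}

def sample_next(context, temperature=1.0):
    """Sample the next token, the way a generative model does."""
    probs = next_token_probs[context]
    # Higher temperature flattens the distribution, so answers to the
    # same question vary more from run to run.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights)[0]

print(2 + 2)  # a calculator: one right answer, every time

for _ in range(5):  # a sampler: merely a favorite answer
    print(sample_next(("2", "+", "2", "="), temperature=1.5))
```

Most runs print “4.” But arithmetic isn’t supposed to be true most of the time.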
I really, really, really hope my fears about AI are unfounded.
But we will build it. Humans never don’t build something because it might be dangerous. Nuclear weapons, gain-of-function viral research… AI isn’t any different.
But how can we stop it from happening? We can’t prohibit everyone everywhere from building it. It’s inevitable.
I’m a doomer. I’ve long believed that humans will fuck up what we already have because we can’t learn to be content with it. We will do anything other than the hard work of learning to be content with life, to accept that misery and death are parts of it.
That’s all this is, right? Our abiding fear of death being made manifest?
Ironic, then, if it’s our inability to reconcile with death that causes our extinction.
Christ, Apple. Have some faith that people use your phones because they’re the best on the market, not because they’re locked into iMessage. 🤦🏼♂️
I’m all for security — end-to-end encryption is table stakes at this point, and I won’t use anything without it to meaningfully communicate — but inhibiting innovation solely to protect a marketplace monopoly (in this case, the App Store)? C’mon.
Apple, you have the technical expertise to protect people’s data even if they sideload. It won’t be easy, I know, but come on.
Give people a choice.
On the other hand, my work is tech-adjacent and my passions are obviously tech-y. I work with a lot of people — smart, professional people — who don’t know shit about the phones in their pockets, not least of all how to safeguard them.
If smart people can’t be bothered to protect themselves, no doubt sideloading will invite bad actors (i.e. advertisers) to get people to download software that tracks the hell out of them. More than it already does.
So maybe Apple has a point.
But the least they could do is lower their 30% App Store commission. It’s difficult to take anyone arguing the moral high ground seriously when they’re making such an extreme profit from their position.
“What is happening at TikTok is it is creating the biggest antisemitic movement since the Nazis,” Mr. Cohen, who does not appear to have an official TikTok account, said early in the call. He criticized violent imagery and disinformation on the platform, telling Mr. Presser, “Shame on you,” and claiming that TikTok could “flip a switch” to fix antisemitism on its platform.
How is everyone liking their corporate rule?
TikTok has real power. Facebook (which owns Instagram) has real power. Google has real power.
This is not okay.
I hate these companies, all of which, at heart, are advertising companies. They run social media platforms so they can sell you ads. That’s how they make their money. That’s the whole point.
People want to blame their phones, as I am wont to do at times, but the smartphone is merely a tool. You don’t have to use it for social media.
Fuck.
What on earth are people thinking when they use social media?
Oh right, they’re not thinking — their neurology has been hijacked. They’re addicted.
Children. We let children use TikTok. We’ve let our children become addicts, just like us. How is this okay? Why are we not filled with rage each and every time we see a parent hand over their phone to their child?
Yes, I’m blaming addicts for their addiction, but we’ve let our children become addicts, too.
Instagram has become a particularly active arena for pro-Hamas propaganda. At last count, the hashtag #freepalestine had appeared on over 5.8 million posts, exceeding #standwithisrael’s 220,000 by a factor of more than 20. Similarly, #gazaunderattack has amassed 1.8 million instances, an order of magnitude more than #israelunderattack’s 134,000.
I used to think numbers like this were bullshit. “Likes” and “views” and “engagements” have never felt like salient measurements of, well, anything but the ego of some large social media companies.
Of course I was wrong.
I love when people tell me that advertising “doesn’t work” on them. As if their mind is so strong that it can’t be swayed one way or the other.
In response to their claims, I yell, “_HOT DOG!_”
“What are you thinking about now?” I then ask.
Surprise: they’re thinking about hot dogs.
Advertising really is that simple. Our neurology isn’t that complicated. We like to think we’re exceptions to rules, but rules are rules for a reason.
A few friends that lived through the 1960s and 1970s like to say “advertising is propaganda.” I’m inclined to agree. Of course it is.
Yet if all advertising is mere suggestion, then it makes absolute sense that in capitalism, the money flows to the most persuasive, even if those of us being persuaded don’t fully understand how persuasion works.
A new Rest of World analysis shows that generative AI systems have tendencies toward bias, stereotypes, and reductionism when it comes to national identities, too.
Of course! Computers are all about broad data sets, not specific outliers.
This isn’t just AI, either. It’s in the algorithms behind Facebook and TikTok and YouTube, etc. We humans create these algorithms in our own image. Why do most YouTube “celebrities” look so similar? Why are so many female TikTok “stars” facsimiles of the Kardashians, themselves facsimiles of a standard of beauty now twenty years old?
These algorithms are built on millions of clicks, taps, scrolls, and hours watched. They’re extremely efficient at doing what old-school media has always done: flatten culture. After all, who were John Wayne and Frank Sinatra if not the embodiment — and perpetuation — of stereotypes?
What’s unnerving about social media and AI is that this flattening happens at terrific speed, which wasn’t possible in our analog culture.
Humans are not built for speed. We might be addicted to it, but our brains didn’t evolve to handle it.
TikTok has denied the claims and said in a blog post they were based on “unsound analysis.” The data reviewed by Semafor suggests that the imbalance on the platform is largely outside the U.S. — and may skew heavily toward the Palestinian side because of the app’s popularity in Muslim countries and the fact that it is blocked in India.
The central promise of the internet was, after all, to be a great equalizer. I’m not saying the algorithms of TikTok (a Chinese company) are “fair” (however you define that), but it shouldn’t come as a surprise that, with a global population that’s largely online, America and our interests aren’t always going to be the most popular.
Lewis expects that, unless regulators cap the number of satellites in orbit, collisions will soon become a regular part of the space business. Such collisions would lead to rapid growth in the amount of space debris fragments that are completely out of control, which would lead to more and more collisions. The end point of this process might be the Kessler Syndrome, a scenario predicted in the late 1970s by former NASA physicist Donald Kessler. Depicted in the 2013 Oscar-winning movie “Gravity,” the Kessler Syndrome is an unstoppable cascade of collisions that might render parts of the orbital environment completely unusable.
Some skeptics argue that A.I. technology is still too immature to pose an existential threat. When it comes to today’s A.I. systems, they worry more about short-term problems, such as biased and incorrect responses, than longer-term dangers.
But others have argued that A.I. is improving so rapidly that it has already surpassed human-level performance in some areas, and it will soon surpass it in others. They say the technology has shown signs of advanced capabilities and understanding, giving rise to fears that “artificial general intelligence,” or A.G.I., a type of artificial intelligence that can match or exceed human-level performance at a wide variety of tasks, may not be far off.
This just isn’t a path humanity needs to go down. What is it with us humans? Why can’t we stop? What motivates us to do this shit?
Maybe you think our self-destruction isn’t inevitable, but deep in my gut, that feels naive and ignorant of human nature.
Is there a word for the feeling of being deeply ashamed of my species, yet complicit in some of our worst behaviors? That shame, that fear of what feels inevitable, undergirds my entire life and has since I was an adolescent. I describe it as the awareness we’re all tethered together and collectively running toward a cliff, yet most everyone seems not to see the edge. A few of us are trying to slow down — we see what’s coming — but we can’t stop the lot of us.
I want us to slow down. I want to not wake up each morning with this itch behind my eyes, this breathlessness in my gut, this primal suspicion that we’re all fucking ourselves.
Again and again, the phrase that comes to mind is “it doesn’t have to be this way.” And yet it feels inevitable.
The work is defined by its unsteady, on-demand nature, with people employed by written contracts either directly by a company or through a third-party vendor that specializes in temp work or outsourcing. Benefits such as health insurance are rare or nonexistent — which translates to lower costs for tech companies — and the work is usually anonymous, with all the credit going to tech startup executives and researchers.
Pichai and other executives have increasingly begun talking about the prospect of AI tech matching or exceeding human intelligence, a concept known as artificial general intelligence, or AGI. The once-fringe term, associated with the idea that AI poses an existential risk to humanity, is central to OpenAI’s mission and has been embraced by DeepMind, but was avoided by Google’s top brass.
The spread of misinformation is only Hinton’s immediate concern. On a longer timeline he’s worried that AI will eliminate rote jobs, and possibly humanity itself as AI begins to write and run its own code.