Police officers are scanning for Teslas that may have ambiently recorded nearby crimes on their external cameras — and even going so far as to attempt to tow the vehicles away to inspect the footage.
President of the Richmond Police Officers Association Ben Therriault told the Chronicle that officers usually attempt to ask for the owner's consent first, but sometimes resort to towing the vehicles anyway.
Hopper was a very popular speaker not just because of her pioneering contributions to computing, but because she was a natural raconteur, telling entertaining and often irreverent war stories from her early days. And she spoke plainly, as evidenced in the 1982 lecture when she drew an analogy between using pairs of oxen to move large logs in the days before large tractors, and pairing computers to get more computer power rather than just getting a bigger computer—“which of course is what common sense would have told us to begin with.” For those who love the history of computers and computation, the full lecture is very much worth the time.
I’ve long heard about Rear Admiral Grace Hopper, often from unexpected people in disparate places.
Before he died, my neighbor, once an employee of American Satellite and IBM, mentioned her name more than once.
The agreement tells participants they’re “expected to feature the Google Pixel device in place of any competitor mobile devices.” It also notes that “if it appears other brands are being preferred over the Pixel, we will need to cease the relationship between the brand and the creator.” The link to the form appears to have since been shut down.
“Google Pixel: Please don’t put us next to an iPhone.”
…the court found that even though investigators seek warrants for geofence location data, these searches are inherently unconstitutional. As the court noted, geofence warrants require a provider, almost always Google, to search “the entirety” of its reserve of location data “while law enforcement officials have no idea who they are looking for, or whether the search will even turn up a result.” Therefore, “the quintessential problem with these warrants is that they never include a specific user to be identified, only a temporal and geographic location where any given user may turn up post-search. That is constitutionally insufficient.”
Researchers from the University of Maryland say they relied on publicly available data from Apple to track the location of billions of devices globally — including non-Apple devices like Starlink systems — and found they could use this data to monitor the destruction of Gaza, as well as the movements and in many cases identities of Russian and Ukrainian troops.
At issue is the way that Apple collects and publicly shares information about the precise location of all Wi-Fi access points seen by its devices. Apple collects this location data to give Apple devices a crowdsourced, low-power alternative to constantly requesting global positioning system (GPS) coordinates.
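Out of curiosity, I sketched what a crowdsourced Wi-Fi positioning lookup might look like. To be clear: the database, BSSIDs, and weighting scheme below are all invented for illustration, and Apple's real service is far more sophisticated (its protocol isn't public in this form). But the core idea, that knowing where access points are lets a device locate itself without GPS, fits in a few lines:

```python
# Toy sketch of crowdsourced Wi-Fi positioning. The database, BSSIDs,
# and weighting scheme are invented for illustration; Apple's actual
# service and protocol differ.

# Hypothetical crowdsourced database: BSSID -> (lat, lon), built from
# devices that saw these access points while they still had a GPS fix.
AP_LOCATIONS = {
    "aa:bb:cc:00:00:01": (37.3349, -122.0090),
    "aa:bb:cc:00:00:02": (37.3352, -122.0083),
    "aa:bb:cc:00:00:03": (37.3345, -122.0101),
}

def estimate_position(scan: dict[str, int]) -> tuple[float, float]:
    """Estimate a position from a Wi-Fi scan of BSSID -> RSSI (dBm).

    Stronger signal implies a nearer access point, so it gets more
    weight in the centroid. Crude, but that's the gist.
    """
    lat_acc = lon_acc = total = 0.0
    for bssid, rssi in scan.items():
        if bssid not in AP_LOCATIONS:
            continue  # access point not in the crowdsourced database
        weight = 1.0 / max(1, -rssi)  # -45 dBm outweighs -85 dBm
        lat, lon = AP_LOCATIONS[bssid]
        lat_acc += weight * lat
        lon_acc += weight * lon
        total += weight
    if total == 0:
        raise LookupError("no known access points in scan")
    return lat_acc / total, lon_acc / total

# A device that sees two known access points can place itself without
# ever touching GPS.
print(estimate_position({
    "aa:bb:cc:00:00:01": -45,  # strong signal, probably close
    "aa:bb:cc:00:00:03": -85,  # weak signal, farther away
}))
```

And that's the privacy rub the researchers exploited: if the access point database is queryable by anyone, it locates the access points (and whoever owns them) just as readily as it locates the phone.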
Grindr plans to boost revenue by monetizing the app more aggressively, putting previously free features behind a paywall, and rolling out new in-app purchases, employees say. The company is currently working on an AI chatbot that can engage in sexually explicit conversations with users, Platformer has learned. According to employees with knowledge of the project, the bot may train in part on private chats with other human users, pending their consent.
I remember the very early days of Grindr. I had one of the only smartphones in my part of the state, and the nearest fellow user was nearly 250 miles away. Chatting with other gay men was fun and refreshing.
Much has changed in the intervening 15 years. Dating (or hookup) apps have become vast wastelands of algorithmic sameness. People on these apps look, act, talk, and behave in eerily similar ways, not unlike how every young person now dresses like an "influencer." (I refuse to use that word without quotation marks.)
These apps gave us corrosion sold as connection. I'm reminded of David Foster Wallace's thoughts on entertainment, about always wondering what's on the other channel, wondering if there's something better to be watching. Shopping around (because that's precisely what these apps are: shopping) is so damn easy.
Contentment is hard when you think there's always something better just around the corner.
The 3 categories with the largest declines were writing, translation and customer service jobs. The # of writing jobs declined 33%, translation jobs declined 19%, and customer service jobs declined 16%.
Too bad, too, because whoever wrote this article could have used an editor.
This article tracks with my experience in the field. I’m a freelance editor — print, audio, some video. My work has never felt so fraught; I’ve never felt so undervalued. My work can be done by a computer!
I suddenly wonder what so many people have felt in the thirty years since, say, NAFTA. To have your job swept out from under you and automated or sent abroad to be done for lower pay… I was all of eight when NAFTA went into effect, and I’ve never known what America was like beforehand. Yet I see the husks of mills and factories everywhere I go. (In fact, I gravitate to them, a moth to a flame.) I’ve not really felt what it must’ve been like to live through that transition.
Well, now I’m feeling it. It sucks. The insecurity is profound.
When I tell people of my predicament, there’s little sympathy from my fellow millennials, many of whom have never had the freedom that comes from work-from-your-computer self-employment. There’s a strong sense of something bordering on schadenfreude, that my luck finally ran out.
And I fear they’re right. I’m almost 40. I haven’t had a boss in fifteen years. I set my own schedule. My work has paid well, sure, and I’m fortunate to have assets that, if it becomes necessary, I can sell to survive. But what skills do I have? Put another way, what skills do I have that won’t be automated away by AI in the coming years? Most of what I know how to do I’ve done via a computer, and any work done on a computer is liable to be AI’d away.
Thankfully (or so I’m telling myself), this comes at a time when I’ve never been so dissatisfied with my work. People hardly read, and I no longer feel that people care to learn to write. Nor am I so sure that good journalism matters in the era of find-whatever-facts-you-want social media. I once was so certain that my work in journalism, however limited in scope, was good and just and righteous. That certainty is now gone, and I’m left adrift.
Not only have I lost my faith in what once felt like a calling, I’ve not yet felt another. It’s a dark, uncertain space.
The Watch Ultra is different. Its large screen, clear yet dense interface, and rugged yet refined physical design all suggest that it must be far more expensive than the rest of the Apple Watch lineup. Yet, at $799, it’s a mere seven percent costlier than the $749 stainless steel model.
The Ultra is second only to my phone as my favorite piece of Apple hardware. For my lifestyle, habits, location, and interests, it’s close to ideal (though I’ll never say no to more battery life), and the goodwill inspired by its utility is notable.
Pricing it just above the stainless steel regular watch was smart — Apple convinced me the Ultra was a bargain, effectively obfuscating their infamous profit margin.
The National Security Agency (NSA) has admitted to buying records from data brokers detailing which websites and apps Americans use, US Senator Ron Wyden (D-Ore.) revealed Thursday.
Yesterday TikTok presented me with what appeared to be a deepfake of Timothée Chalamet sitting in Leonardo DiCaprio’s lap and yes, I did immediately think “if this stupid video is that good imagine how bad the election misinformation will be.” OpenAI has, by necessity, been thinking about the same thing and today updated its policies to begin to address the issue.
Apple’s positive effect on my life should not be underestimated. My Mom once (lovingly, teasingly) said to me that my alternate career, had all this never happened, was “criminal genius.” Which might have been fun too, but possibly more stressful than I might have liked. At any rate, Apple has saved me from a life of crime, and I should love Apple for that.
But I need to remember, now and again, that Apple is a corporation, and corporations aren’t people, and they can’t love you back. You wouldn’t love GE or Exxon or Comcast — and you shouldn’t love Apple. It’s not an exception to the rule: there are no exceptions.
Emphasis mine.
I have long "loved" Apple. My first Mac came at a very lonely time in life (first year of college). Before then, computers were work, literally: my first job was at the local internet service provider in my small town. I had my own desktop at home where I'd play games, but thoughts about drivers and memory and storage and sound cards were never far from my mind.
With a Mac, that all changed. Apple created this functional computer in a beautifully designed enclosure. Mac OS X was whimsical and fun in a way Windows never was (and never has been). Most of all, Apple was attentive to detail. Sure, it couldn't do everything a Windows computer did, but what it could do, it did much more thoughtfully.
I've been in the Apple ecosystem for nearly two decades. More than half my life. I have few regrets about entrusting the company with my data. Hell, their devices enabled me to have a career while living out of my tent. Apple devices have given me an unprecedented amount of freedom, and for that, I'm grateful.
I am also an Apple shareholder. I became one shortly after the first iPhone was announced. That investment was, back in 2007, risky and ill-advised. But it has paid off, literally, and again, for that I'm grateful.
But Apple is still a corporation. Their computers are just things. iPhone, as central as it is to my life — to my ability to have the kind of life I live — is merely a thing. An incredibly powerful, almost god-like tool, but still just a thing.
People accuse me of loving Apple. At times, I'm ashamed to say I have. But they are a corporation. Their only loyalty is to profit, to the financial benefit of their shareholders. Corporations do not give a shit about anything other than profit.
Researchers found that, on average, Facebook received data from 2,230 different companies for each of the 709 volunteers. One extreme example showed that “nearly 48,000 different companies were found in the data of a single volunteer.” In total, Facebook data archives showed that 186,892 companies had provided data on all of the study’s participants.
Surveillance capitalism. This should horrify us all.
I struggle to tell people in my life the extent to which they are being tracked. They think Facebook is it, "and what could they know about me?"
People don't realize that thousands of companies feed data to bigger tech companies like Facebook. Property records. Purchase histories. Tax payments. Health records. Online browsing history. Everything. Facebook merely collates all that data.
That this doesn't bother the hell out of people always mystifies me. When did we give up on a reasonable expectation of privacy?
These updates aren’t intended to automate audio editing entirely, but to optimize the existing process so that editors have more time to work on other projects. “As Premiere Pro becomes the first choice for more and more professional editors, we’re seeing editors being asked to do a lot more than just cut picture. At some level, most editors have to do some amount of color work, of audio work, even titling and basic effects,” said Paul Saccone, senior director for Adobe Pro Video, to The Verge.
“Sure, there are still specialists you can hand off to depending on the project size, but the more we can enable customers to make this sort of work easier and more intuitive inside Premiere Pro, the more successful they’re going to be in their other creative endeavors.”
Oof. This one’s going to hurt. Most of my audio clients prefer Premiere (I’m a Logic Pro guy) and Adobe is using AI to automate away many of the tasks that take up the bulk of my time.
European Union policymakers agreed on Friday to a sweeping new law to regulate artificial intelligence, one of the world’s first comprehensive attempts to limit the use of a rapidly evolving technology that has wide-ranging societal and economic implications.
The law, called the A.I. Act, sets a new global benchmark for countries seeking to harness the potential benefits of the technology, while trying to protect against its possible risks, like automating jobs, spreading misinformation online and endangering national security. The law still needs to go through a few final steps for approval, but the political agreement means its key outlines have been set.
European policymakers focused on A.I.’s riskiest uses by companies and governments, including those for law enforcement and the operation of crucial services like water and energy. Makers of the largest general-purpose A.I. systems, like those powering the ChatGPT chatbot, would face new transparency requirements. Chatbots and software that creates manipulated images such as “deepfakes” would have to make clear that what people were seeing was generated by A.I., according to E.U. officials and earlier drafts of the law.
Very curious to see how this holds up.
Notable that any and all meaningful regulation of the tech industry is coming from Europe.
Getting your DNA or your loved ones’ DNA sequenced means you are potentially putting people who are related to those people at risk in ways that are easily predictable, but also in ways we cannot yet predict because these databases are still relatively new. I am writing this article right now because of the hack, but my stance on this issue has been the same for years, for reasons outside of the hack.
At the heart of this competition is a brain-stretching paradox. The people who say they are most worried about A.I. are among the most determined to create it and enjoy its riches. They have justified their ambition with their strong belief that they alone can keep A.I. from endangering Earth.
I do not want to become one with a computer.
Nor do I want to live without them.
Yet as I’ve watched the wave of social media crash over the culture in the last twenty years, I know I’m powerless to stop what’s coming. Our neurology will dictate what’s next, and just as it did with social media, most people will be swept away.
The idea of a skyhook has been under study for half a century now; it would take the form of a long and strong tether extending from a base station on Earth’s surface into space. The other end of the tether, a counterweight like Envisat, would remain in orbit around Earth. As the tether rotates, the counterweight generates centrifugal force, creating tension in the tether. Spacecraft and payloads can then be attached to the tether and released into space when they reach the desired velocity, essentially ‘hooking’ them into orbit. The counterweight’s substantial mass and its fixed position in space would act as the pivot point for the entire system, allowing the tether to maintain tension and transfer momentum. Depending on the tether’s length, materials and the specific rotational characteristics of the skyhook, the momentum it imparts to payloads could potentially extend their reach beyond Earth’s orbit to reach other celestial bodies. Further into the future, skyhooks could span across three celestial bodies – Earth, the Moon and Mars – forming a seamless interconnected network.
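The momentum exchange sounds exotic, but the arithmetic is simple. Here's a back-of-the-envelope sketch; every parameter is an assumption I picked for illustration, not a real design:

```python
import math

# Back-of-the-envelope skyhook numbers. All parameters below are
# illustrative assumptions, not a real design.

MU_EARTH = 3.986e14         # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6           # Earth's mean radius, m

center_altitude = 600e3     # altitude of the tether's center of mass, m (assumed)
tether_half_length = 200e3  # center-to-tip distance, m (assumed)
rotation_period = 3600.0    # seconds per tether rotation (assumed)

# Circular orbital speed of the tether's center of mass: v = sqrt(mu / r).
r = R_EARTH + center_altitude
v_center = math.sqrt(MU_EARTH / r)

# Tip speed relative to the center: v = omega * L.
omega = 2 * math.pi / rotation_period
v_tip = omega * tether_half_length

# A payload caught at the bottom of the swing only has to match
# v_center - v_tip; swung around and released at the top, it leaves
# at v_center + v_tip. The difference is momentum borrowed from the
# counterweight, which slows slightly with every throw.
print(f"center-of-mass speed: {v_center:6.0f} m/s")
print(f"tip speed:            {v_tip:6.0f} m/s")
print(f"catch speed (bottom): {v_center - v_tip:6.0f} m/s")
print(f"release speed (top):  {v_center + v_tip:6.0f} m/s")
```

With these made-up numbers the payload picks up roughly 700 m/s for free; the tether's orbit pays for it, which is why the counterweight has to be massive (or periodically re-boosted).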
I don’t know whether the board was right to fire Altman. It certainly has not made a public case that would justify the decision. But the nonprofit board was at the center of OpenAI’s structure for a reason. It was supposed to be able to push the off button. But there is no off button. The for-profit proved it can just reconstitute itself elsewhere. And don’t forget: There’s still Google’s A.I. division and Meta’s A.I. division and Anthropic and Inflection and many others who’ve built large language models similar to GPT-4 and are yoking them to business models similar to OpenAI’s. Capitalism is itself a kind of artificial intelligence, and it’s far further along than anything the computer scientists have yet coded. In that sense, it copied OpenAI’s code long ago.
…
…if the capabilities of these systems continue to rise exponentially, as many inside the industry believe they will, then nothing I’ve seen in recent weeks makes me think we’ll be able to shut the systems down if they begin to slip out of our control. There is no off switch.
Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.
Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.
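The "statistically predicting the next word" part is easy to see in miniature. This toy bigram model (corpus and all invented by me; real models are incomprehensibly larger, but the loop is the same shape) also shows why "answers to the same question can vary widely": the next word is sampled from a distribution, not computed like arithmetic.

```python
import random
from collections import Counter, defaultdict

# A toy next-word predictor: count which word follows which in a tiny
# corpus, then sample continuations from those counts. Real language
# models are incomparably larger, but this is the basic loop.

corpus = ("the cat sat on the mat . the cat ate . "
          "the dog sat on the rug .").split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def continuation(word: str, length: int = 4) -> str:
    out = [word]
    for _ in range(length):
        options = bigrams[out[-1]]
        if not options:
            break
        # Sample in proportion to observed counts -- this sampling is
        # why the same prompt can yield different answers each run.
        out.append(random.choices(list(options),
                                  weights=list(options.values()))[0])
    return " ".join(out)

print(continuation("the"))
print(continuation("the"))  # quite possibly a different sentence
```

Math breaks that pattern: "2 + 2" has exactly one acceptable continuation, which is why it's treated as a test of something beyond statistical mimicry.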
I really, really, really hope my fears about AI are unfounded.
But we will build it. Humans never don’t build something because it might be dangerous. Nuclear weapons, gain-of-function viral research… AI isn’t any different.
But how can we stop it from happening? We can’t prohibit everyone everywhere from building it. It’s inevitable.
I’m a doomer. I’ve long believed that humans will fuck up what we already have because we can’t learn to be content with it. We will do anything other than the hard work of learning to be content with life, to accept that misery and death are parts of it.
That’s all this is, right? Our abiding fear of death being made manifest?
Ironic, then, if it’s our inability to reconcile with death that causes our extinction.
The argument is not that AI will become conscious or that it will decide it hates humanity. Instead, it is that AI will become extraordinarily competent, but that when you give it a task, it will fulfill exactly that task. Just as when we tell schools that they will be judged on the number of children who get a certain grade and teachers start teaching to the test, the AI will optimize the metric we tell it to optimize. If we are dealing with something vastly more powerful than human minds, the argument goes, that could have very bad consequences.
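That teaching-to-the-test analogy translates to code almost literally. A toy sketch with invented numbers: the optimizer is shown only the proxy metric, so it dutifully picks the policy we would least want.

```python
# Toy illustration of the metric-optimization worry. The "policies"
# and scores are invented. We care about true learning; the optimizer
# is only ever shown the test score.
policies = [
    # (name, true learning, test score)
    ("teach the whole subject",   0.9, 0.80),
    ("teach only tested topics",  0.5, 0.95),
    ("drill past exam questions", 0.2, 0.99),
]

# A competent optimizer maximizes exactly the metric it is given.
best = max(policies, key=lambda p: p[2])

print(f"optimizer picks:           {best[0]!r}")
print(f"proxy metric (test score): {best[2]:.2f}")
print(f"what we wanted (learning): {best[1]:.2f}")
```

Nothing in there hates anyone or wants anything; it's just competent at the wrong target. Scale the competence up and, the argument goes, the gap between the metric and the intent scales with it.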