Mastodon Feed: Posts

pluralistic@mamot.fr ("Cory Doctorow") wrote:

Altman has proposed that conversations with chatbots should be protected with a new kind of "privilege" akin to attorney-client privilege and related forms, such as doctor-patient and confessor-penitent privilege:

https://venturebeat.com/ai/sam-altman-calls-for-ai-privilege-as-openai-clarifies-court-order-to-retain-temporary-and-deleted-chatgpt-sessions/

I'm all for adding new privacy protections for the things we key or speak into information-retrieval services of all types.

pluralistic@mamot.fr ("Cory Doctorow") wrote:

AI bosses are the latest and worst offenders in a long and bloody lineage of privacy-hating tech bros. No one should ever, ever, *ever* trust them with *any* private or sensitive information. Take Sam Altman, a man whose products routinely barf up the most ghastly privacy invasions imaginable, a completely foreseeable consequence of his totally indiscriminate scraping for training data.

22/

pluralistic@mamot.fr ("Cory Doctorow") wrote:

The keyword here, though, is *private*. Zuckerberg's insatiable, all-annihilating drive to expose our private activities as an attention-harvesting spectacle is poisoning the well, and he's far from alone. The entire AI chatbot sector is so surveillance-crazed that anyone who uses an AI chatbot as a therapist needs their head examined:

https://pluralistic.net/2025/04/01/doctor-robo-blabbermouth/#fool-me-once-etc-etc

21/

pluralistic@mamot.fr ("Cory Doctorow") wrote:

The use of chatbots as therapists is not without its risks. Chatbots can - and do - lead vulnerable people into extensive, dangerous, delusional, life-destroying ratholes:

https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/

But that's a (disturbing and tragic) minority. A journal that responds to your thoughts with bland, probing prompts would doubtless help many people with their own private reflections.

20/

pluralistic@mamot.fr ("Cory Doctorow") wrote:

Which shouldn't surprise us. After all, divination tools, from the I Ching to tarot to Brian Eno and Peter Schmidt's Oblique Strategies deck, have been with us for thousands of years: even random responses can make us better thinkers:

https://en.wikipedia.org/wiki/Oblique%5FStrategies

I make daily, extensive use of my own weird form of random divination:

https://pluralistic.net/2022/07/31/divination/

19/

pluralistic@mamot.fr ("Cory Doctorow") wrote:

I can easily believe that one enduring use-case for chatbots is as a kind of enhanced diary-cum-therapist. Journalling is a well-regarded therapeutic tactic:

https://www.charliehealth.com/post/cbt-journaling

And the invention of chatbots was *instantly* followed by ardent fans who found that the benefits of writing out their thoughts were magnified by even primitive responses:

https://en.wikipedia.org/wiki/ELIZA%5Feffect

18/

pluralistic@mamot.fr ("Cory Doctorow") wrote:

Those standalone models were released as toys by companies pumping tens of billions into the unsustainable "foundation models," who bet that - despite the worst unit economics of any tech in living memory - these tools would someday become economically viable, capturing a winner-take-all market with trillions in upside. That bet remains a longshot, but the littler "toy" models are beating everyone's expectations by wide margins, with no end in sight:

https://www.nature.com/articles/d41586-025-00259-0

17/

pluralistic@mamot.fr ("Cory Doctorow") wrote:

The AI bubble is overdue for a pop:

https://www.wheresyoured.at/measures/

When it does, it will leave behind some kind of residue - cheaper, spin-out, standalone models that will perform many useful functions:

https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/

16/

pluralistic@mamot.fr ("Cory Doctorow") wrote:

There's no warning about the privacy settings for your prompts, and if you use Meta's AI to log in to Meta services like Instagram, it publishes your Instagram search queries as well, including "big booty women."

As Silberling writes, the only saving grace here is that almost no one is using Meta's AI app. The company has racked up a paltry 6.5m downloads across its ~3 billion users, after spending tens of billions of dollars developing the app and its underlying technology.

15/

pluralistic@mamot.fr ("Cory Doctorow") wrote:

* "how to write a character reference letter for an employee facing legal troubles, with that person’s first and last name included."

While the security researcher Rachel Tobac found "people’s home addresses and sensitive court details, among other private information":

https://twitter.com/racheltobac/status/1933006223109959820

14/

pluralistic@mamot.fr ("Cory Doctorow") wrote:

Users are clearly hitting this button without understanding that this means that their intimate, compromising queries are being published in a public feed. *TechCrunch*'s Amanda Silberling trawled the feed and found:

* "An audio recording of a man in a Southern accent asking, 'Hey, Meta, why do some farts stink more than other farts?'"

* "people ask[ing] for help with tax evasion"

* "whether family members would be arrested for their proximity to white-collar crimes"

13/

pluralistic@mamot.fr ("Cory Doctorow") wrote:

They ended up paying $9.5m to settle a lawsuit brought by some of their users, and created a "Digital Trust Foundation" which they funded with another $6.5m. Mark Zuckerberg published a solemn apology and promised that he'd learned his lesson.

Apparently, Zuck is a slow learner.

Depending on which "submit" button you click, Meta's AI chatbot publishes a feed of all the prompts you feed it:

https://techcrunch.com/2025/06/12/the-meta-ai-app-is-a-privacy-disaster/

12/

pluralistic@mamot.fr ("Cory Doctorow") wrote:

But the most persistent, egregious and consequential sinner here is Facebook (naturally). In 2007, Facebook opted its 20,000,000 users into a new system called "Beacon" that published a public feed of every page you looked at on sites that partnered with Facebook:

https://en.wikipedia.org/wiki/Facebook%5FBeacon

Facebook didn't just publish this - they also lied about it. Then they admitted it and promised to stop, but that was also a lie.

11/

pluralistic@mamot.fr ("Cory Doctorow") wrote:

Then there was the time that Etsy decided that it would publish a feed of everything you bought, never once considering that maybe the users buying gigantic handmade dildos shaped like lovecraftian tentacles might not want to advertise their purchase history:

https://arstechnica.com/information-technology/2011/03/etsy-users-irked-after-buyers-purchases-exposed-to-the-world/

10/

pluralistic@mamot.fr ("Cory Doctorow") wrote:

Or Venmo, which, by default, lets *anyone* see what payments you've sent and received (researchers have a field day just filtering the Venmo firehose for emojis associated with drug buys like "pills" and "little trees"):

https://www.nytimes.com/2023/08/09/technology/personaltech/venmo-privacy-oversharing.html
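The kind of analysis described above is trivial to run against a public payment feed. This is a hedged sketch with made-up transactions and an illustrative emoji list, not the researchers' actual methodology or Venmo's real API:

```python
# Sketch: scan a public payment feed for emoji commonly read as drug slang.
# The feed data and emoji set here are hypothetical, for illustration only.
DRUG_EMOJI = {"💊", "🌲", "🍁"}  # "pills", "little trees", etc.

# Hypothetical public transactions: (sender, recipient, note)
feed = [
    ("alice", "bob", "pizza night 🍕"),
    ("carol", "dan", "💊💊 thanks"),
    ("erin", "frank", "rent"),
    ("grace", "heidi", "🌲 for the weekend"),
]

# Flag any note containing one of the target emoji.
flagged = [tx for tx in feed if any(e in tx[2] for e in DRUG_EMOJI)]
for sender, recipient, note in flagged:
    print(sender, "->", recipient, ":", note)
```

The point is how little work this takes: a set-membership test over a public firehose is all the "research" required.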

9/

pluralistic@mamot.fr ("Cory Doctorow") wrote:

But companies are perennially willing to trade your privacy for a glitzy new product launch. Amazingly, the people who *run* these companies and design their products seem to have *no clue* as to how their users *use* those products. Take Strava, a fitness app that dumped maps of where its users went for runs and revealed a bunch of secret military bases:

https://gizmodo.com/fitness-apps-anonymized-data-dump-accidentally-reveals-1822506098

8/

pluralistic@mamot.fr ("Cory Doctorow") wrote:

But that doesn't mean there's no safe way to data-mine large data-sets. "Trusted research environments" (TREs) can allow researchers to run queries against multiple sensitive databases without ever seeing a copy of the data, and good procedural vetting as to the research questions processed by TREs can protect the privacy of the people in the data:

https://pluralistic.net/2022/10/01/the-palantir-will-see-you-now/#public-private-partnership
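The core TRE idea - researchers get vetted answers, never raw rows - can be sketched in a few lines. Everything here (the records, the function name, the cohort threshold) is a hypothetical illustration of the principle, not any real TRE's interface:

```python
# Minimal sketch of the idea behind a "trusted research environment" (TRE):
# aggregate queries run inside the environment, and only results that clear
# a vetting rule (here, a minimum cohort size) are released.
MIN_COHORT = 5  # suppress statistics computed over fewer than 5 people

# Sensitive rows stay inside the environment; researchers never see them.
_health_records = [
    {"age": 34, "condition": "asthma"},
    {"age": 41, "condition": "asthma"},
    {"age": 29, "condition": "asthma"},
    {"age": 55, "condition": "asthma"},
    {"age": 62, "condition": "asthma"},
    {"age": 47, "condition": "diabetes"},
]

def tre_count(condition):
    """Answer 'how many patients have X?' but refuse small cohorts,
    which could single out individuals."""
    n = sum(1 for r in _health_records if r["condition"] == condition)
    if n < MIN_COHORT:
        return None  # suppressed: releasing it risks re-identification
    return n

print(tre_count("asthma"))    # cohort of 5: large enough to release
print(tre_count("diabetes"))  # cohort of 1: suppressed
```

Real TREs layer human vetting of research questions on top of mechanical rules like this one; the data itself never leaves the environment.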

7/

pluralistic@mamot.fr ("Cory Doctorow") wrote:

Indeed, it appears that there may be *no* way to truly de-identify a data-set:

https://pursuit.unimelb.edu.au/articles/understanding-the-maths-is-crucial-for-protecting-privacy

Which is a serious bummer, given the potential insights to be gleaned from, say, population-scale health records:

https://www.nytimes.com/2019/07/23/health/data-privacy-protection.html

It's clear that de-identification is not fit for purpose when it comes to these data-sets:

https://www.cs.princeton.edu/~arvindn/publications/precautionary.pdf

6/

pluralistic@mamot.fr ("Cory Doctorow") wrote:

But firms stubbornly refuse to learn this lesson. They would love it if they could "safely" sell the data they suck up from our everyday activities, so they declare that they *can* safely do so, and sell giant data-sets, and then bam, the next thing you know, a federal judge's porn-browsing habits are published for all the world to see:

https://www.theguardian.com/technology/2017/aug/01/data-browsing-habits-brokers

5/

pluralistic@mamot.fr ("Cory Doctorow") wrote:

It turns out that de-identification is *fucking hard*. Just a couple of datapoints associated with an "anonymous" identifier can be sufficient to de-anonymize the user in question:

https://www.pnas.org/doi/full/10.1073/pnas.1508081113
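The failure mode is easy to demonstrate. In this toy sketch (invented records, hypothetical values), names have been stripped but a few ordinary quasi-identifiers remain - and most records are still unique, so anyone who knows those few facts about a target can pick out their row:

```python
# Toy illustration: a handful of quasi-identifiers can single out a person
# even after all direct identifiers have been stripped. Data is invented.
from collections import Counter

records = [
    {"zip": "02138", "birth_year": 1954, "sex": "F"},
    {"zip": "02138", "birth_year": 1954, "sex": "F"},  # shares a group (k=2)
    {"zip": "02138", "birth_year": 1954, "sex": "M"},
    {"zip": "02139", "birth_year": 1961, "sex": "F"},
    {"zip": "02139", "birth_year": 1990, "sex": "M"},
]

def k_anonymity_groups(rows, keys):
    """Count how many records share each quasi-identifier combination."""
    return Counter(tuple(r[k] for k in keys) for r in rows)

groups = k_anonymity_groups(records, ("zip", "birth_year", "sex"))
unique = [combo for combo, n in groups.items() if n == 1]
# Most combinations are unique: ZIP + birth year + sex is enough to
# re-identify those "anonymous" records.
print(f"{len(unique)} of {len(records)} records are uniquely identifiable")
```

Scale this up to real datasets with dozens of attributes per row and uniqueness becomes the norm, not the exception, which is the finding of the linked PNAS paper.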

4/

pluralistic@mamot.fr ("Cory Doctorow") wrote:

The AOL dump is notable for many reasons, not least because it jumpstarted the academic and technical discourse about the limits of "de-identifying" datasets by stripping out personally identifying information prior to releasing them for use by business partners, researchers, or the general public.

3/

pluralistic@mamot.fr ("Cory Doctorow") wrote:

The AOL dump was a catastrophe. In an eyeblink, many of the users in the dataset were de-anonymized. The dump revealed personal, intimate and compromising facts about the lives of AOL search users.

2/

pluralistic@mamot.fr ("Cory Doctorow") wrote:

Back in 2006, AOL tried something incredibly bold and even more incredibly *stupid*: they dumped a data-set of 20,000,000 "anonymized" search queries from 650,000 users (yes, AOL had a search engine - there used to be *lots* of search engines!):

https://en.wikipedia.org/wiki/AOL%5Fsearch%5Flog%5Frelease

--

If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

https://pluralistic.net/2025/06/19/privacy-breach-by-design#bringing-home-the-beacon

1/

A moody room with Shining-esque broadloom. In the foreground stands a giant figure with the head of Mark Zuckerberg's metaverse avatar; its eyes have been replaced with the glaring red eyes of HAL 9000 from Kubrick's '2001: A Space Odyssey' and it has the logo for Meta AI on its lapel; it peers through a magnifying glass at a tiny figure standing on its vast palm. The tiny figure has a leg caught in a leg-hold trap and wears an expression of eye-rolling horror. In the background, gathered around a sofa and an armchair, is a ranked line of grinning businessmen, who are blue and flickering in the manner of a hologram display in Star Wars. Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en

Boosted by pluralistic@mamot.fr ("Cory Doctorow"):
nlarson830@techhub.social ("Nick 'The Viking' O'Pelican") wrote:

@pluralistic @Mastodon@mamot.fr

Thank you Cory for bringing this issue out into the light.

@Mastodon thanks for correcting your error.

Boosted by fromjason ("fromjason.xyz ❤️ 💻"):
heydon@front-end.social ("Large Heydon Collider") wrote:

Been working on some monochrome #pride gear and this came to me out of nowhere. Thoughts?

The trans am logo (with phoenix) but it says Am Trans.

Boosted by taral ("JP Sugarbroad"):
antifaintl@kolektiva.social ("Antifa International") wrote:

#antifa, #nazis, #weimar #history, #WeimarRepublic

Post by Lee J. Carter (@carterforva): "The history of Nazis holding rallies in left-wing areas of Weimar Germany, instigating street fights, and then telling the press that only they could save Germany from the 'violent communists' seems like an important thing for people to be studying right now."

Boosted by taral ("JP Sugarbroad"):
miriamboosh.bsky.social@bsky.brid.gy ("Miriam Boosh") wrote:

The NYT was cited in the Supreme Court ruling used to strip trans people of healthcare. This is EXACTLY WHY the “just asking questions” crowd of centrist edgelord journalists are dangerous. They launder transphobia through faux neutral “debate” and act like it’s intellectual curiosity

fromjason ("fromjason.xyz ❤️ 💻") wrote:

Americans have become so used to the idea of slave-wage labor, that when we argue in favor of immigrants, it's always "well, who will pick our fruits?"

And that is terrible. #ICE

Boosted by pluralistic@mamot.fr ("Cory Doctorow"):
huxley@furry.engineer ("huxley(fur) 🔜 Prowl Pride 6/28") wrote:

Enshittification comes for open source:
Slack is kicking two large open source groups, Cloud Native Computing Foundation and Kubernetes, off of their donated enterprise tier, giving them one week notice to migrate multiple years of data to a new platform before it's all deleted: https://www.cncf.io/blog/2025/06/16/cncf-slack-workspace-changes-coming-on-friday-june-20/

Instead of learning from this experience and not trusting the good will of profit-motivated closed source companies, it looks like both projects will be moving to ... Discord. Because "people know it." Will we never learn?

@pluralistic

#enshittification

Boosted by pluralistic@mamot.fr ("Cory Doctorow"):
pteryx@dice.camp ("Pteryx the Puzzle Secretary") wrote:

@Myoldpiano @pluralistic
Seems the situation has just changed in any case due to public outcry:

https://mastodon.social/@Mastodon/114709820512537821