
Boosted by adele@social.pollux.casa ("Adële"):
elilla@transmom.love ("elilla, europe penetrator") wrote:
when email spam became too big a problem to handle by keyword filtering, the use of statistical methods to classify it was a small revolution. you'd click "spam" or "not spam", and the computer would slowly learn to categorise the emails that you in particular receive.
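(for a sense of what that kind of filter does under the hood, here's a toy naive-bayes-style sketch in python; the class name, words and smoothing are made up for illustration, not any real mail client's code:)

```python
# toy bayesian-ish spam filter: learns from each "spam"/"not spam" click.
# made-up smoothing and data, just to show the mechanism.
from collections import Counter
import math

class TinySpamFilter:
    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.msg_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        # called every time you click "spam" (label="spam") or "not spam" (label="ham")
        self.msg_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def spam_probability(self, text):
        # naive bayes with laplace-ish smoothing; returns P(spam | words)
        scores = {}
        total_msgs = sum(self.msg_counts.values()) or 1
        vocab = set(self.word_counts["spam"]) | set(self.word_counts["ham"])
        for label in ("spam", "ham"):
            prior = (self.msg_counts[label] + 1) / (total_msgs + 2)
            log_score = math.log(prior)
            total_words = sum(self.word_counts[label].values())
            for word in text.lower().split():
                count = self.word_counts[label][word]
                log_score += math.log((count + 1) / (total_words + len(vocab) + 1))
            scores[label] = log_score
        # turn the two log scores back into a probability
        m = max(scores.values())
        exp = {k: math.exp(v - m) for k, v in scores.items()}
        return exp["spam"] / (exp["spam"] + exp["ham"])

f = TinySpamFilter()
f.train("cheap pills buy now", "spam")
f.train("lunch tomorrow with fátima?", "ham")
print(f.spam_probability("buy cheap pills"))    # high
print(f.spam_probability("lunch with rebeca"))  # lower
```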
generative "AI" is nothing more than the co-optation of that mechanism on the spam side of the equation. capitalists now use very powerful computers to write spam that statistically looks like "not spam", and is therefore unfilterable by statistical methods.
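(a toy version of that flip, with a made-up scorer and made-up rewordings; the point is just that the sender can run the same statistics as the filter and keep whatever scores as "not spam":)

```python
# the sender generates rewordings of the same pitch and keeps whichever one
# the statistical filter can't tell apart from normal mail.
# the scorer, word list and variants are all invented for illustration.
import random

SPAMMY_WORDS = {"cheap", "pills", "winner", "click"}

def spam_probability(text):
    # stand-in for a trained statistical filter: fraction of spammy-looking words
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in SPAMMY_WORDS for w in words) / max(len(words), 1)

def generate_candidate():
    # stand-in for "generative AI": rewordings of one sales pitch,
    # from blatant to statistically indistinguishable from normal mail
    return random.choice([
        "click now for cheap pills, winner!",
        "hi! following up on the supplements we discussed, happy to send details",
        "hope you're well, our new wellness range ships this week if you'd like a sample",
    ])

best = min((generate_candidate() for _ in range(20)), key=spam_probability)
print(best, "->", spam_probability(best))  # the variant the filter scores lowest
```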
I don't see any solution to this problem other than a "web of trust"–like system of human curation. like, I trust the judgement of my friend Rebeca, Rebeca trusts Fátima, and Fátima trusts the Gamer's Quarter magazine, which she's read for years and whose authors' writing styles she knows. so when I open Gamer's Quarter, the computer says "probably human-written", according to my network of trust.
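(roughly the kind of bookkeeping I have in mind; the names, scores and multiply-along-the-path rule are invented for illustration, and a real web of trust would also need signatures, key exchange, revocation and so on:)

```python
# hand-wavy transitive trust scoring over a small web of trust.
# edges are "how much X trusts Y"; trust decays as it passes through more people.
TRUST = {
    "me":     {"rebeca": 0.9},
    "rebeca": {"fátima": 0.8},
    "fátima": {"gamers_quarter": 0.9},
}

def trust_in(source, target, seen=None):
    # best trust score reachable from `source` to `target`,
    # multiplying edge weights along the path
    if source == target:
        return 1.0
    seen = (seen or set()) | {source}
    best = 0.0
    for friend, weight in TRUST.get(source, {}).items():
        if friend not in seen:
            best = max(best, weight * trust_in(friend, target, seen))
    return best

score = trust_in("me", "gamers_quarter")
print(f"probably human-written? trust = {score:.2f}")  # 0.9 * 0.8 * 0.9 ≈ 0.65
```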
but we never managed to get PGP signatures to go mainstream, even when corporate phishing became a real-life problem. the sociology is always the crucial part, not the technology.