Boosted by cstanhope@social.coop ("Your friendly 'net denizen"):
researchfairy@scholar.social ("The research fairy") wrote:
> being plausible but slightly wrong and un-auditable—at scale—is the killer feature of LLMs, not a bug that will ever be meaningfully addressed, and this combination of properties makes it an essentially fascist technology. By “fascist” in this context, I mean that it is well suited to centralizing authority, eliminating checks on that authority, and advancing an anti-science agenda.