Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
nyhan@fediscience.org ("Kate Nyhan") wrote:
Maybe someday I will be as forthright as @researchfairy (whose long-ago post, arguing that even when LLM output is correct, it's not worth the harms experienced by data workers/moderators in the course of training the model, has stuck with me for years).
But until then, at least I can point to external guidance as I beg people to slow down a little.