Mastodon Feed: Post
Boosted by jsonstein@masto.deoan.org ("Jeff Sonstein"):
inthehands@hachyderm.io ("Paul Cantrell") wrote:

LLMs have no model of correctness, only typicality. So:

“How much does it matter if it’s wrong?”

It’s astonishing how frequently both providers and users of LLM-based services fail to ask this basic question — which I think has a fairly obvious answer in this case, one that the research bears out.

(Repliers, NB: Research that confirms the seemingly obvious is useful and important, and “I already knew that” is not information that anyone is interested in except you.)

1/ https://www.404media.co/chatbots-health-medical-advice-study/