cstanhope@social.coop ("Your friendly 'net denizen") wrote:
One last thing before I head out again for a bit. American Scientist has an interesting article arguing that the way people have been measuring the capabilities of things like LLMs creates the illusion of sudden improvements. This feeds the hype cycle: "Wow! Some magical inflection as we approach the singularity and AI rapture!"
The author argues we're scoring capabilities in a binary pass/fail fashion, which obscures the linear, gradual improvements actually happening.
https://www.americanscientist.org/article/is-there-an-ai-metrics-mirage
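A minimal sketch of the effect being described, for anyone who wants to see it concretely. The simulation, the 0.7 cutoff, and the noise level are my own illustrative assumptions, not taken from the article: a capability that improves smoothly looks smooth under a graded (partial-credit) metric, but looks like a sudden jump under a binary pass/fail threshold.

```python
# Illustrative only: compare a graded metric vs. a binary threshold metric
# as underlying skill improves linearly. Numbers are arbitrary assumptions.
import random

random.seed(0)

THRESHOLD = 0.7   # hypothetical per-task pass/fail cutoff
N_TASKS = 1000    # hypothetical benchmark size

for skill in (s / 10 for s in range(11)):
    # Each task's quality score is the model's skill plus noise, clamped to [0, 1].
    scores = [min(1.0, max(0.0, skill + random.gauss(0, 0.15)))
              for _ in range(N_TASKS)]
    graded = sum(scores) / N_TASKS                            # partial credit
    binary = sum(s >= THRESHOLD for s in scores) / N_TASKS    # pass rate
    print(f"skill={skill:.1f}  graded={graded:.2f}  binary={binary:.2f}")
```

The graded score climbs steadily with skill, while the pass rate sits near zero until skill approaches the threshold and then shoots up, which reads as a "sudden" capability gain even though nothing discontinuous happened underneath.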