Boosted by glyph ("Glyph"):
jonny@neuromatch.social ("jonny (good kind)") wrote:
to be clear - what does this one extremely simple feature reflect for the user?
In the "traditional way," the user can go to a website and see an example of how to do this themselves with information that's derived from the actual values used in computation. cool.
In the "LLM way," the user has no agency and can't see why the LLM might fail, because they can't see that the system prompt is directly feeding in the metadata about the available fields, and thus has no idea that the model is capable of being wrong about its own fucking code, so they are shown wrong fucking values.
So the cost of transforming something to the "just prompt it" modality is "it being completely fucking wrong," even when that thing is literally just a feature that refers to program state that is entirely owned by the fucking program - to say nothing of how that pattern of development, recursively applied to the development of the tool itself, causes it to be fucking wrong as a matter of practice.