I've tried a bunch of local open-weights LLMs that fit in under 64GB of memory (I can't fix the social disruption they cause, but at least I can use renewable energy and the hardware I already have).
They're good enough for "fuzzy regex" automations and for answering easy questions at roughly Yahoo Answers quality.
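A "fuzzy regex" automation, as I mean it, is using the model to classify or extract from messy free-form text, then validating its answer deterministically so the pipeline can't go off the rails. A minimal sketch (the endpoint URL and label set are my own assumptions; any OpenAI-compatible local server such as llama.cpp or Ollama would do):

```python
# Sketch: local LLM as a "fuzzy regex" for email triage.
# The LLM handles the fuzzy matching; strict parsing keeps the
# automation deterministic even when the model rambles.
import json
import re
import urllib.request

LABELS = {"receipt", "newsletter", "personal", "other"}  # hypothetical label set

def parse_label(raw: str, labels=LABELS, default="other") -> str:
    """Take the first word of the model's reply; fall back if it's not an allowed label."""
    m = re.search(r"[a-z]+", raw.strip().lower())
    return m.group(0) if m and m.group(0) in labels else default

def classify(text: str, url="http://localhost:8080/v1/chat/completions") -> str:
    """Ask a local OpenAI-compatible server to pick one label for the text."""
    prompt = (f"Classify the email below as one of {sorted(LABELS)}. "
              f"Answer with the label only.\n\n{text}")
    req = urllib.request.Request(
        url,
        data=json.dumps({"messages": [{"role": "user", "content": prompt}],
                         "temperature": 0}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["choices"][0]["message"]["content"]
    return parse_label(reply)
```

Even a small model gets this kind of constrained, low-stakes task right often enough to be useful, and the parser guarantees a bad answer degrades to "other" instead of breaking anything downstream.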
But for programming tasks they're infuriatingly bad. Every model I could run locally has been net-negative for my productivity, even on easy chore tasks.