Boosted by slightlyoff@toot.cafe ("Alex Russell"):
dracos ("Matthew") wrote:
@slightlyoff As no-one else seemed to have done, I opened https://github.com/FujitsuResearch/FieldWorkArena/issues/1 :)
Boosted by glyph ("Glyph"):
tef wrote:
huge week, got to break out one of my favourite puns
a friend was elated "i saw a whale in the harbour!" she'd taken the cross bay ferry and caught a glimpse
i couldn't stop myself from replying
"bit of a fluke"
Boosted by glyph ("Glyph"):
susankayequinn@wandering.shop ("Sue is Writing Solarpunk 🌞🌱") wrote:
If you think things are rough, let me tell you a little story re:my HOA:
My neighborhood HOA was very chill... so chill I never heard from them. Until suddenly the board sold us out to a management corp who jacked up our fees & vowed to crank up "enforcement" (sound familiar? it gets better)
1/n
Boosted by ChrisWere@toot.wales ("Chris Were ⁂🐧🌱☕"):
freebooters.uk@rss-parrot.net ("🦜 The Freebooters Podcast") wrote:
Freebooters, food and drink edition, with Wing
freebooters.uk/media/20260419-freebooters.mp3
Chris and Drew are joined by friend of the show Wing for a chat about food, drink, and computers.
Boosted by glyph ("Glyph"):
hynek ("Hynek Schlawack") wrote:
RE: https://mastodon.social/@AlSweigart/116431940330126637
I have arrived in the highest creator echelon 🥲❤️
dysfun@treehouse.systems ("gaytabase") wrote:
there's space left over too, but anything i add can't take up too many bits because it's got to pack into 64 bits with a type tag (currently up to 6 bits for the extended tags). but i could add null or various sorts of sentinel error values easily enough
dysfun@treehouse.systems ("gaytabase") wrote:
my immediates can now encode:
- booleans
- 8/16 bit integers
- 61/32 bit integers (64-bit only)
- 61-bit floats (64-bit only)
- 30 bit integers (32-bit only)
- up to 7 char strings (64-bit only)
so i can overlap any of those with a pointer and avoid heap data for all the values that fit in immediates!
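A rough Python sketch of this kind of tagged-immediate encoding (the 6-bit tag width matches the post, but the specific tag values, payload layout, and helper names here are invented for illustration; the actual scheme's bit budgets differ):

```python
# Toy sketch of packing immediate values into a 64-bit word with a low
# type tag. Tag values and the exact payload layout are hypothetical.

TAG_BITS = 6
PAYLOAD_BITS = 64 - TAG_BITS        # 58 payload bits in this toy layout
TAG_INT = 0b000101                  # hypothetical tag for small integers
TAG_STR = 0b001101                  # hypothetical tag for <=7-byte strings

def encode_int(n: int) -> int:
    """Pack a small signed integer into the payload bits above the tag."""
    assert -(1 << (PAYLOAD_BITS - 1)) <= n < (1 << (PAYLOAD_BITS - 1))
    return ((n & ((1 << PAYLOAD_BITS) - 1)) << TAG_BITS) | TAG_INT

def decode_int(word: int) -> int:
    payload = word >> TAG_BITS
    if payload >= 1 << (PAYLOAD_BITS - 1):   # sign-extend the payload
        payload -= 1 << PAYLOAD_BITS
    return payload

def encode_str(s: bytes) -> int:
    """Pack up to 7 bytes of string data into the payload."""
    assert len(s) <= 7
    return (int.from_bytes(s.ljust(7, b"\0"), "little") << TAG_BITS) | TAG_STR

def decode_str(word: int) -> bytes:
    return (word >> TAG_BITS).to_bytes(7, "little").rstrip(b"\0")
```

A real implementation would typically reserve a zero low tag for aligned pointers so dereferencing needs no untagging; the point of the post stands either way: every value that fits in the word needs no heap allocation at all.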
pzmyers@freethought.online ("pzmyers 🕷") wrote:
Waking up to snow.
https://freethoughtblogs.com/pharyngula/2026/04/19/my-poor-spiders/
baldur@toot.cafe ("Baldur Bjarnason") wrote:
Combined with the inherent hazard of using chatbots as a UI, any tech designed like this makes, in theory, for a perfect storm that converts, convinces, and spreads without adding much to the overall economy itself.
/end
baldur@toot.cafe ("Baldur Bjarnason") wrote:
Those who are lucky in their first go (+X%) will become tool gamblers, those unlucky (-X%) will be perplexed by the hype. If the hype continues, they'll get over their disappointment and try again. Depending on the roll of the dice they risk getting converted to tool gamblers like their peers
baldur@toot.cafe ("Baldur Bjarnason") wrote:
For each person who vibe-codes a useful app there'll be another causing a disaster. Given the nature of the tool the OVERALL effect on the economy from the tool ITSELF is likely to be more volatility but minimal benefit. Then you factor in the bubble, abuses, etc and it becomes a clear negative
baldur@toot.cafe ("Baldur Bjarnason") wrote:
Variability in sequences tends to cancel out.
If an intervention adds +/-10% variability to each step in a sequence, then as the number of sequences grows, the overall effect will trend ever closer to zero. Scale this up to an economy and you get added volatility with little benefit.
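The arithmetic behind this can be sketched with a toy simulation. The modelling choices here are mine, not the post's: multiplicative effects, independent steps, and a fixed 0.9/1.1 swing per step.

```python
# Toy simulation: each step of a pipeline is multiplied by a random
# +/-10% factor. The multiplicative model and the step count are
# illustrative assumptions, not anything stated in the thread.
import random

random.seed(1)

def run_pipeline(steps: int) -> float:
    """Product of `steps` independent factors, each 0.9 or 1.1."""
    out = 1.0
    for _ in range(steps):
        out *= random.choice([0.9, 1.1])
    return out

trials = sorted(run_pipeline(100) for _ in range(10_000))
mean = sum(trials) / len(trials)
median = trials[len(trials) // 2]
print(f"mean outcome:   {mean:.3f}")    # stays near 1.0: wins offset losses
print(f"median outcome: {median:.3f}")  # well below 1.0: typical run loses
```

The mean hovers near 1.0 while the median drifts down (each 0.9/1.1 pair multiplies to 0.99), which is one concrete way the "added volatility, little net benefit" claim can play out.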
baldur@toot.cafe ("Baldur Bjarnason") wrote:
I'm sceptical that LLMs provide a meaningful economic benefit because of their variability
"Variability destroys variability" so when you apply the same tech, with the same variability dynamic, throughout an economy, whatever benefit it might intrinsically provide will wash out at scale
/thread
baldur@toot.cafe ("Baldur Bjarnason") wrote:
Just to make it extra clear. This is not a dunk. I’m agreeing with the post I quoted. Just saying that I don’t think it’s a line of argument that works on people unless they’re hesitant to begin with.
baldur@toot.cafe ("Baldur Bjarnason") wrote:
RE: https://mementomori.social/@juergen%5Fhubert/116429168342399754
I’ve been making this argument for over three years now to little success
I’m also extremely sceptical of the notion that there is anything truly useful to LLMs; whatever benefit they might have is likely to wash out at scale because of their variability and UI (prompts and chatbots are an incredibly poor UI for productive work). But I’ve generally avoided making that argument until recently because devs are notoriously bad at assessing what genuinely improves the process of making software
Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
juergen_hubert@mementomori.social ("Jürgen Hubert") wrote:
My stance on #LLM :
1. There _might_ be some useful use cases with this technology that could be worth exploring.
2. However, it is glaringly obvious that, as of now, their main purpose is to power the mother of all investment bubbles.
3. Which leads us to the present trillion dollar business case for "we must build energy- and water-wasting data centers everywhere so that we can scrape every single website a thousand times a month for new training data!"
4. Thus, there is currently pretty much no ethical way of using LLMs.
5. Any ethical exploration of LLM use cases will thus have to wait until the bubble has burst, the investors have moved on to the next scam, and we can sort through the rubble to check what is left.
Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
cwebber@social.coop ("Christine Lemmer-Webber") wrote:
RE: https://social.coop/@cwebber/116426025287444979
I see myself getting subtooted by various people, and let me clarify what this thread is, and is not. My opinions on LLM usage are more complicated than "good vs bad".
But I have created a scoped analysis here. My opinion is that we are facing a *licensing hygiene crisis* from a situation where we do not and probably will not know the licensing situation of these tools for some time.
There are only two viable scenarios I can see:
- LLM output is unusable and a copyright mess that cannot be incorporated in any FOSS project
- *All* LLM output is effectively in the public domain.
I am willing to accept either one of those, but the lack of knowledge of which situation we are facing makes me concerned about LLM-based contributions entering FOSS projects on copyright grounds.
(There are plenty of other debates one can have about LLMs also, I have scoped them out of this particular thread.)
Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
sindarina@ngmx.com ("Sindarina, Edge Case Detective") wrote:
This is your regularly scheduled reminder that, even if you are only using the FOTM LLM to automate the 'boring' code, you're still using work stolen from people who did not consent.
The idea that there is somehow a line that you cross when you use 'AI' to make art instead of having it write SQL queries, commit messages, or internal tooling for you is an illusion, something you come up with to fool yourself into thinking that there is a 'moral' use of these tools.
There is not. This is technology built on extraction, exploitation of real people and tangible resources, and your use of it, no matter how 'small' you think it is, makes you complicit.
If you use these tools, at least be honest about the fact that you care more about the convenience it provides for you than the lives it ruins, and stop trying to justify it with weasel words.
baldur@toot.cafe ("Baldur Bjarnason") wrote:
Either iPad apps that have been updated for iPadOS 26 are all quite buggy (rendering, visual state updates, lag) on iPadOS 18 or everybody used shortcuts to get the transition done and they’re buggy everywhere, but I’m not about to go out and replace my almost decade-old iPad to find out which it is
baldur@toot.cafe ("Baldur Bjarnason") wrote:
“Infinite Patience Is Not Good for Education”
https://biblioracle.substack.com/p/infinite-patience-is-not-good-for
> They have always been wrong. They were wrong before they got started and yet hundreds of millions of dollars have gone towards a project that was doomed from the outset, dollars that could have - at least in theory - gone to, I don’t know, human beings who teach.
Boosted by jwz:
Natasha_Jay@tech.lgbt ("Natasha :mastodon:🇪🇺") wrote:
Seen in Hardcore Bikes (HCB), a bike shop in Edmonton Alberta 🚴♀️
Boosted by jwz:
MLNow@sfba.social ("Mission Local") wrote:
S.F. activists display 120-foot ‘End U.S. aid to Israel’ banner on Twin Peaks
Activists said they are "urging an end to U.S. funding for Israel's killing of over 100,000 Palestinians, Lebanese, and Iranians."
https://missionlocal.org/2026/04/sf-arms-embargo-israel-banner-twin-peaks/
Boosted by jwz:
wren6991@types.pl ("Luke Wren") wrote:
If you have heard the buzzword "agentic AI" but avoided finding out what it meant until now:
1. Someone figured out an LLM can do JSON RPCs by typing out the JSON token by token.
2. The LLM is run in a harness that regexes out the JSON from its output and executes the RPC.
3. The response is catted into the LLM's context window, also in the form of JSON that the LLM just reads.
4. People connect these harnesses to system shells on their dev machines.
5. Fast forward, this is a trillion-dollar industry held together by markdown files asking the LLM to please not curlbash from the internet.
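Steps 1-3 can be sketched as a toy loop. Everything here is an invented stand-in: `fake_model` substitutes for a real LLM, and the `TOOL: {...}` line format and `add` tool are illustrative; real harnesses use varied (and messier) conventions.

```python
# Toy version of the harness loop described above.
import json
import re

# Step 2's regex: pull a JSON tool call out of the model's raw text.
TOOL_CALL = re.compile(r"^TOOL: (\{.*\})$", re.MULTILINE)

def run_tool(call: dict) -> dict:
    """Execute the RPC the model typed out (a safe demo tool only)."""
    if call.get("tool") == "add":
        return {"result": call["a"] + call["b"]}
    return {"error": f"unknown tool {call.get('tool')!r}"}

def fake_model(context: str) -> str:
    """Stand-in LLM: first emits a tool call, then reads the result back."""
    if '"result"' in context:
        result = json.loads(context.rsplit("\n", 1)[-1])["result"]
        return f"The answer is {result}."
    return 'I will add the numbers.\nTOOL: {"tool": "add", "a": 2, "b": 3}'

def agent_loop(prompt: str, max_turns: int = 4) -> str:
    context = prompt
    for _ in range(max_turns):
        reply = fake_model(context)
        match = TOOL_CALL.search(reply)       # step 2: regex the JSON out
        if not match:
            return reply                      # no tool call: final answer
        result = run_tool(json.loads(match.group(1)))
        context += "\n" + json.dumps(result)  # step 3: cat result back in
    return context

print(agent_loop("What is 2 + 3?"))  # prints "The answer is 5."
```

Step 4 of the post amounts to pointing `run_tool` at a real system shell instead of a toy `add`, which is where the trouble starts.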
Boosted by adele@social.pollux.casa ("Adële 🐁!"):
nixCraft ("nixCraft 🐧") wrote:
welcome to enterprise IT support ... 😅 https://xcancel.com/f%5Fa%5Finfinityy/status/2044868607822135728
isagalaev ("Ivan Sagalaev :flag_wbw:") wrote:
Holy shit, I mastered border-image!!! #css
Boosted by adele@social.pollux.casa ("Adële 🐁!"):
fedify@hollo.social ("Fedify: ActivityPub server framework") wrote:
Naru, the Korean version of #Neocities, reportedly added an #ActivityPub implementation in just an hour using #Fedify. If you also want to implement ActivityPub quickly, give Fedify a try!
https://hackers.pub/@jihyeok/019da3d9-45b8-7629-96a8-b26bd62867c2
Boosted by soatok@furry.engineer ("Soatok Dreamseeker"):
0xabad1dea@infosec.exchange ("abadidea") wrote:
as some of you know, I was raised in a fundamentalist proto-maga environment, and so I have extensive first hand experience with cognitive dissonance and cult tactics
I wanted to point out that if someone wildly disagrees with you: they’re not lying. No matter how stupid or obviously wrong you think their take is, they’re not lying to you about what they believe; almost no-one does that.
To be clear, I’m not talking about national politicians or billionaires or obvious influence campaign bots. I’m talking about real, normal people that you know perfectly well are real, normal people but their take is so ridiculous or extreme to you that you feel tempted to conclude they’re lying.
There are real, actual reasons they believe those things and real, actual reasons they feel the need to tell you about it. If you can’t engage with that, you’ve not only already lost any chance of convincing them, but you’re setting yourself up to believe patently untrue things yourself, because you have taught yourself to shut down contrary thoughts with “they’re lying.”
accepting that the people I was told were agents of Satan *weren’t lying about their beliefs just to hurt me* is a big part of how I got out of fundamentalism.
Boosted by adele@social.pollux.casa ("Adële 🐁!"):
venelles ("Philip Wittamore") wrote:
After testing the demo I now have my own smolfedi instance up & running.
https://adele.pages.casa/md/blog/the-fediverse-deserves-a-dumb-graphical-client.md
Boosted by aredridel@kolektiva.social ("Mx. Aria Stewart"):
tim@union.place ("Tim W RESISTS") wrote:
The "logic" of blocking content from being archived on the Wayback Machine to prevent it being used for AI training blows my mind.
Locks only keep the honest people out. In this case, all you're doing is restricting access for quite probably the best, and possibly the only, long-term durable archive of the Internet. The downsides for society are countless.
Breathless headline, but good piece, from Wired: https://www.wired.com/story/the-internets-most-powerful-archiving-tool-is-in-mortal-peril/
Boosted by aredridel@kolektiva.social ("Mx. Aria Stewart"):
jmeowmeow@hachyderm.io ("Jeff Miller (orange hatband)") wrote:
@jchyip Theory of Constraints looms very large in my picture of model-generated code. Anything upstream or downstream of the augmented procedure is now under stress.
One obvious danger is that the temptation to ship the prototype becomes more of a plague than it has been.
With probabilistically assembled code trained from an overall mediocre and miscellaneous corpus, a boost to static and dynamic analysis looks to be in order.
Maybe liability case law is going to shape the landscape.