Mastodon Feed: Posts

dysfun@treehouse.systems ("gaytabase") wrote:

there's space left over too, but anything i add can't take up too many bits because it's got to pack into 64 bits with a type tag (currently up to 6 bits for the extended tags). but i could add null or various sorts of sentinel error values easily enough

dysfun@treehouse.systems ("gaytabase") wrote:

my immediates can now encode:

  • booleans
  • 8/16-bit integers
  • 61/32-bit integers (64-bit only)
  • 61-bit floats (64-bit only)
  • 30-bit integers (32-bit only)
  • up to 7-char strings (64-bit only)

so i can overlap any of those with a pointer and avoid heap data for all the values that fit in immediates!
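dysfun's actual tag layout isn't shown in the post, but the general trick can be sketched. The following is a minimal, hypothetical Python illustration of low-bit tagging: a 3-bit tag in the low bits of a 64-bit word (the post says the real scheme uses up to 6 bits for extended tags), with tag 0 left free so aligned heap pointers never collide with immediates. The tag values and helper names here are made up for illustration.

```python
# Toy sketch of tagged immediates (NOT dysfun's actual layout):
# the low 3 bits of a 64-bit word hold a type tag, the rest the payload.
# Tag 0b000 is reserved so word-aligned pointers pass through untouched.
TAG_BITS = 3
TAG_MASK = (1 << TAG_BITS) - 1
TAG_INT, TAG_BOOL = 0b001, 0b010  # hypothetical tag assignments

def make_int(n):
    """Pack a small signed integer as an immediate (61 payload bits)."""
    assert -(1 << 60) <= n < (1 << 60), "too big for an immediate"
    return ((n & ((1 << 61) - 1)) << TAG_BITS) | TAG_INT

def read_int(word):
    payload = word >> TAG_BITS
    if payload >= (1 << 60):       # sign-extend the 61-bit payload
        payload -= 1 << 61
    return payload

def make_bool(b):
    return (int(b) << TAG_BITS) | TAG_BOOL

def tag_of(word):
    return word & TAG_MASK
```

Because every immediate has a nonzero tag in its low bits, checking `tag_of(word)` is enough to tell an encoded value from a real pointer, which is what lets immediates overlap the pointer representation without heap allocation.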

pzmyers@freethought.online ("pzmyers đŸ•·") wrote:

Waking up to snow.

https://freethoughtblogs.com/pharyngula/2026/04/19/my-poor-spiders/

April snow

baldur@toot.cafe ("Baldur Bjarnason") wrote:

Combined with the inherent hazard of using chatbots as a UI, then, in theory, any tech designed like this makes for a perfect storm that converts, convinces, and spreads without adding much to the overall economy itself.

/end

baldur@toot.cafe ("Baldur Bjarnason") wrote:

Those who are lucky in their first go (+X%) will become tool gamblers; those unlucky (-X%) will be perplexed by the hype. If the hype continues, they'll get over their disappointment and try again. Depending on the roll of the dice, they risk getting converted to tool gamblers like their peers

baldur@toot.cafe ("Baldur Bjarnason") wrote:

For each person who vibe-codes a useful app there'll be another causing a disaster. Given the nature of the tool the OVERALL effect on the economy from the tool ITSELF is likely to be more volatility but minimal benefit. Then you factor in the bubble, abuses, etc and it becomes a clear negative

baldur@toot.cafe ("Baldur Bjarnason") wrote:

Variability in sequences tends to cancel out

If an intervention adds +/-10% variability to each step in a sequence, then the more sequences you run, the closer the overall effect trends to zero. Scale this up to an economy and you get added volatility with little benefit
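The arithmetic behind this can be sketched with a small simulation. Assuming the +/-10% is an independent, symmetric multiplier at each step (an assumption not spelled out in the post), the multipliers average to 1, so the expected net effect is ~0% even as the spread of individual outcomes widens:

```python
import random

random.seed(0)  # deterministic run for reproducibility

def net_effect(steps):
    """Apply an independent +/-10% multiplier at each step;
    return the total multiplier for one sequence."""
    total = 1.0
    for _ in range(steps):
        total *= 1.0 + random.uniform(-0.10, 0.10)
    return total

TRIALS = 20_000
outcomes = [net_effect(20) for _ in range(TRIALS)]
mean = sum(outcomes) / TRIALS
spread = (sum((x - mean) ** 2 for x in outcomes) / TRIALS) ** 0.5

# mean lands near 1.0 (no net benefit), while spread is substantial:
# individual 20-step sequences end up well above or well below break-even.
```

Under these assumptions the simulation reproduces the claim: the average gain washes out while the variance compounds, i.e. volatility with little benefit.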

baldur@toot.cafe ("Baldur Bjarnason") wrote:

I'm sceptical that LLMs provide a meaningful economic benefit because of their variability

"Variability destroys variability" so when you apply the same tech, with the same variability dynamic, throughout an economy, whatever benefit it might intrinsically provide will wash out at scale

/thread

baldur@toot.cafe ("Baldur Bjarnason") wrote:

Just to make it extra clear. This is not a dunk. I’m agreeing with the post I quoted. Just saying that I don’t think it’s a line of argument that works on people unless they’re hesitant to begin with.

baldur@toot.cafe ("Baldur Bjarnason") wrote:

RE: https://mementomori.social/@juergen%5Fhubert/116429168342399754

I’ve been making this argument for over three years now to little success

I’m also extremely sceptical of the notion that there is anything truly useful to LLMs; whatever benefit they might have is likely to wash out at scale because of their variability and UI (prompts and chatbots are an incredibly poor UI for productive work). But I’ve generally avoided making that argument until recently because devs are notoriously bad at assessing what genuinely improves the process of making software

Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
juergen_hubert@mementomori.social ("JĂŒrgen Hubert") wrote:

My stance on #LLM :

1. There _might_ be some useful use cases with this technology that could be worth exploring.

2. However, it is glaringly obvious that, as of now, their main purpose is to power the mother of all investment bubbles.

3. Which leads us to the present trillion dollar business case for "we must build energy- and water-wasting data centers everywhere so that we can scrape every single website a thousand times a month for new training data!"

4. Thus, there is currently pretty much no ethical way of using LLMs.

5. Any ethical exploration of LLM use cases will thus have to wait until the bubble has burst, the investors have moved on to the next scam, and we can sort through the rubble to check what is left.

Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
cwebber@social.coop ("Christine Lemmer-Webber") wrote:

RE: https://social.coop/@cwebber/116426025287444979

I see myself getting subtooted by various people, and let me clarify what this thread is, and is not. My opinions on LLM usage are more complicated than "good vs bad".

But I have created a scoped analysis here. My opinion is that we are facing a *licensing hygiene crisis* from a situation where we do not and probably will not know the licensing situation of these tools for some time.

There are only two viable scenarios I can see:

- LLM output is unusable and a copyright mess that cannot be incorporated in any FOSS project
- *All* LLM output is effectively in the public domain.

I am willing to accept either one of those, but the lack of knowledge of which situation we are facing makes me concerned about LLM based contributions entering FOSS projects on copyright grounds.

(There are plenty of other debates one can have about LLMs also, I have scoped them out of this particular thread.)

Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
sindarina@ngmx.com ("Sindarina, Edge Case Detective") wrote:

This is your regularly scheduled reminder that, even if you are only using the FOTM LLM to automate the 'boring' code, you're still using work stolen from people who did not consent.

The idea that there is somehow a line that you cross when you use 'AI' to make art instead of having it write SQL queries, commit messages, or internal tooling for you is an illusion, something you come up with to fool yourself into thinking that there is a 'moral' use of these tools.

There is not. This is technology built on extraction, exploitation of real people and tangible resources, and your use of it, no matter how 'small' you think it is, makes you complicit.

If you use these tools, at least be honest about the fact that you care more about the convenience it provides for you than the lives it ruins, and stop trying to justify it with weasel words.

baldur@toot.cafe ("Baldur Bjarnason") wrote:

Either iPad apps that have been updated for iPadOS 26 are all quite buggy (rendering, visual state updates, lag) on iPadOS 18 or everybody used shortcuts to get the transition done and they’re buggy everywhere, but I’m not about to go out and replace my almost decade-old iPad to find out which it is

baldur@toot.cafe ("Baldur Bjarnason") wrote:

“Infinite Patience Is Not Good for Education”

https://biblioracle.substack.com/p/infinite-patience-is-not-good-for

> They have always been wrong. They were wrong before they got started and yet hundreds of millions of dollars have gone towards a project that was doomed from the outset, dollars that could have - at least in theory - gone to, I don’t know, human beings who teach.

Boosted by jwz:
Natasha_Jay@tech.lgbt ("Natasha :mastodon:đŸ‡ȘđŸ‡ș") wrote:

Seen in Hardcore Bikes (HCB), a bike shop in Edmonton Alberta đŸšŽâ€â™€ïž

BIKE GAS PRICES: Regular 0.00, Premium 0.00

Boosted by jwz:
MLNow@sfba.social ("Mission Local") wrote:

S.F. activists display 120-foot ‘End U.S. aid to Israel’ banner on Twin Peaks

Activists said they are "urging an end to U.S. funding for Israel's killing of over 100,000 Palestinians, Lebanese, and Iranians."

https://missionlocal.org/2026/04/sf-arms-embargo-israel-banner-twin-peaks/

Boosted by jwz:
wren6991@types.pl ("Luke Wren") wrote:

If you have heard the buzzword "agentic AI" but avoided finding out what it meant until now:

1. Someone figured out an LLM can do JSON RPCs by typing out the JSON token by token.

2. The LLM is run in a harness that regexes out the JSON from its output and executes the RPC.

3. The response is catted into the LLM's context window, also in the form of JSON that the LLM just reads.

4. People connect these harnesses to system shells on their dev machines.

5. Fast forward, this is a trillion-dollar industry held together by markdown files asking the LLM to please not curlbash from the internet.
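Steps 1-4 above can be caricatured in a few lines. This is a toy sketch only: `TOOLS`, `run_turn`, and the `add` tool are invented for illustration, no real model is attached, and real harnesses are far more elaborate (and, per step 5, hopefully less regex-dependent):

```python
import json
import re

# Toy "tools" the harness exposes; real systems wire these
# to shells, file systems, HTTP APIs, etc. (step 4 of the post).
TOOLS = {"add": lambda args: args["a"] + args["b"]}

def run_turn(model_output, context):
    """Steps 2-3: regex the JSON RPC out of the model's text,
    execute it, and cat the response back into the context window."""
    match = re.search(r"\{.*\}", model_output, re.DOTALL)
    if not match:
        context.append(model_output)  # plain text, nothing to execute
        return context
    rpc = json.loads(match.group(0))
    result = TOOLS[rpc["tool"]](rpc["args"])
    context.append(model_output)
    # The tool result goes back in as more JSON for the model to read.
    context.append(json.dumps({"tool_result": result}))
    return context

# Step 1: the model "types out" a JSON RPC token by token.
ctx = run_turn('I will compute this: {"tool": "add", "args": {"a": 2, "b": 3}}', [])
# ctx now ends with '{"tool_result": 5}'
```

The whole loop is just text in, text out: the LLM never calls anything directly, the harness does, which is why the post's step 5 (safety by politely worded markdown) is the punchline.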

Boosted by adele@social.pollux.casa ("AdĂ«le 🐁!"):
nixCraft ("nixCraft 🐧") wrote:

welcome to enterprise IT support ... 😅 https://xcancel.com/f%5Fa%5Finfinityy/status/2044868607822135728

Social media post by user named Bobson Dugnutt (@f_a_infinityy). The tweet reads: "I love working in IT, you can tell you’ve fixed a problem by the user suddenly never replying to you again." The post has over 114,000 likes. Post credit https://xcancel.com/f_a_infinityy/status/2044868607822135728

isagalaev ("Ivan Sagalaev :flag_wbw:") wrote:

Holy shit, I mastered border-image!!! #css

Boosted by adele@social.pollux.casa ("AdĂ«le 🐁!"):
fedify@hollo.social ("Fedify: ActivityPub server framework") wrote:

Naru, the Korean version of #Neocities, reportedly added an #ActivityPub implementation in just an hour using #Fedify. If you also want to implement ActivityPub quickly, give Fedify a try!

https://hackers.pub/@jihyeok/019da3d9-45b8-7629-96a8-b26bd62867c2

Boosted by soatok@furry.engineer ("Soatok Dreamseeker"):
0xabad1dea@infosec.exchange ("abadidea") wrote:

as some of you know, I was raised in a fundamentalist proto-maga environment, and so I have extensive first hand experience with cognitive dissonance and cult tactics

I wanted to point out that if someone wildly disagrees with you: they’re not lying. No matter how stupid or obviously wrong you think their take is, they’re not lying to you about what they believe; almost no-one does that.

To be clear, I’m not talking about national politicians or billionaires or obvious influence campaign bots. I’m talking about real, normal people that you know perfectly well are real, normal people but their take is so ridiculous or extreme to you that you feel tempted to conclude they’re lying.

There are real, actual reasons they believe those things and real, actual reasons they feel the need to tell you about it. If you can’t engage with that, you’ve not only already lost any chance of convincing them, but you’re setting yourself up to believe patently untrue things yourself, because you have taught yourself to shut down contrary thoughts with “they’re lying.”

accepting that the people I was told were agents of Satan *weren’t lying about their beliefs just to hurt me* is a big part of how I got out of fundamentalism.

Boosted by adele@social.pollux.casa ("AdĂ«le 🐁!"):
venelles ("Philip Wittamore") wrote:

After testing the demo I now have my own smolfedi instance up & running.

https://adele.pages.casa/md/blog/the-fediverse-deserves-a-dumb-graphical-client.md

https://codeberg.org/adele/smolfedij

#smolfedi #smolweb

Boosted by aredridel@kolektiva.social ("Mx. Aria Stewart"):
tim@union.place ("Tim W RESISTS") wrote:

The "logic" of blocking content from being archived on the Wayback Machine to prevent it being used for AI training blows my mind.

Locks only keep the honest people out. In this case, all you're doing is restricting access for quite probably the best, and possibly the only, long-term durable archive of the Internet. The downsides for society are countless.

Breathless headline, but good piece, from Wired: https://www.wired.com/story/the-internets-most-powerful-archiving-tool-is-in-mortal-peril/

Boosted by aredridel@kolektiva.social ("Mx. Aria Stewart"):
jmeowmeow@hachyderm.io ("Jeff Miller (orange hatband)") wrote:

@jchyip Theory of Constraints looms very large in my picture of model-generated code. Anything upstream or downstream of the augmented procedure is now under stress.

One obvious danger is that the temptation to ship the prototype becomes more of a plague than it has been.

With probabilistically assembled code trained from an overall mediocre and miscellaneous corpus, a boost to static and dynamic analysis looks to be in order.

Maybe liability case law is going to shape the landscape.

glyph ("Glyph") wrote:

@jwz @mjg59 I feel like it would be cheating somehow to quote Abelson from SICP, but ultimately that is what we're angling toward, and as much as the sentiment may be trite it is nevertheless correct

Boosted by jwz:
stevelieber ("Steve Lieber") wrote:

Character-defining moments for #SupermanDay. My favorite scene in my favorite project.
We managed a small restock of Superman's Pal Jimmy Olsen hardcovers in the Helioscope etsy shop (5 copies), and if you order one today I'll send it out Monday AM. https://helioscopepdx.etsy.com/listing/4311530238/back-in-stock-supermans-pal-jimmy-olsen

Superman and Jimmy Olsen on the roof of the Daily Planet. Art from Superman's Pal Jimmy Olsen. Written by Matt Fraction, drawn by Steve Lieber with INCREDIBLY valuable background assists by Tom Rogers. Color by Nathan Fairbairn. Letters by Clayton Cowles.
Still on the rooftop, Superman demonstrates some new super-powers to Jimmy Olsen.
Page of Superman and Jimmy hanging out and talking at night on the roof of the Daily Planet. In panel 4 Superman is holding a cat he rescued, and the cat is not happy.
5 copies of the book

Boosted by jwz:
heidilifeldman ("Heidi Li Feldman") wrote:

District Court judge permanently enjoins the “Kennedy Declaration,” voiding the regime’s effort to heavily curtail gender-affirming care. As always nowadays, we can’t count on higher federal courts to uphold this judgment. But no matter what we have another example of a trial court judge showing what it is to do justice in the face of fascism. 1/ https://storage.courtlistener.com/recap/gov.uscourts.ord.191371/gov.uscourts.ord.191371.93.0%5F1.pdf #LawFedi

glyph ("Glyph") wrote:

let me know if this feels like a real thing from your own experience or if I am, myself, merely spiraling into delusions because I am ass-deep in news articles and academic studies which are alternately depressing because of their dire conclusions or their trash methodology. cite-sick if you will

glyph ("Glyph") wrote:

proposed usage: it would be pretty awkward to go up to someone after a meeting and say "hey that was a bit dismissive I think that you are CYBER-PSYCHOTIC"

but maybe it could be a normal, non-aggro thing to say "I think you might be a little vibesick"