pzmyers@freethought.online ("pzmyers 🕷") wrote:
I know that quarters on cruise ships are cramped, but this is ridiculous.
https://freethoughtblogs.com/pharyngula/2026/04/27/spiders-in-spaaaaaaaace/
dysfun@treehouse.systems ("gaytabase") wrote:
what am i doing? oh remember that silly idea to generate code that does an optimal node lookup for each node in a data structure? well as it turns out one of my friends has done that already and it's not completely impractical.
anyway i was looking at how we could shave reading a HOT node down to fewer versions of the code that would need to be generated. maybe we could get the entropy small enough that we could reasonably just generate all of them in advance?
well, if you compress the entropy enough, we can do it with our current setup and not need the code generation at all! i've now managed to do that in full for avx512 even with arbitrary string keys, i only need 4 versions for 4 node types! this is of course only possible because AMD had fixed their slow pext/pdep by zen 3, before they supported avx-512 at all, otherwise i would have a real mess on my hands
avx2 is harder owing to the lack of predication and the smaller register size, but if i manage to figure it out, it's only 5 versions. SSE is a fairly abysmal 9 versions - incredibly tedious to write but maybe workable. i say maybe, because this doesn't account for the mess induced by the vast number of ways you can implement pext/pdep. even if i only go back as far as zen 2 (which i'm currently using, so i will go back at least that far...) i'll have to provide twice as many implementations.
next year will be the 10 year anniversary of zen, maybe i can talk myself into believing that this is old enough that no one would give a shit about older hardware. i wouldn't be talking about dropping support entirely, just falling back to a portable algorithm that would probably be quite slow. it would certainly save implementing SSE versions...
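For reference, the portable fallback path mentioned above would rest on a scalar emulation of pext. A minimal sketch (the function name is illustrative, not from the original posts):

```c
#include <stdint.h>

/* Portable bit-extract: gathers the bits of `src` selected by `mask`
 * into the low bits of the result, preserving order. Functionally
 * equivalent to the BMI2 _pext_u64 intrinsic, for CPUs where pext is
 * absent or microcoded (notably AMD before Zen 3, where it could take
 * dozens to hundreds of cycles depending on the mask). */
static uint64_t pext_fallback(uint64_t src, uint64_t mask)
{
    uint64_t result = 0;
    uint64_t out_bit = 1;                 /* next output bit position */
    while (mask) {
        uint64_t lowest = mask & -mask;   /* isolate lowest set mask bit */
        if (src & lowest)
            result |= out_bit;
        out_bit <<= 1;
        mask &= mask - 1;                 /* clear that mask bit */
    }
    return result;
}
```

This runs one iteration per set mask bit, which is exactly why a slow-pext machine wants a different algorithm rather than a drop-in emulation on the hot path.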
fromjason ("fromjason.xyz ❤️ 💻 ✍️ 🥐 🇵🇷") wrote:
The Nightlife EP immediately takes me back to 2011. One of my all-time favorite albums by one of my all-time favorite bands, Phantogram.
pzmyers@freethought.online ("pzmyers 🕷") wrote:
I'm worried that I might start liking birds, even birds that might eat spiders. Do I have to pick a side?
https://freethoughtblogs.com/pharyngula/2026/04/27/learning-about-birds/
Boosted by pzmyers@freethought.online ("pzmyers 🕷"):
LunaDragofelis@void.lgbt ("Luna Dragofelis ΘΔ🏳️⚧️🐱") wrote:
Gender segregation in public toilets and changing rooms is a weak and cisheteronormative substitute for well-designed stalls with actual privacy.
dysfun@treehouse.systems ("gaytabase") wrote:
i've had a look in agner fog's spreadsheet and frankly i'm none the wiser - zen 5 doesn't exactly look like a speed demon either and he doesn't list his methodology for masks (and then frankly it'd be useful to see at least 2 figures - mask on and mask off (and probably at least mask half full as well)).
meanwhile, i have calculated the worst case for doing it on the cpu and it looks like it may or may not be better, depending on how masks are processed, provided we know the size in advance. there are some means by which i can know the size in advance, of course (the simplest being padding to the worst case size with inert data).
overall i wouldn't say it's looking promising for avx512 gather intrinsics, but again i'd have to write a benchmark.
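The scalar baseline the gather intrinsics would have to beat is just an index-load loop, and padding to a known worst-case size (as suggested above) lets the compiler fully unroll it. A hypothetical sketch of that baseline:

```c
#include <stddef.h>
#include <stdint.h>

/* Scalar "gather": the baseline to benchmark hardware gather against.
 * On implementations where gather is microcoded (e.g. AMD Zen 4's
 * AVX-512 gathers), a plain load loop like this can win, so measuring
 * both is the only way to know. If `n` is a compile-time constant
 * (data padded to the worst-case size with inert entries), the loop
 * unrolls completely. Indices are assumed to be in range. */
static void gather_u32(uint32_t *dst, const uint32_t *base,
                       const uint32_t *idx, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = base[idx[i]];
}
```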
cstanhope@social.coop ("Your weary 'net denizen") wrote:
A picture is worth a thousand words, they say:
Boosted by ChrisWere@toot.wales ("Chris Were ⁂🐧🌱☕"):
ZackPolanski@mastox.eu ("Zack Polanski") wrote:
RT: @implausibleblog Zack Polanski talks about the need to make buses more affordable and accessible, and talks about addressing wealth inequality
"It's important our cities and rural areas have cheaper public transport"
"Sometimes when I visit rural areas and say you have buses every few hours,
Boosted by cstanhope@social.coop ("Your weary 'net denizen"):
recursive@hachyderm.io ("recursive 🏳️🌈") wrote:
It annoys me that we are culturally prone to talking about impressive technology we (personally) don't understand as "magic"
I think this does a disservice to those who may hear us (especially children) and thus think, "I could never do something like that!"
It's a lot of hard work and cooperation that got us the things that exist today. And you could be someone who makes the next amazing thing exist
Boosted by cstanhope@social.coop ("Your weary 'net denizen"):
aparrish@friend.camp ("allison") wrote:
fascist billionaires HATE this 1 weird trick to lower your claude code bill
baldur@toot.cafe ("Baldur Bjarnason") wrote:
I'm worrying that this is starting to matter a lot as it looks like expertise is an important qualifier limiting the harm that comes from using LLMs for coding. If this is a general observation and not just a reflection of my poor career choices then it seems likely that the client-side web specifically will be hit harder by the code "slop-apocalypse" than other software.
baldur@toot.cafe ("Baldur Bjarnason") wrote:
Basically, I know a lot of web devs with 10+ years of experience whose knowledge about any given topic in their field is at about the level of a recent graduate. Most of what they know is hearsay and superstition and most of what they do is play around with trends
And, again this is in my experience and I may just have been quite unlucky, this is more common in web dev than other parts of the software industry.
baldur@toot.cafe ("Baldur Bjarnason") wrote:
One issue I rarely see mentioned are the sharp differences between expertise progression in general web dev, online web dev subcommunities, and the rest of the software dev community.
Namely, in my experience, it's very common in general web dev for people not to have the expertise you'd expect from their seniority, the kind you do see in people who have the dedication and interest to participate in discussions on specialities within their field.
Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
phoronix@masto.ai ("Phoronix") wrote:
Ubuntu Linux Will Begin Landing AI Features Throughout The Next Year
Now that Ubuntu 26.04 LTS has shipped, Canonical is opening up on their next major focus for Ubuntu development: lots of AI features...
https://www.phoronix.com/news/Ubuntu-AI-Features-2026
Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
jmaris@eupolicy.social ("Jordan Maris 🇪🇺 🇺🇦 #NAFO") wrote:
The west irreversibly diminished its own military industry for cost savings. Now it is doing the same to its software industry.
https://techtrenches.dev/p/the-west-forgot-how-to-make-things
Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
VeroniqueB99 ("Vee") wrote:
Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
jcoglan wrote:
RE: https://mastodon.social/@hynek/116476031032569096
this is what I mean when I say genAI has got people deciding to act stupid on purpose. things like "prompt injection" are just things we previously recognised as glaring category errors, but suddenly we can't recognise very obviously terrible ideas because they're wrapped up in the bullshit machine
Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
jcoglan wrote:
you can argue the evidence on deskilling effects at the individual level but I think it's beyond doubt that this is happening at an institutional level
dysfun@treehouse.systems ("gaytabase") wrote:
> Allah will help us land
That's... I don't think Allah would advise you to rely entirely on him when you could just... not go into terrible weather.
dysfun@treehouse.systems ("gaytabase") wrote:
remember that Zen was AMD's comeback. it took them 5 gos to actually make a vaguely acceptable cpu. and it draws too much fucking power.
dysfun@treehouse.systems ("gaytabase") wrote:
AMD before Zen2 only has 128-bit vector execution units anyway,
thanks AMD.
Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
tommorris ("Tom Morris") wrote:
When someone tells me we need to replace unpleasant American surveillance tech with the exact same thing but open source and hosted in a data centre in Brussels.
My heart swells with pride when we can get the drone murder decider running on a GPU in a Welsh AI growth zone with a picture of the Queen Mother on the side. It’s very important that human rights are undermined by your own government rather than a foreign corporation, after all. #SovereignTech
Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
alineblankertz@indieweb.social ("Aline Blankertz") wrote:
"AI" is a huge redistribution scheme from the bottom to the top. It pays off for the billionaires no matter how big the bubble is. All we can do is limit the damage.
Pointing out the lack of profitability of "AI" products is pretty much meaningless. Investors have been speaking about this for years WHILE making billions from their investments into "AI". It is profitable for them, and that is what they care about.
I wrote about this already some time ago:
https://www.structural-integrity.eu/crashing-hard-why-talking-about-bubbles-obscures-the-real-social-cost-of-overinvesting-into-artificial-intelligence/
Boosted by dysfun@treehouse.systems ("gaytabase"):
jneen@unstable.systems ("jneen collective") wrote:
half the point of programming-tool design is to reduce the need for hypervigilance on the user.
if we're designing tools that require you to be *more* hypervigilant, legitimately what use are they?
baldur@toot.cafe ("Baldur Bjarnason") wrote:
RE: https://mastodon.acm.org/@mxp/116475436932395582
“LMs Corrupt Your Documents When You Delegate”
https://arxiv.org/abs/2604.15597
> Our large-scale experiment with 19 LLMs reveals that current models degrade documents during delegation: even frontier models [...] corrupt an average of 25% of document content by the end of long workflows
The only use case that didn't show catastrophic degradation was coding, although bear in mind that this only attempts to benchmark degradation and doesn't assess design, reliability, or quality of the output.
dysfun@treehouse.systems ("gaytabase") wrote:
aaaaanyway, i have come up with a wonderful trick which would work marvellously on zen 5 without a great deal of effort. but less marvellously on zen 4 and i have no fucking clue how well intel's doing with avx512 anymore.
dysfun@treehouse.systems ("gaytabase") wrote:
these things are insidious because the way you're told to deal with ISA support is to ask whether it's supported, via cpuid or something. and so you ask and it helpfully says "yes, i support that". it doesn't say anything about whether it's dogshit slow.
and how do you work that out? glad you asked:
- know the bug exists at all, somehow
- implement a fallback
- sniff the processor model and switch between the two implementations.
- have a few of these stack up, give up and choose which architectures to penalise.
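The model-sniffing step in that list can be sketched as a pure predicate over the decoded cpuid fields. This is an illustrative example, not from the original posts; the fact it encodes is that AMD family 17h (Zen 1/Zen+/Zen 2, and earlier Excavator) microcoded pext/pdep, while family 19h (Zen 3/Zen 4) and later run them fast, as does Intel since Haswell:

```c
#include <stdbool.h>

/* CPUID's BMI2 bit says *whether* pext/pdep exist, not whether they
 * are fast. On AMD before family 19h they are microcoded and can take
 * tens to hundreds of cycles, so on top of the feature bit you also
 * have to sniff vendor and family. `family` is the already-combined
 * value (base family + extended family) decoded from CPUID leaf 1. */
static bool pext_is_fast(bool is_amd, unsigned family, bool has_bmi2)
{
    if (!has_bmi2)
        return false;   /* instruction absent entirely */
    if (is_amd && family < 0x19)
        return false;   /* pre-Zen 3: present but dogshit slow */
    return true;        /* Intel with BMI2, or AMD Zen 3 and later */
}
```

A dispatcher would call this once at startup and select between the pext path and the fallback implementation, which is exactly the "have a few of these stack up" problem.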
dysfun@treehouse.systems ("gaytabase") wrote:
> Gather/Scatter slow on AMD's Zen4 implementation.
thanks AMD, another fucking performance bug to work around
dysfun@treehouse.systems ("gaytabase") wrote:
my own hopeless cheeriness is starting to stretch a bit thin
db@social.lol ("David Bushell 🪿") wrote:
solid advice: avoid GoDaddy https://vale.rocks/micros/20260427-0430