dysfun@treehouse.systems ("gaytabase") wrote:
trans day of variables
until today you thought they were constants, mwahaha
dysfun@treehouse.systems ("gaytabase") wrote:
happy trans day of violence, my pretties
jscalzi@threads.net ("John Scalzi") wrote:
Happy Trans Day of Visibility. I know trans people and I know fascists who are making life difficult for trans people. The trans people are pretty cool and I'm glad I know them. The fascists can go fall down a deep dark hole.
Boosted by adam@social.lol ("Adam"):
neatnik@social.lol ("Neatnik") wrote:
Happy Trans Day of Visibility! To all of my trans friends: I see you, you’re beautiful, and you matter. :prami_hearts_red:
baldur@toot.cafe ("Baldur Bjarnason") wrote:
The worst case scenario here is Götterdämmerung/ragnarök so let’s just hope it doesn’t come to that.
baldur@toot.cafe ("Baldur Bjarnason") wrote:
AFAICT (and I'm very much not an expert), some of the places to watch for the really serious warning signs: India (food), Australia (energy-intensive mining), Japan (plastics), South Korea and Taiwan (semiconductors and other industry). Shortages are likely to be a much bigger issue than the market price of oil.
baldur@toot.cafe ("Baldur Bjarnason") wrote:
Trying to read up on and comprehend the economic impact of certain current events is outright nightmare fuel. There’s a non-zero chance this will play out in a substantially worse way than the pandemic 😬
aredridel@kolektiva.social ("Mx. Aria Stewart") wrote:
RE: https://mastodon.world/@paninid/116324402697434215
This is a huge problem and we need to be holding people making chatbot interfaces accountable for some of it. The _shape_ of these tools matters immensely. They need to be presented not just as 'might be wrong' in a disclaimer sense; their epistemic status needs to be deeply integrated into the UI. These are suggestions, leads, pointers, rough summaries.
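[To make that concrete, here is a minimal sketch of what integrating epistemic status into a chatbot UI's data model could look like. The `EpistemicStatus` type, the labels, and the `render` function are all invented for illustration, not drawn from any real product:]

```typescript
// Hypothetical data model: every model output carries an explicit epistemic
// status, so the UI is forced to frame it as a suggestion, lead, pointer,
// or rough summary rather than as an authoritative answer.
type EpistemicStatus = "suggestion" | "lead" | "pointer" | "rough-summary";

interface ModelOutput {
  text: string;
  status: EpistemicStatus; // required, not an optional footnote disclaimer
  sources: string[];       // verifiable links, possibly empty
}

// The renderer branches on status, so the uncertainty is part of the
// interface itself rather than boilerplate below the input box.
function render(output: ModelOutput): string {
  const labels: Record<EpistemicStatus, string> = {
    "suggestion": "Suggestion: verify before acting",
    "lead": "Lead: a starting point, not a finding",
    "pointer": "Pointer: see the sources",
    "rough-summary": "Rough summary: may omit or distort details",
  };
  const sources = output.sources.length > 0
    ? `Sources: ${output.sources.join(", ")}`
    : "No sources available; treat with extra skepticism.";
  return `[${labels[output.status]}]\n${output.text}\n${sources}`;
}
```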
Boosted by aredridel@kolektiva.social ("Mx. Aria Stewart"):
paninid@mastodon.world ("Coach Pāṇini ®") wrote:
#CognitiveSurrender is when people give up their own thinking: following #ChatGPT's recommendations 90%+ of the time when it was correct, and still following its advice ~80% of the time when it was completely wrong.
https://futurism.com/artificial-intelligence/study-do-what-chatgpt-tells-us
aredridel@kolektiva.social ("Mx. Aria Stewart") wrote:
Also, honestly? Dumb models are easier to understand and less likely to try to escape the box while solving problems.
Easier to trick, but that's always a matter of how much easier, not whether, and we need to be building processes to mitigate anyway.
aredridel@kolektiva.social ("Mx. Aria Stewart") wrote:
RE: https://social.coop/@eloquence/116297123919323315
Absolutely this.
MiniMax M2.7 and GLM-5.1 are incredibly capable.
And I think _tooling_ (that is: traditional, deterministic as can be software) to guide and give structure to the probabilistic problem-solving is actually incredibly effective. And better, in terms of energy, very very cheap. You can throw 500 billion more parameters and 2 TB of RAM and many megawatts at things to make them better... or you can throw tools to shape things at them.
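[A sketch of what that tooling idea can mean in practice, with assumed shapes rather than any particular framework: deterministic code defines the contract, and the probabilistic model is retried until its output satisfies it. `ModelCall` is a stand-in for whatever inference API is in use:]

```typescript
// Deterministic tooling wrapped around a probabilistic model. The guardrail
// itself is plain, traditional software.
type ModelCall = (prompt: string) => Promise<string>;

interface Extraction {
  title: string;
  year: number;
}

// Deterministic validator: the output either parses into exactly the shape
// we asked for, or it is rejected. No judgment calls.
function validate(raw: string): Extraction | null {
  try {
    const parsed = JSON.parse(raw);
    if (typeof parsed.title === "string" && Number.isInteger(parsed.year)) {
      return { title: parsed.title, year: parsed.year };
    }
  } catch {
    // unparseable output falls through to rejection
  }
  return null;
}

// The tool shapes the model: retry with feedback until the output conforms,
// or fail after a fixed budget instead of passing junk downstream.
async function extract(
  callModel: ModelCall,
  text: string,
  maxTries = 3,
): Promise<Extraction> {
  let prompt = `Return JSON {"title": string, "year": integer} for: ${text}`;
  for (let i = 0; i < maxTries; i++) {
    const result = validate(await callModel(prompt));
    if (result !== null) return result;
    prompt += "\nThat was not valid JSON of the requested shape. Try again.";
  }
  throw new Error("model never produced conforming output");
}
```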
Boosted by aredridel@kolektiva.social ("Mx. Aria Stewart"):
eloquence@social.coop ("Erik Moeller") wrote:
IMO the most likely cause of any kind of #AI bubble pop is Chinese models breaking through. And at least in the open inference markets, that appears to be happening already:
"Since February, Chinese AI models made by groups such as DeepSeek and MiniMax have overtaken US rivals in token consumption, according to OpenRouter data"
A true SOTA model from China could send markets into a tailspin just like the original DeepSeek release did.
https://www.ft.com/content/2567877b-9acc-4cf3-a9e5-5f46c1abd13e?syn-25a6b1a6=1
Boosted by aredridel@kolektiva.social ("Mx. Aria Stewart"):
ruby@toot.cat ("Ruby S.") wrote:
There's no better time to start transitioning our lives and our infrastructure away from fossil fuel dependence
From: @stroughtonsmith
https://mastodon.social/@stroughtonsmith/116324021476185254
Boosted by soatok@furry.engineer ("Soatok Dreamseeker"):
fenrislorsrai@blahaj.zone ("Fenris Lorsrai") wrote:
@karl@infosec.exchange @soatok@furry.engineer find your local harm reduction service, comp them a table, and they will generally be THERE unless they have a previous scheduling conflict.
free mobile events are a big part of their mission, so they really just need an invite to set up somewhere. like vampires, they need the invite because they're going to steal your blood for a snap test.
they'll also usually bring condoms, lube, dental dams, drug test strips, and NARCAN.
(yes, I have invited them to many events, plus had them do monkeypox vaccinations at events)
Boosted by glyph ("Glyph"):
dreid@wandering.shop wrote:
Today is my birthday. My wife got me the complete hitchhiker's guide trilogy.
I bet you can't guess how old I am.
If you'd like to get me something: I am proud to share my birthday with Trans Day of Visibility, so perhaps make a donation to https://tcpipeline.org/
baldur@toot.cafe ("Baldur Bjarnason") wrote:
I keep coming back to the leaded petrol analogy for LLMs and coding
Harms that are manageable when it's only used by a small number of experts become catastrophic pollution when it's used broadly throughout society
If LLMs were only used by a small number of experienced devs working with well-engineered guardrails, we'd have less of a problem
But once they start getting more commonly used, they start to pollute the entire ecosystem and the only way forward is stiff regulation for everybody
Boosted by aredridel@kolektiva.social ("Mx. Aria Stewart"):
quinn@social.circl.lu ("Quinn Norton") wrote:
RE: https://mastodon.social/@zackwhittaker/116323734408381625
My trans homies, remember that your stealth roll is nerfed today, but you can use hit dice to restore more HP than usual 🙌
pzmyers@freethought.online ("pzmyers 🕷") wrote:
I'm happy to see one state is bold enough to tax the rich.
baldur@toot.cafe ("Baldur Bjarnason") wrote:
So, still digesting the argument of this article and haven't quite made up my mind about it.
But the fact that various forms of self-aware reasoning towards problem-solving exist across the animal kingdom, in ways that don't seem to be a direct function of neural network size, would seem to indicate that some mechanism other than network size is at play (network size being the core operating thesis of "AI").
"Studies on animal minds suggest consciousness is not computation"
Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
zachleat@zachleat.com ("Zach Leatherman") wrote:
It needs to be known that @cloudfour are some of the best folks in the business and now is your chance to learn first-hand: https://cloudfour.com/thinks/more-projects-please/
aredridel@kolektiva.social ("Mx. Aria Stewart") wrote:
Also you're not going to have much luck bringing ethics to an economics fight. (God I wish that were easier to bring, but you have to bring _regulation_ to an economics fight if you want to embed ethics.)
Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
davidgerard@circumstances.run ("David Gerard") wrote:
the precise timeline of how OpenAI fucked over the RAM market
> October 2025: Sam Altman flies to Seoul and signs simultaneous deals with Samsung and SK Hynix for 900,000 DRAM wafers per month. That's 40% of global supply. Neither company knew the other was signing a near-identical commitment at the same time.
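[Taking the quoted figures at face value, 900,000 wafers per month at 40% of supply would imply a global DRAM capacity of roughly 900,000 / 0.40 ≈ 2.25 million wafers per month.]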
Boosted by jsonstein@masto.deoan.org ("Jeff Sonstein"):
BleepingComputer@infosec.exchange wrote:
Hackers hijacked the npm account of the Axios package, a JavaScript HTTP client with 100M+ weekly downloads, to deliver remote access trojans to Linux, Windows, and macOS systems.
fromjason ("fromjason.xyz ❤️ 💻 ✍️ 🥐 🇵🇷") wrote:
So Trump's ballroom is actually a military bunker underneath.
Why. Are. Rich. People. Building. Bunkers.
pzmyers@freethought.online ("pzmyers 🕷") wrote:
The "future of education" is blindingly tasteless and vapid.
https://freethoughtblogs.com/pharyngula/2026/03/31/the-future-of-education/
aredridel@kolektiva.social ("Mx. Aria Stewart") wrote:
A lotta y'all gotta learn the difference between _LLM Use_ and _slop generation_.
You can make slop by hand like the old days and it's still slop. You can use an LLM with good engineering practices as guardrails and end up without slop.
(If it's creative work without engineering guardrails though, it's almost certainly slop.)
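[A sketch of what "engineering practices as guardrails" can mean in code: treat the model's output as an untrusted patch that must survive the same deterministic checks as a human's. The file path and commands below are illustrative assumptions, not a prescription:]

```typescript
// Treat generated code as an untrusted contribution: it only lands if the
// existing deterministic toolchain (typecheck, lint, tests) accepts it.
import { execSync } from "node:child_process";
import { writeFileSync } from "node:fs";

function gateGeneratedCode(generatedCode: string, targetFile: string): boolean {
  writeFileSync(targetFile, generatedCode);
  try {
    // The guardrail is the toolchain, not the model's confidence.
    execSync("npx tsc --noEmit", { stdio: "inherit" });
    execSync("npx eslint .", { stdio: "inherit" });
    execSync("npm test", { stdio: "inherit" });
    return true; // survived every deterministic check
  } catch {
    execSync(`git checkout -- ${targetFile}`); // discard the slop
    return false;
  }
}
```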
db@social.lol ("David Bushell 🪿") wrote:
excited to try this but disappointed that my paid Proton subscription gets zero benefits (unless i pay more)
Boosted by aredridel@kolektiva.social ("Mx. Aria Stewart"):
ngaylinn@tech.lgbt ("Nate Gaylinn") wrote:
Sigh. I just discovered an experiment very similar to what I've been designing over the last several weeks.
This has to be one of the most difficult and discouraging parts of science.
I don't feel as "scooped" as I used to when this happens. I've come to realize that I almost always have something else to add, so this is generally a sign to pivot rather than give up.
But I did waste effort researching what this paper clearly explains, and now I have to stop and rethink everything I'm doing, and maybe even start over from scratch on some things. Frustrating!
What bothers me most about this is how accidental it all is. The scientific literature is barely organized. I find things because someone points them out, or because I have the right magic keywords. Both of those methods are painfully unreliable, and I often find things much later than I'd like.
There's gotta be a better way.
Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
PavelASamsonov ("Pavel A. Samsonov") wrote:
The main problem with checking AI outputs is that you need to have an idea of what you actually wanted it to do, and most people use AI as a substitute for having to figure that out.