dysfun@treehouse.systems ("gaytabase") wrote:
in the netherlands this usually presents as headwinds there, headwinds on the way back when cycling.
dysfun@treehouse.systems ("gaytabase") wrote:
watching about singapore flight 319 and this is amazing, they've got dutch weather there - tailwinds on opposite runways.
Boosted by jwz:
tomjennings@tldr.nettime.org ("tom jennings") wrote:
This asshole driving his codpiece on the LA River Bike Trail. Came up behind, took this photo. They stopped, the passenger got out. I said something ingenious like "get off the bike path asshole" as I passed and wow, they went nuts.
Rode up ahead to call 911 (correct use of cops in this instance, I think); the passenger got out to threaten me, etc.
Saw him pull into a warehouse that fronts the path. I'll probably call back to report that.
Happy to harass a cyber twuck with police. They deserve each other.
Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
darkuncle@infosec.exchange ("Scott Francis") wrote:
This is really well stated:
“I guess what I’m saying is great consumer products don’t make young people feel anger and despair the more they use them.” — Nilay Patel on AI
https://bsky.app/profile/reckless.bsky.social/post/3mj3vb6wxnc2l
Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
benfry@information.garden ("Ben Fry") wrote:
RE: https://thepit.social/@peter/116376219055579156
I simply cannot get my head around why anyone would let AI anywhere near their data analysis work.
Why would you add something to your work that can drop its accuracy by half? by 10%? by 1%? What would be an acceptable amount? What is the possible upside that would make this worth it?
I'm floored by the number of people in this field who take themselves all too seriously but are out there starting their whatever *dot AI* companies to get in on the grift, or say things like “and of course, AI” like it's both obvious and inevitable. And how is this so shiny that even educators have taken leave of their senses?
What the f*k is the point of analyzing a dataset if you're ok with answers simply being incorrect? What are we even doing here?
Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
emilymbender@dair-community.social ("Prof. Emily M. Bender(she/her)") wrote:
Almost a year ago, I was described in the FT as "a Cassandra with a wry grin and twinkling eye", and was entertained because Cassandra (famously) was right.
It's actually not fun, though, to watch the world do things you've been warning against:
https://www.newstatesman.com/technology/2026/04/the-silent-coup
Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
uglyreykjavik.bsky.social@bsky.brid.gy ("Ugly Reykjavik") wrote:
They really do match. Hopefully, they'll be restored to their former quaint glory one day. #Iceland #photography #streetphotography #nature #landscape #naturephotography #landscapephotography #abandoned #decay #snow #rust
jscalzi@threads.net ("John Scalzi") wrote:
(I'm aware that some Hollywood A-listers have publicists/managers who make vetting questions a condition of interview access to their clients. I am not the right interviewer for that type of scenario. On the flip side, neither my publicists nor my manager ask for that from my interviewers, and also, I am not a Hollywood A-lister.)
jscalzi@threads.net ("John Scalzi") wrote:
Having been on both sides of the interview table: Veronica is correct, this is not it. As an interviewer I don't clear questions ahead of time and as a subject I don't ask for that. At most, as an interviewer, I'll entertain requests about subjects the interviewee doesn't want to talk about. When I'm asked that same question by an interviewer I tell them to ask whatever they want and if I don't want to answer I will charmingly deflect.
jscalzi@threads.net ("John Scalzi") wrote:
Charles Emerson Winchester the Third, on the other hand, handily defeats her; he's surprisingly wily.
Boosted by jwz:
RadicalGraffiti@todon.eu ("Radical Graffiti") wrote:
"No Flock"
Poster spotted in Bloomington, Indiana denouncing Flock surveillance cameras.
Boosted by jwz:
RadicalGraffiti@todon.eu ("Radical Graffiti") wrote:
Anti-surveillance poster spotted in Sydney
Boosted by jwz:
digyoursoul@universeodon.com ("Voting is Your POWER") wrote:
Apple is closing down the first of its US stores to unionize.
Apple says that because of the collective bargaining agreement with these workers, they “couldn’t offer to transfer them to nearby locations.”
The union is outraged, and exploring options to hold Apple accountable.
https://bsky.app/profile/moreperfectunion.bsky.social/post/3mj5khm64fk2b
Boosted by soatok@furry.engineer ("Soatok Dreamseeker"):
toonie@meow.social ("🌿Toonie🍸") wrote:
Boosted by glyph ("Glyph"):
mcc wrote:
@jplebreton y'know, i think i knew this, but i never actually bothered learning what any of the other colors *correspond* to. like what are the other color book standards for.
*checks*
hahahahahaa
Boosted by glyph ("Glyph"):
jplebreton ("JP") wrote:
https://en.wikipedia.org/wiki/Rainbow%5FBooks TIL "red book audio" was part of a larger set of CD standards books with different colors. feels like discovering new Ages of Myst.
casual thinkpieces and lazy attempts at scicomm are what set me off, but the actual thing I'm mad about is that we are ruled by people with a child's understanding of the world and the economy, and that's actually really bad
seriously just imagine the plot of one of the movies that doomers seem to think are documentaries, like Terminator 2. imagine the scene where the T-1000 is getting pelted with bullets. instead of seamlessly autonomously healing, imagine it has to lie down and wait for a human to place an order for $1,000,000 of NVIDIA GPUs to be delivered in a shipping container and then a construction crew to set up a methane generator to run for two weeks straight before it got up again. is that still scary?
like if anyone had halfway-plausible "grey goo" nanotech that could do anything that looked like computation, that might be worrying. a locally viable self-reproducing platform that can make another one of itself from a pile of dirt, even if it's like, special dirt, that might scare me a little bit. but an overlord hive-mind that requires an uninterrupted global high-purity helium supply chain just to make ONE more of itself is supposed to be a threat?
put ME on CNN and MSNBC, you cowards.
RE: https://mastodon.social/@glyph/115076275195904439
I've written about this before and I will probably do it again. but I don't know what else to do but repeat myself when allegedly serious, internationally-renowned academic experts and influential public intellectuals are just going out there and saying stuff that would get you laughed out of a late night freshman dorm room conversation about philosophy
doomers might look at my rant here and think, "but wait, once it's self-sustaining, even a little, it's TOO LATE, it's already out of control!!!" and to that I say: no. not even close. look at the evolution of *any* business. managing resource flows is really hard. there is an off-ramp every single day
if, in order to achieve your out-of-control doomsday robot scenario, a trillion dollars worth of human effort must be expended annually, and if any of it stops for even a moment then the whole thing implodes and grinds to a halt, _you can stop worrying_ that it is "the machines" which dominate us
we are not even remotely close to a single LLM meaningfully constructing even a portion of the pipeline to train another LLM. you can sort of argue around the edges that maybe under certain synthetic conditions this is borderline possible now, but on the "singularity" progress bar, that is 0.5%
in order to be a singularity candidate, an AI would need to achieve vertical integration from silicon fabrication through logistics and integration, into operating systems and applications, with tight whole-system feedback from the robotics to the shipping to the power generation and back
it is so mind-meltingly frustrating to see people think that we are close to a "singularity" with current AI technology. here's a hint about when you could worry about a disruption so big that it might, even momentarily, *appear* to be a singularity:
a single corporation turning a profit even once
resources run out. processes hit bottlenecks. optimizations reach physical limits. perpetual motion machines are impossible for reasons that are pretty well understood
the idea that a "singularity" is possible is just the idea that you can turn "mistaking a sigmoid for an exponential" into a millenarian religion
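[Editor's aside: the sigmoid-vs-exponential confusion above is easy to demonstrate numerically. This is a minimal sketch, not anything from the thread; the curve parameters (`L`, `k`, `t0`) are arbitrary choices for illustration. Early on, a logistic curve's successive growth ratios are nearly constant, exactly like a true exponential's, which is why extrapolating from the early phase is so tempting; later the same curve saturates.]

```python
import math

def logistic(t, L=1000.0, k=0.5, t0=20.0):
    # Standard logistic (sigmoid) curve: looks exponential at first,
    # then saturates at the carrying capacity L.
    return L / (1.0 + math.exp(-k * (t - t0)))

# Ratio of successive values: a pure exponential e**(k*t) would give
# a constant ratio of e**k at every step.
early = [logistic(t) / logistic(t - 1) for t in range(1, 6)]
late = [logistic(t) / logistic(t - 1) for t in range(40, 45)]

print(early)  # each ratio is close to e**0.5 ~ 1.65, indistinguishable from exponential growth
print(late)   # each ratio is close to 1.0: the curve has flattened out
```

Fitted only to the early points, an exponential model keeps climbing forever; the sigmoid does not.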
slightlyoff@toot.cafe ("Alex Russell") wrote:
Heading back to the US from two weeks in Korea, and I'm struck by how rational this policy response is. Western democracies are not showering themselves in glory by comparison:
https://www.korea.net/NewsFocus/policies/view?articleId=290296
jscalzi@threads.net ("John Scalzi") wrote:
I will be here!