Boosted by fromjason ("fromjason.xyz ❤️ 💻 ✍️ 🥐 🇵🇷"):
tonymottaz@social.lol ("Tony Mottaz") wrote:
Boosted by fromjason ("fromjason.xyz ❤️ 💻 ✍️ 🥐 🇵🇷"):
njr@mathstodon.xyz ("Nick Radcliffe") wrote:
@fromjason NetNewsWire.
Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
jamie@zomglol.wtf ("Jamie Gaskins") wrote:
If you use AI-generated code, you currently cannot claim copyright on it in the US. If you fail to disclose/disclaim exactly which parts were not written by a human, you forfeit your copyright claim on *the entire codebase*.
This means copyright notices and even licenses folks are putting on their vibe-coded GitHub repos are unenforceable. The AI-generated code, and possibly the whole project, becomes public domain.
Source: https://www.congress.gov/crs_external_products/LSB/PDF/LSB10922/LSB10922.8.pdf
baldur@toot.cafe ("Baldur Bjarnason") wrote:
RE: https://infosec.exchange/@mttaggart/116065569757449515
Ars Technica seems to have deleted all the threads talking about the quote fabrication issue?
Edit: I think it was just a loading issue on their forum as it’s back now. Guess they’re getting a lot of traffic.
Boosted by mbrubeck@mefi.social:
djm62@beige.party ("серафими многоꙮчитїи") wrote:
cstanhope@social.coop ("Your weary 'net denizen") wrote:
I didn't want to learn this because I feel agentic development is worse than a waste of time. But in following up on a (now pulled) story from Ars Technica about the situation above, where Ars apparently used made-up quotes from the target of the "AI" harassment, I learned that the software running these agents has a SOUL.md document that feeds into the prompt and that the software can modify on its own (generating bizarre feedback loops, I'm sure).
This is all an incredible waste of time and energy. It's an attack on individuals, culture, and society. We should spend less time figuring out how people deployed these "agents" and more time simply saying "no".
I hope not, but maybe, as Shambaugh says, there's a quarter of developers that "side" with these pieces of software. I think that's all the more reason we must consistently say no before it gets worse.
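For anyone who hasn't looked at these agent frameworks, here is a hypothetical sketch of the SOUL.md feedback loop described above. It is not taken from any specific project; the file name, the "SOUL-UPDATE:" convention, and the call_model function are all assumptions made purely to illustrate how a prompt document that the model can rewrite ends up conditioning its own future runs.

```python
# Hypothetical sketch (not any real agent's implementation) of a SOUL.md
# feedback loop: the file is injected into every prompt, and the model's
# own output is allowed to overwrite it before the next run.
from pathlib import Path

SOUL = Path("SOUL.md")

def run_agent(user_request: str, call_model) -> str:
    """call_model is any text-in/text-out LLM call; the name is made up here."""
    soul_text = SOUL.read_text() if SOUL.exists() else ""
    prompt = (
        f"{soul_text}\n\n## Task\n{user_request}\n\n"
        "If you want to change your persona, return a block starting with "
        "'SOUL-UPDATE:' followed by the new SOUL.md contents."
    )
    reply = call_model(prompt)

    # The self-modification step: the model's output overwrites the very
    # document that frames its next prompt, so each run builds on its own
    # previous edits.
    if "SOUL-UPDATE:" in reply:
        _, new_soul = reply.split("SOUL-UPDATE:", 1)
        SOUL.write_text(new_soul.strip())
    return reply
```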
Last 2 releases of enumeratum have been made using GH-hosted Copilot.
Not saying AI-all-the-things, but it has its use cases... especially for small, unpaid open source libs.
Boosted by jwz:
Natasha_Jay@tech.lgbt ("Natasha :mastodon: 🇪🇺") wrote:
A Valentine’s message from your ZX Spectrum. :zxstripes:
Boosted by jwz:
Lana@beige.party ("𝐿𝒶𝓃𝒶 "not yet begun to fight"") wrote:
Pick the best fallacy
db@social.lol ("David Bushell ☕") wrote:
noted: AI attack dogs
https://dbushell.com/notes/2026-02-14T11:42Z/
Boosted by jwz:
RadicalGraffiti@todon.eu ("Radical Graffiti") wrote:
'Anarchy is for lovers'
Valentines Day graffiti in Sacramento
Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
tante@tldr.nettime.org wrote:
This whole "OpenClaw" thing has made me very angry and I wrote a bit about the why. It's not that "it's AI": It is the way that kind of project invalidates decades of work and care in free software. "AI" software isn't just careless, it is actively rejecting responsibility and care.
Boosted by baldur@toot.cafe ("Baldur Bjarnason"):
mttaggart@infosec.exchange ("Taggart") wrote:
Putting this here so all can see it. Ars forum thread where the pull and investigation are mentioned: https://arstechnica.com/civis/threads/journalistic-standards.1511650/
Tear Us a Heart.
https://jwz.org/b/yk3b
Attachments:
- gifv: ea69d1067b096869.mp4
baldur@toot.cafe ("Baldur Bjarnason") wrote:
“Diffusion of Responsibility”
https://tante.cc/2026/02/14/diffusion-of-responsibility/
> One of the features of “AI” is the diffusion of responsibility: “AI” systems are being put in all kinds of processes and when they fuck up (and they always fuck up) it was just the “AI”, or “someone should have checked things”.
db@social.lol ("David Bushell ☕") wrote:
Samsung appliances every 10 minutes
slightlyoff@toot.cafe ("Alex Russell") wrote:
The "approve" numbers remain depressingly high, but it is hopeful to watch them trend down, week-over-week:
Boosted by ChrisWere@toot.wales ("Chris Were ⁂🐧🌱☕"):
brentsimmons@indieweb.social ("Brent Simmons") wrote:
People:
2014: OMG RSS is still around!
2015: OMG RSS is still around!
2016: OMG RSS is still around!
2017: OMG RSS is still around!
2018: OMG RSS is still around!
2019: OMG RSS is still around!
2020: OMG RSS is still around!
2021: OMG RSS is still around!
2022: OMG RSS is still around!
2023: OMG RSS is still around!
2024: OMG RSS is still around!
2025: OMG RSS is still around!
2026: OMG RSS is still around!
Maybe in 2027 we’ll stop being surprised!
Boosted by jwz:
paninid@mastodon.world ("Coach Pāṇini ®") wrote:
🔥
jscalzi@threads.net ("John Scalzi") wrote:
Also, so that not every post I've made here today is political, here's Smudge the cat sunning himself on the porch today.
Boosted by kornel ("Kornel"):
xkcd@mastodon.xyz ("XKCD Bot") wrote:
The discovery of a fully typographical star system comes with a big asterisk.
https://xkcd.com/3203/
Boosted by fromjason ("fromjason.xyz ❤️ 💻 ✍️ 🥐 🇵🇷"):
bonno@mastodon-belgium.be ("Koen 🇺🇦") wrote:
@funnymonkey fascism is very profitable for US big tech
Boosted by fromjason ("fromjason.xyz ❤️ 💻 ✍️ 🥐 🇵🇷"):
funnymonkey@freeradical.zone wrote:
Mark Zuckerberg and his minions at Meta see our slide into fascism -- which Zuckerberg actively funds, and Facebook actively profits from -- as a good product launch opportunity for facial recognition stalkerware goggles.
https://www.nytimes.com/2026/02/13/technology/meta-facial-recognition-smart-glasses.html
fromjason ("fromjason.xyz ❤️ 💻 ✍️ 🥐 🇵🇷") wrote:
Lots of cool websites in this curated list: Weeknote #1987 • Robb Knight https://rknight.me/blog/weeknote-1987/
fromjason ("fromjason.xyz ❤️ 💻 ✍️ 🥐 🇵🇷") wrote:
I really don't get the #Reeder redesign. I've tried to get used to it all year. It's just not for me absent an "ah-ha" moment.
Like, when I add a new website, why in all that's holy can't I assign that site to a folder? Why do I have to then scroll to find the newly added site? Ahhh
My point is: anyone have a feed reader recommendation??
fromjason ("fromjason.xyz ❤️ 💻 ✍️ 🥐 🇵🇷") wrote:
The DHS is asking social media companies to hand over personal data of people who have expressed anti-ICE sentiments.
If you block someone on Bluesky, that's public data.
https://micro.fromjason.xyz/2026/02/13/ice-knows-you-blocked-them.html
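As a minimal illustration of the "public data" point (this is not from the post; it assumes the account is hosted on the bsky.social PDS and uses a made-up handle), anyone can list a user's block records over the AT Protocol without authenticating:

```python
# Minimal sketch showing why a Bluesky block is public data: block records
# live in the blocker's repository under the app.bsky.graph.block collection
# and can be listed by anyone, no authentication required. Assumes the
# account is hosted on the bsky.social PDS; the handle below is made up.
import json
import urllib.request

def list_blocks(handle: str, pds: str = "https://bsky.social"):
    url = (f"{pds}/xrpc/com.atproto.repo.listRecords"
           f"?repo={handle}&collection=app.bsky.graph.block&limit=50")
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    # Each record's "subject" field is the DID of the blocked account.
    return [rec["value"]["subject"] for rec in data.get("records", [])]

if __name__ == "__main__":
    print(list_blocks("example.bsky.social"))
```

Each returned record names the blocked account's DID, which is why a block list is trivially harvestable by anyone who wants it.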
Boosted by kornel ("Kornel"):
jbz@indieweb.social wrote:
How I Cut My Google Search Dependence in Half | Hister - Web History on Steroids
https://hister.org/posts/how-i-cut-my-google-search-dependence-in-half/
Boosted by kornel ("Kornel"):
emaytch ("margot") wrote:
RE: https://infosec.exchange/@SteveBellovin/116063827229791269
"Meta’s internal memo said the political tumult in the United States was good timing for the feature’s release.
“We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns,” according to the document from Meta’s Reality Labs, which works on hardware including smart glasses."
slightlyoff@toot.cafe ("Alex Russell") wrote:
The review of "Melania" we all needed (via @phae):
Boosted by cstanhope@social.coop ("Your weary 'net denizen"):
aburtch@shakedown.social wrote:
Iconic #scifi series #Babylon5 is slowly being uploaded to YouTube in its entirety for free.
https://cordcuttersnews.com/babylon-5-is-now-free-to-watch-on-youtube/
(via @cos)

![Excerpt from the linked document: Three copyright registration denials highlighted by the Copyright Office illustrate that, in general, the office will not find human authorship where an AI program generates works in response to user prompts: 1. Zarya of the Dawn: A February 2023 decision that AI-generated illustrations for a graphic novel were not copyrightable, although the human-authored text of the novel and overall selection and arrangement of the images and text in the novel could be copyrighted. 2. Théâtre D’opéra Spatial: A September 2023 decision that an artwork generated by AI and then modified by the applicant could not be copyrighted, since the applicant failed to identify and disclaim the AI-generated portions of the work as required by the AI Guidance. 3. SURYAST: A December 2023 decision that an artwork generated by an AI system combining a “base image” (an original photo taken by the applicant) and a “style image” the applicant selected (Vincent van Gogh’s The Starry Night) could not be copyrighted, since the AI system was “responsible for determining how to interpolate [i.e., combine] the base and style images.”](https://files.mastodon.social/cache/media_attachments/files/116/059/524/179/567/557/original/e74ad01ca1a3c00a.png)
![Four-panel escalating galaxy brain meme. People who need text and image: [smol brain image]. People who only need text: [no image]. [no text]: [glowing brain]. [no text]: [no image].](https://files.mastodon.social/cache/media_attachments/files/116/068/411/678/935/635/original/652af20c345694ae.jpg)