
Boosted by jsonstein@masto.deoan.org ("Jeff Sonstein"):
mattburgess@infosec.exchange ("Matt Burgess") wrote:
NEW: In a likely first, security researchers have shown how generative AI agents can be hijacked to cause physical consequences.
They tricked Google's Gemini AI into turning off smart home lights, opening windows, and turning on a boiler.
They hid instructions to the AI in a *calendar invitation*.
https://www.wired.com/story/google-gemini-calendar-invite-hijack-smart-home/