Be careful with LLM "Agents"
I get it: Large Language Models are interesting... but you really should not give "Agentic AI" access to your computer, accounts or wallet.
To do away with the hype: Open Claw, Antigravity and Claude Code are just LLMs with shell access, and at its core, an LLM is a weighted random number generator.
You have no idea what it is going to do:
It could post your credit card number on social media.
This isn't a theoretical concern. There are multiple cases of LLMs wiping people's computers [1] [2], cloud accounts [3], and even causing infrastructure outages [4].
All of these could have been prevented if a human had reviewed the output before it was executed, but the whole premise of "Agentic AI" is to skip that step.
What's worse, LLMs have a nasty habit of lying about what they did. What should a helpful assistant say when asked whether it did something? "Yes." And when asked whether it deleted the database? "Of course not."
They don't have to be hacked to ruin your day.
If you want to try these tools out, run them in a virtual machine. Don't give them access to any accounts that you aren't prepared to lose. Read the generated code to make sure it doesn't do anything stupid, like forgetting to check passwords:
[...]
// TODO: Validate PDU signature
// TODO: Check authorization
[...]
// TODO: Validate the join event
[...]
// TODO: Return actual auth chain
[...]
// TODO: Check power levels
[...]
// TODO: Check permissions
[...]
(These are real comments from Cloudflare's vibe-coded chat server)
... and keep an eye on them to make sure they aren't being assholes on your behalf. Ideally, don't give them internet access in the first place.
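One way to follow the advice above without a full virtual machine is a throwaway container with no network. This is only a minimal sketch, assuming Docker is installed; "agent-image" and "agent-cli" are hypothetical placeholders for whichever agent tool you are trying out:

```shell
# --network none : no internet access at all
# --read-only    : the root filesystem cannot be modified
# --tmpfs /tmp   : writable scratch space that vanishes with the container
# -v ...:/work   : only this one directory is exposed to the agent
docker run --rm -it \
  --network none \
  --read-only \
  --tmpfs /tmp \
  -v "$PWD/sandbox:/work" \
  -w /work \
  agent-image agent-cli "write a hello-world script"
```

The worst case is then limited to the contents of one sandbox directory, and you can still review whatever it wrote there before copying anything out.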