Anthropic's Claude Code Gets 'Auto Mode' — AI Decides Its Own Permissions, With a Safety Net
There’s a spectrum of trust you can give a coding agent. At one end: you approve every file write and bash command manually, one by one. At the other end: you run --dangerously-skip-permissions and let the AI do whatever it judges necessary. Both extremes have obvious problems — the first is slow enough to defeat the purpose, the second is a security incident waiting to happen. Anthropic’s new auto mode for Claude Code is an attempt to find a principled middle ground — not by letting humans define every permission boundary, but by letting the AI classify its own actions in real time and decide which ones are safe to take autonomously. ...
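Anthropic hasn't published the classifier's internals, so the following is only a toy illustration of the core idea — rate each proposed action's risk, and act autonomously only when the rating falls at or below a trust threshold. All names here (`Action`, `classify`, `auto_mode_gate`) are hypothetical, not Claude Code's actual API:

```python
from dataclasses import dataclass

# Ordered from least to most risky (hypothetical tiers, not Anthropic's taxonomy)
RISK_ORDER = ["safe", "caution", "dangerous"]

@dataclass
class Action:
    kind: str    # e.g. "read", "write", "bash"
    target: str  # file path or shell command

def classify(action: Action) -> str:
    # Toy heuristic: reads are safe, file writes merit caution,
    # arbitrary shell commands are treated as dangerous.
    if action.kind == "read":
        return "safe"
    if action.kind == "write":
        return "caution"
    return "dangerous"

def auto_mode_gate(action: Action, threshold: str = "caution") -> bool:
    """Allow the action autonomously only if its risk tier is at or
    below the configured trust threshold; otherwise ask the human."""
    return RISK_ORDER.index(classify(action)) <= RISK_ORDER.index(threshold)
```

Under this sketch, reading a file passes the gate unattended, while a shell command escalates back to the human — the "safety net" the headline refers to.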