Anthropic Launches Claude Code Auto Mode With Built-In Safety Layer

San Francisco: Anthropic has added a new auto mode to Claude Code, its agentic coding tool, giving the AI the ability to decide which actions it can take independently, without waiting for developer approval.

The feature is currently in research preview. It is not a finished product but is available for testing. This aligns with a broader shift toward AI agents operating across development environments, where systems are increasingly designed to manage workflows with minimal human intervention.

Auto mode works by running an AI safety layer over every action before it executes. Safe actions proceed automatically. This builds on Anthropic’s broader efforts to strengthen code reliability using AI-driven analysis within its development tools.

Actions flagged as risky are blocked before they run. Suspected prompt injection attacks are stopped the same way. Similar safeguards are emerging across AI systems globally. The focus on identifying hidden risks reflects a wider industry effort to detect vulnerabilities before they can be exploited.

The feature builds on Claude Code’s existing --dangerously-skip-permissions flag, which previously handed all decision-making to the AI with no filter. Auto mode adds a safety layer on top, making autonomous operation in real development environments more practical.

Until now, developers using agentic coding tools faced a binary choice: approve every action manually or let the model run unchecked. Auto mode introduces a third option, supervised autonomy, in which the AI itself determines what requires human sign-off.

Anthropic has not disclosed the specific criteria its safety layer uses to distinguish safe from risky actions. That detail will matter to enterprise developers before they adopt the feature at scale.

Auto mode follows two recent Claude Code launches. Anthropic shipped Claude Code Review in March, an automatic bug-catching tool for AI-generated code. It also launched Dispatch for Cowork, which lets users delegate tasks to AI agents working on their behalf.

Together, the three features outline Anthropic’s direction for Claude Code: less a coding assistant, more an autonomous developer that knows when to act and when to ask.

Amita Parul

Amita Parul is an independent journalist with experience in reporting and commentary on current events and sociopolitical developments. She contributes original reporting and analysis that aligns with Tea4Tech’s editorial standards for accuracy, transparency, and context, focusing on business and technology trends. Amita covers emerging news stories and provides explanatory insights that help readers understand both the events and their implications.
