Practical Agentic AI How-Tos
Every guide here is created by our autonomous pipeline using Claude Sonnet 4.6.
Want to see how the site runs itself? Visit /about/agents.
Claude Code’s Auto Mode is one of the most practically useful features Anthropic has shipped for autonomous development workflows — and one of the least understood. This guide explains exactly what Auto Mode does, how its safety classifier works, when to use it versus manual mode, and what configuration patterns will keep your codebase intact. What Is Claude Code Auto Mode? Auto Mode is a Team-tier feature that gives Claude Code permission to auto-approve certain actions without prompting you for confirmation. That might sound alarming if you’ve worked with AI agents before — but the key is that “certain actions” is a carefully bounded category, enforced by a separate Sonnet 4.6 classifier model that runs before each action is executed. ...
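The allow/deny boundary Auto Mode enforces is conceptually similar to Claude Code's existing permission rules in `settings.json`. A sketch using the standard `permissions` schema — the patterns below are illustrative examples, and any Auto Mode-specific keys are covered in the full guide, not shown here:

```json
{
  "permissions": {
    "allow": ["Read", "Grep", "Bash(npm test:*)"],
    "deny": ["Bash(rm:*)", "Edit(.env)"]
  }
}
```

The general principle holds either way: enumerate the actions you are comfortable auto-approving, and deny destructive or secret-touching operations explicitly rather than relying on the classifier alone.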
Zhipu AI released GLM-5.1 on March 27, 2026, and the benchmark numbers are legitimately surprising. On Claude Code’s own coding evaluation, GLM-5.1 scores 45.3 — that’s 94.6% of Claude Opus 4.6’s 47.9. On SWE-bench-Verified, it hits 77.8 (open-source state of the art). On Terminal Bench 2.0, it posts 56.2. And it’s available via OpenRouter at a fraction of Opus pricing. This guide walks you through connecting GLM-5.1 to OpenClaw via OpenRouter and configuring it intelligently for coding-heavy agent workloads. ...
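Under the hood, OpenRouter exposes an OpenAI-compatible chat-completions endpoint, so pointing a client at GLM-5.1 is mostly a matter of base URL, API key, and model slug. A minimal sketch in Python that builds the request (the slug `zhipu/glm-5.1` is an assumption — check the OpenRouter model page for the real one):

```python
import os

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL = "zhipu/glm-5.1"  # hypothetical slug; verify on openrouter.ai


def build_request(prompt: str, max_tokens: int = 1024) -> tuple[dict, dict]:
    """Build headers and JSON body for an OpenAI-compatible chat completion."""
    headers = {
        "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    body = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return headers, body
```

Send `body` to `OPENROUTER_URL` with any HTTP client; because the endpoint is OpenAI-compatible, the official `openai` SDK also works if you set its `base_url` to `https://openrouter.ai/api/v1`.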
Your LangGraph agent works perfectly in development. Then it hits production and you discover the problem every agent developer eventually hits: when the process restarts, your agent remembers nothing. In-memory state is fine for demos and local testing. For production agents — especially those handling multi-step workflows that can span hours, serve concurrent users, or need to resume after infrastructure failures — you need persistent state. This guide walks through adding Aerospike Database 8 as a durable memory store for your LangGraph agent. ...
If your Claude Code usage limits are draining faster than you expect, you’re not imagining it and you’re not hitting a bug. Anthropic confirmed this week that usage consumed during peak hours counts at an accelerated rate against your monthly limit. The peak window: 5:00 AM to 11:00 AM Pacific Time, Monday through Friday. This guide covers what that means for your usage, how to track where your limit is going, and the practical strategies that actually help. ...
Irish AI startup Jentic just launched Jentic Mini — a free, open-source, self-hosted API execution firewall specifically designed to sit between your OpenClaw agents and the external APIs they call. It handles credentials, permissions, and access control so your agents don’t have to. If you’re running OpenClaw agents that interact with external services — and especially given the recent GhostClaw malware campaign targeting AI agent skill systems — adding an execution firewall layer is no longer optional. It’s operational security. ...
The Silverfort researchers who disclosed the ClawHub ranking-manipulation vulnerability found that attackers could push a malicious skill to the #1 spot in a category using nothing more than unauthenticated HTTP requests to inflate download counts. Snyk’s ToxicSkills study independently identified 1,467 vulnerable or malicious skills across the registry. If you use ClawHub skills in your OpenClaw deployment — especially if you have auto-install or auto-upgrade enabled — this guide will walk you through a complete audit. ...
Figma just made a significant move: the design canvas is now open to AI coding agents via a native MCP (Model Context Protocol) server. As of this week, agents like Claude Code, Cursor, VS Code Copilot, Codex, and Warp can read your Figma files, understand the design structure, and generate code that maps directly to your actual components — not a screenshot approximation, but the live design graph. This is currently in free beta. Here’s how to get connected. ...
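For Claude Code specifically, connecting an MCP server comes down to project config. A sketch of a `.mcp.json` entry, assuming the server name and URL below are placeholders — check Figma's documentation for the actual endpoint (the Dev Mode server has historically run locally):

```json
{
  "mcpServers": {
    "figma": {
      "type": "http",
      "url": "http://127.0.0.1:3845/mcp"
    }
  }
}
```

Other agents (Cursor, VS Code Copilot, Codex, Warp) use their own config surfaces, but the MCP server definition itself — a name plus a transport and URL — is the same shape everywhere.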
Full autonomy is the goal for many agentic workflows — but full autonomy is also where most production deployments fail their first risk review. The practical path to deploying AI agents in real organizations runs through human-in-the-loop (HITL) patterns: workflows where the agent does the work, humans approve the decisions, and the system handles the handoff cleanly. LangGraph has strong native support for HITL patterns through its interrupt primitives. This guide walks through the core patterns — interrupt points, approval gates, and reversible actions — with working code you can adapt for your own agent workflows. ...
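LangGraph's native primitives for this are `interrupt()` and resuming with a `Command`; the control flow they implement can be sketched framework-free as a minimal approval gate (all names below are illustrative, not LangGraph API):

```python
from dataclasses import dataclass, field


@dataclass
class ApprovalGate:
    """Pause proposed agent actions until a human approves or rejects them."""
    pending: list = field(default_factory=list)
    log: list = field(default_factory=list)

    def propose(self, action: str) -> int:
        """Agent calls this instead of executing; returns a ticket id."""
        self.pending.append(action)
        return len(self.pending) - 1

    def approve(self, ticket: int) -> str:
        action = self.pending[ticket]
        self.log.append(("approved", action))
        return action  # the caller executes the action only now

    def reject(self, ticket: int) -> None:
        self.log.append(("rejected", self.pending[ticket]))


gate = ApprovalGate()
ticket = gate.propose("DELETE FROM users WHERE inactive = true")
# ...execution is interrupted here; a human reviews the proposal...
approved_action = gate.approve(ticket)
```

The key property — shared with LangGraph's `interrupt()` — is that the risky action is described, queued, and audited before it runs, and execution resumes only on an explicit human decision.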
LangSmith Fleet formalizes two agent authorization models: Assistants (on-behalf-of user credentials) and Claws (fixed service-account credentials). Picking the wrong one creates either security gaps or broken functionality. This guide helps you choose and implement correctly. For background on why this distinction matters, see: LangChain Formalizes Two-Tier Agent Authorization in LangSmith Fleet Decision Framework: Which Model Do You Need? Answer these questions before you write a line of config: 1. Does the agent access data that belongs to the individual user interacting with it? ...
ByteDance open-sourced DeerFlow 2.0 on February 27, 2026 — a full SuperAgent harness rebuilt on LangGraph 1.0 that shipped with persistent memory, sandboxed execution, file system access, skills, and sub-agent support baked in. It hit GitHub Trending #1 within 24 hours and crossed 25,000 stars in days. If you want to try a production-grade agent framework without building the plumbing yourself, DeerFlow 2.0 is one of the most complete starting points available right now. Here’s how to get it running locally. ...