Nine Seconds to Delete a Database: What the PocketOS Incident Teaches Us About AI Agent Privilege Management

Gabriel Avner

April 29, 2026

There’s never a good time to lose a production database, but losing one to your own AI coding agent on a Friday afternoon has to rank near the bottom of the list.

That’s the backdrop to the PocketOS incident, and it’s the clearest case yet for why AI agent security and intent-based access control belong at the top of every cloud security roadmap this year.

Founder Jer Crane’s full account is worth reading, but the short version is this:

  • An AI coding agent running inside Cursor encountered a credential mismatch in staging
  • It decided on its own that the fix was to delete a Railway volume
  • It found a long-lived API token in an unrelated config file
  • It ran the destructive command, and in nine seconds the production database, plus every backup that lived alongside it, was gone

The flashy parts have already been covered, including the agent’s written confession and Railway’s questionable backup architecture. The more useful question is how an agent that was working in staging ended up taking down production at all.

Why Over-Privileged Tokens Aren’t the Whole Story

It’s tempting to frame this as “the token had too many privileges, fix the token, problem solved.” Stopping there misses what makes agentic AI different.

Two things had to be true for those nine seconds of catastrophe to happen.

First, an over-privileged credential was sitting in a config file. The token had been created for the narrow purpose of managing custom domains through the Railway CLI, but it carried blanket authority across the entire Railway API, including destructive operations.

Second, an agent decided, on its own, that calling volumeDelete was a reasonable response to a credential mismatch in staging. No human asked it to and no prompt instructed it, but the agent encountered friction, optimized for task completion, and chose the most direct path it could reason its way to.

Take away either condition and the incident doesn’t happen.

How AI Agents Make Destructive Decisions Without Being Asked

The PocketOS agent is a textbook example of what we’ve written about before as the overreach failure mode in agentic AI security.

Agents are mission-driven and optimize for task completion. If the objective is “solve the problem,” they may take increasingly aggressive actions that look logical in isolation but are destructive in context.

What they’re missing:

  • Any sense of proportionality
  • Any awareness of long-term consequences
  • Any reason to slow down, since they operate at machine speed across real systems

Imagine you’re trying to turn off a light but can’t find the switch. You’d try reasonable solutions, look for another switch, ask someone, maybe unscrew the bulb. You’d never burn the house down to make sure the light went out, because you understand the consequences are wildly disproportionate to the goal.

An agent might.

That’s effectively what happened here. The agent hit friction, treated it as a problem to solve, and reached for the most direct API it could find, which made the path entirely logical from its perspective even though it was insane from every other one.

This is the core risk in agentic AI security: non-deterministic, mission-driven software operating with static privileges.

It’s not a one-off either. Anthropic’s own testing on Claude Opus 4.6 found that when the model hit roadblocks getting the access it needed, it went looking for hardcoded credentials and Slack tokens elsewhere in the environment. Different model, different scenario, same overreach pattern.

Why AI Agent Guardrails Need to Live Outside the Agent

When asked to explain itself, the PocketOS agent enumerated the safety rules it had been given and admitted to violating every one of them.

That’s worth sitting with for a moment, because the internal guardrails were clearly present and the model could even articulate them after the fact, but none of that prevented the deletion.

The controls inside an agent’s own context window are not, and probably cannot be, the layer that prevents catastrophic action. You need an outside layer the agent cannot reason its way around.

How Intent-Based Access Control Stops Agent Overreach

That’s the gap Apono Agent Privilege Guard is built to close.

How Apono Changes the PocketOS Chain

The premise is straightforward: no agent holds standing privileges to sensitive resources. Instead, every privilege is created dynamically at the moment the agent needs it. Each privilege is:

  • Generated at runtime, never pre-provisioned or stored anywhere the agent could find later
  • Evaluated against the agent’s stated intent before anything is granted
  • Scoped Just-in-Time and Just-Enough to perform exactly the task at hand and nothing more
  • Revoked automatically the moment the task is done

The result is an agent that can only do what you actually want it to do. Even if it decides on its own to escalate, it has no broader privileges available to escalate into.
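That lifecycle can be sketched in a few lines. Everything below is illustrative, not Apono's actual API: the broker class, the intent-to-scope mapping, and the action names are all assumptions made up for this sketch. The point it demonstrates is the one in the list above: a privilege exists only after intent is stated, covers only the declared task, and disappears on revocation or expiry.

```python
import secrets
import time

# Hypothetical intent-to-scope policy; action names are invented for
# illustration and do not correspond to any real product's API.
ALLOWED_ACTIONS_BY_INTENT = {
    "fix-staging-credential-mismatch": {
        "env:staging/variables:read",
        "env:staging/variables:write",
    },
}

class JITBroker:
    """Minimal sketch of a just-in-time privilege broker."""

    def __init__(self):
        self._grants = {}  # token -> (allowed actions, expiry time)

    def request(self, intent: str, ttl_seconds: int = 300) -> str:
        # Zero standing privileges: a token exists only after an intent
        # is declared and matched against policy.
        allowed = ALLOWED_ACTIONS_BY_INTENT.get(intent)
        if allowed is None:
            raise PermissionError(f"no policy for intent: {intent}")
        token = secrets.token_urlsafe(16)
        self._grants[token] = (allowed, time.monotonic() + ttl_seconds)
        return token

    def authorize(self, token: str, action: str) -> bool:
        grant = self._grants.get(token)
        if grant is None:
            return False
        allowed, expiry = grant
        if time.monotonic() > expiry:  # auto-revoked once the TTL lapses
            del self._grants[token]
            return False
        return action in allowed

    def revoke(self, token: str) -> None:
        # Revoked the moment the task is done.
        self._grants.pop(token, None)

broker = JITBroker()
token = broker.request("fix-staging-credential-mismatch")
broker.authorize(token, "env:staging/variables:read")  # within scope: True
broker.authorize(token, "prod/volumeDelete")           # outside scope: False
```

Even if the agent later goes looking for this token, it is useless outside the declared task: the destructive call fails the scope check, and after revocation the token authorizes nothing at all.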

Apply that model to the PocketOS chain and the blast radius collapses at three checkpoints.

At the credential checkpoint, the long-lived token in the config file simply doesn’t exist as a standing privilege. The agent has no blank-check credential to discover and reuse.

At the intent checkpoint, the agent has to declare what it’s trying to do before any privilege is issued. “Fix a credential mismatch in staging” and “delete a production volume” are different categories with different risk profiles. The mismatch between stated intent and attempted action gets caught before it becomes destruction.

At the human-in-the-loop checkpoint, sensitive operations against production trigger a Slack approval before they execute. An engineer sees the actual command, the actual target, and the actual reasoning.
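The intent and human-in-the-loop checkpoints can be sketched as a single authorization decision. Again, this is a hedged illustration under invented assumptions, not Apono's real interface: the risk classification, the `ask_human` hook standing in for a Slack approval, and the action names are all hypothetical.

```python
# Actions treated as destructive is an assumption for this sketch;
# "volumeDelete" is the operation from the PocketOS account.
DESTRUCTIVE_ACTIONS = {"volumeDelete", "databaseDrop"}

def requires_human_approval(action: str, target_env: str) -> bool:
    # Sensitive operations, and anything touching production,
    # pause for a human before they execute.
    return action in DESTRUCTIVE_ACTIONS or target_env == "production"

def authorize_action(intent: str, action: str, target_env: str,
                     ask_human) -> bool:
    # Intent checkpoint: a staging fix never justifies touching production,
    # so the mismatch is caught before it becomes destruction.
    if "staging" in intent and target_env == "production":
        return False
    # Human-in-the-loop checkpoint: surface the actual command and target.
    if requires_human_approval(action, target_env):
        return ask_human(
            f"Agent wants to run {action} on {target_env} "
            f"(stated intent: {intent!r}). Approve?"
        )
    return True

# The PocketOS chain under this model: denied at the intent checkpoint,
# before the approval hook is even reached.
approved = authorize_action(
    intent="fix a credential mismatch in staging",
    action="volumeDelete",
    target_env="production",
    ask_human=lambda msg: False,  # stand-in for a Slack approval
)
# approved is False
```

The design choice worth noting is that the deny happens outside the agent: the agent can rationalize anything inside its own context window, but it cannot reason its way past a policy engine that never hands it the privilege in the first place.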

The deletion takes nine seconds and a Slack approval takes ten, which means that ten-second window is the entire difference between a normal afternoon and a thirty-hour recovery effort.

Securing AI Agents Starts With Zero Standing Privileges

Copilots and coding agents are already running inside your engineering org. Tools like GitHub Copilot, Cursor, Claude Code, and Cline call APIs as your engineers and inherit your engineers’ privileges, and most security teams have very little visibility into any of it.

PocketOS is the headline this week, but it’s far from the first.

The mechanism changes every cycle but the underlying exposure never does, because we keep giving non-deterministic systems deterministic privileges and hoping the model has the judgment to use them well.

It doesn’t take a malicious actor to set the house on fire. It takes three things:

  • One over-privileged token
  • One mission-driven agent
  • A moment when no human is watching

The fix isn’t to panic about AI agents. It’s to assume agents will sometimes try to do crazy things on their own, and to make every privilege grant temporary, scoped, intent-driven, and approved by a human when the stakes warrant it.

That’s what Zero Standing Privileges means in practice for the agentic era, and it’s the bar every security program should now be building toward.

If this is the week your team starts rethinking how agents get their privileges, our white paper Securing the Agentic Enterprise: How Intent-Based Privilege Controls Make AI Agents Safe Enough to Deploy goes deeper on the failure modes covered here, the Kiro and Replit incidents, and the architectural shift from static IAM to intent-based access controls.
