Identity is now the most common entry point for attackers. In cloud-native environments, thousands of microservices, containers, and agents request credentials every day, and each one represents a potential weakness. The imbalance between human and non-human identities (NHIs) is growing, but many organizations still devote the bulk of their identity and access governance (IGA) efforts to the former.
Over the past two years, 57% of organizations experienced at least one API-related breach; of those, 73% saw three or more incidents. At the same time, the global IGA market was valued at approximately $8 billion in 2024, driven by compliance frameworks such as SOC 2, GDPR, HIPAA, and CCPA that demand auditable proof of access controls.
The takeaway: static defenses built on logins and standing permissions can’t keep pace with identities that appear and disappear daily. For engineering teams, identity and access governance has shifted from a “nice-to-have” to a baseline requirement for both security and trust.
Identity and access governance (IGA) is the framework your organization can use to decide who should have access to systems, applications, and data, and whether that access is still appropriate. IGA goes beyond the mechanics of logging in and instead focuses on oversight, accountability, and policy enforcement.
Most IGA programs are built around a few core practices:
Unlike identity and access management (IAM), which enforces access at runtime, IGA asks the harder question: should this access exist at all? Answering this question is harder today because identities are multiplying. Machine identities outnumber humans by over 80 to 1, making them one of the fastest-growing risk classes in cloud-native environments. Unlike human accounts, NHIs rarely go through onboarding or offboarding, rely on static API keys or long-lived tokens, and are frequently overprivileged—the perfect storm for attackers.

IGA is about ensuring access is appropriate, accountable, and, most importantly, auditable. To achieve these three pillars, IGA platforms bring together several capabilities.
Crucially, modern IGA extends these capabilities beyond human users to include NHIs, ensuring service accounts and automation agents undergo the same scrutiny as employees.

Identity management has grown into a set of overlapping disciplines, each with its own focus. Many people still use the terms interchangeably, but this approach can blur the lines between strategic governance and privileged account protection.
It’s helpful to understand exactly where each begins and ends. IAM is concerned with authentication and access control at the point of login. IGA adds oversight, certification, and auditability across all identities. Privileged access management (PAM) narrows in on the riskiest accounts, such as administrators and root users, to monitor and control their activity. For example, organizations rely on PAM software to enforce controls around these sensitive accounts, ensuring that high-risk permissions are granted only when necessary and closely monitored.
| Discipline | Focus | Typical Scope | Key Purpose |
| --- | --- | --- | --- |
| IAM | Enforcement | Authentication, MFA, SSO | Prove identity and control access at login |
| IGA | Governance | Human and non-human identities | Define, review, and certify who should have access and why |
| PAM | Privilege | High-risk administrator and root accounts | Control and monitor privileged sessions |
In a cloud-native stack, thousands of containers, pods, and serverless functions may launch and terminate within minutes. Each instance often requires its own token or temporary credential to function. Legacy governance processes that rely on quarterly or monthly reviews cannot track this churn, so permissions are left unchecked. Security teams end up with audit trails that miss most of the short-lived identities, which makes proving compliance or investigating incidents almost impossible. A best practice to overcome this challenge is to use a cloud-native access management solution like Apono, which automates JIT access and generates granular audit logs, so even short-lived identities are governed in real time.
Cloud providers like AWS, Azure, and GCP offer permission systems with thousands of individual actions that can be combined into highly customized roles. Developers frequently over-provision roles because mapping business tasks to such granular entitlements is too time-consuming. Over time, these permission sprawl problems multiply, creating toxic combinations that static governance models don’t properly evaluate.
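To make permission sprawl concrete, here is a minimal lint-style sketch, assuming AWS-style IAM policy documents, that flags wildcard actions in Allow statements. A real entitlement analyzer would also evaluate resources, conditions, and toxic combinations; this only illustrates the gap between a broad role and a task-scoped one.

```python
def wildcard_actions(policy):
    """Return the actions that use '*' wildcards in Allow statements.

    `policy` follows the AWS IAM policy-document layout; this is a
    lint-style check, not a full entitlement analyzer (it ignores
    Resource and Condition, which also matter in practice).
    """
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):   # a single statement may be a bare object
        statements = [statements]
    flagged = []
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        flagged.extend(a for a in actions if "*" in a)
    return flagged

# An over-provisioned role next to a task-scoped one:
broad = {"Version": "2012-10-17",
         "Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}]}
scoped = {"Version": "2012-10-17",
          "Statement": [{"Effect": "Allow",
                         "Action": ["s3:GetObject"],
                         "Resource": "arn:aws:s3:::app-bucket/reports/*"}]}

print(wildcard_actions(broad))   # ['s3:*']
print(wildcard_actions(scoped))  # []
```

A check like this can run in CI to catch over-provisioned roles before they reach production, rather than waiting for a quarterly review.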
When engineers need access to a production database or a new cloud service, the request usually goes into a ticket queue. When reviews take too long, teams are forced to delay work or find workarounds such as borrowing credentials.
This bottleneck not only slows delivery but also weakens governance because security becomes seen as a blocker rather than a partner. In some organizations, administrators pre-approve broad entitlements “just in case.” This mistake undermines the entire principle of least privilege and increases the chance of compromised credentials being abused across environments.

Unmonitored NHIs are among the most consistent attack vectors in identity-driven breaches today. Service accounts and automation agents run critical workflows in CI/CD pipelines, monitoring systems, and infrastructure tools. These identities often carry long-lived credentials with powerful permissions. Unlike human users, they rarely leave the organization, so deprovisioning processes don’t catch them.
When one of these accounts is forgotten or left unmonitored, it becomes a permanent backdoor. Attackers frequently target exposed API keys or tokens for this reason, knowing they are less likely to be rotated or reviewed. As we’ve seen with emerging issues like the MCP protocol, unsecured machine-to-machine communications can further amplify the risks of unmanaged NHIs.
Recent examples include Microsoft’s 2023 SAS Token Leak, where researchers inadvertently published a token that exposed 38TB of internal data, and the BeyondTrust API Key Breach in 2024, where attackers exploited an overprivileged, static key to reset passwords and escalate privileges. Both incidents highlight how unmanaged non-human identities can open the door to large-scale compromise.
An essential NHI security best practice is to run a Cloud Access Assessment, offered by Apono at no cost (for a limited time), to uncover risks in your AWS environment. Apono’s platform is built to close this blind spot by enforcing JIT and JEP policies for NHIs just like human accounts, stopping long-lived keys from becoming backdoors.
Most enterprises work across multiple clouds, each with its own identity console and reporting format. Security teams trying to answer “who can access sensitive data” are forced to stitch together incomplete reports. The lack of a unified view leaves gaps for auditors and prevents real-time oversight—a challenge that becomes even more critical in industries like FinTech or government, which are subject to additional compliance requirements like CUI Basic.
Identity governance is moving from periodic checks to continuous oversight. Instead of leaving broad permissions in place and revisiting them months later, newer approaches shift towards:
By enforcing just-in-time access and contextual approvals, IGA reduces the standing permissions that often undermine API security in CI/CD pipelines and cloud workloads.

Cloud-native deployments and the explosion of non-human identities have pushed traditional identity governance past its limits. Static reviews and manual approvals leave too much standing access in environments where roles and permissions change constantly. To reduce risk, governance needs automation, time-bound access, and policies that apply equally to people and non-human accounts.
Apono redefines IGA for cloud-native teams. Its platform automates JIT and JEP to eliminate risky standing permissions, generates granular audit logs for compliance frameworks that increasingly require full visibility into NHI governance, and applies governance equally to human and non-human identities. Approvals flow directly through Slack, Teams, or CLI—every action logged, every change auditable.
With built-in break-glass and on-call flows, and deployment in under 15 minutes, Apono delivers Zero Trust governance at the speed of modern infrastructure.
Ready to Eliminate Standing Access Risk?
Apono closes the gap by automating JIT and JEP for both human and non-human identities, stopping long-lived keys from becoming backdoors. Download The Security Leader’s Guide to Eliminating Standing Access Risk to see how leading cybersecurity companies are rethinking access control.
New details have emerged in recent weeks about how the Crimson Collective threat group has been conducting a large-scale campaign targeting Amazon Web Services cloud environments. Recent reports highlight how easily the attackers progressed once they obtained valid credentials.
The Crimson Collective claims to have exfiltrated ~570 GB across ~28,000 internal GitLab projects; Red Hat has confirmed access to a Consulting GitLab instance but hasn’t verified the full scope of those claims.
After the breach became public, BleepingComputer reports that the threat actors partnered with the headline-grabbing extortion group Scattered Lapsus$ Hunters to increase pressure on Red Hat.
In this post, we’ll break down how the hackers carried out their attack and how to keep your organization protected via a Zero Standing Privileges approach.
According to Rapid7’s report in BleepingComputer, the attackers took a tried-and-true course of action to compromise their targets and make off with their illicitly obtained data.
This latest attack highlights a tough, if clichéd, truth in the cloud: attackers don’t need to break in if they can just log in. Compromised credentials with standing privileges give them everything they need to move freely across environments.
The reality is that credential compromise is now a matter of when, not if. And as the number of Non-Human Identities (NHIs)—like service accounts, IAM roles, and API keys—continues to explode, the challenge keeps growing. In many organizations, NHIs now outnumber human users by roughly 200 to 1.
Things are getting even more complicated with the rise of Agentic AI tools. These systems operate at massive scale with unpredictable access needs, often without the visibility security teams rely on to monitor what’s actually being accessed.
Protecting against these kinds of attacks means focusing not just on preventing credential theft, but on minimizing what attackers can do after credentials are compromised. That’s why AWS told BleepingComputer that customers should “use short-term, least-privileged credentials and implement restrictive IAM policies.”
That advice perfectly captures the idea behind Zero Standing Privileges (ZSP): reducing the amount of always-on access available in your environment, so even if credentials are stolen, attackers have nowhere to go.
Of course, actually putting that into practice is the hard part. Manual access management is slow and painful, and cutting privileges too aggressively risks hurting productivity. And as cloud environments and NHIs multiply, keeping up manually just isn’t realistic anymore.
Apono makes it simple to put Zero Standing Privileges into action—without slowing anyone down.
Here’s how:
With Apono, security teams can close privilege gaps before attackers can exploit them, while developers and AI systems get access exactly when—and only when—they need it.
If you want a quick way to benchmark where standing privileges still exist in your environment, download our Zero Standing Privileges (ZSP) Checklist: a fast, practical self-assessment to help you identify hidden risks and early indicators of exposure.
Ready to take a smarter approach to cloud access?
See how Apono can help your organization prevent credential-based attacks while keeping teams fast and productive. Visit apono.io/jit-and-jep/ to learn more about our platform or request a demo.
Imagine autonomous agents negotiating and acting on your behalf—no manual hand-offs, just efficient, policy-driven communication. That’s the promise of Google’s Agent2Agent (A2A) Protocol, unveiled at Google Cloud Next in April 2025. Developed with input from over 50 partners, A2A is now open-sourced under the Apache 2.0 license and governed by the Linux Foundation.
But excitement quickly collides with reality. Early adopters report compliance blind spots (who approved that token and when?), latency added by cross-agent orchestration, and the operational overhead of adding another standard into pipelines. As agent-based architectures become the backbone of AI-driven automation, the pressure is mounting on engineering teams to enable secure, autonomous interactions between services.
A 2025 global AI survey reveals that 29% of enterprises are already running agentic AI in production, with another 44% planning to join them within a year. Cost-cutting and reducing manual workloads are among the top goals for adoption. Understanding the Agent2Agent Protocol is vital for building secure and scalable systems that can keep up with the next wave of automation.

Google’s Agent-to-Agent (A2A) Protocol is an open, vendor-neutral language that lets independent AI agents discover each other, negotiate how they will talk (text, files, streams), and work together without exposing their private code or data.
Google unveiled the spec on April 9, 2025, at Cloud Next. It is backed by more than 50 technology partners and is now maintained as an open-source project under the Apache 2.0 license.
Google kicked the A2A project off after running large, multi-agent systems for customers and seeing the same pain points repeat:
The four-step flow below illustrates the full A2A handshake from discovery to streaming task updates.

Every agent publishes a tiny JSON file, /.well-known/agent.json, listing its name, endpoint, skills, and supported auth flows. A client agent simply fetches this card (directly or via a registry) to see who can do what and how to connect.
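A discovery step along these lines can be sketched in a few lines of Python. The card fields below (name, endpoint, skills, auth) mirror the description above and are illustrative, not the exact spec:

```python
import json
from urllib.request import urlopen

def fetch_agent_card(base_url):
    """Fetch and parse /.well-known/agent.json from an agent's base URL."""
    with urlopen(f"{base_url}/.well-known/agent.json") as resp:
        return json.load(resp)

# Offline example with illustrative field names:
card = json.loads("""{
  "name": "currency-agent",
  "endpoint": "https://currency-agent:10000",
  "skills": [{"id": "convert", "description": "Convert between currencies"}],
  "auth": {"type": "oidc", "token_ttl_seconds": 300}
}""")

skill_ids = [s["id"] for s in card["skills"]]
print(card["name"], skill_ids)  # currency-agent ['convert']
```

From here, a client knows which skills it can call and which auth flow the card advertises.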
The card also tells the caller which OAuth 2/OIDC method to use. The client obtains a short-lived token (valid for minutes) that scopes access and expires automatically. This step eliminates hardcoded secrets, marking a shift from static secrets to dynamic machine identity management, where each agent authenticates based on policy, context, and lifespan.
With a token in hand, the client sends a task/send or task/sendSubscribe request via JSON-RPC 2.0 over HTTPS.
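The envelope for such a call follows standard JSON-RPC 2.0; a minimal builder might look like this (the method names come from this walkthrough, and the params shape is illustrative):

```python
import itertools
import json

_ids = itertools.count(1)  # JSON-RPC requires a unique id per request

def make_task_request(method, params):
    """Build a JSON-RPC 2.0 request body for an A2A task call."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": method,   # e.g. "task/send" or "task/sendSubscribe"
        "params": params,
    })

body = make_task_request(
    "task/send",
    {"input": {"amount": "50", "from": "USD", "to": "JPY"}},
)
```

The resulting string is what gets POSTed over HTTPS with the bearer token attached.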
Each request/response carries trace IDs, and agents emit structured logs and metrics in OpenTelemetry Protocol (OTLP) format. You can drop A2A traffic straight into existing dashboards without bolting on a separate telemetry layer. This level of observability is essential for identifying anomalies and containing the risks of non-human identities operating in complex, distributed environments.
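OTLP exporters normally generate and propagate trace IDs for you, but the `traceparent` header carried on each request is easy to sketch: it follows the W3C Trace Context format of version, trace ID, parent span ID, and flags.

```python
import secrets

def new_traceparent():
    """Build a W3C Trace Context `traceparent` header value:
    version-traceid-parentid-flags (00-<32 hex>-<16 hex>-01)."""
    trace_id = secrets.token_hex(16)   # 16 random bytes -> 32 hex chars
    parent_id = secrets.token_hex(8)   # 8 random bytes  -> 16 hex chars
    return f"00-{trace_id}-{parent_id}-01"

header = new_traceparent()
```

Propagating this header across every agent hop is what lets a dashboard stitch a multi-agent task into a single trace.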
Many teams adopting A2A have struggled with blind spots, like losing track of which agents initiated sensitive operations or where tokens are reused across flows. Without built-in tracing and structured logs, auditing multi-agent systems becomes a fragmented, manual task. A2A’s observability layer helps reduce that operational burden, but it still requires thoughtful integration with existing security tooling.

At its core, A2A gives every software agent a common language and contract so they can:
By replacing brittle webhooks and custom RPC layers with an open JSON-RPC spec, the Agent2Agent Protocol eliminates glue code and reduces integration overhead across ecosystems.
Because discovery, auth, transport, and telemetry are part of the spec, you don’t waste cycles reinventing service discovery, API gateways, or audit pipelines. You wire agents together (much like microservices), then layer governance tools on top to enforce least-privilege, time-boxed access across your infra. It reduces repetitive integration tasks, which improves developer productivity across teams working in complex environments.
The Agent2Agent Protocol solves real pain points in DevOps and automation by making agent communication smarter and safer. Here’s why it’ll be beneficial in the long run.
Any AI agent that speaks A2A can call, or be called by, any other agent.
Example: If a vulnerability scanner agent discovers a patch management agent during a CI run, it can send a task with the CVE list and stream the fix status back to the build.
Short-lived OAuth/OIDC tokens and signed task IDs keep access scoped and auditable without requiring the hardcoding of secrets.
Example: When a monitoring bot detects a spike, it requests a one-off token to spin up extra pods. The token expires automatically once scaling is complete, aligning with enterprise identity management best practices.
The Agent2Agent Protocol includes built-in support for agent discovery, JSON-RPC 2.0 transport, and SSE streaming. Teams can focus on features instead of writing adapters and polling loops.
Example: A scheduler agent queries rightsizing agents in AWS, GCP, and Azure, aggregates savings, and opens a single cost-cutting PR. No polling scripts are required.
Every request carries trace IDs and standard OTLP metrics, which are dropped straight into Grafana/Prometheus dashboards, regardless of whether those agents are operating in the cloud, across edge services, or in traditional data centers.
Example: A chatbot passes a billing request to a payment agent via A2A; the handoff is fully logged, and the one-time token expires as soon as the charge is completed.

These guiding principles explain why A2A stays flexible, secure, and developer-friendly as the ecosystem expands.
Follow this step-by-step guide to adopt your first A2A agents and weave them safely into your workflow.
Clone Google’s reference repo and install the Python SDK.
git clone https://github.com/a2aproject/a2a-samples.git
cd a2a-samples
python -m venv .venv && source .venv/bin/activate
pip install a2a-python # or a2a-js for Node
The repo includes basic example agents and lightweight helper code for JSON-RPC calls and SSE streaming, but production implementations will need hardening.
Pick one of the ready-made agents (e.g., the “currency” FastAPI service) and run it.
uvicorn samples.python.currency_agent:app --port 10000 --reload
When the server starts, it auto-serves an Agent Card at http://localhost:10000/.well-known/agent.json, advertising its skills and auth method.
Make that JSON file reachable via a public URL, an internal LB, or a registry entry. Other agents can pull it and learn who you are and how to talk. No extra service-discovery layer is required. For production environments, agents can also publish to a centralized A2A registry, which supports indexed search and simplifies discovery across large infrastructures.
Edit the auth block in the Agent Card to point at your OIDC or token issuer and set the TTL to minutes. Every task call will now carry a scoped, self-expiring token instead of a long-lived secret.
From another agent (or just curl), invoke the first agent:
TOKEN=$(your-token-issuer --ttl 5m --aud currency-agent)  # placeholder: substitute your token-minting command
curl -H "Authorization: Bearer $TOKEN" \
     -H "Content-Type: application/json" \
     -X POST https://currency-agent:10000/tasks/sendSubscribe \
     -d '{"input":{"amount":"50","from":"USD","to":"JPY"}}'
The request uses JSON-RPC 2.0 over HTTPS; the sendSubscribe variant opens a Server-Sent Events stream, so you get live status until completed.
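The SSE side can be consumed with a minimal parser. Per SSE framing rules, each event is one or more `data:` lines terminated by a blank line; the status payloads below are illustrative, not the exact A2A event schema:

```python
import json

def parse_sse(lines):
    """Yield one JSON object per SSE event (lines starting with 'data:').
    A blank line terminates each event, per the SSE framing rules."""
    buffer = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("data:"):
            buffer.append(line[len("data:"):].strip())
        elif line == "" and buffer:
            yield json.loads("\n".join(buffer))
            buffer = []

# Simulated task-status stream (payload shape is illustrative):
stream = [
    'data: {"status": "working"}', '',
    'data: {"status": "completed"}', '',
]
events = list(parse_sse(stream))
print([e["status"] for e in events])  # ['working', 'completed']
```

In practice you would feed this generator the response body line by line and stop once a terminal status arrives.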
The SDK emits OTLP logs/metrics with a shared trace ID. Point OTLP logs and metrics to your backend of choice for unified observability.
The Agent-to-Agent (A2A) Protocol enables software agents to trade tasks and data on the fly, but it truly shines when access is tightly controlled and fully auditable. Apono and the A2A Protocol share a key mission: enabling secure, policy-driven access between non-human identities (NHIs) like service accounts, bots, and APIs. Apono ensures that, even as NHIs interoperate across boundaries, their access is ephemeral, precisely scoped, and compliant.
Apono’s platform is purpose-built to manage access for NHIs by enforcing Just-In-Time (JIT) and Just-Enough-Privilege (JEP) access, thereby reducing standing privileges and misconfigurations. It ensures every service account, bot, or API key gets only the access it needs for exactly as long as it’s needed.
Apono is designed to become the enforcer of orchestrated permissions across infrastructure by automating and right-sizing the lifecycle of access for NHIs—including provisioning, expiration, and auditability—to establish least privilege for NHIs and bring zero trust to all of your identities.
With Apono’s auto-expiring tokens and centralized logs, you can narrow the window for misuse and provide security teams with a single source of truth when compliance and auditing questions arise.
Get hands-on with Apono. Request a demo to deploy in under 15 minutes and start eliminating overprivileged access.
Security and engineering teams today face a tough balance: protecting sensitive resources while keeping developers productive. As organizations shift from on-prem to the cloud, access management becomes one of the biggest challenges.
With more identities—human and non-human—gaining access to more resources across hybrid environments, the risks rise. Studies show that over 95% of identities hold excessive privileges, and attackers are exploiting this reality, with 88% of breaches starting from compromised identities.
It’s natural for engineering teams to want to “build” their own Just-in-Time (JIT) access solution. But is that really the best use of resources? Increasingly, organizations are asking themselves:
Should we build an in-house solution or buy a platform that delivers secure, scalable JIT access out-of-the-box?
This article explores the trade-offs of building vs. buying so you can make the right choice for your organization.
Rolling your own JIT solution sounds simple, but in practice, it’s often a patchwork of services, scripts, and ongoing maintenance.
What it takes to build:
The hidden cost:
In short, the challenge isn’t just building. It’s maintaining, testing, patching, and scanning for vulnerabilities. It’s having a team to support you.
💡 Thinking about building your own solution?
See how leading teams evaluate Cloud PAM platforms before they commit. Download the Access Platform Buyer’s Guide here
| Factor | Build In-House | Buy a Platform (General) | Apono Advantage |
| --- | --- | --- | --- |
| Speed to Deploy | Months to design, develop, and test, resulting in a slower time-to-value. | Typically faster deployment with vendor-provided integrations and support. | API-first deployment with Terraform, Helm, CloudFormation; Slack/Teams-native workflows for fast adoption. |
| Role Creation Model | Often depends on pre-created roles — slow to adapt, prone to over/under-privilege. | Many solutions offer role management, which may require predefined roles or templates. | Dynamic roles created in real time, scoped to the task, auto-expire, and adapt automatically to business context. |
| Coverage | Limited to your team’s integration work; gaps likely in multi-cloud/SaaS. | Most vendors offer coverage across major cloud and SaaS platforms, but breadth and depth can vary. | Comprehensive support across AWS, Azure, GCP, Kubernetes, SaaS, and NHIs; single-pane-of-glass management. |
| Operational Overhead | Continuous upkeep for API changes, security patches, and policy logic. | Vendor-managed updates and maintenance help reduce the burden on internal teams. | Fully vendor-managed with continuous support for new APIs; automated discovery reduces admin effort. |
| Customization | Fully tailored to unique workflows and niche systems. | Platforms typically offer policy frameworks and workflow flexibility, though some adjustments may be needed. | Granular Access Flows and contextual policies, easily adapted to customer workflows without brittle custom code. |
| Security Posture | Risk of drift if roles aren’t updated quickly; harder to keep least privilege. | Most platforms provide controls for enforcing least privilege, although they are often tied to predefined structures. | Real-time context evaluation ensures least privilege with just-in-time and just-enough access; supports NHI quarantine. |
| Slack / Jira Integration | Requires custom development and ongoing maintenance. | Many platforms offer some integrations, with varying depths. | Deep Slack, Teams, and Jira integrations for request → approve → provision flows. |
| Auto-Expiring Roles | Must be built and maintained manually with custom scripts. | Some vendors provide time-limited role options. | Native auto-expiring, context-aware roles scoped to the task. |
| Audit Logging | Logs are often fragmented across different systems, requiring manual correlation. | Platforms provide centralized logging, but the depth can vary. | Unified session auditing with identity-to-action tracking, SIEM & ticketing integration. |
| Deployment | Complex build-out requiring internal engineering resources. | Vendor platforms usually offer guided setup and professional services. | Fast, API-based deployment with pre-built integrations and self-service rollout. |
They say never roll your own crypto—because with great power comes great responsibility. The same applies to JIT access. It holds the keys to your most sensitive crown jewels, so protecting it must be a top priority.
Whether it’s a Lambda function or another microservice handling provisioning, it carries a lot of permissions. The real question: how are you ensuring it can’t be compromised, thereby handing attackers the keys to the kingdom?
Apono’s patented secure architecture keeps your environment fully in your control. Our platform runs on two lightweight components:
Why it matters:
With Apono, all access stays in your environment—you get secure, reliable, and compliant access management without friction.
Monday.com transitioned from maintenance-heavy in-house workflows to a secure, scalable, and developer-friendly platform—powered by Apono.
ROI at Scale
The ROI of Your Internal Resources Lies in What You Can Sell
If you’re managing access to a niche or one-off resource, building something in-house might feel tempting. But the reality is that most teams quickly learn the cost is higher than the benefit: ongoing maintenance, constant patching, compliance reviews, and dedicating precious engineering cycles to “plumbing” instead of product.
Modern teams need speed, security, and scalability—not another internal project to babysit. A proven cloud-native JIT access management solution delivers reliability out of the box, reduces risk, and frees your engineers to do what they do best: ship value to customers.
Download the Buyer’s Guide to learn how leading security teams compare Cloud PAM platforms — and why Apono is built for speed, scale, and Zero Standing Privilege.

Today’s man-in-the-middle (MitM) attacks go far beyond coffee-shop Wi-Fi: they target browsers, APIs, device enrollments, and DNS infrastructure. Using automated proxykits and supply-chain flaws, attackers hijack session cookies, tokens, and device credentials—turning one interception into persistent, high-value access.
Concerningly, these are not edge cases. Automated cyber threat activity surged 16.7%, with over 1.7 billion stolen credentials circulating on the dark web—fueling a 42% increase in credential-based targeted attacks. Passwords and simple MFA fail unless access is limited and continually verified.
Security teams can implement best practices, such as cutting token lifetimes and just-in-time elevation, to protect against man-in-the-middle attacks. Let’s review a comprehensive list of security controls you can implement immediately to make intercepted credentials worthless to attackers.
A man-in-the-middle (MitM) attack happens when an attacker secretly intercepts and manipulates communications between two parties. The attacker is positioned in the “middle” of the data exchange, between a user and an app, or between two users or two apps, without anyone noticing. With MitM attacks, the adversary can eavesdrop, steal credentials, alter data, or impersonate one of the parties involved.
Today’s MitM attacks target API calls, machine-to-machine traffic, and even naive agent-to-agent protocols in distributed, cloud-native environments. With stolen tokens or cookies, an attacker gains the same level of visibility and control as a legitimate service account.
Some examples of MitM techniques include:
A successful man-in-the-middle adversary gains the same level of visibility and control as the legitimate user or service. Non-human identities (NHIs)—like service accounts, workloads, and agents—are particularly vulnerable. In fact, machine identities now outnumber human identities by as much as 80:1, multiplying the blast radius of a single interception. Without a strong enterprise identity management strategy, these identities are often left overprivileged and unmonitored, creating an easy path for MitM attackers.

MitM attacks aren’t just theoretical risks; they can be the cause behind real breaches or even large-scale espionage campaigns. Let’s review the most relevant attack types that DevOps and engineering need to watch out for.
Attackers downgrade HTTPS connections to plain HTTP, eliminating the security layer of SSL/TLS. This attack vector leaves communication in plaintext, including login credentials, API keys, and session tokens. Misconfigured certificates, outdated systems, or users dismissing browser warnings leave room for SSL stripping. DevOps teams are especially concerned about this in CI/CD pipelines and API endpoints, as a single misconfigured connection can become the entry point of a MitM attacker.
Example: The 2015 Superfish adware fiasco showed how software that installed its own root certificate could intercept HTTPS traffic. Because those certificates shared a single private key, anyone with the key could impersonate sites (including banks) without triggering browser warnings.
Security best practices:
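As one concrete client-side control, refuse plaintext URLs outright instead of letting a downgrade pass silently. A minimal sketch in Python (the helper name is ours):

```python
from urllib.parse import urlparse

def require_https(url):
    """Refuse plaintext URLs outright rather than risking a silent
    downgrade to HTTP (the core of an SSL-stripping attack)."""
    if urlparse(url).scheme != "https":
        raise ValueError(f"refusing non-HTTPS URL: {url}")
    return url

require_https("https://api.example.com/v1/keys")   # passes through unchanged
# require_https("http://api.example.com/v1/keys") would raise ValueError
```

Pairing a guard like this with HSTS on the server side keeps browsers and clients from ever falling back to plain HTTP.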
DNS hijacks and registrar compromises let attackers redirect entire domains to malicious infrastructure.
Example: Sea Turtle was a sophisticated espionage operation uncovered in 2019. Attackers targeted domain registrars, registries, and other DNS infrastructure to compromise DNS records and surreptitiously redirect traffic for targeted organizations to attacker-controlled servers. It allowed the attacker to intercept web and email traffic, steal credentials, and even serve forged or fraudulently issued TLS certificates to avoid immediate detection.
Security best practices:
So, what would these best practices look like in practice? Let’s look at an example. Caris Life Sciences used Apono to enforce JIT folder-level permissions in AWS S3—so even if DNS traffic were redirected, attackers couldn’t leverage long-lived standing credentials.
Attackers poison ARP tables on local networks to force traffic to flow through a malicious host, enabling sniffing and tampering with internal traffic.
Example: Pentest and tool writeups repeatedly show that cheap implants (like Wi-Fi Pineapple and Raspberry Pi) enable LAN ARP attacks. Effective data center management, such as strict network segmentation, helps reduce exposure to LAN-level MitM attacks.
Security best practices:

Threat actors use evil twin or malicious hotspots to lure users and proxy or intercept their traffic. This type of attack happens frequently in airports, public charging points, cafes, and hotels.
Example: In July 2024, Australian police arrested an individual for operating an “evil twin” hotspot that harvested travellers’ credentials by redirecting victims to spoofed login pages.
Security best practices:
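Because an evil twin copies a network’s SSID but broadcasts from its own radio, the BSSID (access-point MAC) differs, and managed devices can pin trusted SSIDs to known BSSIDs. A sketch with an invented allowlist:

```python
# Sketch: pin SSIDs to the BSSIDs of legitimate access points so an
# evil twin with the same network name is refused. The allowlist below
# is illustrative; real values come from your Wi-Fi controller.

TRUSTED = {"corp-wifi": {"f0:9f:c2:aa:bb:01", "f0:9f:c2:aa:bb:02"}}

def is_trusted_ap(ssid: str, bssid: str) -> bool:
    """True only if this exact access point is on the allowlist."""
    return bssid.lower() in TRUSTED.get(ssid, set())

assert is_trusted_ap("corp-wifi", "F0:9F:C2:AA:BB:01")      # legitimate AP
assert not is_trusted_ap("corp-wifi", "66:77:88:99:aa:bb")  # evil twin
```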
Attackers replay stolen session cookies, tokens, or API keys to impersonate services or users, often without passwords. Stolen cookies and tokens don’t just result from MitM attacks; client-side flaws like cross-site scripting (XSS) can also expose session data and API keys, as seen in CVE-2024-44308.
Example: In the Microsoft SAS Token Leak (2023), researchers inadvertently published a Shared Access Signature token granting full access to an Azure Storage account and exposing 38TB of sensitive data. This NHI breach showed the risks of over-permissive, long-lived tokens.
Security best practices:
An attacker with network access (or one who exploits a vulnerability in an agent) can intercept or impersonate agent-to-server telemetry and commands, hijacking workflows and observability channels.
Example: The Okta Support System Breach in 2023 saw attackers exploit a compromised NHI (a service account) to steal support artifacts containing customer credentials. Additionally, CVE-2025-1146 (CrowdStrike Falcon Linux component) illustrates how TLS validation bugs can enable MitM of agent-to-cloud traffic.
A potential MitM attack exploiting this flaw could trick the vulnerable CrowdStrike sensor into accepting a malicious, non-legitimate server certificate. The attacker could then intercept, decrypt, and manipulate the secure communication between the sensor and the CrowdStrike cloud, potentially compromising system confidentiality and integrity.
Security best practices:
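On the client side of agent-to-cloud traffic, the cheapest mitigation is simply not weakening TLS validation. Python’s stdlib defaults illustrate the posture any agent should keep (the mTLS line is commented out because it assumes local certificate files that don’t exist here):

```python
# Sketch: keep strict TLS validation on the agent side. The CVE above
# was a certificate-validation bug; never disable hostname checks or
# certificate verification in agent code.
import ssl

ctx = ssl.create_default_context()  # loads the system trust store
# ctx.load_cert_chain("agent.pem", "agent.key")  # add for mTLS (files assumed)

assert ctx.verify_mode == ssl.CERT_REQUIRED  # server cert must validate...
assert ctx.check_hostname is True            # ...and match the hostname
# With these defaults, an MitM presenting a self-signed certificate
# fails the handshake instead of being silently accepted.
```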

Agent-to-agent protocols that are simplistic or unlogged, lacking mutual authentication or request signing, enable MitM between agents and services in distributed systems. Such attacks may include context poisoning, agent impersonation, or exploitation of an AI agent’s logic.
Example: Microsoft’s Taxonomy of Failure Modes in Agentic AI Systems warns that impostor agents could intercept agent communications. The research shows that an attacker could introduce an impostor AI agent, such as a fake “email assistant,” into a network of cooperating agents. This malicious agent could then intercept and alter legitimate communication between the other agents, injecting new instructions and pilfering sensitive data without any human user intervening.
Security best practices:
Where legacy PAM relies on static roles and vault proxies, leaving windows of opportunity open for MitM actors, Apono operationalizes Zero Standing Privilege. That means every credential, token, or role is short-lived, scoped, and continuously verified—dramatically reducing the blast radius of a single interception.
Man-in-the-middle attacks typically succeed only as long as stolen credentials or tokens remain useful. The fastest prevention isn’t perfect encryption; it’s limiting how long anything of value stays usable to an attacker. Make sessions transient, bind tokens to device and context, and detect proxied traffic so stolen credentials decay or are revoked right away.
Operationally, focus on three levers: mandate phishing-immune MFA and device-bound authentication; use short-lived, auto-rotating tokens with per-call authorization and mTLS for traffic to services; and put high-risk activities behind human approvals and quick revocation playbooks. These steps keep attackers from turning a fleeting interception into a sustained breach.
With Apono, stolen tokens expire within minutes, so attackers can’t turn an interception into sustained access: permissions expire automatically, machine identities are scoped per call, and sensitive actions require approvals. Apono is built specifically for this approach: JIT access, automatically decaying permissions, scoped control over agents, and full audit trails shrink the blast radius of any interception. Apono operationalizes zero trust by eliminating standing privileges across human and machine identities. See how temporary access turns the tables.
Book a demo and start making stolen credentials useless before they can be weaponized.
Identity-related threats are draining time and resources faster than security teams can keep up. The challenge is no longer just about stopping breaches; it’s about keeping up with the scale of alerts and risks.
On average, organizations spend 11 person-hours investigating each identity-related security alert. Meanwhile, credential theft has soared 160% in 2025, making privileged accounts and non-human identities (NHIs) a prime target for attackers.
Modern privileged access management (PAM) software offers a way forward by automating access controls and reducing standing privileges, filling the gaps left by traditional approaches and securing your organization.
Privileged Access Management software secures and controls access to high-value accounts like admin users and NHIs—basically any accounts that hold the keys to critical infrastructure. These solutions enforce the principle of least privilege, ensuring that users and services only get the access they need, for the minimum time required.
PAM software centralizes and automates access workflows, such as vaulting credentials, issuing short-lived tokens, monitoring privileged sessions, and enforcing policies like Just-in-Time (JIT) access. These tools provide many big ticks for security and compliance, such as creating audit trails for frameworks like SOC 2 and GDPR.
The need for PAM solutions is especially critical in today’s cloud environments, where non-human identities outnumber human users by more than 80:1. For example, instead of leaving a cloud service account (an NHI) with standing database or API permissions, PAM tools can issue time-bound credentials only when that service is actively running a job.

Effective PAM platforms deliver more than protection—they streamline access and ensure that even machine-to-machine credentials are properly governed.
To understand why PAM is critical today, let’s look at what these solutions actually do and how they work.

When comparing PAM tools, it’s important to balance security with usability and scalability. Here are key factors to guide your decision-making.
🔍 Compare PAM Platforms with Confidence
Turn your shortlist into a smart choice. See the capabilities that matter for AI workloads, Zero Standing Privilege, and NHI governance. Download the 2025 Access Platform Buyer’s Guide here

Apono is a cloud-native access management solution built to eliminate standing privileges and reduce identity-based risks without slowing developers down. While most PAM solutions still rely on vaults and manual workflows, Apono eliminates these bottlenecks with a cloud-native, Just-in-Time model built for scale. It deploys in less than 15 minutes and integrates with developer-friendly tools like Slack, Microsoft Teams, and CLI, making secure access simple and scalable.
Main Features:
Price: Tailored pricing depending on team size and infrastructure complexity. A free trial is available, and enterprise-grade plans are available upon request.
Review: “Apono’s product does exactly what it claims to […] it saves me time, and provides value to my users by streamlining the process of granting access to our resources in a precise, auditable way.”

StrongDM is a zero trust PAM platform that centralizes access across infrastructure, such as servers, databases, Kubernetes, cloud, and SaaS. Its key features include access policies and capturing session data for audits and compliance.
Main Features:
Price: Starts at $70/user/month.
Review: “Their platform is intuitive and highly secure, which makes it easy for us to recommend to clients across industries.”

Heimdal Privileged Access Management is a comprehensive PAM module that enables JIT elevation and automatic de-escalation of user rights. It’s embedded within Heimdal’s broader cybersecurity suite.
Main Features:
Price: By inquiry.
Review: “While the solution can be complex to implement and manage, the benefits it provides in terms of enhanced security and improved efficiency are worth the investment.”

The Wallix Bastion PAM platform integrates password vaulting, session management, and access control, including HTML5 web sessions, with full video and metadata audits.
Main Features:
Price: User- or resource-based pricing available, starting at around $103/month for 10-50 users.
Review: “The setup process was simple, and the solution can be implemented within less than one day.”

ARCON PAM is an enterprise-grade solution delivering granular control over privileged identities and environments. It supports various features, from adaptive authentication to session monitoring and secrets management.
Main Features:
Price: By inquiry.
Review: “The UI has improved significantly over the past year, making navigation and policy configuration easier.”

Segura 360° Privilege Platform is an all-in-one PAM suite that spans the entire privileged access lifecycle. It covers password vaulting, DevOps secrets management, session recording, cloud identity governance, and more.
Main Features:
Price: All-inclusive licensing model available by inquiry.
Review: “The standout aspects are ease of use, robust security layers (MFA included), and excellent customer support.”

This option is often categorized as a Privileged Password Management (PPM) tool rather than a full-featured PAM. Still, ManageEngine offers a centralized, AES-256 encrypted vault for privileged credentials and remote session control. It integrates with Active Directory and CI/CD tools for seamless access governance.
Main Features:
Price:
Review: “Manage Engine Password Manager Pro is very user-friendly and easy to manage. [I use the] multi-factor authentication with strong encryption methods.”

Systancia’s PAM solution adapts its control levels based on the task’s criticality, ranging from standard internal administration to high-risk, highly regulated operations. It delivers additional features like contextual session monitoring and secure credential injection.
Main Features:
Price: By inquiry
Review: “Systancia Gate and Systancia Cleanroom allow us to implement these accesses very quickly and manage them very simply.”

Teleport is a cloud-native platform providing PAM through zero trust principles and cryptographic identities. It unifies access across SSH, Kubernetes, databases, web apps, and cloud environments.
Main Features:
Price: Free trial. Pricing is by inquiry.
Review: “Reviewers highlight centralized access management for SSH, Kubernetes, AWS, and RDS as a standout efficiency.”

Netwrix’s offering is a PAM platform that replaces standing privileges with just-in-time, ephemeral access. It delivers privileged account discovery, time-limited credentials, real-time session monitoring, and secure remote access without requiring VPNs.
Main Features
Price: By inquiry
Review: “[Netwrix is] always very responsive and helpful every time we have an issue. The product itself is also very easy to use.”
Identity-based attacks are rising faster than traditional defenses can adapt, and you can’t afford to expose privileged accounts (human or machine). Modern PAM solutions offer an automation lifeline, cutting down investigation time and providing the audit trails needed for compliance.
In a world where machine identities outnumber humans and attackers exploit every overlooked credential, Apono delivers a safer and more scalable way to manage privileged access. Get started with Apono today and see how modern PAM can protect your organization without slowing down your teams.
Unlike legacy PAM platforms that rely on static roles, Apono takes a cloud-native, JIT approach. By automating the issuance and revocation of privileges down to individual databases, APIs, or Kubernetes clusters, Apono eliminates standing access and dramatically reduces attack surfaces. Developers can request access through Slack, Teams, or CLI, while security teams gain full visibility through comprehensive audit logs and compliance-ready reporting.
Before you choose a solution, see how security leaders evaluate Privileged Access Management platforms built for the AI era. Download the Apono Access Platform Buyer’s Guide to learn what differentiates modern, cloud-native PAM from legacy vault-based tools—and how to choose the right platform for your organization.
The Shai‑Hulud worm and the Nx / S1ngularity attacks show how token‑stealing malware, vulnerable workflows, and always‑on elevated permissions allow cascading compromise. Enforcing JIT access on repository, organization owner/admin roles, and team‑based inherited permissions sharply reduces exposure, limits damage, and strengthens audit/compliance posture.
In mid‑August 2025, security researchers revealed the spread of Shai‑Hulud, a self‐replicating worm infecting npm packages to steal cloud service tokens, including GitHub, AWS, and GCP. The malware auto‑injects itself into other packages maintained by compromised accounts, exfiltrates secrets, sometimes exposes private repos, and even publicizes them.
Earlier, the Nx / S1ngularity attack exploited vulnerable GitHub Actions workflows to exfiltrate developer tokens and secrets. Packages belonging to high‑profile maintainers were infected; owner and admin rights were abused via owner accounts or via tokens that had broad permissions.
These incidents underscore how elevated, long‑lived, or inherited permissions are some of the biggest risk multipliers.
Key Risks & What Organizations Are Missing
The Shai-Hulud attack showed how quickly compromised tokens in CI/CD can be abused. Attackers used the compromised npm tokens to spin up GitHub Actions workflows and automatically publish exfiltrated credentials to newly created public repos. The problem wasn’t just that secrets were exposed; it was that those secrets carried standing permissions that were always available to abuse.
If we look at this attack through a kill-chain lens, focusing on access privileges, the GitHub Actions stage stands out as a key opportunity to reduce the potential harm from having creds published publicly.

With Apono, you can eliminate that risk. Sensitive GitHub Actions permissions — like publishing, pushing, or creating new repositories — are made requestable via Just-in-Time access. Instead of the GitHub Actions being freely available for use, they are temporarily provisioned upon validated request with the exact scope and duration required.
The result:
Apono makes GitHub Actions manageable the same way it does cloud and infrastructure access: least privilege by default, elevated only on-demand.
Why JIT Access Matters More Now
Because of automation and inheritance, the real vulnerabilities multiply faster than humans can audit. Just-in-Time (JIT) access (granting elevated permissions only when needed, for the minimal required time, under controlled policies) helps in several ways:
Here’s how to deploy JIT controls over these high‑risk objects in GitHub:
| Object | JIT Controls / Best Practices | Why It Addresses Shai‑Hulud / S1ngularity Risks |
| --- | --- | --- |
| Repositories | Require elevated repo-write or admin roles only for specific tasks and time-boxed sessions. Monitor postinstall/workflow script changes and prevent unreviewed workflows from being added. Make repo-admin write privileges conditional (e.g., dual approval, MFA). | Shai‑Hulud relies on compromised developer accounts injecting malicious code, and automation that elevates privileges in repos can be abused; time-boxing limits how long a repo is exploitable. Vulnerable workflows were exploited in the S1ngularity incident. |
| Organization Roles | Limit the number of owner/admin roles. Use JIT elevation (a user requests elevated privileges, with justification, for a fixed time). Require MFA to secure approval workflows. Maintain active logging and alerts for the creation or removal of owner and admin roles. | Owner/admin roles are what attackers used to propagate, exfiltrate, create repos, and change visibility. In S1ngularity, tokens with owner/admin or elevated scopes allowed workflows to be abused. |
| Teams & Inherited Permissions | Use temporary team assignments or request elevated permissions for a specific time only. Disallow teams from holding owner/admin rights unless needed; if they must, audit their membership and actions. | Inherited permissions mean one compromised user in a team can impact many repos, and teams with admin rights act like many owners. The worm and leaks exploit exactly that scale. |
The Shai‑Hulud worm and the Nx / S1ngularity attacks illustrate how access creep, static tokens, vulnerable workflows, and “always on” elevated permissions come together into a perfect storm. To protect against similar supply chain, worm‑style attacks:
When you combine visibility, enforcement, and temporal constraints, even if a breach occurs, its spread and damage are contained, transforming your security from reactive to resilient. Book a demo with Apono to map your current GitHub elevated access and build JIT guardrails.
Want to see where standing privileges might already exist?
Grab our ZSP Checklist for a quick self-assessment.
APIs are the foundation of modern applications, and attackers know it well. A single misconfigured endpoint or exposed token can give adversaries a direct path into sensitive systems and data across your environment. Your already overburdened security teams can’t afford to miss what may be their fastest-growing attack surface.
How fast is the threat growing? In 2024, researchers catalogued 439 AI-related CVEs (a staggering 1,025% increase over the prior year), and nearly 99% were tied to insecure APIs. The fallout is tangible: over half of organizations report an API-related incident in the past 12 months.
In 2025, having a robust API security checklist isn’t just a formality. It facilitates a step-by-step framework designed to protect your API ecosystem while reducing risk and bringing order to the chaos of API management. Let’s start by defining what an API security checklist is, how it works, and the value it delivers.

An API security checklist is a structured set of instructions designed to help teams manage the risks to their API ecosystem. Much like pre-flight checklists in aviation, the API security checklist ensures critical security measures are never overlooked, even under pressure or at scale. By embedding repeatable and enforceable security controls throughout an API’s development and operations lifecycle, you effectively reduce your API’s attack surface and facilitate better alignment between engineering and infosec teams.
API security checklists are increasingly vital due to the rise of non-human identities (NHIs) like service accounts and machine-to-machine credentials, often with loose permissions and little oversight. Bad actors are quick to exploit this gap, with nearly 1 in 5 organizations admitting to having suffered an NHI-related breach in the past year.
This shift in malefactor tactics is reflected in industry frameworks for API security, like the OWASP API Security Top 10, which highlights broken authentication, misconfigured access controls, and poor asset management as leading causes of API breaches.

A comprehensive API security checklist can help you systematically address common risks like:
Over-privileged service accounts or API keys are a potential treasure trove for attackers, giving them unnecessary access to data and functionality. In the 2024 BeyondTrust breach, a single over-scoped API key exposed a trove of sensitive data from 17 SaaS providers.
Loose auth controls are among the most exploited vulnerabilities. In the headline-making TeaOnHer incident, an API launched without authentication exposed personal IDs, selfies, and sensitive user data within minutes.
Even in 2025, developers are still uploading code secrets to GitHub. One prominent example is xAI, Elon Musk’s AI startup, which leaked a private API key on GitHub that granted access to over 50 internal models.
Unmonitored APIs are prime entry points. In August 2024, Avis lost nearly 300,000 customer records when attackers exploited a vulnerable API integration in a business application, highlighting how legacy or hidden APIs can evade security oversight. Centralized tracking of who (or what) is calling which APIs, with what scope, makes it far easier to spot shadow usage before it turns into a breach.

An API security checklist is critical for any business with a public-facing API because it:
A quick Google search for ‘API breach’ shows their ubiquity. A thorough API security checklist aids teams in operationalizing best practices and turning cybersecurity into a repeatable and semi-automatic process that shrinks your API attack surface.
Effective cybersecurity strategies employ the Zero Trust principle, which assumes every request and connection may be malicious. An API security checklist translates this principle into practice by implementing and enforcing robust operational policies like scoped tokens and least-privilege access on every API interaction.
One of the main issues with APIs is that they often lack centralized and documented ownership. An API security checklist makes logging, monitoring, and auditing integral parts of the process, ensuring you always know who (or what) is accessing sensitive resources, when, and why.
Regulatory frameworks like SOC 2, HIPAA, and GDPR are built very much like checklists with requirements for strict access control and auditing. Integrating them helps avoid compliance gaps by enforcing consistent controls across the API lifecycle. Choosing a cloud-native access management platform that generates comprehensive audit logs ensures that compliance reviews are built into daily operations.
In enterprises with large engineering departments, different teams design and operate APIs in silos. With a company-wide API security checklist, you can enforce standardized security practices across DevOps, platform engineering, and InfoSec, reducing the risk of oversight.
The checklist below is designed to address critical security controls and common blind spots, in alignment with best practices and security frameworks (like OWASP API Top 10, SOC 2, and others).
Require verification of identity for all API calls and enforce granular, least‑privilege authorization for human and machine identities. Strong authentication should go hand-in-hand with minimizing exposure: instead of granting broad, long-lived privileges, issue narrowly scoped, time-bound permissions that expire automatically once the task is complete.
Addressed risks: Broken auth, account takeover, data exposure.
Implementation:
Minimize and time-limit privileges for machine identities across automations, services, pipelines, and environments.
Addressed risks: Over‑scoped tokens or long‑lived service accounts
Implementation:
Apono automates ephemeral, scoped permissions on demand (via Slack/CLI), auto‑expires them, supports break‑glass and on‑call flows, and records who/what/why for compliance. You can automate JIT/JEP approval flows so elevated scopes are granted only when needed and set to auto‑expire.

Centralize code secrets management, make sure no secrets leak into code/repos/configs, and rotate secrets automatically and frequently.
Addressed risks: Key leaks in repos or public tools/workspaces, and long-lived keys that are difficult to revoke across complex environments.
Implementation:
Employ gateway- and application-level controls to prevent brute‑force, enumeration, and volumetric abuse. Implement strict schema validation to stop mass assignment and injection.
Addressed risks: DoS attacks, credential stuffing, data harvesting, and business‑logic abuse.
Implementation:
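As one illustrative control from this step, a token-bucket limiter caps how fast any single credential or IP can hammer an endpoint. A minimal sketch with invented parameters; a real deployment would keep buckets per-key in the gateway or a shared store:

```python
# Sketch: token-bucket rate limiting to blunt brute-force, enumeration,
# and volumetric abuse. Rate and burst values are illustrative.
class TokenBucket:
    def __init__(self, rate: float, burst: int):
        self.rate, self.capacity = rate, burst
        self.tokens, self.last = float(burst), 0.0

    def allow(self, now: float) -> bool:
        """Refill based on elapsed time, then spend one token if available."""
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, burst=5)        # 1 req/s, bursts of 5
grants = [bucket.allow(now=0.0) for _ in range(7)]
assert grants == [True] * 5 + [False] * 2      # burst exhausted instantly
assert bucket.allow(now=2.0)                   # refilled after 2 seconds
```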
Maintain centralized, immutable logs and real‑time monitoring tied to who/what called which API, with what scope, and why.
Addressed Risks: Blind spots that delay detection, resulting in inadequate forensics, and compliance gaps.
Implementation:
Apono correlates the who/what/why for elevated access via JIT/JEP approvals, and auto‑generates audit trails you can join with gateway logs for complete identity‑to‑request traceability.
Implement robust security controls at the edge and mesh, with TLS everywhere, mTLS for service‑to‑service, strict gateway policies, and secure defaults.
Addressed risks: Downgrade attacks, credential stuffing, enumeration, and data exfiltration.
Implementation:
Prepare tested playbooks to quickly contain and recover from API security incidents. This step includes revoking secrets, quarantining identities, and more.
Addressed risks: Long dwell time, cascading outages, and non‑compliant disclosures.
Implementation:
Apono executes one-click revocation of elevated permissions, issues ephemeral emergency auto-expiring access, and provides comprehensive audit logs for forensics and compliance reporting.

All upstream APIs should be treated as untrusted with required input/output validation, egress constraint, and tight scoping of partner credentials.
Addressed Risks: Supply‑chain data leaks, SSRF and injection via upstream responses, and over‑privileged partner integrations.
Implementation:
Maintain a complete and continuously up-to-date catalog of all APIs (internal, external, partner), classified by sensitivity and criticality to business processes.
Addressed Risks: Shadow or forgotten APIs become unmonitored attack surfaces.
Implementation:
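Inventory only pays off when it is diffed against reality. A sketch of flagging shadow endpoints by comparing gateway-observed paths to the catalog; the paths and catalog are invented for illustration:

```python
# Sketch: surface shadow APIs by diffing observed traffic against the
# API catalog. In practice, observed paths come from gateway logs.
CATALOG = {"/v1/orders", "/v1/users", "/v1/health"}  # illustrative

def shadow_endpoints(observed_paths: list[str]) -> set[str]:
    """Endpoints receiving traffic that aren't catalogued anywhere."""
    return set(observed_paths) - CATALOG

seen = ["/v1/orders", "/v1/users", "/internal/debug", "/v1/orders"]
assert shadow_endpoints(seen) == {"/internal/debug"}  # unregistered traffic
```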
Apply “secure by design” principles during API development; minimize exposed endpoints, reduce data returned, and enforce schema validation.
Addressed Risks: Excessive data exposure and mass assignment.
Implementation:
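A concrete secure-by-design habit is serializing responses through an explicit field allowlist, so a new database column never leaks by default. A sketch with invented field names:

```python
# Sketch: return only allowlisted fields, preventing excessive data
# exposure when the underlying record grows new sensitive columns.
USER_PUBLIC_FIELDS = {"id", "display_name", "created_at"}  # illustrative

def to_public(record: dict, allowed: set[str] = USER_PUBLIC_FIELDS) -> dict:
    """Strip every field not explicitly marked as public."""
    return {k: v for k, v in record.items() if k in allowed}

row = {"id": 7, "display_name": "dana", "created_at": "2025-01-01",
       "password_hash": "secret", "ssn": "secret"}  # sensitive columns present
assert to_public(row) == {"id": 7, "display_name": "dana",
                          "created_at": "2025-01-01"}
```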
Treat API security testing as a continuous process integrated into development, and not a one-time event.
Addressed Risks: Vulnerabilities slip into production unnoticed, and late fixes are costly and risky.
Implementation:
Apono ensures that any temporary testing credentials or elevated scopes are ephemeral, preventing testers from holding permanent, risky access.

Enforce end-to-end encryption for API traffic and secure sensitive data at rest with strong encryption and key management.
Addressed Risks: Sensitive data interception or theft.
Implementation:
Extend identity governance to all bots, service accounts, API tokens, and workloads, ensuring every machine identity has an owner, lifecycle, and pre-defined scope.
Addressed Risks: NHIs that accumulate standing privileges and static secrets that attackers exploit.
Implementation:
Apono automates JIT/JEP access for NHIs, eliminates standing privileges, and provides a centralized audit trail across all machine identities.
Conduct regular reviews of who or what has access to your APIs in accordance with relevant regulatory or industry-specific requirements, such as GDPR, HIPAA, PCI-DSS, and SOC 2. These reviews should extend beyond APIs themselves to include underlying cloud infrastructure and data center management, where API access often intersects with critical systems and regulatory controls.
Addressed Risks: Drift in access privileges that leads to overexposed data, and failed audits result in fines, lost business, and reputational damage.
Implementation:
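Reviews are easy to at least partially automate: flag grants that have not been exercised within the review window. A sketch with invented grant records; real input would be joined from your IdP and cloud access logs:

```python
# Sketch: a periodic review job flagging access grants unused past a
# threshold, candidates for revocation before the next audit.
from datetime import date

def stale_grants(grants: list[dict], today: date,
                 max_idle_days: int = 90) -> list[str]:
    """Return IDs of grants not exercised within the review window."""
    return [g["id"] for g in grants
            if (today - g["last_used"]).days > max_idle_days]

grants = [
    {"id": "svc-ci/s3-write", "last_used": date(2025, 9, 20)},
    {"id": "alice/db-admin",  "last_used": date(2025, 1, 5)},   # dormant
]
assert stale_grants(grants, today=date(2025, 10, 1)) == ["alice/db-admin"]
```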
Equip developer teams with secure-by-default patterns and ongoing training, so security isn’t bolted on but baked in.
Addressed Risks: Developers under deadline pressure may expose sensitive data or skip controls.
Apono reduces developer friction by streamlining access requests (via Slack/CLI) and ensuring secure defaults (temporary, least-privileged, and auditable) so engineers don’t need to over-grant permissions to maintain velocity.

Treat this checklist as a living document. Integrate feedback and test controls, and add runtime protection to catch the vulnerabilities that may slip through.
Addressed Risks: Evolving threats and architectural changes to your environment that may introduce previously unfamiliar cyber risks.
Implementation:
An API security checklist operationalizes security by standardizing controls, aligning teams, and making protection repeatable. However, securing APIs is an ongoing cycle of auditing, monitoring, and enforcing least privilege, especially for vulnerable non-human identities. Apono steps in to automate Just-In-Time and Just-Enough Permission access, eliminate standing credentials, and provide full audit trails across every API interaction. Ready to close the gaps in your API security posture? Book a demo with Apono or download the checklist to put API security into action today.
We’re excited to announce the launch of our MCP server for end users, designed to boost engineering productivity while keeping security strong.
Engineers often know exactly what they need to do—deploy to a new environment, spin up a workload, investigate logs—but not which permissions translate into those tasks. That leads to two common problems:
The result is wasted time, frustrated teams, and an inflated attack surface from unnecessary standing privileges. On top of that, engineers often spend extra time checking what they already have access to or chasing approval updates.

AI tools like Claude, ChatGPT, Cursor, and CoPilot are changing the way engineers interact with their environments. Instead of bouncing between dashboards, they can ask for what they need in natural language.
Model Context Protocol (MCP) makes this possible by connecting LLMs to enterprise systems so users can query, retrieve, and act without leaving their workflow. Think of MCP servers as the USB-C ports that connect your favorite AI services to the tools you use, simplifying the adoption of AI into your teams’ workflows.
Our Apono MCP Server applies this approach to access requests:
With Apono MCP, engineers can:
So how are users leveraging Apono’s MCP to solve problems? Let’s take a look at a few key examples.
The Apono MCP Server delivers clear benefits:
Our MCPs integrate with a growing number of the tools engineers already rely on:
Along with our MCP support, we recently launched our AI-powered Apono Assist for engineers on our platform, Teams, and other UIs. Read about it in this blog.
And don’t think that we’ve forgotten about the Apono admins. We will be launching an MCP server for Apono administrators soon, so stay tuned for updates.
We’re also building support for securing MCPs as they become a standard part of enterprise workflows alongside the anticipated rise of Agentic AI.
With Apono’s MCP Server, engineers request and manage access faster, admins spend less time translating requests, and security stays strong with least privilege built in.
To learn more about MCPs in Apono, check out our docs and reach out to us for a demo today.
The Drift OAuth breach didn’t just expose one SaaS vendor — it exposed a systemic blind spot: the sprawling, ungoverned world of Non-Human Identities.
In case you missed it, in August 2025, attackers from UNC6395 exploited compromised OAuth tokens from Salesloft’s Drift integration—an AI chat tool—to access and exfiltrate data from Salesforce, including credentials like AWS keys and Snowflake tokens.
This breach affected over 700 organizations and extended beyond Salesforce to integrations with Google Workspace and other platforms like Slack, AWS, and Microsoft Azure, just to name a few.
The first line of response was a complete revocation of Drift tokens and the disabling of significant numbers of related app integrations.
Since the initial news of the breach, we have learned that the attackers are combing through the exfiltrated data in search of more tokens and credentials they can use for further criminal activity.

In this blog, we’ll cover why Non-Human Identities like API tokens can cause serious security challenges for organizations and explore how smarter access management approaches can help to reduce risk without compromising on operational efficiency.
API tokens act like digital keys that let SaaS products and business systems talk to each other securely.
Instead of sharing a username and password, a token gives controlled, time-limited access to exactly the data or actions a system needs. This enables automation and collaboration between tools (like a SaaS app pulling data from a business system) while reducing the risk of exposing full credentials.
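To make the “digital keys” idea concrete, here is a minimal sketch of what a scoped, time-limited token looks like from the consumer’s side. The names (`Token`, `is_usable`, the `crm:read` scope) are illustrative assumptions, not any specific vendor’s API — the point is that a well-behaved integration checks both expiry and scope before every use.

```python
import time
from dataclasses import dataclass

# Hypothetical minimal model of a scoped, short-lived API token.
# Names here are illustrative, not a specific vendor's API.

@dataclass
class Token:
    value: str
    scopes: set          # actions this token is allowed to perform
    expires_at: float    # unix timestamp; short TTLs limit blast radius

def is_usable(token: Token, required_scope: str, now: float = None) -> bool:
    """A caller should verify both expiry and scope before using a token."""
    now = time.time() if now is None else now
    return now < token.expires_at and required_scope in token.scopes

# A token minted with a 15-minute TTL and a single read-only scope:
tok = Token(value="tok_example", scopes={"crm:read"}, expires_at=time.time() + 900)

print(is_usable(tok, "crm:read"))    # valid scope, not expired -> True
print(is_usable(tok, "crm:write"))   # scope never granted -> False
```

Note the asymmetry: the *issuer* decides the scopes and TTL once, but every consumer (and attacker) inherits them, which is why over-scoped, long-lived tokens are so dangerous.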
But as we’ve seen here and in plenty of cases before, these tokens are exceedingly risky if compromised, and even more dangerous when they aren’t managed properly.
If we think of these tokens as the keys they are, then they are essentially keys to the kingdom, carrying privileges that attackers can use to access our resources.
These powerful tokens come with several significant challenges, including:
All of these problems are amplified by the sheer scale of NHIs. Industry research estimates machine-to-human identity ratios ranging from 40:1 today to projections of 100:1 or more with AI adoption.
As organizations adopt more AI, this number is likely to skyrocket, massively expanding the attack surface and giving attackers even more opportunities to exploit.
While attribution is far from a hard science, all signs point to this hack being the work of the loose collective of criminals associated with the Com, whose crews we usually read about under names like LAPSUS$, Scattered Spider, and ShinyHunters.
These hackers have made a name for themselves by focusing on identity as their main point of entry and exploitation. They’ve been behind the MGM, Okta, Snowflake, and other big-name hacks, employing social engineering and a deep understanding of identity and access management (IAM) to compromise identities and infiltrate target systems.
What they have shown in their attacks is that they can exploit both human and non-human identities, compromising them and leveraging their privileges to steal or encrypt targets’ data.
There’s an argument to be made that these crews are far less technical than the hackers of the previous era who spent months looking for ways to exploit a vulnerability or find a zero day.
In many cases, they have been shown to simply buy access from a broker, pay off employees at the phone company for a SIM swap attack, or call up the help desk and ask for a password reset.
But it’s not stupid if it works, and these criminals have the illicit paydays to prove it.
Unfortunately, these groups have discovered that while they can successfully target large enterprises directly, the path of least resistance is often a supply chain attack against a vendor.
If the vendor is less mature in its security, attackers can exploit it to work their way up the chain toward bigger, richer targets.
A vendor targeted in a supply chain attack can suffer serious reputational, not to mention financial, pain, as companies become less likely to trust it with their data and access to their systems going forward.
In the immediate aftermath of this incident, here’s what security teams can do right now to reduce exposure:
One of the key takeaways from this story is that we must shift our mindset: security has to move from protecting only human access to governing every identity that can touch data, human or not.
The targeting of an AI tool here is telling: attackers understand that AI agents require broad access and freedom of movement between applications to be effective. All of that connectivity can be exploited to reach different systems, and it puts defenders in a conundrum as old as time.
Do we let our AIs run free and maximize the benefits of what they can give us, or do we tightly control access to limit damage from abuse?
The challenge with Agentic AI is that it is:
An agent will access whatever it thinks it needs to in order to achieve its goal. In this way it’s like a human user.
But the scale and lack of visibility of Agentic AI is going to be a challenge for security teams moving forward.
So how should security teams think about mitigating risk from Agentic AI and all the rest?
Security teams need a flexible approach that breaks down the silos between human, non-human, and now Agentic AI identities, all of which operate on the same plane. The focus should be less on who or what the identity is and more on what access it has and how its privileges are used.
Remember that attackers don’t see your environment as a set of silos, so you shouldn’t either. Move your human users to Just-in-Time access for sensitive resources, and reduce privileges across the board, including for your NHIs, based on what they actually use and your risk tolerance.
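The core mechanic of Just-in-Time access can be sketched in a few lines: privileges exist only for the lifetime of an approved request and then are revoked automatically, rather than standing forever. This is a hedged illustration of the pattern, not any product’s implementation; the grant store, identity names, and resource names are all hypothetical.

```python
import time

# Illustrative sketch of Just-in-Time (JIT) access: a grant is a
# time-boxed entry rather than a standing permission.

class JITAccess:
    def __init__(self):
        self._grants = {}  # (identity, resource) -> expiry timestamp

    def grant(self, identity: str, resource: str, ttl_seconds: int) -> None:
        """Approve time-boxed access instead of a standing permission."""
        self._grants[(identity, resource)] = time.time() + ttl_seconds

    def is_allowed(self, identity: str, resource: str) -> bool:
        """Checks are identical for human and non-human identities."""
        expiry = self._grants.get((identity, resource))
        if expiry is None or time.time() >= expiry:
            self._grants.pop((identity, resource), None)  # lazy revocation
            return False
        return True

jit = JITAccess()
# A service token gets one hour of read access, nothing more:
jit.grant("svc-reporting-token", "prod-db:read", ttl_seconds=3600)
print(jit.is_allowed("svc-reporting-token", "prod-db:read"))   # True within TTL
print(jit.is_allowed("svc-reporting-token", "prod-db:write"))  # never granted -> False
```

The design choice that matters is the default: absent an unexpired grant, the answer is always “no,” which is exactly what limits the blast radius of a stolen credential.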
Apono’s approach puts the focus on the principals, giving admins granular control over the privileges those principals, such as API tokens, hold.
We start by providing full visibility and inventory management of principals throughout your environment.

In practice, we detect risks like:

There are some distinct advantages to the quarantine option because it allows you to:

Phishing, credential theft, and breaches happen. They will continue to happen because the financial incentives are there.
We are past the stage of assuming breach. Now we need to assume that our identities, human and non-human alike (API tokens, service accounts, and more), are compromised.
Attackers can leverage all of a compromised identity’s access privileges not only to reach resources in your environment, but also to find more tokens, credentials, and other secrets that let them continue their attack, pivoting to additional systems or to your customers’ customers.
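This kind of credential harvesting takes very little skill, which is part of why it cascades: attackers simply pattern-match over stolen data for more keys. A minimal sketch of such a scan, using the well-known `AKIA`/`ASIA` prefix of AWS access key IDs and a generic bearer-token shape (the pattern names and sample blob below are invented for illustration; defenders can run the same kind of scan over their own logs and configs):

```python
import re

# Illustrative credential scan over a blob of text. AWS access key IDs
# carry a well-known AKIA/ASIA prefix; other token formats vary by vendor,
# so the bearer pattern here is a deliberately generic assumption.

CREDENTIAL_PATTERNS = {
    "aws_access_key_id": re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b"),
    "bearer_token": re.compile(r"Bearer\s+[A-Za-z0-9\-._~+/]{20,}"),
}

def find_credentials(text: str) -> dict:
    """Return every pattern match found in the given text."""
    return {name: pat.findall(text) for name, pat in CREDENTIAL_PATTERNS.items()}

# AKIAIOSFODNN7EXAMPLE is AWS's canonical documentation-only example key.
stolen = "config: aws_key=AKIAIOSFODNN7EXAMPLE auth: Bearer abcdefghijklmnopqrstuv123"
hits = find_credentials(stolen)
print(hits["aws_access_key_id"])  # ['AKIAIOSFODNN7EXAMPLE']
```

If a five-line regex sweep can surface live keys in exfiltrated data, it can surface them in your repositories and log pipelines first.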
If your customers trust you to handle their data securely, you need to take sufficient precautions to protect them. As more headlines report big companies being compromised by way of their vendors, we can expect those companies to demand more from the vendors they do business with.
As the business world becomes more and more connected with machine identities and AI agents relying on tools like API tokens to communicate with each other across platforms, organizations will have to step up their game to ensure that they are a step ahead of the criminals.
This means being responsible by following best practices and embracing automation to handle the scale, but also not shying away from the opportunities that AI agents offer for greater productivity and growth.
To learn more about how Apono is enabling organizations to confidently embrace the AI-driven future, reach out to us today and start the conversation.
Or, try our Cloud Assessment for NHIs to uncover hidden risks in your AWS environment and explore smart remediation solutions powered by Zero Standing Privileges.