Identity and Access Governance (IGA): Definition & Differentiation Explained

Identity is now the most common entry point for attackers. In cloud-native environments, thousands of microservices, containers, and agents request credentials every day, and each one represents a potential weakness. The imbalance between human and non-human identities (NHIs) is growing, but many organizations still devote the bulk of their identity and access governance (IGA) efforts to the former. 

Over the past two years, 57% of organizations experienced at least one API-related breach; of those, 73% saw three or more incidents. At the same time, the global IGA market was valued at approximately $8 billion in 2024, driven by compliance frameworks such as SOC 2, GDPR, HIPAA, and CCPA that demand auditable proof of access controls.

The takeaway: static defenses built on logins and standing permissions can’t keep pace with identities that appear and disappear daily. For engineering teams, identity and access governance has shifted from a “nice-to-have” to a baseline requirement for both security and trust.

What is identity and access governance (IGA)?

Identity and access governance (IGA) is the framework your organization can use to decide who should have access to systems, applications, and data, and whether that access is still appropriate. IGA goes beyond the mechanics of logging and instead focuses on oversight, accountability, and policy enforcement.

Most IGA programs are built around a few core practices:

  • Identity lifecycle management: Provisioning, modifying, and deprovisioning accounts.
  • Role and entitlement management: Grouping permissions and enforcing least privilege.
  • Access reviews and certifications: Recurring checks to validate appropriateness of access.
  • Compliance reporting: Generating evidence required by auditors and regulators.

Unlike identity and access management (IAM), which enforces access at runtime, IGA asks the harder question: should this access exist at all? Answering this question is harder today because identities are multiplying. Machine identities outnumber humans by over 80 to 1, making them one of the fastest-growing risk classes in cloud-native environments. Unlike human accounts, NHIs rarely go through onboarding or offboarding, rely on static API keys or long-lived tokens, and are frequently overprivileged—the perfect storm for attackers.

Core capabilities of Identity and Access Governance

IGA is about ensuring access is appropriate, accountable, and, most importantly, auditable. To achieve these three pillars, IGA platforms bring together several capabilities.

  • Access reviews and certification: Periodic checks give managers and system owners the chance to confirm that permissions are still valid. They’re meant to clean up access left behind after job changes, project work, or employee turnover.
  • Role and entitlement management: Permissions are grouped into roles to make administration manageable. This model keeps access consistent across teams and reduces the scatter of exceptions that creep in over time.
  • Separation of Duties (SoD): SoD prevents conflicting privileges so that no single identity has the ability to commit fraud or bypass checks.
  • Audit and compliance reporting: Most frameworks, from SOC 2 to GDPR, require proof that access is being governed. Automated reports provide that evidence and complement broader vulnerability management programs designed to reduce risk. 
  • Delegated administration and approval workflows: Requests can be routed to business or technical owners who best understand whether access makes sense. This step spreads responsibility more evenly, while decisions remain logged centrally.

Crucially, modern IGA extends these capabilities beyond human users to include NHIs, ensuring service accounts and automation agents undergo the same scrutiny as employees.

IGA, IAM, and PAM Compared

Identity management has grown into a set of overlapping disciplines, each with its own focus. Many people still use the terms interchangeably, but this approach can blur the lines between strategic governance and privileged account protection.

It’s helpful to understand exactly where each begins and ends. IAM is concerned with authentication and access control at the point of login. IGA adds oversight, certification, and auditability across all identities. Privileged access management (PAM) narrows in on the riskiest accounts, such as administrators and root users, to monitor and control their activity. For example, organizations rely on PAM software to enforce controls around these sensitive accounts, ensuring that high-risk permissions are granted only when necessary and closely monitored.

Table 1: IGA vs IAM vs PAM

| Discipline | Focus | Typical Scope | Key Purpose |
| --- | --- | --- | --- |
| IAM | Enforcement | Authentication, MFA, SSO | Prove identity and control access at login |
| IGA | Governance | Human and non-human identities | Define, review, and certify who should have access and why |
| PAM | Privilege | High-risk administrator and root accounts | Control and monitor privileged sessions |

5 Challenges of Implementing IGA in Cloud-Native Environments

1. Scaling Ephemeral Identities

In a cloud-native stack, thousands of containers, pods, and serverless functions may launch and terminate within minutes. Each instance often requires its own token or temporary credential to function. Legacy governance processes that rely on quarterly or monthly reviews cannot track this churn, so permissions are left unchecked. Security teams end up with audit trails that miss most of the short-lived identities, which makes proving compliance or investigating incidents almost impossible. A best practice to overcome this challenge is to use a cloud-native access management solution like Apono, which automates JIT access and generates granular audit logs, so even short-lived identities are governed in real time.
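
A minimal sketch of what time-bound credentials can look like in practice, using AWS STS via boto3: instead of a static access key, a pipeline job assumes a role for a few minutes and the keys expire on their own. The role ARN and session name below are illustrative placeholders, not a prescribed setup.

# Hedged sketch: swap static keys for short-lived, scoped credentials via STS
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ci-deploy",  # illustrative role
    RoleSessionName="pipeline-run-42",
    DurationSeconds=900,  # 15-minute ceiling keeps the grant time-bound
)["Credentials"]
# The AccessKeyId / SecretAccessKey / SessionToken in `creds` expire automatically,
# so the audit trail maps directly to this one short-lived session.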

2. Complex Permissions

Cloud providers like AWS, Azure, and GCP offer permission systems with thousands of individual actions that can be combined into highly customized roles. Developers frequently over-provision roles because mapping business tasks to such granular entitlements is too time-consuming. Over time, this permission sprawl multiplies, creating toxic combinations that static governance models don’t properly evaluate.

3. Friction with Development Teams

When engineers need access to a production database or a new cloud service, the request usually goes into a ticket queue. When reviews take too long, teams are forced to delay work or find workarounds such as borrowing credentials. 

This bottleneck not only slows delivery but also weakens governance because security becomes seen as a blocker rather than a partner. In some organizations, administrators pre-approve broad entitlements “just in case.” This mistake undermines the entire principle of least privilege and increases the chance of compromised credentials being abused across environments. 

4. Non-Human Identities

Unmonitored NHIs are among the most consistent attack vectors in identity-driven breaches today. Service accounts and automation agents run critical workflows in CI/CD pipelines, monitoring systems, and infrastructure tools. These identities often carry long-lived credentials with powerful permissions. Unlike human users, they rarely leave the organization, so deprovisioning processes don’t catch them. 

When one of these accounts is forgotten or left unmonitored, it becomes a permanent backdoor. Attackers frequently target exposed API keys or tokens for this reason, knowing they are less likely to be rotated or reviewed. As we’ve seen with emerging protocols like MCP, unsecured machine-to-machine communications can further amplify the risks of unmanaged NHIs.

Recent examples include Microsoft’s 2023 SAS Token Leak, where researchers inadvertently published a token that exposed 38TB of internal data, and the BeyondTrust API Key Breach in 2024, where attackers exploited an overprivileged, static key to reset passwords and escalate privileges. Both incidents highlight how unmanaged non-human identities can open the door to large-scale compromise.

An essential NHI security best practice is to run a Cloud Access Assessment to uncover risks in your AWS environment, provided by Apono at no cost (for a limited time only). Apono’s platform is built to close this blind spot by enforcing JIT and JEP policies for NHIs just like human accounts, stopping long-lived keys from becoming backdoors. 

5. Fragmented Visibility

Most enterprises work across multiple clouds, each with its own identity console and reporting format. Security teams trying to answer “who can access sensitive data” are forced to stitch together incomplete reports. The lack of a unified view leaves gaps for auditors and prevents real-time oversight—a challenge that becomes even more critical in industries like FinTech or government, which are subject to additional compliance requirements like CUI Basic.

How Modern IGA is Evolving

Identity governance is moving from periodic checks to continuous oversight. Instead of leaving broad permissions in place and revisiting them months later, newer approaches shift towards:

  • Just-in-Time access (JIT): Temporary access that expires automatically and reduces the window of risk while giving auditors a clearer picture of how access is actually being used. JIT access automation and contextual approval workflows are essential for scaling governance without undermining developer productivity.
  • Zero Trust: Assumes no identity should have standing access by default. Every request must be verified in context, regardless of whether it comes from a human developer or a bot in a CI/CD pipeline. 
  • Just-Enough Privileges (JEP): JEP is particularly important for NHIs. JEP grants the minimum rights needed for a task for the shortest possible time. This shift addresses the chronic overprovisioning of machine identities, aligns with Zero Trust, and directly reduces the blast radius of a potential compromise.
  • Workflow integration: Approvals embedded into Slack, Teams, or CLI so governance fits into daily developer workflows.

By enforcing just-in-time access and contextual approvals, IGA reduces the standing permissions that often undermine API security in CI/CD pipelines and cloud workloads.

Bringing Automation to the Center of Governance with Apono

Cloud-native deployments and the explosion of non-human identities have pushed traditional identity governance past its limits. Static reviews and manual approvals leave too much standing access in environments where roles and permissions change constantly. To reduce risk, governance needs automation, time-bound access, and policies that apply equally to people and non-human accounts.

Apono redefines IGA for cloud-native teams. It eliminates risky standing permissions for both human and non-human identities, while ensuring compliance frameworks increasingly require full visibility into NHI governance. Apono’s platform automates JIT and JEP to eliminate standing permissions, generates granular audit logs for compliance, and applies governance equally to human and non-human identities. Approvals flow directly through Slack, Teams, or CLI—every action logged, every change auditable.

With built-in break-glass and on-call flows, and deployment in under 15 minutes, Apono delivers Zero Trust governance at the speed of modern infrastructure.

Ready to Eliminate Standing Access Risk?

Apono closes the gap by automating JIT and JEP for both human and non-human identities — stopping long-lived keys from becoming backdoors. Download The Security Leader’s Guide to Eliminating Standing Access Risk to see how leading cybersecurity companies are rethinking access control.

Inside the Crimson Collective Attack Chain—and How to Break It with Zero Standing Privileges

New details have emerged in recent weeks about how the Crimson Collective threat group has been conducting a large-scale campaign targeting Amazon Web Services cloud environments. Recent reports highlight how easily the attackers progressed once they obtained valid credentials.

The Crimson Collective claims to have exfiltrated ~570 GB across ~28,000 internal GitLab projects; Red Hat has confirmed access to a Consulting GitLab instance but hasn’t verified the full scope of those claims.

After the breach became public, BleepingComputer reported that the threat actors partnered with the headline-grabbing extortion group Scattered Lapsus$ Hunters to increase pressure on Red Hat.

In this post, we’ll break down how the hackers carried out their attack and how to keep your organization protected via a Zero Standing Privileges approach.

Breaking Down the Attackers’ Methodology 

According to the Rapid7 report covered by BleepingComputer, the attackers took a tried-and-true course of action to compromise their targets and make off with their data (a detection sketch follows the list below).

  1. Find exposed keys — They used TruffleHog to scan target environments and discover secrets in repos, configs, or other leaks to gain initial access.
  2. Establish persistence — Then they used the leaked keys to call AWS APIs and create highly privileged IAM users/login profiles and new access keys.
  3. Privilege escalation — With their foot firmly in the door, they attached AdministratorAccess to their new users. Boom: full control.
  4. Recon — Privileges in hand, they then hit the cloud running, enumerating users, EC2, S3 buckets, RDS clusters, EBS volumes, regions, and apps to map the prize.
  5. Data collection — Next they started hoovering up data, changing RDS master passwords, taking snapshots of their targets’ DBs and EBS volumes.
  6. Exfiltration — With the targets’ data collected, they moved the snapshots and objects to S3 buckets they controlled or other accessible storage, spinning up EC2 instances and attaching volumes under permissive security groups for faster transfers.
  7. Extortion — Finally, they sent ransom notes from inside the AWS account using SES, as well as to external contacts.
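
The persistence and escalation steps above (2 and 3) leave a detectable footprint: new IAM users with the AdministratorAccess managed policy attached directly. Below is a hedged detection sketch using boto3; it assumes read-only IAM permissions and is a starting point, not a complete detection pipeline.

# Hedged sketch: flag IAM users with AdministratorAccess attached directly
import boto3

ADMIN_POLICY_ARN = "arn:aws:iam::aws:policy/AdministratorAccess"
iam = boto3.client("iam")

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        attached = iam.list_attached_user_policies(UserName=user["UserName"])
        if any(p["PolicyArn"] == ADMIN_POLICY_ARN for p in attached["AttachedPolicies"]):
            print(f"Review: {user['UserName']} has AdministratorAccess attached")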

The Cloud Identity Challenge

This latest attack highlights a tough, if now cliché, truth in the cloud: attackers don’t need to break in if they can just log in. Once credentials with standing privileges are compromised, attackers have everything they need to move freely across environments.

The reality is that credential compromise is now a matter of when, not if. And as the number of Non-Human Identities (NHIs)—like service accounts, IAM roles, and API keys—continues to explode, the challenge keeps growing. In many organizations, NHIs now outnumber human users by roughly 200 to 1.

Things are getting even more complicated with the rise of Agentic AI tools. These systems operate at massive scale with unpredictable access needs, often without the visibility security teams rely on to monitor what’s actually being accessed.

Protecting against these kinds of attacks means focusing not just on preventing credential theft, but on minimizing what attackers can do after credentials are compromised. That’s why AWS told BleepingComputer that customers should “use short-term, least-privileged credentials and implement restrictive IAM policies.”

That advice perfectly captures the idea behind Zero Standing Privileges (ZSP), reducing the amount of always-on access available in your environment, so even if credentials are stolen, attackers have nowhere to go.

Of course, actually putting that into practice is the hard part. Manual access management is slow and painful, and cutting privileges too aggressively risks hurting productivity. And as cloud environments and NHIs multiply, keeping up manually just isn’t realistic anymore.

How Apono Helps

Apono makes it simple to put Zero Standing Privileges into action—without slowing anyone down.

Here’s how:

  • Automatically discovers and remediates standing privileges across both human and non-human identities
  • Delivers Just-in-Time (JIT) access, granting permissions only when needed and revoking them immediately after use
  • Reduces Non-Human Identity (NHI) privileges safely, using automated rightsizing via quarantining and reversible remediation that preserves uptime and avoids breaking integrations
  • Centralizes and automates governance, unifying policies across cloud, on-prem, and AI-driven systems
  • Supports Zero Trust initiatives, enforcing short-lived, least-privileged access without adding friction for engineers

With Apono, security teams can close privilege gaps before attackers can exploit them, while developers and AI systems get access exactly when—and only when—they need it.

If you want a quick way to benchmark where standing privileges still exist in your environment, download our Zero Standing Privileges (ZSP) Checklist: a fast, practical self-assessment to help you identify hidden risks and early indicators of exposure.

Ready to take a smarter approach to cloud access?

See how Apono can help your organization prevent credential-based attacks while keeping teams fast and productive. Visit apono.io/jit-and-jep/ to learn more about our platform or request a demo.

What is Agent2Agent (A2A) Protocol and How to Adopt it?

Imagine autonomous agents negotiating and acting on your behalf—no manual hand-offs, just efficient, policy-driven communication. That’s the promise of Google’s Agent2Agent (A2A) Protocol, unveiled at Google Cloud Next in April 2025. Developed with input from over 50 partners, A2A is now open-sourced under the Apache 2.0 license and governed by the Linux Foundation.

But excitement quickly collides with reality. Early adopters report compliance blind spots (who approved that token and when?), latency added by cross-agent orchestration, and the operational overhead of adding another standard into pipelines. As agent-based architectures become the backbone of AI-driven automation, the pressure is mounting on engineering teams to enable secure, autonomous interactions between services. 

A 2025 global AI survey reveals that 29% of enterprises are already running agentic AI in production, with another 44% planning to join them within a year. Cost-cutting and reducing manual workloads are among the top goals for adoption. Understanding the Agent2Agent Protocol is vital for building secure and scalable systems that can keep up with the next wave of automation.

What is the Agent2Agent (A2A) Protocol?

Google’s Agent-to-Agent (A2A) Protocol is an open, vendor-neutral language that lets independent AI agents discover each other, negotiate how they will talk (text, files, streams), and work together without exposing their private code or data. 

Google unveiled the spec on April 9, 2025, at Cloud Next. It is backed by more than 50 technology partners and is now maintained as an open-source project under the Apache 2.0 license.

Google kicked the A2A project off after running large, multi-agent systems for customers and seeing the same pain points repeat:

  • Brittle one-off integrations.
  • Security gaps.
  • No common way for agents from different vendors to “shake hands.”

How does the Agent2Agent Protocol work?

The four-step flow below illustrates the full A2A handshake from discovery to streaming task updates.

1. Discovery with an Agent Card

Every agent publishes a tiny JSON file, /.well-known/agent.json, listing its name, endpoint, skills, and supported auth flows. A client agent simply fetches this card (directly or via a registry) to see who can do what and how to connect.
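
For a sense of what discovery looks like from the client side, here is a minimal sketch that fetches another agent’s card and lists its advertised skills. The endpoint is a made-up example, and the exact field names can differ across A2A implementations and versions.

# Hedged sketch: fetch a remote agent's card and inspect what it can do
import requests

AGENT_BASE = "https://currency-agent.example.com"  # illustrative endpoint
card = requests.get(f"{AGENT_BASE}/.well-known/agent.json", timeout=5).json()

print("Agent:", card.get("name"))
for skill in card.get("skills", []):
    print("  skill:", skill.get("name"))
print("Auth:", card.get("authentication"))  # field name is illustrative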

2. Auth in Micro-slices

The card also tells the caller which OAuth 2/OIDC method to use. The client obtains a short-lived token (valid for minutes), so access is scoped and expires automatically. This step eliminates hardcoded secrets, marking a shift from static secrets to dynamic machine identity management, where each agent authenticates based on policy, context, and lifespan.

3. Task Exchange Over Web Standards

With a token in hand, the client sends a task/send or task/sendSubscribe request via JSON-RPC 2.0 over HTTPS.

  • Synchronous work: task/send returns the answer right away.
  • Long-running work: task/sendSubscribe opens a Server-Sent Events (SSE) stream so the remote agent can push status and partial results. Tasks move through states (submitted → working → completed) and can include messages or artifacts (files, JSON blobs, images). A minimal request sketch follows this list.
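
Below is a minimal sketch of a synchronous task request expressed as JSON-RPC 2.0 over HTTPS. The endpoint, token, and exact parameter shape are illustrative; consult the A2A schema for the authoritative method and field names.

# Hedged sketch: send a task to a remote agent as a JSON-RPC 2.0 call
import uuid
import requests

payload = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),
    "method": "tasks/send",  # method naming follows the spec text above
    "params": {"input": {"amount": "50", "from": "USD", "to": "JPY"}},
}
resp = requests.post(
    "https://currency-agent.example.com/",  # illustrative endpoint
    json=payload,
    headers={"Authorization": "Bearer <short-lived-token>"},  # placeholder token
    timeout=10,
)
print(resp.json())  # completed tasks return messages or artifacts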

4. Built-in Observability

Each request/response carries trace IDs, and agents emit structured logs and metrics in OpenTelemetry Protocol (OTLP) format. You can drop A2A traffic straight into existing dashboards without bolting on a separate telemetry layer. This level of observability is essential for identifying anomalies and containing the risks of non-human identities operating in complex, distributed environments.

Many teams adopting A2A have struggled with blind spots, like losing track of which agents initiated sensitive operations or where tokens are reused across flows. Without built-in tracing and structured logs, auditing multi-agent systems becomes a fragmented, manual task. A2A’s observability layer helps reduce that operational burden, but it still requires thoughtful integration with existing security tooling.

What is the Agent2Agent Protocol designed to do?

At its core, A2A gives every software agent a common language and contract so they can:

  • Discover one another without manual registry updates.
  • Exchange tasks securely with scoped tokens and auditable IDs.
  • Stream real-time results (via SSE) from remote agents for both quick jobs and long-running workflows.

By replacing brittle webhooks and custom RPC layers with an open JSON-RPC spec, the Agent2Agent Protocol eliminates glue code and reduces integration overhead across ecosystems.

Because discovery, auth, transport, and telemetry are part of the spec, you don’t waste cycles reinventing service discovery, API gateways, or audit pipelines. You wire agents together (much like microservices), then layer governance tools on top to enforce least-privilege, time-boxed access across your infra. It reduces repetitive integration tasks, which improves developer productivity across teams working in complex environments.

Why is the Agent2Agent Protocol a good thing?

The Agent2Agent Protocol solves real pain points in DevOps and automation by making agent communication smarter and safer. Here’s why it’ll be beneficial in the long run.

Plug-and-Play Interoperability

Any AI agent that speaks A2A can call, or be called by, any other agent.

Example: If a vulnerability scanner agent discovers a patch management agent during a CI run, it can send a task with the CVE list and stream the fix status back to the build.

Built-in Security

Short-lived OAuth/OIDC tokens and signed task IDs keep access scoped and auditable without requiring the hardcoding of secrets.

Example: When a monitoring bot detects a spike, it requests a one-off token to spin up extra pods. The token expires automatically once scaling is complete, aligning with enterprise identity management best practices.

Less Glue Code, Faster Pipelines

The Agent2Agent Protocol includes built-in support for agent discovery, JSON-RPC 2.0 transport, and SSE streaming. Teams can focus on features instead of writing adapters and polling loops.

Example: A scheduler agent queries rightsizing agents in AWS, GCP, and Azure, aggregates savings, and opens a single cost-cutting PR. No polling scripts are required.

Enterprise-Grade Observability

Every request carries trace IDs and standard OTLP metrics, which are dropped straight into Grafana/Prometheus dashboards, regardless of whether those agents are operating in the cloud, across edge services, or in traditional data centers.

Example: A chatbot passes a billing request to a payment agent via A2A; the handoff is fully logged, and the one-time token expires as soon as the charge is completed.

Agent2Agent Protocol Design Principles

These guiding principles explain why A2A stays flexible, secure, and developer-friendly as the ecosystem expands.

  • Agent Cards for zero-config discovery: Every agent publishes a small JSON file at /.well-known/agent.json that lists its endpoint, skills, and auth method.
  • Standard JSON-RPC 2.0 over HTTPS: Requests (tasks/send) and responses travel as JSON-RPC messages on plain HTTPS, so agents in any language interoperate through existing API gateways or mTLS proxies.
  • Built-in auth with short-lived tokens: Tokens scoped per task and expiring in minutes eliminate long-lived secrets while integrating with enterprise SSO, a key cybersecurity best practice for identity-aware systems and zero trust architectures.
  • Flexible interaction patterns: Uses a blocking call for quick answers and tasks/sendSubscribe for long tasks. Gives real-time updates via Server-Sent Events (SSE). Agents can even push webhooks for fully async workflows.
  • Rich, multimodal data exchange: A single task can bundle text, JSON, files, images, or audio as separate “parts,” allowing agents to pass artifacts (logs, screenshots, CSVs) without inventing new MIME schemes.
  • Versioning & vendor-neutral extensibility: The spec includes a compatibility flag so new features roll out without breaking older agents. The A2A Apache-2.0 license prevents any one vendor from locking the rest out. On July 31, 2025, A2A version 0.3 was released—adding gRPC support, signed security cards, and extended Python SDK support; the protocol now counts over 150 supported organizations.

How to Adopt the Agent-to-Agent (A2A) Protocol in 6 Practical Steps

Follow this step-by-step guide to adopt your first A2A agents and weave them safely into your workflow.

Step 1: Install the Sample Toolkit

Clone Google’s reference repo and install the Python SDK in a virtual environment.

git clone https://github.com/a2aproject/a2a-samples.git
cd a2a-samples
python -m venv .venv && source .venv/bin/activate
pip install a2a-python            # or a2a-js for Node

The repo includes basic example agents and lightweight helper code for JSON-RPC calls and SSE streaming, but production implementations will need hardening.

Step 2: Launch a Demo Agent

Pick one of the ready-made agents (e.g., the “currency” FastAPI service) and run it.

uvicorn samples.python.currency_agent:app --port 10000 --reload

When the server starts, it auto-serves an Agent Card at http://localhost:10000/.well-known/agent.json, advertising its skills and auth method.

Step 3: Expose the Agent Card

Make that JSON file reachable via a public URL, an internal LB, or a registry entry. Other agents can pull it and learn who you are and how to talk. No extra service-discovery layer is required. For production environments, agents can also publish to a centralized A2A registry, which supports indexed search and simplifies discovery across large infrastructures.

Step 4: Hook in Short-Lived Auth

Edit the auth block in the Agent Card to point at your OIDC or token issuer and set the TTL to minutes. Every task call will now carry a scoped, self-expiring token instead of a long-lived secret.

Step 5: Send a Task and Stream the Result

From another agent (or just curl), invoke the first agent:

# Placeholder: fetch a short-lived token (TTL in minutes) from your OIDC/token issuer CLI
TOKEN=$(your-token-issuer --ttl 5m --aud currency-agent)
curl -H "Authorization: Bearer $TOKEN" \
     -X POST https://currency-agent:10000/tasks/sendSubscribe \
     -d '{"input":{"amount":"50","from":"USD","to":"JPY"}}'

The request uses JSON-RPC 2.0 over HTTPS; the sendSubscribe variant opens a Server-Sent Events stream, so you get live status until completed.
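
If you would rather consume that stream from code than from curl, a rough Python equivalent looks like the sketch below; it treats each "data:" line as a status update. The endpoint and token are placeholders.

# Hedged sketch: stream task updates from a sendSubscribe call over SSE
import requests

with requests.post(
    "https://currency-agent:10000/tasks/sendSubscribe",  # illustrative endpoint
    json={"input": {"amount": "50", "from": "USD", "to": "JPY"}},
    headers={"Authorization": "Bearer <short-lived-token>"},  # placeholder token
    stream=True,
    timeout=60,
) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if line.startswith("data:"):
            print(line[len("data:"):].strip())  # submitted → working → completed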

Step 6: Watch the Traces

The SDK emits OTLP logs/metrics with a shared trace ID. Point OTLP logs and metrics to your backend of choice for unified observability.
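
As a rough illustration of pointing those signals at a backend, the sketch below wires the OpenTelemetry Python SDK to an OTLP collector; the collector address is a placeholder for your own environment.

# Hedged sketch: export spans for A2A calls to an OTLP collector
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="otel-collector.internal:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("a2a-client")
with tracer.start_as_current_span("tasks.send"):
    pass  # wrap your A2A request here so it carries the shared trace ID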

Security Automation for Safe A2A Implementation

The Agent-to-Agent (A2A) Protocol enables software agents to trade tasks and data on the fly, but it truly shines when access is tightly controlled and fully auditable. Apono and the A2A Protocol share a key mission: enabling secure, policy-driven access between non-human identities (NHIs) like service accounts, bots, and APIs. Apono ensures that, even as NHIs interoperate across boundaries, their access is ephemeral, precisely scoped, and compliant. 

Apono’s platform is purpose-built to manage access for NHIs by enforcing Just-In-Time (JIT) and Just-Enough-Privilege (JEP) access, thereby reducing standing privileges and misconfigurations. It ensures every service account, bot, or API key gets only the access it needs for exactly as long as it’s needed. 

Apono is designed to become the enforcer of orchestrated permissions across infrastructure by automating and right-sizing the lifecycle of access for NHIs—including provisioning, expiration, and auditability—to install least privilege for NHIs and bring zero trust to all of your identities. 

With Apono’s auto-expiring tokens and centralized logs, you can narrow the window for misuse and provide security teams with a single source of truth when compliance and auditing questions arise.

Get hands-on with Apono. Request a demo to deploy in under 15 minutes and start eliminating overprivileged access.

Build vs. Buy Access Control: Why Apono Is the Smarter Choice for Cloud & Security Teams

The Access Management Dilemma in Hybrid Environments

Security and engineering teams today face a tough balance: protecting sensitive resources while keeping developers productive. As organizations shift from on-prem to the cloud, access management becomes one of the biggest challenges.

With more identities—human and non-human—gaining access to more resources across hybrid environments, the risks rise. Studies show that over 95% of identities hold excessive privileges, and attackers are exploiting this reality, with 88% of breaches starting from compromised identities.

It’s natural for engineering teams to want to “build” their own Just-in-Time (JIT) access solution. But is that really the best use of resources? Increasingly, organizations are asking themselves:

Should we build an in-house solution or buy a platform that delivers secure, scalable JIT access out-of-the-box?

This article explores the trade-offs of building vs. buying so you can make the right choice for your organization.

The Real Costs of Building Your Own JIT Access Management

Rolling your own JIT solution sounds simple, but in practice, it’s often a patchwork of services, scripts, and ongoing maintenance.

What it takes to build:

  • Provisioning logic: Microservices or Lambdas to trigger access grants/revocations.
  • Rules engine: Custom service to decide who can request what.
  • Integrations: Connectors for each cloud, app, and service.
  • Role management: Mining roles, setting up RBAC, auditing usage.

The hidden cost:

  • Every API change or new service means potentially new engineering work, requiring design, development and testing.
  • DIY systems are usually scoped for niche apps rather than broad coverage, leaving huge gaps, including how to support the rest of the identity fabric and related tools.
  • Continuous upkeep and testing drains developer time and slows agility.

In short, the challenge isn’t just building. It’s maintaining, testing, patching, and scanning for vulnerabilities. It’s having a team to support you.

💡 Thinking about building your own solution?
See how leading teams evaluate Cloud PAM platforms before they commit. Download the Access Platform Buyer’s Guide here

Build vs Buy Comparison Table

| Factor | Build In-House | Buy a Platform (General) | Apono Advantage |
| --- | --- | --- | --- |
| Speed to Deploy | Months to design, develop, and test, resulting in a slower time-to-value. | Typically faster deployment with vendor-provided integrations and support. | API-first deployment with Terraform, Helm, CloudFormation; Slack/Teams-native workflows for fast adoption. |
| Role Creation Model | Often depends on pre-created roles; slow to adapt, prone to over/under-privilege. | Many solutions offer role management, which may require predefined roles or templates. | Dynamic roles created in real time, scoped to the task, auto-expire, and adapt automatically to business context. |
| Coverage | Limited to your team’s integration work; gaps likely in multi-cloud/SaaS. | Most vendors offer coverage across major cloud and SaaS platforms, but breadth and depth can vary. | Comprehensive support across AWS, Azure, GCP, Kubernetes, SaaS, and NHIs; single-pane-of-glass management. |
| Operational Overhead | Continuous upkeep for API changes, security patches, and policy logic. | Vendor-managed updates and maintenance help reduce the burden on internal teams. | Fully vendor-managed with continuous support for new APIs; automated discovery reduces admin effort. |
| Customization | Fully tailored to unique workflows and niche systems. | Platforms typically offer policy frameworks and workflow flexibility, though some adjustments may be needed. | Granular Access Flows and contextual policies, easily adapted to customer workflows without brittle custom code. |
| Security Posture | Risk of drift if roles aren’t updated quickly; harder to keep least privilege. | Most platforms provide controls for enforcing least privilege, although they are often tied to predefined structures. | Real-time context evaluation ensures least privilege with just-in-time and just-enough access; supports NHI quarantine. |
| Slack / Jira Integration | Requires custom development and ongoing maintenance. | Many platforms offer some integrations, with varying depths. | Deep Slack, Teams, and Jira integrations for request → approve → provision flows. |
| Auto-Expiring Roles | Must be built and maintained manually with custom scripts. | Some vendors provide time-limited role options. | Native auto-expiring, context-aware roles scoped to the task. |
| Audit Logging | Logs are often fragmented across different systems, requiring manual correlation. | Platforms provide centralized logging, but the depth can vary. | Unified session auditing with identity-to-action tracking, SIEM & ticketing integration. |
| Deployment | Complex build-out requiring internal engineering resources. | Vendor platforms usually offer guided setup and professional services. | Fast, API-based deployment with pre-built integrations and self-service rollout. |

Apono’s Secure by Design Architecture

They say never roll your own crypto—because with great power comes great responsibility. The same applies to JIT access. It holds the keys to your most sensitive crown jewels, so protecting it must be a top priority.

Whether it’s a Lambda function or another microservice handling provisioning, it carries a lot of permissions. The real question: how are you ensuring it can’t be compromised, thereby handing attackers the keys to the kingdom?

Apono’s patented secure architecture keeps your environment fully in your control. Our platform runs on two lightweight components:

  • The Web App – where admins create and manage access flows. It never touches your data or resources.
  • The Connector – deployed inside your cloud, fully under your control, executing only pre-defined actions and never storing secrets.

Why it matters:

  • No data exposure – Apono never reads your files, code, or datasets.
  • Secrets stay secret – Credentials are pulled directly from your cloud’s secret store and never cached.
  • Always available – High-availability design ensures access flows keep running without downtime.
  • Compliance built-in – Password resets and credential rotation are enforced automatically.

With Apono, all access stays in your environment—you get secure, reliable, and compliant access management without friction.

What Engineering Leaders Are Choosing

Monday.com transitioned from maintenance-heavy in-house workflows to a secure, scalable, and developer-friendly platform—powered by Apono.

ROI at Scale

  • 14,600+ developer hours saved per year through instant, auto-approved access.
  • 3,800+ DevOps hours saved per year by eliminating manual access handling.
  • 18,000+ hours reclaimed annually while strengthening compliance and reducing risk.

The ROI of Your Internal Resources Is in What You Can Sell

If you’re managing access to a niche or one-off resource, building something in-house might feel tempting. But the reality is that most teams quickly learn the cost is higher than the benefit: ongoing maintenance, constant patching, compliance reviews, and dedicating precious engineering cycles to “plumbing” instead of product.

Modern teams need speed, security, and scalability—not another internal project to babysit. A proven cloud-native JIT access management solution delivers reliability out of the box, reduces risk, and frees your engineers to do what they do best: ship value to customers.

Don’t Build What You Could Buy Smarter.

Download the Buyer’s Guide to learn how leading security teams compare Cloud PAM platforms — and why Apono is built for speed, scale, and Zero Standing Privilege.

PAM Buyer's Guide

7 Man-in-the-Middle (MitM) Attacks to Look Out For

Today’s man-in-the-middle (MitM) attacks go far beyond coffee-shop Wi-Fi: they target browsers, APIs, device enrollments, and DNS infrastructure. Using automated proxykits and supply-chain flaws, attackers hijack session cookies, tokens, and device credentials—turning one interception into persistent, high-value access.

Concerningly, these are not edge cases. Automated cyber threat activity surged 16.7%, with over 1.7 billion stolen credentials circulating on the dark web—fueling a 42% increase in credential-based targeted attacks. Passwords and simple MFA fail unless access is limited and continually verified.

Security teams can implement best practices, such as cutting token lifetimes and enforcing just-in-time elevation, to protect against man-in-the-middle attacks. Let’s review a comprehensive list of security controls you can implement immediately to make intercepted credentials worthless to attackers.

What are man-in-the-middle (MitM) attacks?

A man-in-the-middle (MitM) attack happens when an attacker secretly intercepts and manipulates communications between two parties. The attacker is positioned in the “middle” of the data exchange, between a user and an app, or between two users or two apps, without anyone noticing. With MitM attacks, the adversary can eavesdrop, steal credentials, alter data, or impersonate one of the parties involved.

Today’s MitM attacks target API calls, machine-to-machine traffic, and even naive agent-to-agent protocols in distributed, cloud-native environments. With stolen tokens or cookies, an attacker gains the same level of visibility and control as a legitimate service account.

Some examples of MitM techniques include:

  • Eavesdropping/sniffing: Capturing unencrypted traffic (credentials, config).
  • Message tampering: Altering data in transit (API responses, payloads).
  • Session & credential theft: Stealing cookies, tokens, or certs to impersonate users/services.

A successful man-in-the-middle adversary gains the same level of visibility and control as the legitimate user or service. Non-human identities (NHIs)—like service accounts, workloads, and agents—are particularly vulnerable. In fact, machine identities now outnumber human identities by as much as 80:1, multiplying the blast radius of a single interception. Without a strong enterprise identity management strategy, these identities are often left overprivileged and unmonitored, creating an easy path for MitM attackers.

7 Man in the Middle (MitM) Attacks to Look Out For, Plus Security Best Practices 

MitM attacks aren’t just theoretical risks; they can be the cause behind real breaches or even large-scale espionage campaigns. Let’s review the most relevant attack types that DevOps and engineering need to watch out for.

1. Classic HTTPS Spoofing and SSL Stripping

Attackers downgrade HTTPS connections to plain HTTP, eliminating the security layer of SSL/TLS. This attack vector leaves communication in plaintext, including login credentials, API keys, and session tokens. Misconfigured certificates, outdated systems, or user dismissal of browser warnings leave room for SSL stripping. DevOps teams are especially concerned about this in CI/CD pipelines and API endpoints, as a single misconfigured connection can become the entry point of a MitM attacker.

Example: The 2015 Superfish adware fiasco showed how software that installed its own root certificate could intercept HTTPS traffic. Because the certificates it issued all relied on a single shared private key, anyone with that key could impersonate sites (including banks) without triggering browser warnings.

Security best practices:

  • Enforce TLS 1.3 across all applications and services.
  • Use HTTP Strict Transport Security (HSTS) to prevent downgrade attempts (a minimal sketch follows this list).
  • Automate certificate renewal and rotation to reduce the risk of expired or misconfigured certificates.
  • Build a structured validation plan to ensure TLS configurations and certificate management are consistently tested across environments.
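
As one small, hedged example of the HSTS item above, an application-level middleware (Flask shown) can set the header on every response; in most deployments the same header is set once at the load balancer or reverse proxy instead.

# Hedged sketch: attach an HSTS header to every response
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_hsts(response):
    # Tell browsers to refuse plain-HTTP connections to this host for one year
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response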

2. DNS Spoofing (Cache Poisoning)

DNS hijacks and registrar compromises let attackers redirect entire domains to malicious infrastructure.

Example: Sea Turtle was a sophisticated espionage operation uncovered in 2019. Attackers targeted domain registrars, registries, and other DNS infrastructure to compromise DNS records and surreptitiously redirect traffic for targeted organizations to attacker-controlled servers. It allowed the attacker to intercept web and email traffic, steal credentials, and even serve forged or fraudulently issued TLS certificates to avoid immediate detection.

Security best practices:

  • Enforce DNSSEC and monitor DNS records via passive-DNS feeds (alert on unexpected delegations).
  • Lock registrar accounts with MFA and role separation; require multi-person approval for DNS changes.
  • Use certificate transparency and automated cert monitoring to detect fraudulent issuance quickly.
  • Use a cloud-native access management platform that limits what compromised DNS traffic can expose, since JIT access makes sensitive tokens and API keys time-bound.

So, what would these best practices look like in practice? Let’s look at an example. Caris Life Sciences used Apono to enforce JIT folder-level permissions in AWS S3—so even if DNS traffic were redirected, attackers couldn’t leverage long-lived standing credentials.

3. ARP Spoofing in Internal Networks (LAN-level MitM)

Attackers poison ARP tables on local networks to force traffic to flow through a malicious host, enabling sniffing and tampering with internal traffic.

Example: Pentest and tool writeups repeatedly show that cheap implants (like Wi-Fi Pineapple and Raspberry Pi) enable LAN ARP attacks. Effective data center management, such as strict network segmentation, helps reduce exposure to LAN-level MitM attacks. 

Security best practices:

  • Segment east-west traffic using VLANs and microsegmentation.
  • Deploy IDS/IPS rules for ARP anomalies and enable switch port security (sticky MACs, BPDU guard).
  • Encrypt internal service traffic (mTLS) so LAN sniffing yields little usable data (a minimal mTLS sketch follows this list).
  • Microsegmentation plus JIT permissions ensures that even if lateral movement is attempted, overprivileged standing access isn’t available.
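
A minimal sketch of the mTLS item above, using Python’s ssl module: the service presents its own certificate and refuses any client that cannot present one signed by the internal CA. Certificate paths are illustrative.

# Hedged sketch: TLS 1.3 server context that requires client certificates (mTLS)
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_3
context.load_cert_chain(certfile="service.crt", keyfile="service.key")
context.load_verify_locations(cafile="internal-ca.pem")
context.verify_mode = ssl.CERT_REQUIRED  # reject peers without a valid client cert
# Hand `context` to whatever HTTPS/gRPC server hosts the internal service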

4. Wi-Fi Eavesdropping & Rogue Access Points (Evil-Twin attacks)

Threat actors set up evil-twin or malicious hotspots to lure users into connecting, then proxy or intercept their traffic. This type of attack happens frequently in airports, public charging points, cafes, and hotels.

Example: In July 2024, Australian police arrested an individual for operating an “evil twin” hotspot that harvested travellers’ credentials by redirecting victims to spoofed login pages.

Security best practices:

  • Enforce VPN and device posture scans on all non-trusted networks; disable auto-join for enterprise devices.
  • Educate staff to verify SSIDs and employ certificate-pinned applications on high-value services.
  • Enforce 802.1X/enterprise Wi-Fi with device certificates and scan for duplicate SSIDs on the network.
  • Integrate network posture scanning with JIT access to sensitive assets so access from high-risk networks is denied or further challenged.

5. Session Hijacking/Token Replay (Stolen Cookies & API Keys)

Attackers replay stolen session cookies, tokens, or API keys to impersonate services or users, often without passwords. Stolen cookies and tokens don’t just result from MitM attacks; client-side flaws like cross-site scripting (XSS) can also expose session data and API keys, as seen in CVE-2024-44308.

Example: In the Microsoft SAS Token Leak (2023), researchers inadvertently published a Shared Access Signature token granting full access to an Azure Storage account and exposing 38TB of sensitive data. This NHI breach showed the risks of over-permissive, long-lived tokens.

Security best practices:

  • Ensure all tokens and permissions are short-lived, scoped, and auto-expiring (see the sketch after this list). That way, even if an attacker captures a valid token, it becomes useless almost immediately.
  • Use device-bound tokens or certificate-based device auth.
  • Detect impossible travel/concurrent sessions and trigger immediate token revocation.
  • A cloud-native access management platform (like Apono) ensures all permissions are short-lived, scoped, and auto-expiring. A stolen token from an intercepted session becomes useless within minutes.
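
The sketch below shows the short-lived-token idea in its simplest form with PyJWT: the token carries an exp claim a few minutes out, and verification fails automatically once it passes. Key handling and claims are illustrative; production tokens should also be audience- and device-bound.

# Hedged sketch: mint and verify a token that dies on its own within minutes
import time
import jwt  # pip install pyjwt

SIGNING_KEY = "replace-with-a-managed-secret"  # placeholder; load from a secret store

def issue_token(subject: str, scope: str, ttl_seconds: int = 300) -> str:
    now = int(time.time())
    claims = {"sub": subject, "scope": scope, "iat": now, "exp": now + ttl_seconds}
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

def verify_token(token: str) -> dict:
    # Raises jwt.ExpiredSignatureError once `exp` has passed,
    # so a replayed or intercepted token stops working quickly
    return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])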

6. Agent-to-Target Hijacks (Compromised Agents & Telemetry)

An attacker with network access (or who exploits a vulnerability in an agent) can intercept or impersonate agent-to-server telemetry and commands, hijacking workflows and observability channels.

Example: The Okta Support System Breach in 2023 saw attackers exploit a compromised NHI (a service account) to steal support artifacts containing customer credentials. Additionally, CVE-2025-1146 (CrowdStrike Falcon Linux component) illustrates how TLS validation bugs can enable MitM of agent-to-cloud traffic.

A potential MitM attack exploiting this flaw could trick the vulnerable CrowdStrike sensor into accepting a malicious, non-legitimate server certificate. This would allow the attacker to intercept, decrypt, and manipulate the secure communication between the sensor and the CrowdStrike cloud, potentially compromising system confidentiality and integrity.

Security best practices:

  • Enforce strict TLS validation and mTLS for agent-to-cloud links.
  • Limit agent privileges; require JIT elevation for sensitive operations.
  • Log agent activity and alert on anomalous command sequences.
  • Apono enforces JIT approvals on sensitive agent actions, so even a compromised agent account cannot escalate beyond its narrowly scoped, temporary role.

7. Naive Agent-to-Agent Protocols (Weak Inter-Agent Auth)

Naive or ad hoc agent-to-agent protocols that lack mutual authentication or request signing enable MitM attacks between agents and services in distributed systems. Such attacks may include context poisoning, agent impersonation, or exploiting an AI agent’s logic.

Example: Microsoft’s Taxonomy of Failure Modes in Agentic AI Systems warns how impostor agents could intercept agent communications. The research shows that an attacker could introduce an impostor AI agent, such as a fake “email assistant,” into a network of cooperating agents. This malicious actor could then intercept and alter legitimate communication between other agents, injecting new instructions and exfiltrating sensitive data without any human user noticing.

Security best practices:

  • Mandate mutual TLS and cryptographic signing for agent-to-agent calls; mandate strict key rotation.
  • Adopt centralized identity for services (machine identities), per-call authorization, and least-privilege policies.
  • Validate request provenance and use replay protection (nonces/timestamps) in protocols; a signing sketch follows this list.
  • Machine identity management should be centralized. JIT per-call permissions prevent overprivileged service accounts from being an ongoing MitM target.
  • A cloud-native access management platform like Apono manages machine identities and issues per-call JIT access, ensuring overprivileged service accounts aren’t a standing MitM target.
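
To make the provenance and replay-protection items above concrete, here is a hedged sketch of request signing between agents using a shared-key HMAC, a timestamp, and a nonce. Asymmetric signatures follow the same pattern; the receiver should additionally track recently seen nonces.

# Hedged sketch: sign and verify agent-to-agent requests to block tampering and replay
import hashlib
import hmac
import json
import time
import uuid

SHARED_KEY = b"replace-with-a-managed-secret"  # placeholder; rotate via your secret store

def sign_request(body: dict) -> dict:
    envelope = {
        "body": body,
        "timestamp": int(time.time()),  # lets the receiver reject stale requests
        "nonce": uuid.uuid4().hex,      # lets the receiver reject duplicates it has already seen
    }
    message = json.dumps(envelope, sort_keys=True).encode()
    envelope["signature"] = hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()
    return envelope

def verify_request(envelope: dict, max_age_seconds: int = 60) -> bool:
    received_sig = envelope.pop("signature", "")
    message = json.dumps(envelope, sort_keys=True).encode()
    expected_sig = hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()
    fresh = (int(time.time()) - envelope["timestamp"]) <= max_age_seconds
    return fresh and hmac.compare_digest(received_sig, expected_sig)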

Where legacy PAM relies on static roles and vault proxies, leaving windows of opportunity open for MitM actors, Apono operationalizes Zero Standing Privilege. That means every credential, token, or role is short-lived, scoped, and continuously verified—dramatically reducing the blast radius of a single interception.

Short-Lived Access, Long-Lasting Security With Apono

Man-in-the-middle attacks typically succeed when stolen credentials or tokens remain useful. The fastest prevention isn’t perfect encryption; it’s limiting the value of whatever an attacker manages to intercept. Make sessions transient, bind tokens to device and context, and detect proxied traffic so stolen credentials decay or are revoked right away.

Operationally, focus on three levers: mandate phishing-resistant MFA and device-bound authentication; use short-lived, auto-rotating tokens with per-call authorization and mTLS for service traffic; and put high-risk activities behind human approvals and quick revocation playbooks. These steps keep attackers from turning a fleeting interception into a sustained breach.

With Apono, stolen tokens expire within minutes, so attackers cannot turn an interception into sustained access. Permissions expire automatically, machine identities are scoped per call, and sensitive actions require approvals. Apono is built specifically for this approach: JIT access, automatically decaying permissions, scoped control over agents, and full audit trails reduce the blast radius of any interception. See how temporary access turns the tables: Apono operationalizes zero trust by eliminating standing privileges across human and machine identities.

Book a demo and start making stolen credentials useless before they can be weaponized.

Top 10 Privileged Access Management Software Solutions

Identity-related threats are draining time and resources faster than security teams can keep up. The challenge is no longer just about stopping breaches; it’s about keeping up with the scale of alerts and risks. 

On average, organizations spend 11 person-hours investigating each identity-related security alert. Meanwhile, credential theft has soared 160% in 2025, making privileged accounts and non-human identities (NHIs) a prime target for attackers. 

Modern Privileged Access Management (PAM) software solutions offer a way forward by automating access controls and reducing standing privileges, filling the gaps left by traditional approaches and securing your organization.

What are privileged access management software solutions?

Privileged Access Management software secures and controls access to high-value accounts like admin users and NHIs—basically any accounts that hold the keys to critical infrastructure. These solutions enforce the principle of least privilege, ensuring that users and services only get the access they need, for the minimum time required.

PAM software centralizes and automates access workflows, such as vaulting credentials, issuing short-lived tokens, monitoring privileged sessions, and enforcing policies like Just-in-Time (JIT) access. These tools check many boxes for security and compliance, such as creating audit trails for frameworks like SOC 2 and GDPR.

The need for PAM solutions is especially critical in today’s cloud environments, where non-human identities outnumber human users by more than 80:1. For example, instead of leaving a cloud service account (an NHI) with standing database or API security permissions, PAM tools can issue time-bound credentials only when that service is actively running a job.

What is Privileged Access Management

Benefits of Privileged Access Management Software Solutions

Effective PAM platforms deliver more than protection—they streamline access and ensure that even machine-to-machine credentials are properly governed.

  • Reduces security risk: Eliminates excessive or standing privileges, protecting against credential theft and identity-based attacks, especially those targeting non-human identities (e.g., API keys and bots).
  • Improves visibility into non-human identities: Discovers, monitors, and governs machine-to-machine credentials and Agent2Agent workflows that are often overlooked but frequently exploited.
  • Improves efficiency: Automates provisioning and revocation, removing ticket queues and giving developers on-demand and time-bound access via familiar tools like Slack.
  • Simplifies compliance: Generates detailed audit logs and automated reports to meet requirements like HIPAA, SOC 2, GDPR, and CCPA. Usually includes governance across critical workloads like data management and storage environments. 
  • Supports scalability: Manages access consistently across thousands of users, apps, and cloud environments without slowing teams down.

Key Features of Privileged Access Management Software Solutions

To understand why PAM is critical today, let’s look at what these solutions actually do and how they work.

  • Just-in-Time (JIT) access: Issues temporary, auto-expiring permissions so users and services only have access when needed.
  • Credential vaulting & rotation: Securely stores privileged credentials and automatically rotates them to prevent reuse or compromise.
  • Session monitoring & auditing: Records privileged activity for visibility, forensic analysis, and compliance reporting.
  • Granular policy enforcement: Applies least-privilege access controls at a fine-grained level—down to databases, APIs, Kubernetes clusters, and environments used for AI code generation and automated builds. 
  • Machine identity management: Discovers, governs, and secures credentials for service accounts, APIs, and other NHIs across cloud and DevOps environments.
Privileged Access Management Software Key Features

How to Choose the Right Privileged Access Management Software Solution

When comparing PAM tools, it’s important to balance security with usability and scalability. Here are key factors to guide your decision-making.

  • Prioritize automation: Look for tools that offer JIT access, automated provisioning, and credential rotation to minimize human error and manual overhead.
  • Check integration coverage: Ensure the PAM solution integrates seamlessly with your cloud providers, CI/CD pipelines, and collaboration tools like Slack or Teams. The solution should also scale governance across both human and machine identities. 
  • Assess compliance support: Verify that the PAM solution provides detailed audit logs, reporting, and policy controls to simplify SOC 2, HIPAA, GDPR, and broader data security compliance.

🔍 Compare PAM Platforms with Confidence
Turn your shortlist into a smart choice. See the capabilities that matter for AI workloads, Zero Standing Privilege, and NHI governance. Download the 2025 Access Platform Buyer’s Guide here

Top 10 Privileged Access Management Software Solutions

1. Apono

Apono Privileged Access Management Software Solutions

Apono is a cloud-native access management solution built to eliminate standing privileges and reduce identity-based risks without slowing developers down. While most PAM solutions still rely on vaults and manual workflows, Apono eliminates these bottlenecks with a cloud-native, Just-in-Time model built for scale. It deploys in less than 15 minutes and integrates with developer-friendly tools like Slack, Microsoft Teams, and CLI, making secure access simple and scalable. 

Main Features:

  • Granular policy enforcement: Enables fine-grained access down to individual databases, APIs, and cloud resources, with flexible approval workflows.
  • Automated JIT access: Issues time-bound, auto-expiring permissions so users and non-human identities (like service accounts and APIs) get access only when needed.
  • Break-glass and on-call flows: Pre-configured emergency access workflows ensure teams can remediate incidents quickly.
  • Comprehensive audit logs and reporting: Delivers full visibility into who accessed what, when, and why to simplify audits.
  • Self-serve access requests: Empowers developers to request and receive access instantly via Slack, Teams, or CLI.

Price: Tailored pricing depending on team size and infrastructure complexity. A free trial is available, and enterprise-grade plans are available upon request.

Review: “Apono’s product does exactly what it claims to […] it saves me time, and provides value to my users by streamlining the process of granting access to our resources in a precise, auditable way.”

2. StrongDM 

StrongDM Privileged Access Management Software Solutions

StrongDM is a zero trust PAM platform that centralizes access across infrastructure such as servers, databases, Kubernetes, cloud, and SaaS. Its key features include policy-based access control and session recording for audits and compliance. 

Main Features:

  • JIT access and credential automation 
  • Records SSH, RDP, Kubernetes, and database sessions for auditing 
  • Uses a Cedar‑based policy engine for context-aware access control

Price: Starts at $70/user/month. 

Review: “Their platform is intuitive and highly secure, which makes it easy for us to recommend to clients across industries.”

3. Heimdal Privileged Access Management 

Heimdal Privileged Access Management Software Solution

Heimdal Privileged Access Management is a comprehensive PAM module that enables JIT elevation and automatic de-escalation of user rights. It’s embedded within Heimdal’s broader cybersecurity suite. 

Main Features:

  • Zero‑trust execution and threat‑driven session termination
  • Integration with Heimdal’s broader security suite for centralized governance
  • Granular access control via role-based permissions

Price: By inquiry. 

Review: “While the solution can be complex to implement and manage, the benefits it provides in terms of enhanced security and improved efficiency are worth the investment.”

4. Wallix Bastion 

Wallix Privileged Access Management Software Solution

The Wallix Bastion PAM platform integrates password vaulting, session management, and access control, including HTML5 web sessions, with full video and metadata audits.

Main Features:

  • Centralized credential management with automatic rotation
  • Supports agentless, browser-based access (no VPN/fat client needed)
  • Secure machine-to-machine password handling via APIs

Price: User- or resource-based pricing available, starting at around $103/month for 10-50 users. 

Review: “The setup process was simple, and the solution can be implemented within less than one day.”

5. ARCON Privileged Access Management 

ARCON Privileged Access Management Software Solution

ARCON PAM is an enterprise-grade solution delivering granular control over privileged identities and environments. It supports various features, from adaptive authentication to session monitoring and secrets management. 

Main Features: 

  • Auto-discovers and onboards identities across AD, AWS/Azure/GCP
  • Supports SSO, MFA, and microservices-based deployments on-prem or SaaS
  • Securely vaults and rotates credentials (including SSH keys)

Price: By inquiry.

Review: “The UI has improved significantly over the past year, making navigation and policy configuration easier.”

6. Segura 360° Privilege Platform

Segura 360° Privileged Access Management Software Solution

Segura 360° Privilege Platform is an all-in-one PAM suite that spans the entire privileged access lifecycle. It covers password vaulting, DevOps secrets management, session recording, cloud identity governance, and more. 

Main Features: 

  • Fast deployment in as little as seven minutes
  • Full privileged access lifecycle coverage
  • Grants time-limited permissions with JIT access

Price: All-inclusive licensing model available by inquiry. 

Review: “The standout aspects are ease of use, robust security layers (MFA included), and excellent customer support.”

7. ManageEngine Password Manager Pro 

ManageEngine Password Manager Pro Privileged Access Management Software Solution

This option is often categorized as a Privileged Password Management (PPM) tool rather than a full-featured PAM. Still, ManageEngine offers a centralized, AES-256 encrypted vault for privileged credentials and remote session control. It integrates with Active Directory and CI/CD tools for seamless access governance. 

Main Features: 

  • Provides built-in compliance reports (PCI-DSS, ISO 27001, GDPR)
  • Integrates with AD, LDAP, REST APIs, and ticketing platforms
  • Records remote sessions (SSH, RDP) for forensic auditing

Price: 

  • Standard Edition: Starting at $595/year (2 admins)
  • Premium Edition: Around $1,395/year
  • Enterprise Edition: Approximately $3,995/year

Review: “Manage Engine Password Manager Pro is very user-friendly and easy to manage. [I use the] multi-factor authentication with strong encryption methods.”

8. Systancia

Systancia Privileged Access Management Software Solution

Systancia’s PAM solution adapts its control levels based on the task’s criticality, ranging from standard internal administration to high-risk, highly regulated operations. It delivers additional features like contextual session monitoring and secure credential injection. 

Main Features: 

  • Adaptive control levels by context, from routine tasks to high-security operations
  • Enables automated protective actions to halt suspicious activities
  • Offers hardened virtual or terminal-based access

Price: By inquiry 

Review: “Systancia Gate and Systancia Cleanroom allow us to implement these accesses very quickly and manage them very simply.”

9. Teleport

Teleport Privileged Access Management Software Solution

Teleport is a cloud-native platform providing PAM through zero trust principles and cryptographic identities. It unifies access across SSH, Kubernetes, databases, web apps, and cloud environments.

Main Features: 

  • Supports SSH, Kubernetes, databases, Windows desktops, and cloud apps under one policy plane. 
  • JIT, short-lived credentials
  • Enforces least-privilege access controls via identity-based policies

Price: Free trial. Pricing is by inquiry. 

Review: “Reviewers highlight centralized access management for SSH, Kubernetes, AWS, and RDS as a standout efficiency.”

10. Netwrix (formerly SecureONE) 

Netwrix Privileged Access Management Software Solution

Netwrix’s offering is a PAM platform that replaces standing privileges with just-in-time, ephemeral access. It delivers privileged account discovery, time-limited credentials, real-time session monitoring, and secure remote access without requiring VPNs.

Main Features:

  • Enables task automation (e.g., password resets, patch deployments)
  • Deploys in under a day
  • Automatically creates, enables, and cleans up privileged accounts on demand

Price: By inquiry

Review: “[Netwrix is] always very responsive and helpful every time we have an issue. The product itself is also very easy to use.”

Why Apono is Built for Modern Enterprises

Identity-based attacks are rising faster than traditional defenses can adapt, and you can’t afford to expose privileged accounts (human or machine). Modern PAM solutions offer an automation lifeline, cutting down investigation time and providing the audit trails needed for compliance. 

Unlike legacy PAM platforms that rely on static roles, Apono takes a cloud-native, JIT approach. By automating the issuance and revocation of privileges down to individual databases, APIs, or Kubernetes clusters, Apono eliminates standing access and dramatically reduces attack surfaces. Developers can request access through Slack, Teams, or CLI, while security teams gain full visibility through comprehensive audit logs and compliance-ready reporting.

In a world where machine identities outnumber humans and attackers exploit every overlooked credential, Apono delivers a safer and more scalable way to manage privileged access. Get started with Apono today and see how modern PAM can protect your organization without slowing down your teams.

Want to Compare PAM Platforms Side-by-Side?

Before you choose a solution, see how security leaders evaluate Privileged Access Management platforms built for the AI era. Download the Apono Access Platform Buyer’s Guide to learn what differentiates modern, cloud-native PAM from legacy vault-based tools—and how to choose the right platform for your organization.

Shai‑Hulud worm and the Nx / S1ngularity attacks: How to Use JIT Access to Stop the Chain Reaction

TL;DR

The Shai‑Hulud worm and the Nx / S1ngularity attacks show how token‑stealing malware, vulnerable workflows, and always‑on elevated permissions allow cascading compromise. Enforcing JIT access on repositories, organization owner/admin roles, and team‑based inherited permissions sharply reduces exposure, limits damage, and strengthens audit/compliance posture.

What We Know

In mid‑September 2025, security researchers revealed the spread of Shai‑Hulud, a self‑replicating worm infecting npm packages to steal cloud service tokens, including GitHub, AWS, and GCP credentials. The malware auto‑injects itself into other packages maintained by compromised accounts, exfiltrates secrets, and in some cases makes private repos public.

Earlier, the Nx / S1ngularity attack exploited vulnerable GitHub Actions workflows to exfiltrate developer tokens and secrets. Packages belonging to high‑profile maintainers were infected; owner and admin rights were abused via owner accounts or via tokens that had broad permissions.

These incidents underscore how elevated, long‑lived, or inherited permissions are some of the biggest risk multipliers.

What We Learned from Shai‑Hulud worm and the Nx / S1ngularity attacks

  • Token theft + workflow abuse
    Shai‑Hulud uses postinstall scripts to inject malicious code into packages owned by compromised accounts. It steals tokens and uses GitHub repos (created by attackers) to exfiltrate data. In Nx / S1ngularity, vulnerable GitHub Actions workflows gave attackers a foot in the door. Attackers then leveraged workflows to get secrets and elevate privileges.
  • Propagation via inheritance and automation
    Because developers often maintain multiple packages, compromised accounts allow auto‑spreading of the worm across many codebases with minimal manual action. 
  • Exposed always‑on elevated roles
    Sensitive roles like owner, admin, and write permissions belonging to a compromised account provide attackers broad control: publishing new versions, creating workflows that trigger on pushes, exfiltrating secrets, and making private repos public.
  • Lack of visibility & delayed detection
    Many of these actions happen automatically or via machine‑oriented workflows. Often it's only weeks later that someone notices a repo has changed unexpectedly or secrets have been published, and by then the damage has been done.

Key Risks & What Organizations Are Missing

  • Assuming “token theft” is only a dev risk. It can be an org risk: org owner/admin tokens get misused system‑wide.
  • Over‑reliance on static access / always‑on elevated permissions. Even seldom‑used owner accounts are dangerous.
  • Not validating workflows / postinstall scripts that could be malicious. S1ngularity / Shai‑Hulud used those entry points.
  • Teams accumulating more permissions over time, with weak reviews. Inherited access becomes a multiplier.

Securing GitHub Actions with JIT

The Shai‑Hulud attack showed how quickly compromised tokens in CI/CD can be abused. Attackers used the compromised npm tokens to spin up GitHub Actions workflows and automatically publish exfiltrated credentials to newly created public repos. The problem wasn’t just that secrets were exposed; it was that those secrets carried standing permissions that were always available to abuse.

If we look at this attack through a kill-chain lens focused on access privileges, the GitHub Actions stage stands out as a key opportunity to reduce the potential harm from credentials being published publicly.

Shai‑Hulud worm

With Apono, you can eliminate that risk. Sensitive GitHub Actions permissions — like publishing, pushing, or creating new repositories — are made requestable via Just-in-Time access. Instead of the GitHub Actions being freely available for use, they are temporarily provisioned upon validated request with the exact scope and duration required.

The result:

  • Attack surface reduced — no standing privileges for attackers to hijack.
  • Abuse blocked — a compromised identity’s privileges cannot be exercised outside its short JIT window.
  • Productivity preserved — engineers still run their workflows seamlessly, but with guardrails that adapt in real-time.

Apono makes GitHub Actions manageable the same way it does cloud and infrastructure access: least privilege by default, elevated only on-demand.
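To make this concrete, here’s a minimal sketch of what a short-lived, narrowly scoped GitHub credential can look like, using GitHub’s installation-token API. The repository name, permission set, and helper function are illustrative assumptions; in practice a JIT platform like Apono brokers this kind of grant behind a validated request rather than leaving it to ad-hoc scripts.

```python
import requests

def mint_scoped_token(app_jwt: str, installation_id: int) -> str:
    """Mint a GitHub App installation token limited to one repository and the
    minimum permissions a publish job needs. Repo and permissions are examples."""
    resp = requests.post(
        f"https://api.github.com/app/installations/{installation_id}/access_tokens",
        headers={
            "Authorization": f"Bearer {app_jwt}",   # assumes you already hold an App JWT
            "Accept": "application/vnd.github+json",
        },
        json={
            "repositories": ["my-package"],                             # one repo, not the whole org
            "permissions": {"contents": "read", "packages": "write"},   # least privilege for the task
        },
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    # Installation tokens expire on their own (roughly an hour), so a stolen copy
    # gives an attacker only a narrow window instead of standing access.
    print("token expires at:", data["expires_at"])
    return data["token"]
```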

Why JIT Access Matters More Now

Because of automation and inheritance, the real vulnerabilities multiply faster than humans can audit. Just-in-Time (JIT) access (granting elevated permissions only when needed, for the minimal required time, under controlled policies) helps in several ways:

  • Shrinks the window of compromise
    Even if someone gains access to a token or account, its elevated powers are only valid for a short time. The chance of them being abused is much lower.
  • Limits automated spread
    Worm‑style propagation depends on unattended elevation and persistent privileges. If you force ephemeral/admin roles or write access to be explicitly requested and time‑bounded, automation breaks.
  • Increases detection & accountability
    When elevated permissions require approval, logging, and automatic expiry, abnormal or malicious behaviour becomes easier to spot quickly (think of CI runs that request unusual permissions, branches or workflows being created, etc.).
  • Improves compliance & risk posture
    Regulated industries require strong access controls and audit trails. JIT access supports least privilege and produces data to show compliance, reduces blast radius, and helps with incident response.

Securing the Three Critical Objects from a JIT Perspective

Here’s how to deploy JIT controls over these high‑risk objects in GitHub:

Repositories

  • JIT controls / best practices: Require elevated repo‑write or admin roles only for specific tasks and time‑boxed sessions. Monitor changes to postinstall and workflow scripts, and prevent unreviewed workflows from being added. Make repo‑admin write privileges conditional (e.g., dual approval, MFA).
  • Why it addresses Shai‑Hulud / S1ngularity risks: Shai‑Hulud relies on compromised developer accounts injecting malicious code, and automation that elevates privileges in repos can be abused. Time‑bounding limits how long a repo is potentially exploitable, and vulnerable workflows were shown to have been exploited in the S1ngularity incident.

Organization Roles

  • JIT controls / best practices: Limit the number of owner/admin roles. Use JIT elevation (a user requests elevated privileges, with justification, for a fixed time). Require MFA to secure approval workflows. Maintain active logging and alerts for the creation or removal of owner and admin roles.
  • Why it addresses Shai‑Hulud / S1ngularity risks: Owner/admin roles are what attackers used to propagate, exfiltrate, create repos, and change visibility. In S1ngularity, tokens with owner/admin or other elevated scopes allowed workflows to be abused.

Teams & Inherited Permissions

  • JIT controls / best practices: Use temporary team assignments, or grant elevated permissions for a specific time only. Disallow teams from being owners/admins unless needed; if they must be, audit their membership and actions.
  • Why it addresses Shai‑Hulud / S1ngularity risks: Inherited permissions mean one compromised user in a team can impact many repos, and teams with admin rights can act like many owners. The worm and the leaks exploit exactly that scale.

Stop the Worm at the Workflow

The Shai‑Hulud worm and the Nx / S1ngularity attacks illustrate how access creep, static tokens, vulnerable workflows, and “always on” elevated permissions come together into a perfect storm. To protect against similar supply chain, worm‑style attacks:

  • Enforce Just‑in‑Time permissions over repositories, org roles, and teams.
  • Treat elevated permissions as rare, conditional, auditable events.
  • Shrink the blast radius by limiting time, requiring MFA, approvals, and automatic revocation.

When you combine visibility, enforcement, and temporal constraints, even if a breach occurs, its spread and damage are contained, transforming your security from reactive to resilient. Book a demo with Apono to map your current GitHub elevated access and build JIT guardrails.

Want to see where standing privileges might already exist?

Grab our ZSP Checklist for a quick self-assessment.

The Required API Security Checklist [XLS download]

APIs are the foundation of modern applications, and attackers know it well. A single misconfigured endpoint or exposed token can give adversaries a direct path into sensitive systems and data across your environment. Your already overburdened security teams can’t afford to miss what may be their fastest-growing attack surface.

How fast-growing is the threat? In 2024, researchers catalogued 439 AI-related CVEs (a staggering 1,025% increase over the prior year), and nearly 99% were tied to insecure APIs. The practical result: over half of organizations reported an API-related incident in the past 12 months.

In 2025, having a robust API security checklist isn’t just a formality. It provides a step-by-step framework that protects your API ecosystem while reducing risk and bringing order to the chaos of API management. Let’s start by defining what an API security checklist is, how it works, and the value it delivers.

What is an API security checklist, and what is its goal?

An API security checklist is a structured set of instructions designed to help teams manage the risks to their API ecosystem. Much like pre-flight checklists in aviation, the API security checklist ensures critical security measures are never overlooked, even under pressure or at scale. By embedding repeatable and enforceable security controls throughout an API’s development and operations lifecycle, you effectively reduce your API’s attack surface and facilitate better alignment between engineering and infosec teams.

API security checklists are increasingly vital due to the rise of non-human identities (NHIs) like service accounts and machine-to-machine credentials, often with loose permissions and little oversight. Bad actors are quick to exploit this gap, with nearly 1 in 5 organizations admitting to having suffered an NHI-related breach in the past year. 

This shift in malefactor tactics is reflected in industry frameworks for API security, like the OWASP API Security Top 10, which highlights broken authentication, misconfigured access controls, and poor asset management as leading causes of API breaches.

4 API Security Risks That a Checklist Can Overcome

A comprehensive API security checklist can help you systematically address common risks like:

1. Excessive Permissions

Over-privileged service accounts or API keys are a potential treasure trove for attackers, giving them unnecessary access to data and functionality. In the 2024 BeyondTrust breach, a single over-scoped API key exposed a trove of sensitive data from 17 SaaS providers. 

2. Weak Authentication and Authorization

Loose auth controls are among the most exploited vulnerabilities. In the headline-making TeaOnHer incident, an API launched without authentication exposed personal IDs, selfies, and sensitive user data within minutes. 

3. Static or Hard-Coded API Keys

Even in 2025, developers are still uploading code secrets to GitHub. One prominent example is xAI, Elon Musk’s AI startup, which leaked a private API key on GitHub that granted access to over 50 internal models. 

4. Shadow APIs and Misconfigurations

Unmonitored APIs are prime entry points. In August 2024, Avis lost nearly 300,000 customer records when attackers exploited a vulnerable API integration in a business application, highlighting how legacy or hidden APIs can evade security oversight. Centralized tracking of who (or what) is calling which APIs, with what scope, makes it far easier to spot shadow usage before it turns into a breach.

Why an API Security Checklist is Essential for Your Organization

An API security checklist is critical for any business with a public-facing API because it:

1. Lowers Cyber Risk 

A quick Google search for ‘API breach’ shows their ubiquity. A thorough API security checklist aids teams in operationalizing best practices and turning cybersecurity into a repeatable and semi-automatic process that shrinks your API attack surface.

2. Enforces Zero Trust

Effective cybersecurity strategies employ the Zero Trust principle, which assumes every request and connection may be malicious. An API security checklist translates this principle into practice by implementing and enforcing robust operational policies like scoped tokens and least-privilege access on every API interaction.

3. Enhances Visibility and Accountability

One of the main issues with APIs is that they often lack centralized and documented ownership. An API security checklist makes logging, monitoring, and auditing integral parts of the process, ensuring you always know who (or what) is accessing sensitive resources, when, and why.

4. Strengthens Compliance Readiness

Regulatory frameworks like SOC 2, HIPAA, and GDPR are built very much like checklists with requirements for strict access control and auditing. Integrating them helps avoid compliance gaps by enforcing consistent controls across the API lifecycle. Choosing a cloud-native access management platform that generates comprehensive audit logs ensures that compliance reviews are built into daily operations. 

5. Promotes Cross-team Consistency

In enterprises with large engineering departments, different teams design and operate APIs in silos. With a company-wide API security checklist, you can enforce standardized security practices across DevOps, platform engineering, and InfoSec, reducing the risk of oversight.

The Essential API Security Checklist

The checklist below is designed to address critical security controls and common blind spots, in alignment with best practices and security frameworks (like OWASP API Top 10, SOC 2, and others). 

1) Use Strong Authentication & Authorization for Every Endpoint

Require verification of identity for all API calls and enforce granular, least‑privilege authorization for human and machine identities. Strong authentication should go hand-in-hand with minimizing exposure: instead of granting broad, long-lived privileges, issue narrowly scoped, time-bound permissions that expire automatically once the task is complete.

Addressed risks: Broken auth, account takeover, data exposure.

Implementation:

  • Segment authorization
  • Enforce Role-Based Access Control/ABAC at the resource/method level and perform object‑level authorization checks in code (to prevent BOLA)
  • Require mTLS or signed requests for service‑to‑service traffic
  • Automate client certificate rotations
  • Deny by default and explicitly allow endpoints/claims/scopes
  • Replace standing privileges with scoped, temporary access that auto-expires
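To make the deny-by-default and object-level checks above concrete, here’s a minimal, framework-agnostic sketch. The endpoint paths, scope names, and ownership model are hypothetical; a real deployment would enforce the same logic in your API framework or gateway.

```python
from dataclasses import dataclass, field

# Explicit allow-list: any (method, path) not listed here is denied.
ALLOWED = {
    ("GET", "/invoices"): {"invoices:read"},
    ("POST", "/invoices"): {"invoices:write"},
}

@dataclass
class Caller:
    subject: str                          # human user or service account
    scopes: set = field(default_factory=set)

def authorize(caller: Caller, method: str, path: str, resource_owner: str) -> bool:
    required = ALLOWED.get((method, path))
    if required is None:
        return False                      # deny by default
    if not required.issubset(caller.scopes):
        return False                      # caller lacks a required scope
    return caller.subject == resource_owner   # object-level (BOLA) ownership check

ci = Caller("ci-pipeline", {"invoices:read"})
print(authorize(ci, "GET", "/invoices", resource_owner="ci-pipeline"))   # True
print(authorize(ci, "POST", "/invoices", resource_owner="ci-pipeline"))  # False: no write scope
```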

2) Enforce Least Privilege Principles for Non-Human Identities (NHIs)

Minimize and time-limit privileges for machine identities across automations, services, pipelines, and environments.

Addressed risks: Over‑scoped tokens or long‑lived service accounts

Implementation:

  • Inventory all NHIs (service accounts, bots, API keys, etc.)
  • Assign owners to all NHIs
  • Tag environment and purpose for all NHIs
  • Replace static credentials with short‑lived, scoped tokens
  • Issue per‑task credentials with automatic revocation
  • Add break-glass roles for extra approvals with strict time limits
  • Log all NHI activity
  • Review NHI access logs on a regular basis (weekly/monthly)
  • Automatically disable dormant accounts

Apono automates ephemeral, scoped permissions on demand (via Slack/CLI), auto‑expires them, supports break‑glass and on‑call flows, and records who/what/why for compliance. You can automate JIT/JEP approval flows so elevated scopes are granted only when needed and expire automatically.

3) Code Secret Management & Rotation

Centralize code secrets management, make sure no secrets leak into code/repos/configs, and rotate secrets automatically and frequently.

Addressed risks: Key leaks in repos or public tools/workspaces, and long-lived keys that are difficult to revoke across complex environments.

Implementation:

  • Store credentials in a secrets manager
  • Inject at runtime via env/sidecar
  • Never commit secrets to VCS or public workspaces
  • Enable pre‑commit and CI secret‑scanning
  • Add organization‑wide repository protections and real‑time code secret detection
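For example, a service can pull its credential from a secrets manager at runtime instead of carrying it in code or config. The sketch below uses AWS Secrets Manager with a hypothetical secret name; the same pattern applies to any vault.

```python
import boto3

def get_db_password() -> str:
    """Fetch the credential at runtime; rotation happens in the secrets manager,
    so the code never changes when the value rotates."""
    client = boto3.client("secretsmanager")
    resp = client.get_secret_value(SecretId="prod/payments/db-password")  # hypothetical secret name
    return resp["SecretString"]
```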

4) Abuse Prevention Guardrails

Employ gateway- and application-level controls to prevent brute‑force, enumeration, and volumetric abuse. Implement strict schema validation to stop mass assignment and injection.

Addressed risks: DoS attacks, credential stuffing, data harvesting, and business‑logic abuse.

Implementation:

  • Enforce client-based quotas and RPS limits
  • Employ burst buffers and backoff
  • Return 429 with Retry-After
  • Add IP/ASN reputation, device fingerprints, and geo policies for public APIs
  • Apply WAF/API firewall rules for known bad patterns
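In practice a gateway or WAF enforces most of this, but the quota-and-backoff behavior boils down to something like the token-bucket sketch below; the rates and the 429/Retry-After handling are illustrative.

```python
import time

class TokenBucket:
    """Per-client token bucket: refills continuously, allows short bursts."""
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets: dict = {}

def check_quota(client_id: str):
    bucket = buckets.setdefault(client_id, TokenBucket(rate_per_sec=5, burst=10))
    if not bucket.allow():
        # Tell well-behaved clients when to retry instead of silently dropping them.
        return 429, {"Retry-After": "1"}
    return 200, {}
```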

5) Identity and Request Level Monitoring, Logging, and Auditing

Maintain centralized, immutable logs and real‑time monitoring tied to who/what called which API, with what scope, and why.

Addressed Risks: Blind spots that delay detection, inadequate forensics, and compliance gaps.

Implementation:

  • Collect structured logs (JSON) from gateway and app layers
  • Include request IDs, subject identity (user vs. service account), scopes/roles, decision (allow/deny), and data classification tags
  • Ship collected logs to a central SIEM/observability stack
  • Enable API inventory and runtime discovery to spot shadow endpoints and drift
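Here’s a minimal sketch of what one structured log entry might look like. The field names are illustrative, but they capture the request ID, subject identity, scopes, decision, and data classification called out above.

```python
import json
import logging
import uuid

logger = logging.getLogger("api.audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_access(subject: str, subject_type: str, method: str, path: str,
               scopes: list, decision: str, data_class: str) -> None:
    """Emit one structured (JSON) audit record per API decision."""
    logger.info(json.dumps({
        "request_id": str(uuid.uuid4()),
        "subject": subject,                  # user or service account name
        "subject_type": subject_type,        # "user" vs. "service_account"
        "method": method,
        "path": path,
        "scopes": scopes,
        "decision": decision,                # "allow" or "deny"
        "data_classification": data_class,   # e.g. "pii"
    }))

log_access("ci-pipeline", "service_account", "GET", "/invoices",
           ["invoices:read"], "allow", "pii")
```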

Apono correlates the who/what/why for elevated access via JIT/JEP approvals, and auto‑generates audit trails you can join with gateway logs for complete identity‑to‑request traceability.

6) Configuration Hardening

Implement robust security controls at the edge and mesh, with TLS everywhere, mTLS for service‑to‑service, strict gateway policies, and secure defaults.

Addressed risks: Downgrade attacks, credential stuffing, enumeration, and data exfiltration.

Implementation:

  • Enforce TLS version 1.2+
  • Pin modern ciphers
  • Set HSTS on public endpoints
  • Require mTLS or signed requests internally
  • Lock down CORS and allowed origins
  • Prefer deny‑by‑default routing
  • Apply gateway policies for authn/authz, RPS quotas, request size limits, schema validation, and WAF/WAAP signatures
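A small sketch of the TLS floor described above for an outbound client; the certificate paths in the mTLS comment are hypothetical, and equivalent settings belong on your gateways and servers as well.

```python
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse anything older than TLS 1.2
ctx.check_hostname = True
ctx.verify_mode = ssl.CERT_REQUIRED

# For internal service-to-service calls, mTLS adds a client certificate:
# ctx.load_cert_chain(certfile="client.crt", keyfile="client.key")  # hypothetical paths
```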

7) Incident Response & Recovery

Prepare tested playbooks to quickly contain and recover from API security incidents. This step includes revoking secrets, quarantining identities, and more.

Addressed risks: Long dwell time, cascading outages, and non‑compliant disclosures.

Implementation: 

  • Maintain runbooks for: global token/key revocation (“kill switch”), scope reduction, policy rollback, and credential re‑issuance
  • Pre‑stage scoped, short‑lived emergency roles
  • Practice blue/green rollout of rotated secrets
  • Rehearse comms and regulatory timelines
  • Snapshot and preserve logs

Apono executes one-click revocation of elevated permissions, issues ephemeral, auto-expiring emergency access, and provides comprehensive audit logs for forensics and compliance reporting.

8) Safe Usage of Third-Party and Partner APIs

Treat all upstream APIs as untrusted: validate inputs and outputs, constrain egress, and tightly scope partner credentials.

Addressed Risks: Supply‑chain data leaks, SSRF and injection via upstream responses, and over‑privileged partner integrations.

Implementation:

  • Terminate egress through a controlled gateway with DNS/IP allowlists
  • Enforce timeouts, circuit breakers, and retries with jitter
  • Validate and sanitize all upstream responses against strict schemas
  • Block unexpected fields
  • Filter PII at boundaries
  • Use per‑partner, per‑environment credentials with minimal scopes
  • Rotate credentials automatically
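As a sketch of treating upstream responses as untrusted, the snippet below allow-lists expected fields and rejects anything else. The field names are hypothetical; a JSON Schema validator can serve the same purpose more formally.

```python
# Strict allow-list of fields we accept from a partner API, with expected types.
EXPECTED_FIELDS = {"order_id": str, "status": str, "amount_cents": int}

def validate_upstream(payload: dict) -> dict:
    unexpected = set(payload) - set(EXPECTED_FIELDS)
    if unexpected:
        raise ValueError(f"unexpected fields from upstream: {unexpected}")
    for name, expected_type in EXPECTED_FIELDS.items():
        if not isinstance(payload.get(name), expected_type):
            raise ValueError(f"missing or mistyped field: {name}")
    return payload   # only now is the response passed further into the system

validate_upstream({"order_id": "A-42", "status": "paid", "amount_cents": 1999})
```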

9) API Inventory and Classification

Maintain a complete and continuously up-to-date catalog of all APIs (internal, external, partner), classified by sensitivity and criticality to business processes.

Addressed Risks: Shadow or forgotten APIs become unmonitored attack surfaces.

Implementation:

  • Employ automated API discovery tools in gateways, service meshes, and CI/CD pipelines.
  • Tag APIs by environment (dev/stage/prod), data type (PII, PCI, PHI), and compliance requirements.
  • Record ownership for every API and require registration before deployment
  • Update API inventories with drift detection and runtime monitoring tools.

10) Secure API Design and Data Minimization

Apply “secure by design” principles during API development; minimize exposed endpoints, reduce data returned, and enforce schema validation.

Addressed Risks: Excessive data exposure and mass assignment.

Implementation:

  • Design APIs with least privilege in mind and expose only what is necessary
  • Implement schema validation for requests and responses
  • Reject unexpected fields
  • Mask or tokenize sensitive data fields (like SSNs and credit cards) wherever possible
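Masking at the boundary can be as simple as the sketch below; the sensitive field names and masking rule are illustrative.

```python
import re

SENSITIVE_KEYS = {"ssn", "credit_card"}   # illustrative field names

def mask(value: str) -> str:
    digits = re.sub(r"\D", "", value)
    return "*" * max(len(digits) - 4, 0) + digits[-4:]   # keep only the last four digits

def minimize(record: dict) -> dict:
    """Mask sensitive fields before a response leaves the API boundary."""
    return {key: mask(val) if key in SENSITIVE_KEYS else val for key, val in record.items()}

print(minimize({"name": "Ada", "ssn": "123-45-6789", "credit_card": "4111 1111 1111 1111"}))
# {'name': 'Ada', 'ssn': '*****6789', 'credit_card': '************1111'}
```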

11) Security Testing Throughout the SDLC

Treat API security testing as a continuous process integrated into development, and not a one-time event.

Addressed Risks: Vulnerabilities slip into production unnoticed, and late fixes are costly and risky.

Implementation:

  • Embed API security testing (SAST, SCA, fuzzing, pen testing) directly into CI/CD
  • Use contract tests and automated linting to catch insecure design early.
  • Continuously scan for leaked secrets and misconfigurations in code and IaC.
  • Re-test APIs after every significant change or deployment.

Apono ensures that any temporary testing credentials or elevated scopes are ephemeral, preventing testers from holding permanent, risky access.

12) Data Encryption in Transit and At Rest

Enforce end-to-end encryption for API traffic and secure sensitive data at rest with strong encryption and key management.

Addressed Risks: Sensitive data interception or theft.

Implementation:

  • Require TLS 1.2+ everywhere and disable weak ciphers.
  • Apply mTLS for internal service-to-service API calls.
  • Encrypt sensitive data at rest with strong cryptographic algorithms (AES-256).
  • Rotate encryption keys regularly and enforce least-privilege key access.

13) Governance & Ownership of Non-Human Identities (NHIs)

Extend identity governance to all bots, service accounts, API tokens, and workloads, ensuring every machine identity has an owner, lifecycle, and pre-defined scope.

Addressed Risks: NHIs that accumulate standing privileges and static secrets that attackers exploit.

Implementation:

  • Require owner assignment for every service account and token.
  • Define lifecycle processes for provisioning, rotation, and decommissioning of NHI accounts.
  • Automate access reviews for over-privileged or dormant NHIs.
  • Apply the same zero-trust controls to NHIs that you use for user accounts: authentication, authorization, continuous monitoring, and least privilege.

Apono automates JIT/JEP access for NHIs, eliminates standing privileges, and provides a centralized audit trail across all machine identities.

14) Compliance Alignment & Continuous Access Reviews

Conduct regular reviews of who or what has access to your APIs in accordance with relevant regulatory or industry-specific requirements, such as GDPR, HIPAA, PCI-DSS, and SOC 2. These reviews should extend beyond APIs themselves to include underlying cloud infrastructure and data center management, where API access often intersects with critical systems and regulatory controls.  

Addressed Risks: Drift in access privileges that leads to overexposed data, and failed audits that result in fines, lost business, and reputational damage.

Implementation:

  • Schedule regular (weekly, monthly, or quarterly) access certification campaigns across all your APIs.
  • Map API access to relevant compliance controls.
  • Document all review outcomes and remediations for audit readiness.

15) Secure Defaults and Engineer Security Training

Equip developer teams with secure-by-default patterns and ongoing training, so security isn’t bolted on but baked in.

Addressed Risks: Developers under deadline pressure may expose sensitive data or skip controls.

Implementation:

  • Establish secure API templates and SDKs with built-in auth, schema validation, and logging.
  • Conduct regular training and gamified workshops.
  • Integrate secure design patterns and secret scanning into CI/CD (“shift left”).
  • Set “secure by default” configurations in infrastructure and gateway tooling.

Apono reduces developer friction by streamlining access requests (via Slack/CLI) and ensuring secure defaults (temporary, least-privileged, and auditable) so engineers don’t need to over-grant permissions to maintain velocity.

16) Runtime Protections and Continuous Improvement

Treat this checklist as a living document. Integrate feedback and test controls, and add runtime protection to catch the vulnerabilities that may slip through.

Addressed Risks: Evolving threats and architectural changes to your environment that may introduce previously unfamiliar cyber risks.

Implementation:

  • Regularly review and update the checklist with newly published API security advisories.
  • Add runtime defenses like API anomaly detection and inline policy enforcement.
  • Run red-team simulations targeting APIs and integrate the conclusions into checklist updates.
  • Integrate threat intel to anticipate emerging API attack vectors.

Turning the API Security Checklist Into Action

An API security checklist operationalizes security by standardizing controls, aligning teams, and making protection repeatable. However, securing APIs is an ongoing cycle of auditing, monitoring, and enforcing least privilege, especially for vulnerable non-human identities. Apono steps in to automate Just-In-Time and Just-Enough Permission access, eliminate standing credentials, and provide full audit trails across every API interaction. Ready to close the gaps in your API security posture? Book a demo with Apono or download the checklist to put API security into action today.

Apono Releases MCP Server for End Users

We’re excited to announce the launch of our MCP server for end users, designed to boost engineering productivity while keeping security strong.

Engineers often know exactly what they need to do—deploy to a new environment, spin up a workload, investigate logs—but not which permissions translate into those tasks. That leads to two common problems:

  • Over-requesting: “Just give me admin.”
  • Workflow stalls: repeated Slack pings and ticket loops.

The result is wasted time, frustrated teams, and an inflated attack surface from unnecessary standing privileges. On top of that, engineers often spend extra time checking what they already have access to or chasing approval updates.

Why MCPs Matter

AI tools like Claude, ChatGPT, Cursor, and Copilot are changing the way engineers interact with their environments. Instead of bouncing between dashboards, they can ask for what they need in natural language. 

Model Context Protocol (MCP) makes this possible by connecting LLMs to enterprise systems so users can query, retrieve, and act without leaving their workflow. Think of MCPs as the USB-C port that connects your favorite AI services to the tools you use, simplifying the adoption of AI into your teams’ workflows.

How Apono’s MCP Server Works

Our Apono MCP Server applies this approach to access requests:

  1. Interpret Intent – Understand the user’s goal.
  2. Guide the Request – Prompt for missing info such as justification or duration.
  3. Leverage Apono – Fetch context and match it with the right access flows.
  4. Deliver Outcome – Grant access, initiate flows, or return structured answers.

What Users Can Do

With Apono MCP, engineers get:

  • Visibility into what they already have access to
  • Clarity on request status without chasing admins
  • Simpler resource discovery, even without the exact name
  • Frictionless access requests without clicking through dashboards
  • Faster resolution of permission errors
  • Less time wasted filling out forms

So how are users leveraging Apono’s MCP to solve problems? Let’s take a look at a few key examples.

Value Across the Lifecycle

The Apono MCP Server delivers clear benefits:

  • Accelerate Engineer Velocity: No more Slack threads or guesswork, just fast access.
  • Reduce Admin Load: Security and DevOps teams spend less time interpreting vague requests.
  • Promote Least Privilege at Scale: Rightsized access lowers risk and improves governance.
  • Boost Adoption: A simple, frictionless experience encourages teams to use Apono the right way.

Where You Can Use It

Our MCPs integrate with a growing number of the tools engineers already rely on:

  • IDEs: Cursor, Copilot, Claude Code
  • Chat: ChatGPT, Claude, Gemini
  • Amazon Q: supported

Along with our MCP support, we recently launched our AI-powered Apono Assist for engineers on our platform, Teams, and other UIs. Read about it in this blog.

And don’t think that we’ve forgotten about the Apono admins. We will be launching an MCP server for Apono administrators soon, so stay tuned for updates. 

We’re also building support for securing MCPs as they become a standard part of enterprise workflows alongside the anticipated rise of Agentic AI.

Get Started

With Apono’s MCP Server, engineers request and manage access faster, admins spend less time translating requests, and security stays strong with least privilege built in.

To learn more about MCPs in Apono, check out our docs or reach out to us for a demo today.

Beyond the Drift Breach: Securing Non-Human Identities with Zero Standing Privileges

The Drift OAuth breach didn’t just expose one SaaS vendor — it exposed a systemic blind spot: the sprawling, ungoverned world of Non-Human Identities.

In case you missed it, in August 2025, attackers from UNC6395 exploited compromised OAuth tokens from Salesloft’s Drift integration—an AI chat tool—to access and exfiltrate data from Salesforce, including credentials like AWS keys and Snowflake tokens. 

This breach affected over 700 organizations and extended beyond Salesforce to integrations with Google Workspace and other platforms like Slack, AWS, and Microsoft Azure, just to name a few. 

The first line of response was a complete revocation of Drift tokens and the disabling of large numbers of related app integrations.

Since the initial news of the breach, we have learned that the attackers are combing through the stolen data in search of more tokens and credentials they can use for further criminal activity.

In this blog, we’ll cover why Non-Human Identities like API tokens can cause serious security challenges for organizations and explore how smarter access management approaches can help to reduce risk without compromising on operational efficiency.

Why API Tokens are Risky

API tokens act like digital keys that let SaaS products and business systems talk to each other securely.

Instead of sharing a username and password, a token gives controlled, time-limited access to exactly the data or actions a system needs. This enables automation and collaboration between tools (like a SaaS app pulling data from a business system) while reducing the risk of exposing full credentials. 

But as we’ve seen here and in plenty of cases before, these tokens are exceedingly risky if they are compromised. And even more dangerous when they’re not managed properly. 

If we think about these tokens like the keys they are, then they are essentially keys to our kingdom with privileges that attackers can use to access our resources. 

These powerful tokens come with several significant challenges, including:

  • Lack of visibility – Tokens, like all principals and NHIs, get spun up by lots of people across the organization, but nobody is really doing a sufficient job of tracking them. This means they can sit hidden in environments with their standing privileges, and nobody knows they are still there or how they’re being used.
  • Poorly managed – When you don’t know what you have, it’s hard to manage them. Best practices call for rotating credentials and tokens but because of the lack of visibility and good processes, this can fall between the cracks. 
  • Excessive privileges – Usually out of convenience, NHIs and principals are given way more privileges than they really need. This overprivilege unnecessarily expands the blast radius and can make an attack way worse if the principal is compromised.   
  • Remediation is risky – Reducing risk for principals isn’t as straightforward as simply removing them or their privileges. Because principals such as tokens are built into infrastructure and processes, removing them can break workflows and impact the organization. The result is that many security teams would rather risk an incident than break their infrastructure.
  • Legacy tools haven’t caught up – IGA and PAM tools that were built for the on-prem era, when privileges were far more static and NHIs hadn’t really come on the scene yet, don’t provide sufficient solutions for principals. They cannot detect them, let alone manage them. More modern, dedicated NHI tools have improved visibility, but are less effective in reducing risk through effective access management. 

All of these problems are amplified by the sheer scale of NHIs. Industry research estimates NHI-to-human ratios ranging from 40:1 today to projections of 100:1 or more as AI adoption grows.

And as organizations adopt more AI, this number is likely to skyrocket. The impact will be a massive expansion of the attack surface, providing even more opportunities for hackers to exploit the situation.

Attackers Targeting Identity in the Supply Chain

While attribution is far from a hard science, all signs point to this hack being the work of the loose collective of criminals associated with the Com. We usually read about them under names like LAPSUS$, Scattered Spider, and Shiny Hunters. 

These hackers have made a name for themselves in focusing on identity as their main point of entry and exploitation. They’ve been behind the MGM, Okta, Snowflake, and other big name hacks. They employ methods such as social engineering and possess a deep understanding of identity and access management (IAM) to compromise identities and infiltrate target systems.

What they have shown in their attacks is that they can exploit the human and non-human identities as part of a successful attack, compromising identities and leveraging their privileges to steal or encrypt targets’ data. 

There’s an argument to be made that these crews are far less technical than the hackers of the previous era who spent months looking for ways to exploit a vulnerability or find a zero day.

In many cases, they have been shown to simply buy access from a broker, pay off employees at the phone company for a SIM swap attack, or call up the help desk and ask for a password reset.

But it’s not stupid if it works, and these criminals have the illicit paydays to prove it.

Unfortunately, these groups have discovered that while they can successfully target large enterprises, the path of least resistance is often to attack a vendor in a supply chain attack. 

Especially if the vendor is less mature in terms of security, they can exploit it to slither their way up the chain and become a bigger, richer target. 

If a vendor finds itself targeted in a supply chain attack, the reputational and financial pain can be serious, as companies become less likely to trust it with their data and access to their systems moving forward.

Actionable Takeaways – How to Protect Against the Salesloft Drift Incident

In the immediate aftermath of this incident, here’s what security teams can do right now to reduce exposure:

  • Audit and revoke stale OAuth tokens.
  • Rotate embedded secrets immediately.
  • Enforce least privilege across OAuth scopes.
  • Treat AI agents as first-class identities in your IAM model.

Moving Towards a Unified Approach for Human, Non-Human, and Agentic AI Identities

One of the key takeaways from this story is that we must shift our mindset. Security must move from protecting only human access to governing every identity that can touch data, human or not.

The targeting of an AI tool here is telling: attackers understand that AI agents require a lot of access and freedom of movement between applications to be effective. That’s a lot of connectivity they can exploit to reach different systems, and it puts defenders in a conundrum as old as time.

Do we let our AIs run free and maximize the benefits of what they can give us or do we tightly control access to limit damage from abuse?

The challenge with Agentic AI is that it: 

  • Is very much goal-oriented
  • Has no ability to consider whether something is a good idea (like deleting your production DBs and then lying about it later) 
  • Doesn’t behave like NHIs, which have very predictable and repetitive actions 

An agent will access whatever it thinks it needs to in order to achieve its goal. In this way it’s like a human user. 

But the scale and lack of visibility of Agentic AI is going to be a challenge for security teams moving forward. 

So how should security teams think about mitigating risk from Agentic AI and all the rest?

How Apono Enables Secure NHI Access Management

Security teams need to take a flexible approach that breaks down the silos of human, non-human, and now Agentic AI identities, all of which are essentially on the same plane. It should matter less who or what the identity is and focus more on the access and how privileges are used. 

Remember that the hackers don’t see your environment as a silo, so you shouldn’t either. Move your human users over to Just-in-Time access for sensitive resources and reduce privileges for all, including your NHIs, based on what they actually use and your risk. 

In Apono’s approach, we put the focus on the principals and give admins granular controls over what privileges those principals, like API tokens, have. 

We start by providing full visibility and inventory management of principals throughout your environment.

NHI Assessment/Discovery

In practice, we detect risks like:

  • Dormant Principals: Identities unused for 90+ days
  • Unused Privileges: Granted but unexercised access
  • Overprivileged Permissions: Permissions beyond actual usage needs

We then enable you to take remediation actions like:

  • Quarantine: Isolate risky access with pre-built guardrails or ready-to-use JSON deny policies for your cloud
  • Rightsize: Automatically adjust access to fit actual use
  • Delete: Revoke access or delete principals that are no longer needed

There are some distinct advantages to the quarantine option because it allows you to:

  • Take immediate mitigative action to eliminate risk without tearing out whole principals or NHIs, which can be highly disruptive to active workflows and infrastructure
  • Implement deny policies within a principal to block usage of specific unused or otherwise risky privileges while leaving the others active
  • Manage these policies in our Access Flow guardrails, which are easy to administer and quickly revertible, meaning security teams can confidently take protective action
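To illustrate what a quarantine-style deny policy can look like, here’s a hypothetical AWS IAM example built in Python. The actions and principal are illustrative, and this is not Apono’s actual guardrail format; the point is that you block specific risky, unused privileges without deleting the principal itself.

```python
import json

quarantine_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "QuarantineUnusedPrivileges",
        "Effect": "Deny",
        "Action": [
            "iam:CreateAccessKey",   # privileges the principal was granted
            "s3:DeleteBucket",       # but has never actually exercised
        ],
        "Resource": "*",
    }],
}

# Attaching this as an inline policy is easy to revert if a workflow breaks, e.g.:
# boto3.client("iam").put_user_policy(UserName="reporting-bot", PolicyName="quarantine",
#                                     PolicyDocument=json.dumps(quarantine_policy))
print(json.dumps(quarantine_policy, indent=2))
```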

Embracing the Opportunities of AI in your Organization

Phishing, credential theft, and breaches happen. They will continue to happen because the financial incentives are there.  

We are past the stage of assuming breach. Now we need to assume that our identities (human and non-human like API tokens, service accounts, and more) are compromised. 

Attackers can now leverage all of those access privileges not only to reach resources in your environments, but also to find more tokens, credentials, and other secrets they can use to continue their attack. That might mean pivoting to additional systems or to your customers’ customers.

If your customers trust you to securely handle their data, then you need to make sure that you are taking sufficient precautions to protect them. As more incidents of big companies getting compromised by way of their vendors hit the headlines, we can expect them to demand more from their vendors if they want to do business with them.

As the business world becomes more and more connected with machine identities and AI agents relying on tools like API tokens to communicate with each other across platforms, organizations will have to step up their game to ensure that they are a step ahead of the criminals. 

This means being responsible by following best practices and embracing automation to handle the scale, but also not being afraid to embrace the opportunities that AI agents are offering us for greater productivity and growth. 

Ready to Take Action?

To learn more about how Apono is enabling organizations to confidently embrace the AI-driven future, reach out to us today and start the conversation.

Or, try our Cloud Assessment for NHIs to uncover hidden risks in your AWS environment and explore smart remediation solutions powered by Zero Standing Privileges.