
Trust Standards Evolve: AI Agents, the Next Chapter for PKI

As AI agents begin to act autonomously across networks, the question is no longer just what they can do — but whether we can trust them.

By 2026, over 40% of enterprise workflows will involve autonomous agents such as AI copilots, workflow bots or digital assistants (Gartner, 2025). Already, organizations are deploying AI-driven assistants to generate code, write reports and manage supply chains. According to AWS, more than half of enterprises now see “agentic AI” as a strategic priority for digital transformation.

This transformation forces a fundamental shift in digital trust. As these agents make decisions, interact and transact on behalf of humans or systems, the question becomes: how do we verify who they are, what they’re authorized to do and under whose authority they act?

Just as Public Key Infrastructure (PKI) once secured the web and the IoT era, a new generation of standards is now emerging to secure the agentic internet — the next chapter in identity and trust.

The Trust Problem for AI Agents

Humans rely on passwords, biometrics and multi-factor authentication (MFA) to prove identity. AI agents instead depend on cryptographic credentials — certificates and key pairs that assert trust between machines.

Yet most enterprises lack a unified method for discovering, verifying and governing these agent identities. Without such mechanisms, organizations risk impersonation, manipulation and loss of control.

Consider a rogue “procurement agent” impersonating a legitimate workflow bot to issue unauthorized purchase orders within a supply chain system. Or an AI assistant exposing confidential data through a compromised integration. Without verifiable identity and authorization, these scenarios quickly move from hypothetical to inevitable.

Industry experts are now recognizing the need for a global trust model for AI — one that can verify identity, authority and provenance for autonomous agents.

Introducing ANS: DNS for AI Agents

One promising approach comes from the Agent Name Service (ANS), described in an Internet Engineering Task Force (IETF) Internet-Draft (draft-narajala-ans).

Just as DNS maps human-readable names to IP addresses, ANS maps agent identities to verified capabilities, cryptographic keys and endpoints. It functions as a universal, PKI-backed directory for secure agent discovery and communication.

An ANS record might look like this:

mcp://sentimentAnalyzer.textAnalysis.ExampleCorp.v1.0

Each part encodes the agent’s protocol, name, capability, provider and version, backed by a verifiable certificate chain issued through trusted authorities.
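
To make the naming scheme concrete, here is a minimal sketch in Python of how such a name could be split into its components. The field layout below (protocol, agent name, capability, provider and version) is inferred from the example above rather than quoted from the IETF draft, so treat it as illustrative.

# Minimal sketch: splitting an ANS-style name into its parts.
# The field layout is inferred from the example above, not quoted from the draft.
from dataclasses import dataclass

@dataclass
class AgentName:
    protocol: str    # e.g. "mcp"
    name: str        # e.g. "sentimentAnalyzer"
    capability: str  # e.g. "textAnalysis"
    provider: str    # e.g. "ExampleCorp"
    version: str     # e.g. "v1.0"

def parse_ans_name(record: str) -> AgentName:
    protocol, rest = record.split("://", 1)
    name, capability, provider, version = rest.split(".", 3)
    return AgentName(protocol, name, capability, provider, version)

print(parse_ans_name("mcp://sentimentAnalyzer.textAnalysis.ExampleCorp.v1.0"))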

The vision: a global trust fabric where every AI agent can be identified, authenticated and authorized — securely and consistently across systems.

PKI’s Next Evolution: From Machines to Agents

PKI has been the invisible trust layer of the modern internet — securing people, devices and services through certificates. But as autonomous agents proliferate, PKI must evolve again.

AI agents differ from traditional systems in three ways:

  • Dynamic identity lifecycles — Agents can spawn, evolve and retire in seconds
  • Capability attestation — It’s not enough to know who an agent is; we must also know what it’s allowed to do (see the sketch after this list)
  • Cross-protocol interaction — Agents operate across diverse ecosystems such as Model Context Protocol (MCP), Agent to Agent (A2A) and Agent Communication Protocol (ACP), requiring interoperability of trust.
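
The second point, capability attestation, is worth making concrete. Below is a minimal sketch of what a signed capability statement could look like, using Ed25519 keys via Python’s cryptography library; the field names and the 15-minute expiry are illustrative assumptions, not part of any ANS or MCP specification.

# Minimal sketch: a signed capability attestation for an agent.
# Field names ("agent", "capabilities", "issued_by", "expires") and the
# short expiry are illustrative assumptions, not taken from a specification.
import json
import time
from cryptography.hazmat.primitives.asymmetric import ed25519

issuer_key = ed25519.Ed25519PrivateKey.generate()    # hypothetical issuing authority's key

attestation = {
    "agent": "mcp://sentimentAnalyzer.textAnalysis.ExampleCorp.v1.0",
    "capabilities": ["textAnalysis.read", "textAnalysis.score"],
    "issued_by": "ExampleCorp Agent CA",              # hypothetical issuer name
    "expires": int(time.time()) + 900,                # short-lived: 15 minutes
}

payload = json.dumps(attestation, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# A relying party holding the issuer's public key can check not only who the
# agent is, but what it is authorized to do and for how long. verify() raises
# InvalidSignature if the attestation has been tampered with.
issuer_key.public_key().verify(signature, payload)

In practice, a statement like this would chain back to an enterprise trust anchor rather than a freshly generated key, which is exactly where existing PKI practice carries over.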

According to the HID PKI Market Study, the adoption of digital certificates for AI agents is emerging as one of the most significant shifts in enterprise PKI strategy:

  • 15% of organizations have already begun deploying certificates for AI agents
  • The study found that AI agent certificates are a stronger driver of change than even post-quantum cryptography (PQC) and crypto-agility initiatives

The takeaway: enterprise PKI programs are no longer just about securing endpoints or servers — they are preparing to authenticate autonomous intelligence.

Security and Governance Implications

As autonomous agents become part of business-critical workflows, they also expand the attack surface. Emerging threats include:

  • Sybil attacks — Creating large numbers of fake agents to distort trust or overwhelm registries
  • Registry poisoning — Tampering with agent directories to redirect interactions or inject false trust relationships
  • Man-in-the-middle impersonation — Exploiting unsigned discovery or communication exchanges

ANS addresses these risks by applying cryptographic integrity and lifecycle governance to every agent interaction. Its model mirrors DNS governance under ICANN, ensuring name uniqueness and trust transparency through federated authorities.
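
In practice, cryptographic integrity means a resolver refuses any response it cannot verify against a trusted key. The sketch below assumes an Ed25519-signed response and a pinned registry key; the actual response format and trust chain are defined in the draft rather than here.

# Minimal sketch: rejecting a poisoned or spoofed ANS response. The record and
# signature layout are illustrative; the real response format lives in the draft.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def accept_resolution(record: bytes, signature: bytes,
                      registry_key: ed25519.Ed25519PublicKey) -> bytes:
    """Return the resolved record only if its signature verifies against the
    pinned registry key; otherwise treat it as poisoning or a man-in-the-middle."""
    try:
        registry_key.verify(signature, record)
    except InvalidSignature:
        raise RuntimeError("Untrusted ANS response rejected")
    return record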

Long-term success will depend not only on cryptography, but also on policy, oversight and auditability — extending governance frameworks such as ICAM and eIDAS into the world of autonomous AI.

Why This Matters for Enterprises

AI agents are already reshaping operations — from copilots drafting financial summaries to bots managing infrastructure or responding to customers. Yet many organizations haven’t adapted their identity and access management (IAM) frameworks to govern these new digital actors.

Imagine a finance bot executing trades based on market signals. Without verifiable certificates or signed authorization, a spoofed or compromised agent could create massive financial exposure — in seconds.

According to IT Brief News, only 15% of enterprises have deployed fully autonomous agents, but most cite trust and governance as their primary barrier to broader adoption. Meanwhile, 81% of executives say they would entrust AI with critical operations — provided trust frameworks are in place.

To prepare, enterprises should:

  • Extend IAM and PKI strategy to include non-human identities such as AI agents
  • Establish certificate policies and trust anchors specific to autonomous systems (a minimal sketch follows this list)
  • Monitor and audit agent-to-agent interactions within Zero Trust architectures
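
As one example of the second item, an agent-specific certificate policy might pair an enterprise trust anchor with very short-lived certificates, so a retired or compromised agent ages out of the system quickly. The sketch below uses Python’s cryptography library; the CA name, subject name and 15-minute validity window are illustrative assumptions, not requirements from any standard.

# Minimal sketch: issuing a short-lived X.509 certificate for an agent identity.
# The CA name, subject name and 15-minute validity are illustrative policy
# choices, not requirements from any standard.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

ca_key = ec.generate_private_key(ec.SECP256R1())      # enterprise "agent CA" key
agent_key = ec.generate_private_key(ec.SECP256R1())   # the agent's own key pair

now = datetime.datetime.now(datetime.timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(
        NameOID.COMMON_NAME, "sentimentAnalyzer.textAnalysis.ExampleCorp.v1.0")]))
    .issuer_name(x509.Name([x509.NameAttribute(
        NameOID.COMMON_NAME, "ExampleCorp Agent CA")]))
    .public_key(agent_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(minutes=15))  # expire quickly by policy
    .sign(ca_key, hashes.SHA256())
)
print(cert.subject.rfc4514_string())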

Closing Vision — Trust at Machine Speed

The next chapter of digital trust will include humans, machines, and now intelligent agents.
Standards like ANS represent a significant step toward a future where every digital actor — human or AI — can prove its identity cryptographically and operate within a verifiable trust framework.

PKI is once again at the center of this transformation — evolving from securing communication to securing cognition itself.

Learn more about how HID supports PKI >>

References

  1. AWS Blog: The Rise of Autonomous Agents
  2. IETF Draft: Agent Name Service (ANS)
  3. IT Brief News: Slow Adoption of Autonomous AI Agents in Enterprises
  4. Diginomica: Trust in Autonomous Agents Soars