Anthropic and National Institute of Standards and Technology (NIST)
Why do AI agents need a legal identity?
The era of the autonomous AI agent has officially arrived, but it is operating in a legal vacuum. NIST’s Center for AI Standards and Innovation (CAISI) has just launched the AI Agent Standards Initiative to address the fact that AI agents are becoming economic actors with almost no legal infrastructure in place.
As NIST puts it, AI agents can now “work autonomously for hours, write and debug code, manage emails and calendars, and shop for goods.” We require human-led businesses to register and follow strict protocols. Why should we expect less of AI agents that can do the same?
The three pillars of the NIST initiative
CAISI is focusing on three critical areas to ensure these agents are secure and interoperable:
- International leadership: facilitating industry-led development of agent standards to ensure U.S. leadership in global standards bodies.
- Open source protocols: fostering community-led protocol development so that agents can function smoothly across the entire digital ecosystem.
- Security and identity: advancing research into AI agent identity and authorization to promote trusted adoption across all sectors of the economy.
Without action, NIST warns, “innovators may face a fragmented ecosystem and stunted adoption.”
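To make the identity-and-authorization pillar concrete, here is a purely illustrative sketch of what verifiable agent identity could look like: an agent signs its requests with a key issued at registration, and services verify the signature before acting. This is not a scheme NIST has specified; the registry, key handling, and HMAC-based signing below are simplifying assumptions (a real deployment would use asymmetric key pairs and a proper identity provider).

```python
import hmac
import hashlib
import json

# Hypothetical registry mapping agent IDs to secret keys issued at
# registration. Real schemes would use asymmetric key pairs managed
# by an identity provider; HMAC keeps this sketch self-contained.
AGENT_REGISTRY = {"agent-42": b"secret-issued-at-registration"}

def sign_request(agent_id: str, payload: dict, key: bytes) -> dict:
    """Attach a verifiable identity signature to an agent's request."""
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {"agent_id": agent_id, "payload": payload, "signature": signature}

def verify_request(request: dict) -> bool:
    """A service checks the signature against the registered key."""
    key = AGENT_REGISTRY.get(request["agent_id"])
    if key is None:
        return False  # unregistered agent: reject outright
    body = json.dumps(request["payload"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, request["signature"])

req = sign_request("agent-42", {"action": "purchase", "amount": 19.99},
                   AGENT_REGISTRY["agent-42"])
print(verify_request(req))  # a registered, untampered request verifies
```

The point of the sketch is the accountability property, not the cryptography: an unregistered agent, or one whose request was altered in transit, fails verification, which is the kind of guarantee a shared identity standard would let every service rely on.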
Why this matters for Anthropic
This initiative intersects directly with the debates inside frontier AI labs. Anthropic’s Responsible Scaling Policy (RSP), recently updated in February 2026, grapples with exactly these questions of agent autonomy and misuse risk. As AI agents gain the ability to perform agentic reasoning and tool use, they become targets for industrial-scale distillation attacks and misuse.
Notably, Anthropic’s RSP v3.0 acknowledges a harder truth: “If one AI developer paused development to implement safety measures while others moved forward training and deploying AI systems without strong mitigations, that could result in a world that is less safe.” Without shared identity and security frameworks, the same logic applies to agents: the weakest actor sets the standard for everyone.
The looming deadline
NIST is currently seeking public input on two fronts:
- RFI on AI Agent Security - responses due March 9
- NCCoE Concept Paper on AI Agent Identity and Authorization - responses due April 2
This is the moment for the tech community to decide whether AI agents will be treated as unaccountable software or as responsible digital entities.
Should AI agents be required to have a verifiable digital identity before they can participate in the economy?
Sources:
- NIST / CAISI AI Agent Standards Initiative (main page) - https://www.nist.gov/caisi/ai-agent-standards-initiative
- Official launch announcement - https://www.nist.gov/news-events/news/2026/02/announcing-ai-agent-standards-initiative-interoperable-and-secure
- RFI on AI Agent Security (due March 9) - https://www.nist.gov/news-events/news/2026/01/caisi-issues-request-information-about-securing-ai-agent-systems
- NCCoE Concept Paper on AI Agent Identity & Authorization (due April 2) - https://www.nccoe.nist.gov/projects/software-and-ai-agent-identity-and-authorization
- Anthropic Responsible Scaling Policy v3.0 (current, Feb 24 2026) - https://anthropic.com/responsible-scaling-policy/rsp-v3-0
- RSP v3.0 announcement blog post - https://www.anthropic.com/news/responsible-scaling-policy-v3
- Original RSP (Sept 2023) - https://www.anthropic.com/news/anthropics-responsible-scaling-policy

