Episode 168 - GluuFederation/identerati-office-hours GitHub Wiki

Title: SAFE-MCP for Agentic AI

Channels

Description

Frederick Kautz is an early architect of Zero Trust principles for cloud-native systems. He served on the SPIFFE steering committee, co-authored the official SPIFFE book, and led one of the earliest large-scale SPIFFE implementations, including its integration into Network Service Mesh as early as 2019. He is a co-author of the National Institute of Standards and Technology Special Publication 800-204D, Strategies for the Integration of Software Supply Chain Security in DevSecOps CI/CD Pipelines, which offers guidance for integrating software supply chain security measures into continuous integration and continuous delivery processes for cloud-native applications. He also co-chaired the Consumer Technology Association Cybersecurity and Privacy Management Committee's CTA-2114 project, Mitigating Cybersecurity Threats in ML Based Systems, which identifies methods for addressing cybersecurity and privacy concerns in machine learning-based systems. His work focuses on operationalizing workload identity, trust boundaries, and policy enforcement in real-world Kubernetes environments, as well as on securing agentic systems. In this episode, we explore what Kubernetes developers still get wrong about OAuth, SPIFFE, and FedRAMP; the practical challenges of sovereignty; what real-world incidents reveal about agent failure modes; how SAFE-MCP could catalyze a new security ecosystem; and whether GovOps could emerge as a foundation for enterprise trust in agent-driven AI processes in 2026.

Homework

Takeaways

  • ⚡ At a high level, SAFE-MCP asks "how can software be attacked" and "what can you do about it?"

  • ⚡ If businesses cannot quantify the risk of an AI software agent, they cannot deploy it to production. SAFE-MCP was designed to help quantify that risk.

  • ⚡ We need more Agentic AI best practices that will eventually shape the controls used in compliance frameworks. SAFE-MCP is helping the industry document and flesh out those best practices.

  • ⚡ In 1979, IBM dogma held that "a computer can never be held accountable". But today, if something goes wrong, we must do exactly that: we need to know the identity of the software (i.e., the "computers") involved in the event. We also need to know about the people, organizations, and possibly business units involved. So holding software accountable seems in scope today, especially vis-a-vis autonomous agents and MCP services.

  • ⚡ Don't conflate LLMs with agentic AI or MCP. While an LLM might inform an agent's action, it is the agent that acts on it, and acting might involve invoking a remote MCP service that takes additional autonomous actions for which it is responsible. In a way, the LLM is just a new kind of web search or database result: the LLM's output isn't in control, it just informs.

Livestream Audio Archive

here