Identity as the Foundation of AI Governance

Presented by Ping Identity & Carahsoft

As artificial intelligence becomes more deeply embedded in government operations, identity and access management will play an increasingly important role in ensuring AI systems operate securely.

Kelvin Brewer, Field Chief Technology Officer for U.S. Public Sector at Ping Identity, believes identity governance must evolve to address a future where both humans and digital workers interact with government systems.

Traditionally, identity systems were designed to manage access for people—employees, contractors, and citizens. But AI agents are now capable of performing tasks, accessing information, and making decisions within digital environments.

Brewer says governments must begin treating AI agents as identities in their own right.

That means establishing policies that determine how AI systems obtain access to data and services, just as human users do.

Without those controls, agencies risk creating a “Wild West” environment where AI tools operate without clear oversight.

A key principle Brewer recommends is adopting a least-privilege model from the start. Under this approach, AI systems begin with no access to resources. Permissions are granted only after formal approval processes determine what data or services the system requires.

This ensures organizations maintain control as AI capabilities expand.
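The deny-by-default workflow Brewer describes can be sketched in a few lines. This is an illustrative model only, not Ping Identity's product API; the names (`AgentRegistry`, `grant`, `is_allowed`) and the rule that every grant needs a named human approver are assumptions made for the example.

```python
class AgentRegistry:
    """Least-privilege store: AI agents start with no permissions."""

    def __init__(self):
        self._permissions = {}  # agent_id -> set of (resource, action)

    def register(self, agent_id):
        # New agents begin with an empty permission set (deny by default).
        self._permissions[agent_id] = set()

    def grant(self, agent_id, resource, action, approved_by):
        # Hypothetical rule: a grant requires a recorded human approver.
        if not approved_by:
            raise ValueError("permission grants require a named approver")
        self._permissions[agent_id].add((resource, action))

    def is_allowed(self, agent_id, resource, action):
        # Unknown agents and ungranted actions are denied.
        return (resource, action) in self._permissions.get(agent_id, set())


registry = AgentRegistry()
registry.register("benefits-assistant")
print(registry.is_allowed("benefits-assistant", "case-records", "read"))   # False
registry.grant("benefits-assistant", "case-records", "read", approved_by="j.doe")
print(registry.is_allowed("benefits-assistant", "case-records", "read"))   # True
print(registry.is_allowed("benefits-assistant", "case-records", "write"))  # False
```

The point of the structure is that there is no code path by which an agent acquires access without passing through the approval step, which is what keeps the "Wild West" scenario off the table.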

Identity governance also plays a critical role in combating emerging forms of fraud.

Brewer points to several trends that are becoming more common as AI tools grow more sophisticated. Automated fraud campaigns can now be executed by AI rather than human attackers. Deepfake technology can impersonate individuals during video calls. In some cases, AI-generated applicants have even appeared in job interviews for government positions.

These threats highlight the need for stronger identity verification systems capable of detecting suspicious activity.

Modern identity platforms can help identify anomalies such as bot-driven attacks, synthetic identities, or unusual authentication patterns. Combined with strong governance policies, these tools help agencies maintain trust in digital systems.
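One of the simplest anomaly signals mentioned above, bot-driven authentication velocity, can be sketched as a sliding-window counter. The class name, thresholds, and window size here are assumptions for illustration; production platforms combine many such signals.

```python
from collections import deque

class VelocityMonitor:
    """Flags identities whose authentication attempts exceed a rate threshold."""

    def __init__(self, max_attempts=5, window_seconds=60):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self._attempts = {}  # identity -> deque of attempt timestamps

    def record(self, identity, timestamp):
        q = self._attempts.setdefault(identity, deque())
        q.append(timestamp)
        # Drop attempts that fall outside the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        # True means the identity looks bot-like and warrants review.
        return len(q) > self.max_attempts


monitor = VelocityMonitor(max_attempts=3, window_seconds=60)
flags = [monitor.record("user-42", t) for t in (0, 5, 10, 15)]
print(flags)  # [False, False, False, True]
```

Human users rarely authenticate more than a few times a minute, so a burst above the threshold is a cheap first-pass indicator that can feed into stronger verification steps.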

Identity frameworks also support broader AI governance strategies.

As AI agents become more common within government workflows, agencies must determine what actions those systems are authorized to perform. Whether processing benefits applications or assisting employees with administrative tasks, AI systems must operate within clearly defined boundaries.

Brewer believes identity solutions provide the foundation for responsible AI scaling.

By combining governance policies, approval processes, and authentication technologies, agencies can ensure AI systems operate safely and transparently.

As state and local governments increasingly rely on AI to deliver services more efficiently, identity governance will become a critical enabler.

With the right policies and technologies in place, agencies can unlock AI’s potential while maintaining strong protections for sensitive data and public trust.

Key Takeaways

  • AI agents should be governed using identity frameworks similar to those used for human users.
  • Least-privilege access models help control what AI systems can access or do within government environments.
  • Strong identity verification tools are essential to combat emerging AI-driven fraud threats.