From Policy to Practice: Governing AI in State Government

Written by George Jackson | Mar 6, 2026 7:06:24 PM

Original Broadcast March 11, 2026

Presented by Palo Alto Networks, Ping Identity & Carahsoft

Artificial intelligence is rapidly moving from experimentation to real-world implementation across state governments. In this episode of State Gov Today, leaders from New York and Oklahoma share how they are approaching AI adoption with a balance of innovation, governance, and public trust. Executive Deputy CIO Jenson Jacob and Chief AI Officer Eleonore Fournier-Tombs describe how New York is building frameworks that encourage responsible AI use while protecting citizen data and improving service delivery. Eric Trexler of Palo Alto Networks explores how security leaders can deploy AI technologies while maintaining transparency and strong cyber defenses.

Oklahoma’s Chief AI and Technology Officer Tai Phan and Chief Information Security Officer Daniel Langley discuss how AI supports their broader modernization strategy and helps make government services simpler and more responsive to citizens. Finally, Ping Identity’s Kelvin Brewer explains how identity and access management will play a critical role in governing both human and digital workers as AI becomes more embedded across state operations. Together, these conversations highlight how state governments are moving from policy to practice in governing AI while ensuring innovation delivers real value to residents.

Governing AI in the Empire State

New York State is taking a structured approach to artificial intelligence by focusing on governance frameworks that encourage innovation while maintaining strong oversight. Executive Deputy Chief Information Officer Jenson Jacob and Chief AI Officer Eleonore Fournier-Tombs explain how the state is building a governance model based on accountability, transparency, data stewardship, and responsible deployment. Their work centers on ensuring that every AI-driven decision has a human owner, while agencies are equipped with the tools and guidance needed to evaluate risk and deploy AI safely.

Fournier-Tombs brings global experience in AI policy from her work at the United Nations, where she focused on governance models designed to ensure emerging technologies benefit society. In New York, she is helping agencies adopt practical frameworks for assessing AI risks, implementing shared services, and ensuring the technology improves outcomes for residents.

Jacob highlights how New York is already applying AI to improve government services, including enhanced chatbot capabilities and efforts to create a unified digital experience for residents interacting with multiple agencies. By combining strong data governance with workforce education and agency collaboration, the state hopes to expand AI adoption while maintaining trust with citizens.

Key Takeaways

  • Effective AI governance requires balancing innovation with clear accountability, transparency, and data stewardship.
  • States must focus on workforce education and agency collaboration to build trust and accelerate responsible AI adoption.
  • AI is already improving government services through tools like enhanced digital assistants and integrated service platforms.

Security, Transparency, and the AI Era

As government agencies adopt AI at scale, cybersecurity leaders are working to ensure that innovation does not outpace security safeguards. Eric Trexler, Senior Vice President for Public Sector at Palo Alto Networks, explains that AI deployments must follow the same security discipline applied to any other IT system: visibility, analysis, and control.

Trexler notes that the biggest challenge introduced by AI is not entirely new types of cyberattacks, but the dramatic increase in speed, scale, and sophistication. AI tools enable adversaries to automate attacks, generate more convincing phishing campaigns, and operate at volumes that overwhelm traditional defenses.

Another concern is the rise of “shadow AI,” where employees adopt third-party AI tools without the knowledge or oversight of security teams. Without visibility into how these tools interact with sensitive data, agencies risk privacy violations or unintended consequences from automated decision-making systems.

To address these risks, Trexler emphasizes the importance of transparency in AI systems and strong security frameworks such as zero trust architectures. By integrating AI into existing cybersecurity strategies rather than treating it as a separate technology category, agencies can innovate while maintaining strong defenses.
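The zero trust discipline Trexler describes can be sketched as a per-request policy check that treats AI tools like any other caller: every request is evaluated against identity, tool inventory, device posture, and resource policy, with no trust granted by default. This is an illustrative sketch, not a specific product API; the names (`AccessRequest`, `evaluate`, the example tools and roles) are hypothetical.

```python
from dataclasses import dataclass

# Illustrative zero trust check: every request, whether from a human
# or an AI tool, is verified on each call. Nothing is trusted based
# on network location, and unknown AI tools are denied outright.

APPROVED_AI_TOOLS = {"agency-chatbot"}                  # inventoried, security-reviewed tools
RESOURCE_POLICY = {"citizen-records": {"caseworker"}}   # resource -> roles allowed to access it

@dataclass
class AccessRequest:
    identity: str         # authenticated user or AI agent
    role: str
    tool: str             # software making the call
    device_compliant: bool
    resource: str

def evaluate(req: AccessRequest) -> bool:
    """Return True only if every check passes (deny by default)."""
    if req.tool not in APPROVED_AI_TOOLS:
        return False      # blocks unvetted "shadow AI" tools
    if not req.device_compliant:
        return False      # device posture is checked on every request
    allowed_roles = RESOURCE_POLICY.get(req.resource, set())
    return req.role in allowed_roles
```

The deny-by-default structure is the point: an AI tool that was never inventoried fails the first check, which is how visibility into tool usage directly prevents shadow AI access.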

Key Takeaways

  • AI accelerates cyber threats primarily through speed, scale, and automation rather than entirely new attack methods.
  • Visibility into how AI tools are used across an organization is essential to prevent shadow AI risks.
  • Integrating AI into existing cybersecurity frameworks such as zero trust helps agencies deploy the technology safely.

Making Complexity Invisible in Oklahoma

The State of Oklahoma is pursuing a modernization strategy designed to simplify government services and prepare the workforce for the AI era. Chief AI and Technology Officer Tai Phan and Chief Information Security Officer Daniel Langley describe how their strategy focuses on making complexity “invisible” to agencies and citizens alike.

Phan explains that decades of legacy systems and fragmented data create barriers to delivering seamless digital services. By modernizing infrastructure, centralizing platforms, and improving data governance, Oklahoma aims to make AI a powerful tool for unlocking insights and improving service delivery.

Langley emphasizes that cybersecurity must scale alongside modernization efforts. With more than 180 state organizations operating across Oklahoma, real-time monitoring and centralized data visibility are essential for maintaining a strong security posture.

Both leaders highlight the importance of balancing innovation with risk management. While AI can reduce technical debt and accelerate analysis, it also introduces new risks such as deepfakes, automated phishing, and expanded attack surfaces. By aligning AI adoption with strong governance and cybersecurity principles, Oklahoma is working to build a future-ready technology environment that serves agencies and residents more effectively.

Key Takeaways

  • Modernizing infrastructure and improving data governance are essential foundations for successful AI adoption.
  • Centralized platforms and real-time monitoring help scale cybersecurity across large state organizations.
  • Balancing innovation with accountability ensures AI investments deliver real value without introducing unnecessary risk.

Identity as the Foundation of AI Governance

As artificial intelligence becomes embedded in government operations, identity and access management will play a central role in controlling how AI systems interact with sensitive data and services. Kelvin Brewer, Field CTO for U.S. Public Sector at Ping Identity, explains that agencies must begin treating AI agents similarly to human users when it comes to identity governance.

Traditional identity systems were designed primarily to manage human users, but AI systems are increasingly acting as digital workers capable of accessing data, executing tasks, and interacting with government systems. Without clear policies governing how these agents obtain permissions, agencies risk creating uncontrolled access environments.

Brewer recommends adopting a least-privilege model from the start, where AI systems are granted access only after a defined approval process. This approach ensures agencies maintain visibility and control as AI capabilities scale.
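Brewer's least-privilege model can be sketched as a deny-by-default grant process: an AI agent's access starts empty and expands only through an explicit human approval step. The function and scope names below are hypothetical, a minimal sketch of the idea rather than any particular identity platform's API.

```python
# Sketch of least-privilege grants for AI agents: access is empty by
# default and expands only through a defined approval step.

def approve_scopes(requested: set[str], approved_by_owner: set[str]) -> set[str]:
    """Grant only what was both requested and approved by a named
    human owner; everything else stays denied."""
    return requested & approved_by_owner

def is_permitted(granted: set[str], action: str) -> bool:
    """Deny by default: an action is allowed only if its scope was granted."""
    return action in granted

# An AI "digital worker" asks for broad access...
requested = {"read:benefits-faq", "read:citizen-pii", "write:case-notes"}
# ...but the approval process signs off on a narrower set.
granted = approve_scopes(requested, approved_by_owner={"read:benefits-faq"})
```

Because grants are an intersection rather than a union, the agent can never hold a permission its human owner did not explicitly approve, which preserves the visibility and control Brewer emphasizes as AI capabilities scale.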

He also warns that AI-driven fraud is becoming more sophisticated. Automated attacks, synthetic identities, and even AI-generated job applicants are emerging threats that require stronger identity verification and monitoring systems.

By combining policy frameworks with modern identity platforms, agencies can safely scale AI adoption while protecting citizens and government systems from emerging risks.

Key Takeaways

  • AI systems should be governed through identity frameworks similar to those used for human users.
  • A least-privilege model ensures AI agents only access the data and services they truly need.
  • Strong identity verification and monitoring tools are essential to combat emerging AI-driven fraud threats.