AI and Cybersecurity in State Government: Opportunity, Risk, and Readiness

Written by State Gov Today | Apr 28, 2026 1:31:23 AM

Presented by Elastic and Carahsoft

Artificial intelligence is rapidly reshaping cybersecurity across state and local government, creating both significant opportunity and new layers of risk. In this episode of State Gov Today, recorded at the Billington State and Local Cybersecurity Summit, leaders from across government and industry share how they are navigating that balance. George Jackson speaks with James Saunders, Chief Information Security Officer for the State of Maryland; Jake Hammock, CISO and Assistant CTO for Security and Infrastructure for the City of Seattle; Andrew Alipanah, CISO for Orange County, California; and Bobby Suber, Senior Manager of Solutions Architecture for SLED at Elastic. Together, they explore how agencies are adopting AI as a force multiplier, building governance frameworks, managing risk, and preparing for the next phase of AI-driven cybersecurity.


Balancing Innovation and Risk: How CISOs Are Operationalizing AI in Cybersecurity

Artificial intelligence is no longer a future concept for state and local governments—it is an immediate priority. James Saunders, Jake Hammock, and Andrew Alipanah outline how their organizations are approaching AI adoption while maintaining strong cybersecurity postures.

Saunders explains that Maryland has implemented a structured responsible AI policy, requiring all use cases to go through a formal intake process and be categorized by risk. This ensures that AI is deployed in a way that aligns with security, privacy, and ethical standards. He also frames AI through three critical layers: using AI within cybersecurity operations, securing AI systems themselves, and ensuring the responsible use of AI by the workforce.
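An intake-and-categorization step like the one Saunders describes can be sketched in a few lines of code. This is a hypothetical illustration only: the intake fields (`handles_pii`, `makes_autonomous_decisions`, `public_facing`) and the tier thresholds are assumptions, not Maryland's actual rubric.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """A submitted AI use case (hypothetical intake fields)."""
    name: str
    handles_pii: bool               # touches personally identifiable information
    makes_autonomous_decisions: bool
    public_facing: bool

def categorize_risk(case: AIUseCase) -> str:
    """Assign a risk tier from simple flags (illustrative rubric, not policy)."""
    score = sum([case.handles_pii, case.makes_autonomous_decisions, case.public_facing])
    if score >= 2:
        return "high"
    if score == 1:
        return "medium"
    return "low"

# Example: a public-facing summarizer with no PII and no autonomous decisions
print(categorize_risk(AIUseCase("records-summarizer", False, False, True)))  # medium
```

In practice the intake form and scoring rules would be far richer, but the shape is the same: every use case enters through one front door and exits with an explicit risk tier that drives review requirements.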

Hammock expands the conversation by highlighting how AI is forcing a broader transformation across government environments. In Seattle, cybersecurity must account for both critical infrastructure and core operational systems, many of which are aging. Integrating AI into this landscape requires modernization, updated policies, and strong collaboration with private sector partners and community stakeholders. He emphasizes that agencies must rethink how systems connect and how services are delivered while maintaining consistent security controls.

Alipanah focuses on adoption challenges, noting that while there is strong demand for AI across departments, government constraints and leadership risk tolerance can slow deployment. This gap can lead to “shadow AI,” where employees seek out their own tools. However, he also points out that slower adoption can be beneficial, giving agencies time to build policies, train staff, and ensure readiness. He highlights the clear return on investment, particularly in automating repetitive cybersecurity tasks, reducing analyst fatigue, and improving overall efficiency.

Across the discussion, a consistent message emerges: AI is essential to keeping pace with modern threats, but success depends on thoughtful implementation, strong governance, and a clear understanding of risk.

From Data Overload to Actionable Intelligence: AI’s Role in Modern Cyber Defense

Bobby Suber brings an industry perspective to the conversation, focusing on how AI is helping state and local governments better utilize their data to strengthen cybersecurity.

He explains that many organizations have spent years collecting vast amounts of data but struggle to extract meaningful insights. AI, combined with strong context and data management, allows agencies to transform this “data swamp” into actionable intelligence. By ensuring that the right data is available for the right use case, organizations can significantly improve threat detection and response.
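The idea of pairing the right data with the right use case can be illustrated with a small enrichment step: raw events are joined with asset context before any detection logic runs, so analysts see prioritized, decision-ready records instead of a swamp of undifferentiated logs. A minimal sketch, assuming made-up field names and a toy context table:

```python
# Hypothetical asset-context table; in production this would come from a CMDB
# or asset inventory, not a hard-coded dict.
ASSET_CONTEXT = {
    "10.0.0.5": {"owner": "payroll", "criticality": "high"},
    "10.0.0.9": {"owner": "lobby-kiosk", "criticality": "low"},
}

def enrich(event: dict) -> dict:
    """Attach ownership and criticality so downstream alerts can be prioritized."""
    ctx = ASSET_CONTEXT.get(event.get("src_ip"),
                            {"owner": "unknown", "criticality": "unknown"})
    return {**event, **ctx}

def high_priority(events: list[dict]) -> list[dict]:
    """Keep only enriched events that touch high-criticality assets."""
    return [e for e in map(enrich, events) if e["criticality"] == "high"]

raw = [{"src_ip": "10.0.0.5", "action": "failed_login"},
       {"src_ip": "10.0.0.9", "action": "failed_login"}]
print(high_priority(raw))  # only the payroll-server event survives
```

The same failed-login event means very different things on a payroll server versus a lobby kiosk; context is what turns stored data into intelligence.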

Suber emphasizes that AI should augment—not replace—human decision-making. While AI can handle data processing, correlation, and analysis at scale, human oversight remains critical for making final decisions and ensuring accuracy. He also highlights the importance of focusing on data quality and context, rather than solely on the AI models themselves, to avoid issues like poor outputs or misinterpretation.

In terms of cybersecurity frameworks, Suber suggests that agencies do not need entirely new standards for AI. Instead, they should integrate AI into existing models like zero trust, using it to automate processes such as continuous verification and improve overall efficiency.
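Folding AI-driven automation into zero trust largely means automating continuous verification: re-scoring a session on fresh signals rather than trusting a one-time login. A minimal sketch under stated assumptions (the signal names, weights, and threshold are all invented for illustration):

```python
def trust_score(signals: dict) -> float:
    """Combine simple session signals into a 0..1 trust score (illustrative weights)."""
    score = 1.0
    if not signals.get("device_compliant", False):
        score -= 0.4  # unmanaged or out-of-date device
    if signals.get("new_location", False):
        score -= 0.3  # login from an unfamiliar location
    if signals.get("anomalous_hours", False):
        score -= 0.2  # activity outside the user's normal pattern
    return max(score, 0.0)

def verify_session(signals: dict, threshold: float = 0.6) -> str:
    """Continuously re-evaluate the session; degrade access instead of assuming trust."""
    return "allow" if trust_score(signals) >= threshold else "step_up_auth"

print(verify_session({"device_compliant": True, "new_location": False}))   # allow
print(verify_session({"device_compliant": False, "new_location": True}))   # step_up_auth
```

The point is not the particular weights but the loop: verification becomes a cheap, repeatable computation that an automated system can run on every request, which is exactly the kind of repetitive task AI and automation absorb well.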

Looking ahead, he points to the emergence of agentic AI—systems capable of taking autonomous action—as the next major shift. While promising, these capabilities introduce new challenges around trust, visibility, and control. Organizations will need to closely monitor how these systems operate and ensure safeguards are in place before fully embracing automation.
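The safeguards described above often take the form of a human-in-the-loop gate: an agent may execute low-impact actions on its own, while higher-impact actions queue for human approval. A hypothetical sketch (the action names, impact scores, and autonomy limit are assumptions, not any vendor's implementation):

```python
# Illustrative impact ratings for actions an autonomous agent might propose.
ACTION_IMPACT = {"quarantine_file": 1, "disable_account": 2, "isolate_server": 3}

def dispatch(action: str, autonomy_limit: int = 1) -> str:
    """Execute only actions at or below the autonomy limit; queue the rest
    for human review. Unknown actions are rejected outright."""
    impact = ACTION_IMPACT.get(action)
    if impact is None:
        return f"rejected: unknown action {action}"
    if impact <= autonomy_limit:
        return f"executed: {action}"
    return f"pending human approval: {action}"

print(dispatch("quarantine_file"))  # executed automatically
print(dispatch("isolate_server"))   # held for a human decision
```

Raising the `autonomy_limit` over time, as trust in the agent's behavior is earned and audited, mirrors the gradual path to automation the panel describes.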

The segment reinforces a key takeaway: the future of cybersecurity will be driven by how effectively organizations can harness data, apply AI responsibly, and maintain the right balance between automation and human oversight.