Presented by Palo Alto Networks & Carahsoft
Artificial intelligence is transforming how governments operate, but it is also introducing new cybersecurity challenges. For public sector organizations adopting AI, security leaders say the key is ensuring that innovation never outpaces protection.
Eric Trexler, Senior Vice President for Public Sector at Palo Alto Networks, believes government agencies must approach AI with the same discipline they apply to any other technology deployment.
“Like any IT technology, we should deploy it to make the world a better place,” Trexler says. “But we also have to deploy it safely.”
Trexler describes three forces reshaping the cyber threat landscape: speed, scale, and sophistication.
AI allows attackers to automate many aspects of cyber operations, from reconnaissance to exploitation. Tasks that once required human effort—such as writing convincing phishing emails or scanning systems for vulnerabilities—can now be executed rapidly by machines.
This automation dramatically increases the volume of attacks organizations must defend against.
Phishing campaigns, for example, have historically relied on poorly written emails that often contained obvious warning signs. AI tools can now generate highly convincing messages that mimic the tone and writing style of trusted contacts.
The result is a more dangerous threat environment where even experienced professionals may struggle to identify malicious activity.
At the same time, AI adoption within organizations creates new security challenges of its own.
Trexler warns that many organizations are experiencing a surge in what he calls “shadow AI.” Employees may experiment with external AI tools without informing their IT or security teams. These tools may interact with sensitive data or make automated decisions without appropriate oversight.
This lack of visibility can expose agencies to privacy violations, compliance issues, or operational risks.
For governments responsible for protecting citizen data, the stakes are especially high.
Trexler argues that the solution lies in applying well-established cybersecurity principles to AI deployments. Visibility into how AI tools are used across the organization is essential. Security teams must be able to monitor activity, analyze risks, and apply controls at appropriate points within the system.
Trexler also emphasizes the importance of transparency in AI systems. Organizations must understand how AI tools operate and what data they interact with. Without that visibility, agencies risk deploying technologies they cannot fully control.
Despite these challenges, Trexler remains optimistic about AI’s potential to improve public services and accelerate scientific discovery.
But realizing those benefits requires responsible deployment.
Governments must build security into their AI strategies from the beginning, rather than bolting it on as an afterthought. By integrating AI into existing cybersecurity frameworks and maintaining strong oversight, agencies can harness the power of AI while protecting the citizens they serve.
Key Takeaways