Presented by Palo Alto Networks & Carahsoft
Artificial intelligence is transforming how governments operate, but it is also introducing new cybersecurity challenges. For public sector organizations adopting AI technologies, security leaders say the key is ensuring innovation never outpaces protection.
Eric Trexler, Senior Vice President for Public Sector at Palo Alto Networks, believes government agencies must approach AI with the same discipline they apply to any other technology deployment.
“Like any IT technology, we should deploy it to make the world a better place,” Trexler says. “But we also have to deploy it safely.”
For cybersecurity leaders, AI presents both opportunity and risk. The technology can dramatically improve threat detection, automate security operations, and help analysts process vast amounts of data. But it also empowers adversaries to launch more sophisticated attacks at unprecedented speed.
Trexler describes three forces that are reshaping the cyber threat landscape: speed, scale, and sophistication.
AI allows attackers to automate many aspects of cyber operations, from reconnaissance to exploitation. Tasks that once required human effort—such as writing convincing phishing emails or scanning systems for vulnerabilities—can now be executed rapidly by machines.
This automation sharply increases the volume of attacks organizations must defend against.
Phishing campaigns, for example, have historically relied on poorly written emails that often contained obvious warning signs. AI tools can now generate highly convincing messages that mimic the tone and writing style of trusted contacts.
The result is a more dangerous threat environment where even experienced professionals may struggle to identify malicious activity.
At the same time, AI adoption within organizations creates new security challenges of its own.
Trexler warns that many organizations are experiencing a surge in what he calls “shadow AI.” Employees may experiment with external AI tools without informing their IT or security teams. These tools may interact with sensitive data or make automated decisions without appropriate oversight.
This lack of visibility can expose agencies to privacy violations, compliance issues, or operational risks.
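Regaining that visibility often starts with network telemetry. As a purely illustrative sketch (the log format, field layout, and domain list below are hypothetical, not any agency's actual tooling), a security team might scan outbound proxy logs for requests to known external AI services:

```python
# Illustrative sketch: surface potential "shadow AI" usage by scanning
# proxy log entries for requests to external AI-tool domains.
# The domain list and the 'timestamp user domain' log format are
# simplified stand-ins for real proxy telemetry.

AI_TOOL_DOMAINS = {"api.openai.com", "claude.ai", "gemini.google.com"}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs for requests that hit AI-tool domains."""
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip malformed entries
        _, user, domain = parts
        if domain in AI_TOOL_DOMAINS:
            hits.append((user, domain))
    return hits

sample_log = [
    "2024-05-01T09:12:03 alice api.openai.com",
    "2024-05-01T09:13:44 bob intranet.example.gov",
    "2024-05-01T09:15:21 carol claude.ai",
]
print(flag_shadow_ai(sample_log))
# prints [('alice', 'api.openai.com'), ('carol', 'claude.ai')]
```

In practice this kind of check would feed an inventory and review process rather than a block list, since the goal is visibility and oversight, not simply banning the tools.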
For governments responsible for protecting citizen data, the stakes are especially high.
Trexler argues that the solution lies in applying well-established cybersecurity principles to AI deployments. Visibility into how AI tools are used across the organization is essential. Security teams must be able to monitor activity, analyze risks, and apply controls at appropriate points within the system.
Zero trust architectures are particularly important in the AI era. By requiring continuous verification and strict access controls, zero trust models limit the damage attackers can cause—even if they gain access to a system.
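The continuous-verification idea behind zero trust can be sketched as a per-request policy check. The roles, policy fields, and function names below are illustrative assumptions, not any vendor's API: the point is only that every request is evaluated on identity, device posture, and least-privilege policy rather than on network location.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_compliant: bool  # device posture check passed
    mfa_verified: bool      # identity verified this session
    resource: str

# Hypothetical least-privilege policy: which roles may access which resources.
POLICY = {
    "analyst": {"threat-feed"},
    "admin": {"threat-feed", "config"},
}

def authorize(req: Request, role: str) -> bool:
    """Zero-trust style check: verify every request, every time.

    Access is denied unless identity and device posture are verified
    AND the role's policy explicitly grants the resource.
    """
    if not (req.device_compliant and req.mfa_verified):
        return False
    return req.resource in POLICY.get(role, set())
```

Because the check runs on every request, a stolen credential or compromised device fails at the next verification point instead of granting standing access, which is how zero trust limits the damage an attacker can cause after an initial foothold.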
Trexler also emphasizes the importance of transparency in AI systems. Organizations must understand how AI tools operate and what data they interact with. Without that visibility, agencies risk deploying technologies they cannot fully control.
Despite these challenges, Trexler remains optimistic about AI’s potential to improve public services and accelerate scientific discovery.
But realizing those benefits requires responsible deployment.
Governments must ensure security is built into AI strategies from the beginning, rather than added later as an afterthought. By integrating AI into existing cybersecurity frameworks and maintaining strong oversight, agencies can harness the power of AI while protecting the citizens they serve.
Key Takeaways
- AI is increasing the speed, scale, and sophistication of cyber threats.
- Shadow AI deployments pose serious risks when organizations lack visibility into how tools are used.
- Applying established cybersecurity frameworks such as zero trust helps agencies deploy AI safely.
