On Point: Q&A With Aparna Achanta
Q: What are the top cybersecurity challenges with generative AI?
A: The disconnect between enthusiasm and security is alarming. 
The most pressing challenge is data integrity.
GenAI models require massive training data sets, and any compromise leads to biased or harmful outputs. I’m particularly concerned about how malicious actors now use GenAI to craft phishing emails that are virtually indistinguishable from legitimate ones—they’re correcting grammar, mimicking writing styles and creating personalized content that bypasses traditional filters.
Employees use public GenAI applications with good intentions, trying to boost productivity, but they’re unknowingly exposing sensitive data. This creates compliance nightmares and potential for hefty fines that organizations aren’t prepared for.
Q: How does AI strengthen zero-trust architectures?
A: Through my experience implementing zero trust, I’ve witnessed AI transform it from a static framework into something truly dynamic. AI enables what I call “continuous intelligence.” It’s constantly learning and adapting.
The key is behavioral analytics. AI establishes baselines for normal user behavior, then instantly detects anomalies. When we implemented this, we could spot unusual access patterns—unexpected data access attempts or irregular network traffic—and automatically respond by restricting permissions or isolating systems. The speed and accuracy surpass anything humans could achieve.
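The baseline-and-anomaly pattern described above can be sketched in a few lines. This is a minimal illustration, not a production detector: it assumes a per-user history of daily data-access counts and flags any day that deviates sharply from that user’s established norm, which is the trigger for automated responses like restricting permissions.

```python
from statistics import mean, stdev

def build_baseline(daily_access_counts):
    """Compute a per-user baseline (mean, std dev) from historical activity."""
    return mean(daily_access_counts), stdev(daily_access_counts)

def is_anomalous(today_count, baseline, threshold=3.0):
    """Flag activity more than `threshold` standard deviations from baseline."""
    mu, sigma = baseline
    if sigma == 0:
        return today_count != mu
    return abs(today_count - mu) / sigma > threshold

# Hypothetical history: a user normally makes 40-60 data accesses per day.
history = [52, 47, 55, 49, 51, 44, 58, 50, 46, 53]
baseline = build_baseline(history)

print(is_anomalous(51, baseline))   # typical day -> False
print(is_anomalous(400, baseline))  # sudden spike -> True, trigger a response
```

Real systems model many signals at once (access times, locations, network traffic) with learned models rather than a single statistic, but the baseline-then-deviation logic is the same.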
What really excites me is adaptive access controls. AI analyzes user behavior, device posture and location in real time to enforce just-in-time and just-enough-access principles. Users get exactly the permissions they need, when they need them. Nothing more, nothing less.
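A just-in-time, just-enough-access decision can be illustrated with a small policy function. The signal names and thresholds here are assumptions for the sketch, standing in for feeds from identity and device-management systems; the point is that the granted scope shrinks as risk rises, rather than being a static allow/deny.

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    # Illustrative signals, assumed to come from identity/device systems.
    behavior_risk: float      # 0.0 (normal) .. 1.0 (highly anomalous)
    device_compliant: bool    # patched, encrypted, managed
    trusted_location: bool

def decide_access(ctx: AccessContext, requested: set) -> set:
    """Grant just-enough access: trim the requested scope as risk rises.
    Thresholds are illustrative, not prescriptive."""
    if not ctx.device_compliant or ctx.behavior_risk > 0.8:
        return set()                       # deny outright
    granted = set(requested)
    if ctx.behavior_risk > 0.5 or not ctx.trusted_location:
        granted -= {"write", "admin"}      # step down to read-only
    return granted

ctx = AccessContext(behavior_risk=0.6, device_compliant=True, trusted_location=True)
print(decide_access(ctx, {"read", "write", "admin"}))  # elevated risk -> {'read'}
```

Evaluating this function on every request, rather than once at login, is what makes the control adaptive: the same user on the same device can hold different permissions an hour later.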
Most importantly, predictive AI keeps us ahead of threats. It analyzes connections between suspicious activities across systems, identifying attacks before they materialize. This is crucial because attackers use the same AI technologies to develop malware that evades conventional security.
Q: What challenges do departments and agencies face in deploying AI at the edge?
A: Edge AI presents challenges I haven’t seen in traditional deployments.
The distributed nature creates unprecedented vulnerabilities. These devices operate in uncontrolled environments where physical tampering or theft is possible. Unlike cloud systems with centralized security, edge devices are scattered across locations with minimal oversight. In my experience with critical sectors, this physical vulnerability keeps security teams awake at night.
Health care illustrates the challenge perfectly. Wearable sensors monitoring vital signs must process data instantly—they can’t wait for cloud verification when detecting life-threatening anomalies. Yet they must maintain ironclad security without compromising the millisecond responses that save lives.
Manufacturing faces similar constraints. Edge AI detecting defects or predicting failures operates in real time. Traditional zero-trust approaches requiring constant cloud connectivity simply don’t work when every second of downtime costs thousands.
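The constraint both examples share is that every reading must be checked locally, in constant time, with no cloud round trip. A minimal sketch of that pattern, assuming a single sensor stream and an exponential moving average with a fixed deviation band (real edge models are far more sophisticated):

```python
class EdgeAnomalyDetector:
    """Lightweight streaming detector suited to an edge device: an
    exponential moving average with a fixed deviation band, so each
    reading is evaluated on-device with no cloud dependency."""

    def __init__(self, alpha=0.1, band=15.0):
        self.alpha = alpha   # smoothing factor for the moving average
        self.band = band     # allowed deviation before flagging
        self.ema = None

    def check(self, reading: float) -> bool:
        """Return True if the reading falls outside the learned band."""
        if self.ema is None:
            self.ema = reading      # first reading seeds the baseline
            return False
        anomalous = abs(reading - self.ema) > self.band
        self.ema = (1 - self.alpha) * self.ema + self.alpha * reading
        return anomalous

# Hypothetical heart-rate stream: steady readings, then a sudden spike.
detector = EdgeAnomalyDetector()
flags = [detector.check(r) for r in [72, 74, 71, 73, 75, 140]]
print(flags)  # only the spike is flagged, immediately and on-device
```

Because the state is a single running average, the check costs a few arithmetic operations per reading, which is what makes millisecond, offline-capable responses feasible on constrained hardware.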
Q: How would you rate cybersecurity preparedness for AI programs?
A: Based on my decade working across both government and industry, I see concerning gaps everywhere. Many organizations still prohibit GenAI entirely, which doesn’t stop usage—it just drives it underground. Employees use these tools secretly, creating invisible risks that security teams can’t monitor or control.
The federal agencies I’ve worked with benefit from frameworks like NIST’s [National Institute of Standards and Technology’s] AI Risk Management Framework, but struggle with implementation. Legacy systems weren’t designed for AI integration, and bureaucratic cycles slow adoption of necessary security measures.
In the private sector, I see more agility but less consistency. Companies rush to implement without establishing proper governance. The fundamental challenge remains: balancing innovation with security. We need comprehensive strategies that protect AI systems while enabling the transformation these technologies promise. It’s not easy, but after seeing the risks firsthand, I know it’s essential.
This column has been edited for clarity and concision.