Everyone is saying AI changes the fundamentals of cybersecurity, but I wholeheartedly disagree. In many ways it just exposes are biggest weaknesses.
Ethical Threat Insight: Securing AI Workflows
There’s a lot of hype around “AI security risks,” but most of it boils down to two things:
Data leakage (sensitive data, PII, internal docs)
Hands-on-keyboard attacks (ransomware, crypto mining, APT activity)
If you’re an IT admin or IT/Security leader I have a question for you.
How do you currently protect against those two threats?
AI hasn’t changed the fundamentals at all. Such as:
Threat modeling
Governance and acceptable use policies
Security awareness and training
Identity protection
Endpoint protection
Application and network controls
Threat detection
When it comes to data leakage this has always been a difficult and persistent problem.
It’s almost impossible to stop an attacker (or frustrated user) from using file sharing sites or heck even their phone to get data and documents out of the environment.
Where things get interesting is AI-assisted development tools like Cursor, Claude Code, and similar agents that can run commands or install packages.
A realistic attack path might look like this:
A developer visits a repo or webpage with prompt injection instructions
The AI tool is tricked into running a command or installing a package
That package looks normal but contains a malicious implant
Preventing prompt injection entirely is extremely hard. Just like preventing data exfiltration.
I’d also argue it’s not the most important area to focus.
Because attackers still have a problem:
They eventually have to execute code.
And once that happens, they still need to perform post-exploitation actions on the endpoint or network.
That’s where your existing defenses matter:
EDR detecting suspicious behavior
Least privilege limiting blast radius
Application controls preventing unknown binaries
Network monitoring catching recon & c2 beaconing activity
In other words:
You don’t need Fort Knox for AI.
You need to do the fundamentals well and consistently.
One practical step you can take today: Treat AI coding agents like privileged automation tools.
If they can run commands, install packages, or modify code, they should operate under the same security controls as any other automation:
constrained environments
limited privileges
strong endpoint monitoring
Lastly, remember, there’s no silver bullets in security. There’s no such thing as 100% secure.
Secure AI like you secure everything else.
If you’re curious how else threat actors are using AI, check out these two episodes
All the best
Spencer Alessi
PS - Anyone else feel like this when they hear a company whose existed for years is now “AI native”… 🙄

