Let's be honest: the appeal of AI-generated code is real. Tools like OpenAI's Codex can crank out functional code quickly, which can help development teams move faster and generate more ROI.
For your cybersecurity team, however, the thought of using AI-generated code might feel a little bit like giving car keys to a ten-year-old. And cybersecurity experts aren’t just being overly cautious. In this blog, I’ll explore what they’re concerned about and what can be done about it.
AI Security Issues Surrounding Code
Here's the thing about AI-generated code: it's kind of like getting a beautifully wrapped gift where you can't see what's inside until you open it. Even the people who build these models can't fully explain how they arrive at complex decisions. AI also tends to produce output that looks great at first glance but, on closer inspection, contains errors in judgment, in accuracy, or both.
Existing AI Security Vulnerabilities Get Repackaged
According to a Stanford University study, AI coding tools have a tendency to generate insecure code, which makes perfect sense. After all, AI models learn by consuming massive amounts of code from the internet, and that includes open-source repositories. The upside is that they pick up tons of great coding patterns. The downside is that they can also pick up a ton of bad habits, including insecure coding practices and known vulnerabilities.
Without the human judgment needed to distinguish safe, secure code from its insecure alternative, AI will often repackage the bad along with the good.
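To make that concrete, here's a minimal, hypothetical sketch (in Python) of the kind of pattern that gets repackaged: SQL built with string formatting, which is wide open to injection, next to the parameterized version a reviewer would want to see instead. The function, table, and column names are made up for illustration.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # A pattern copied endlessly from older tutorials and repos: building
    # SQL by string formatting. A model trained on that code may happily
    # suggest it, and it is wide open to SQL injection.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver treats the value as data, not SQL,
    # so input like "alice' OR '1'='1" can't change the query's meaning.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```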
What Was Once Secure Code Doesn’t Always Stay That Way
AI models learned from the code that was available during their training, which creates a kind of "security time capsule" problem. Think of it this way: if a piece of code looked perfectly safe when the AI was being trained — say, a year ago — the AI will still suggest similar code today, even if we've since discovered that the approach has serious security flaws.
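As a hedged illustration of that time capsule effect, here's a quick Python sketch: unsalted MD5 hashing shows up all over the older code a model may have trained on, while current guidance favors salted, deliberately slow key-derivation functions like PBKDF2 from the standard library. The function names and iteration count are illustrative, not a recommendation tuned for your environment.

```python
import hashlib
import os

def hash_password_outdated(password: str) -> str:
    # Unsalted MD5 was once everywhere, so a model trained on that era of
    # code may still suggest it, even though it is now far too weak for
    # password storage.
    return hashlib.md5(password.encode()).hexdigest()

def hash_password_current(password: str) -> str:
    # A salted, slow key-derivation function (PBKDF2 here) reflects
    # current guidance for storing password hashes.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt.hex() + ":" + digest.hex()
```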
Security Vulnerabilities Get Multiplied Too Quickly
Imagine a super-efficient assembly line that produces pills. Occasionally, it produces a pill with the wrong dosage, one so high it could make someone sick. No problem, right? You've got quality control people in place who can spot the problematic pills.
But what if that assembly line suddenly goes at double, triple, or even ten times the speed?
That’s kind of what’s happening with AI-generated code. AI models can introduce problematic code into your systems faster than your cybersecurity team can keep up.
And if that isn’t scary enough, these vulnerabilities don’t always happen by accident …
Accidents Happen, But They Can Also Be Engineered
What’s more attractive to a hacker than knowing that old security vulnerabilities are getting repackaged?
How about knowing you could create new ones and simply … feed them into an AI model's training data like little poisoned (but delicious) cupcakes. After all, AI models can’t tell the difference between legitimate and malicious training data. So if a hacker slips flawed logic or insecure coding patterns into that data, those vulnerabilities could quietly spread across countless projects as more and more developers accept the AI's suggestions.
When Everyone’s Responsible, No One Is
With human-written code, you know exactly who to talk to if something goes wrong. But with AI-generated code, accountability gets fuzzy fast. Is it the developer who used the AI? The team that trained the model? The data it learned from? With no clear owner when something goes wrong, and every incentive to take the easy route, it’s not surprising that we’ve already seen a few AI code snafus.
AI Code Security Recommendations
AI is new, exciting, and yes, a little bit scary. But the humans can (and should) absolutely remain in control here. There are plenty of practical steps you can take to get the benefits of AI-generated code without compromising security.
Don’t Skimp on AI Code Review
Hopefully, your existing code review process already gives you a solid foundation. Even so, it will need some upgrades for the AI era. Think of it as adding new tools to your toolkit rather than throwing out everything you know.
Start by training your security team to spot AI-specific issues. Then pair automated analysis tools that catch common security problems with regular peer reviews.
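As one hedged example of what that automation can look like, the sketch below shells out to Bandit (an open-source static analyzer for Python) against whatever files were flagged as AI-assisted in a change. Bandit is just one example tool, and the default src/ path is a placeholder; swap in whatever scanner and layout fit your stack.

```python
import subprocess
import sys

def scan_ai_generated_code(paths: list[str]) -> int:
    """Run a security-focused static analyzer over AI-assisted files."""
    result = subprocess.run(
        ["bandit", "-q", "-r", *paths],  # -q: quiet output, -r: recurse into dirs
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    # Bandit exits non-zero when it reports findings, which makes this
    # easy to wire into a CI gate or pre-merge check.
    return result.returncode

if __name__ == "__main__":
    sys.exit(scan_ai_generated_code(sys.argv[1:] or ["src/"]))
```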
Test Everything (And Then Test It Again)
While AI code saves you time on the creation side, you’ll need to invest more of it in testing. It’s still an overall win, but budget time for static analysis, dynamic testing, fuzz testing, regular security audits, and penetration testing so you can catch issues early.
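Fuzz-style testing is a good example of where that extra time pays off. The sketch below uses the Hypothesis library to throw generated inputs at a hypothetical sanitize_identifier helper (the myapp.utils import is a stand-in for a function an AI assistant might have written for you) and asserts a property that must always hold.

```python
from hypothesis import given, strategies as st

# Hypothetical function under test; substitute a real helper from your codebase.
from myapp.utils import sanitize_identifier

@given(st.text())
def test_sanitizer_never_lets_dangerous_characters_through(raw: str):
    cleaned = sanitize_identifier(raw)
    # Property-based checks hammer the function with generated inputs,
    # a cheap way to surface edge cases that neither the AI nor a human
    # reviewer thought about.
    assert all(ch.isalnum() or ch in "-_" for ch in cleaned)
```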
Stay Current
Keep your AI tools updated and patched to minimize their tendency to perpetuate older security vulnerabilities. Also keep your teams educated and informed about both the potential and the pitfalls of AI-generated code.
Create Clear Guidelines
Without defined AI policies to govern usage and development, users and development teams may operate without clear boundaries or accountability, increasing the risk of misuse or unintended consequences.
Here’s what I recommend:
- Create an Acceptable Use Policy (AUP) for AI technology that includes an approved list of AI tools, acceptable and unacceptable use cases tailored to how your organization actually works (especially for dev teams), and restrictions on exporting company code or sensitive data to public LLMs.
- Have a Secure AI-Integrated Development Policy in place outlining the secure use of AI tools in development, testing, and deployment. This policy can be either a stand-alone document or integrated into your Software Development Lifecycle (SDLC) policy.
Getting Expert Help With AI Code Analysis and Usage Guidelines
This is where a virtual Chief Information Security Officer (vCISO) can help. Unlike traditional security consultants, a skilled vCISO understands both the strategic security landscape and the practical realities of modern development workflows. They can also help organizations establish AI governance frameworks that enhance innovation rather than slow it down: creating policies for safe AI tool usage, implementing code review processes specifically designed to catch AI-generated vulnerabilities, and training development teams to recognize the unique security patterns that emerge from AI-assisted coding.
Perhaps most importantly, a vCISO brings that crucial outside perspective and continuous monitoring that internal teams often lack.
If you’re curious about what an IT consultant would bring to the table, click the link below to read up on how our consultants work with clients at Marco.