You don't have an AI problem. You have a process problem.
Last week, something happened that seemed completely innocuous at first. A package shipped with something in it that shouldn't have been there. No sophisticated hack, no obscure exploit.
Just a source map.
In this particular case, it was the source map for Claude Code, Anthropic's new tool. The kind of file you normally don't think twice about — until someone opens it and suddenly has the complete original source code right in front of them.
Not a small snippet. Everything.
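To see why a shipped source map is so revealing: the file is just JSON, and its optional sourcesContent field can embed the complete original text of every file that went into the bundle. A minimal sketch — the map below is invented for illustration, not the actual Claude Code artifact:

```python
import json

# A hypothetical, minimal source map of the kind a bundler writes
# next to a minified file. File names and contents are made up.
source_map = json.dumps({
    "version": 3,
    "file": "cli.min.js",
    "sources": ["src/cli.ts", "src/internal.ts"],
    "sourcesContent": [
        "export function main() { /* full original code */ }",
        "export const INTERNAL_ENDPOINT = 'https://internal.example';",
    ],
    "mappings": "AAAA",
})

# Anyone who finds the .map file can dump the originals verbatim:
recovered = json.loads(source_map)
for path, code in zip(recovered["sources"], recovered["sourcesContent"]):
    print(f"--- {path} ---")
    print(code)
```

No decompilation, no tooling: open the file, read the field, and the original source is there.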
The pipeline trap
If you've ever built software that goes through a pipeline — whether frontend or backend — you'll recognise this. You build something nice, you add a step to your build process, then another. You quick-fix something along the way. At some point, you just trust that the process "is fine".
Until it isn't.
The interesting part? The AI did nothing wrong. The models worked perfectly. In fact, AI played virtually no role in the mistake itself. And yet it immediately feels like an "AI incident".
AI as a magnifying glass
What I see more often is that AI doesn't so much introduce new mistakes, but makes existing gaps in your process more visible. Or rather: more tangible.
Because AI agents now sit right in the middle of your workflow, they touch everything. They write code, execute commands, make decisions. As a result, they inevitably come into contact with the things we've been doing on autopilot for years:
- Pipelines that "more or less" work.
- Permissions that are "temporarily" wide open.
- Build scripts that copy files a little too enthusiastically.
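One cheap guardrail against that last category is a release-time check that refuses to ship anything that looks like a source map or other build debris. A minimal sketch, assuming your publishable artifacts land in a dist/ directory — the directory name and the extension list are assumptions, tune them to your own pipeline:

```python
from pathlib import Path
import tempfile

# Extensions that usually have no business in a published package.
# An example deny list, not an exhaustive one.
FORBIDDEN = {".map", ".env", ".pem"}

def leaked_files(dist: Path) -> list[Path]:
    """Return every file under `dist` whose suffix is on the deny list."""
    return [p for p in dist.rglob("*") if p.suffix in FORBIDDEN]

# Demo with a throwaway directory standing in for a real build output.
dist = Path(tempfile.mkdtemp())
(dist / "cli.min.js").write_text("/* bundle */")
(dist / "cli.min.js.map").write_text("{}")

offenders = leaked_files(dist)
print([p.name for p in offenders])
```

Wire something like this into CI and fail the build when the list is non-empty; that turns "we just trust the process" into a check the process runs on itself.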
The "AI" label
There's nothing futuristic about this problem. If you strip away the AI component, you'd simply say: "Someone deployed a bad build." That's it.
But the moment the AI label gets slapped on, it suddenly feels heavier. More dramatic. Even though the root cause is entirely mundane.
That's not to say nothing changes. AI accelerates everything. Not just your output, but also the speed at which a mistake propagates. Where a manual error used to stay local, a mistake in an automated AI flow can now have an impact in ten places at once.
You can't outsource discipline
AI agents give you a sense of control. You ask for something, you get a result, and it works. That feels reassuring. But under the hood, nothing has changed about the foundation of your system. The shortcuts and the "we'll fix that later" mentality are still there. You just notice them less quickly.
AI makes many things better, faster, and sometimes even cleaner. But it doesn't improve one thing: your discipline. That's still something you have to bring yourself.
Claude Code's source code shouldn't have become public because of a misconfigured setting in a package. That's the whole story. No complex analysis needed. But it's a good reminder that the real challenges aren't in what AI does — they're in the foundation we build around it.