Today, you would be hard-pressed to find a software developer who isn’t using AI.
AI tools like GitHub Copilot, Cursor, and Claude Code can draft functions, suggest fixes, write documentation, and debug failing tests in seconds. The technology is simply too promising not to explore.
And because of this, it can look like machines have completely taken over the craft.
However, even a quick look at what actually happens between an AI suggestion and a shipped commit paints a very different picture.
What AI Use in Software Development Looks Like Now
According to the 2025 Stack Overflow Developer Survey, which polled more than 49,000 developers across 177 countries, 84% of developers are using or plan to use AI in some form.
Most developers—thankfully—do not blindly trust AI and LLMs. The Stack Overflow survey showed that 46% mistrust AI suggestions, while 29.6% only “somewhat” mistrust them.
In fact, GitHub reports that only about 30% of Copilot's AI-generated suggestions are actually accepted by developers. The rest are dismissed, reworked, or rewritten.
Meanwhile, GitClear reports a 4x increase in code cloning since the proliferation of AI in software development—meaning more copy-pasted logic and less thoughtful, reusable code.
Context Is King, and AI Lacks It
These statistics point to why AI is still far from replacing human developers: AI can follow logic, but it often lacks context.
AI doesn’t always know that:
- A module is being retired, making a suggested refactor a waste of time.
- The team settled on a certain naming rule.
- A client is mid-migration away from a stack, so some improvements are simply not worth making.
- A function is intentionally slow because of a hard constraint from a downstream dependency.
- A shortcut someone just suggested was already tried and ditched months ago.
AI can read the code, yes, but it has no idea why it was written that way. It doesn't understand business logic or institutional knowledge. Those things are unwritten; they live in people's heads, in chat threads, and in old commit messages, not in the code.
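A small hypothetical sketch of that last point: the function, constant, and "partner rate limit" below are invented for illustration, but they show the kind of deliberate slowness an assistant might confidently "optimize" away if the reason only lives in a chat thread.

```python
import time

# Hypothetical constraint: a downstream partner API throttles us,
# and the agreed limit is documented in Slack, not in this repo.
PARTNER_RATE_LIMIT_SECONDS = 0.1  # assumed contractual limit (illustrative)

def sync_record(record: dict, results: list) -> None:
    """Push one record to the partner system, respecting its rate limit."""
    # Deliberately slow: do NOT remove this delay. An AI reading only the
    # code sees waste; a teammate knows removing it gets us rate-banned.
    time.sleep(PARTNER_RATE_LIMIT_SECONDS)
    results.append(record["id"])  # stand-in for the real network call

synced: list = []
for r in [{"id": 1}, {"id": 2}]:
    sync_record(r, synced)
print(synced)  # [1, 2]
```

A diff that deletes the `time.sleep` line would pass every test and still be wrong, which is exactly the gap between reading code and knowing why it exists.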
AI paired with human oversight remains the best practice. And it always will be. It’s not just a stepping stone until AI improves and can theoretically write perfect code without humans.
Good software developers use AI the way a good editor uses a rough draft: as a starting point that still needs a careful read. The editing is the work.
Beyond the Code
And a developer’s job was never confined to the editor anyway.
For one, compliance. Deciding whether a product meets data protection requirements in a given region means reading regulations, understanding the business, and maybe even talking to a lawyer.
Or what if, for example, something breaks in production? What should you roll back? Who should you notify? How should you talk to affected users? Untangling these nitty-gritty problems requires both technical knowledge and precise judgment.
Security is another. AI can flag potential vulnerabilities, yes, but developers still need a keen understanding of which threats are relevant to a project, what data needs protecting, and how to protect it.
And when it comes to working with clients, open source contributors, or internal stakeholders, the ability to manage expectations, deliver bad news, and navigate disagreement is something no AI assistant can reliably handle.
AI Works Best as a Partner, Not a Boss
All this is to say that while AI is a powerful assistant, it’ll never be a replacement for developers.
After all, pitting AI against developers has always been the wrong way to frame things. AI is here, and its benefits are powerful and undeniable. Any developer who outright refuses to use it in any capacity is likely shooting themselves in the foot.
But how to use AI is the real question: what does good collaboration look like?
In a nutshell, AI should handle repetitive, low-context work—boilerplate, autocomplete, basic test scaffolding. Developers handle the thinking, the trade-offs, and the accountability.
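As a hypothetical illustration of that division of labor (the `slugify` function and its tests are invented here): an assistant can scaffold the repetitive test boilerplate, while deciding which edge cases actually matter stays with the developer.

```python
import unittest

def slugify(title: str) -> str:
    """Toy function under test (illustrative only)."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # The skeleton below is the kind of low-context scaffolding an
    # assistant drafts well; the developer supplies the cases worth asserting.
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_extra_spaces(self):
        # A human chose this case, knowing messy titles come from a CMS import.
        self.assertEqual(slugify("  Hello   World "), "hello-world")

if __name__ == "__main__":
    unittest.main()
```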
And again, this is not a temporary arrangement until AI “catches up.” We all know that the tools will keep getting better, sharper. But the review, the judgment, and the final call? Those still belong to the person at the keyboard.
Because the basic truth about software is that code is not just instructions for a machine. It is a record of decisions made by people, in a specific context, for myriad reasons.
Developers are still the ones running the show, while LLMs are only beginning to grasp the true consequences and nuances of contextual problem-solving.