Most discussions about coding agents focus on how well they can generate code.
While these tools can write production-ready code, that focus misses a bigger point: **writing code was never the main bottleneck in software development**.
## Where Time Actually Goes
Building production systems reveals the same pattern everywhere: projects that take months often involve only days or weeks of actual coding. The rest of the time goes into:
- Understanding what's actually needed
- Figuring out how it fits with existing systems
- Finding edge cases that will break in production
- Planning how to migrate without breaking things
This happens at companies of all sizes. The bottleneck isn't how fast you can type. It's how fast your team can agree on what to build and how it should work with what you already have.
## What Coding Agents Actually Need
If the real bottleneck is reaching agreement on what to build, what does that mean for coding agents? It means every task, before any code gets written, needs to go through several steps that current agents struggle with:
**Task decomposition**: A high-level request like "add user authentication" needs to be scoped, expanded into concrete requirements, then broken down into smaller implementable pieces. Is this OAuth, email/password, or magic links? Does it need 2FA? What about session management? Each answer creates new subtasks with dependencies between them.
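To make that concrete, here is a rough sketch (in TypeScript) of what "add user authentication" might look like once scoped and broken into subtasks with dependencies. The task names, granularity, and dependency edges are invented for illustration, not output from any real agent or project:

```typescript
// Hypothetical decomposition of "add user authentication" into subtasks.
// Every answered question ("Do we need 2FA?") adds, removes, or rewires nodes.
interface Subtask {
  id: string;
  description: string;
  dependsOn: string[]; // ids of subtasks that must be finished first
}

const addUserAuthentication: Subtask[] = [
  { id: "schema", description: "Users table with hashed passwords", dependsOn: [] },
  { id: "sessions", description: "Session storage, expiry, and invalidation", dependsOn: ["schema"] },
  { id: "signup", description: "Email/password signup endpoint", dependsOn: ["schema"] },
  { id: "login", description: "Login endpoint that issues a session", dependsOn: ["signup", "sessions"] },
  { id: "guard", description: "Middleware protecting authenticated routes", dependsOn: ["login"] },
  // Deliberately deferred for a first version: OAuth providers, 2FA, magic links.
];

// The list is written in one valid implementation order; a topological sort
// over dependsOn would recover such an order automatically.
console.log(addUserAuthentication.map((t) => t.id).join(" -> "));
```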
**Simplification and prioritization**: Not everything needs to be built. Which parts are essential for the first version? What can wait? An experienced engineer knows that "add authentication" might start with just email/password, deferring OAuth and 2FA until they're actually needed. This judgment comes from understanding user needs and business constraints, not just technical capability.
**Repository context**: Before writing any code, you need to understand what already exists. Where does the current code handle user data? What patterns does this codebase follow for API endpoints? Are there existing middleware layers to hook into? What testing patterns should new code follow? This context determines whether your solution fits or creates friction.
**External library knowledge**: Most features are built on existing libraries rather than from scratch. But which library? The npm registry has dozens of authentication libraries: some well-maintained, some abandoned, some with security issues. Choosing correctly requires knowing the ecosystem: which libraries work well together and what the community actually uses in production.
**Code reuse**: The codebase likely already has patterns and utilities that should be reused. Maybe there's an existing validation pattern, error handling approach, or database access layer. Using these keeps the codebase consistent and maintainable. Not using them creates technical debt.
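As a small illustration, here is a hedged sketch of a new endpoint leaning on helpers a codebase might already provide. `validateBody` and `handleError` are hypothetical stand-ins for whatever shared validation and error-handling utilities actually exist; they are defined inline only to keep the sketch self-contained, where a real project would import them from its shared request layer:

```typescript
// Hypothetical stand-ins for utilities an existing codebase already has.
function validateBody(body: unknown, fields: string[]): Record<string, string> {
  if (typeof body !== "object" || body === null) throw new Error("invalid body");
  const record = body as Record<string, unknown>;
  const out: Record<string, string> = {};
  for (const field of fields) {
    const value = record[field];
    if (typeof value !== "string") throw new Error(`missing field: ${field}`);
    out[field] = value;
  }
  return out;
}

function handleError(err: unknown): Response {
  // A shared error shape keeps failures consistent across endpoints.
  const message = err instanceof Error ? err.message : "unknown error";
  return new Response(JSON.stringify({ error: message }), { status: 400 });
}

// The new login endpoint reuses those helpers instead of reinventing
// validation and error handling inline.
async function login(req: Request): Promise<Response> {
  try {
    const { email } = validateBody(await req.json(), ["email", "password"]);
    // ... look up the user, verify the password, create a session ...
    return new Response(JSON.stringify({ ok: true, email }), { status: 201 });
  } catch (err) {
    return handleError(err);
  }
}
```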
Current coding agents handle the final step (writing the actual code) reasonably well. But they lack the context and judgment for everything before that. They can implement a solution once you've done the decomposition, understood the codebase, chosen the libraries, and identified what to reuse. The agent accelerates the last 20% while the first 80% remains manual.
## Planning Requires Context
That missing 80% is fundamentally about planning, but not in the way most people think. Good planning means translating between different types of constraints. When someone asks for "real-time updates," they might mean latency actually matters, or costs are tight, or a competitor already has the feature. Each of these calls for a different solution. An elegant architecture might be technically correct but impossible to maintain with your current team and resources.
The challenge is balancing what's technically possible with organizational, financial, and operational realities. This requires context that lives in your team's collective knowledge: past decisions, deployment risks, what your team can actually handle, and business priorities.
## Knowledge Lives Across Teams
But here's the fundamental problem: that context doesn't live in one place. Current AI tools only see what one developer sees: their chat history and the codebase in front of them. But good architectural decisions need knowledge from across the team: infrastructure constraints from backend engineers, user impact from frontend developers, business priorities from product managers.
The best design processes bring all this knowledge together. Someone proposes an approach, others point out constraints, and the solution evolves. The final design often looks different from the first idea, not because the code changed, but because everyone's understanding of the problem improved.
Single-developer tools miss this process. The knowledge stays scattered across different people, which leads to technically correct solutions that don't actually work for the organization.
## What's Still Missing
This brings us back to the original insight: code generation is already good enough. AI can write production-quality code for well-defined problems. The gap is in defining the problem itself.
Tools that can work with context from across the team (past architectural decisions, dependencies between teams, operational constraints, unwritten knowledge) would solve a different problem. Instead of making implementation faster, they would help teams figure out what to implement and how it should work with existing systems.
Current AI tools make implementation faster but leave coordination, requirements work, and context gathering as manual effort. The productivity gains are real, but they don't change the main constraint on software development timelines. The bigger opportunity is in tools that help with planning and coordination, not just implementation. Whether that's possible, and what it would take to make it work, is still an open question.