
The Mythical Agent Month

27 Jun 2025
Ankit Solanki
Co-founder at Clear. Exploring all possibilities of AI.
Agents

The Mythical Man-Month famously observed that adding manpower to a project that's behind schedule will often delay it even further. In 'No Silver Bullet', Fred Brooks further states that:

There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity.

How does this change with the advent of AI coding agents? Can coding agents give us the mythical 10x speedup?

My thoughts below. I've divided this blog post into sections which argue both for and against transformative change.

My position: coding agents are a fundamental shift in how we'll build software over the next few decades, and we're overestimating the impact in the short term while underestimating the impact in the long term.

Headwinds

Vibe Coding vs Writing Production Software

We should differentiate between 'vibe coding' as a way to experiment / prototype, and using AI to write production software. Simon Willison has an excellent post on this.

For production quality software, all basic tenets of software engineering apply. Code reviews, tests, architecture designs, software design reviews, etc.

Most of a senior engineer's time is often spent in these activities, not just writing code.

Coding agents help a great deal here, but you won't see the same speedup as with pure vibe coding — where you build software without reviewing the code your agent writes.

At least for now: coding agents aren't good enough to fully autonomously build features and ship them to production without human review.

Essential Complexity vs Incidental Complexity

As Fred Brooks pointed out in the above essay, software development consists of both essential complexity and incidental complexity.

Incidental complexity could be things like figuring out how to write a Dockerfile, or learning how a specific library works, or dealing with framework specific issues. Coding agents can be a huge help here.

Essential complexity is the core problem you're trying to solve. Coding agents can definitely help, but you still need to pay close attention — humans will remain the bottleneck here.

Amdahl's Law gives a ceiling on the overall performance gain that automation / parallelisation can deliver for any given task. You are only as fast as your bottleneck.
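The ceiling is easy to compute. A minimal sketch, where the 80% and 10x figures are illustrative assumptions rather than measurements:

```python
def amdahl_speedup(accelerated_fraction: float, factor: float) -> float:
    """Overall speedup when only part of the work is sped up (Amdahl's Law)."""
    serial_fraction = 1.0 - accelerated_fraction
    return 1.0 / (serial_fraction + accelerated_fraction / factor)

# Even if agents make 80% of the work 10x faster, the remaining 20%
# (reviews, hard decisions) caps the overall gain well below 10x:
print(round(amdahl_speedup(0.8, 10.0), 2))  # 3.57
```

Even a very optimistic split leaves you far from a 10x overall speedup as long as a human-bound serial fraction remains.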

Decision Fatigue and Time Compression

Faster coding compresses timeframes and lets you focus on the hard decisions: the essential complexity, the substance of your problem.

But human capacity for deep thought is limited!

So now, your day-to-day work with AI coding tools becomes a series of hard decisions you need to think deeply about, decisions that require a high amount of mental effort.

Decision fatigue is real. If you have to make a week's worth of hard decisions in a day, your decision quality will suffer.

AI coding will exhaust you if you're not careful. Human beings need to be able to step back and think about problems. We need to go for walks, ruminate on ideas and just wander through a problem space.

Effective Communication & User Skill

AI agents need engineers to be effective communicators, and this is a problem. Most engineers aren't the best communicators, and not every great coder is automatically great at delegation.

Effective communication is a skill. Writing clearly is a skill. And using coding agents effectively is a skill.

For example, here are two recent articles that go in-depth about the craft of using AI agents to code:

Craftsmanship takes time. Skills take time to build. People who are great engineers today won't automatically be great at using coding agents. Getting better will require deliberate practice, and approaching the problem with a beginner's mindset.

Headwinds Summary

Given that:

  • Production software (currently) requires human oversight
  • Essential complexity remains
  • It will take time to learn how to use coding agents effectively

Is an immediate 10x improvement in velocity possible? It seems there is truly no silver bullet.

Tailwinds

AI Scaling will continue

Agents will keep getting better. Underlying models will keep getting better. We've learned to not bet against scaling.

According to one recent viral benchmark, the length of tasks that AI can do uninterrupted is doubling every 7 months.
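Compounded, that doubling rate adds up quickly. A back-of-the-envelope projection, where the starting task length and horizon are hypothetical inputs, not figures from the benchmark:

```python
def projected_task_hours(current_hours: float, months_ahead: float,
                         doubling_months: float = 7.0) -> float:
    """Task length AI can handle after `months_ahead`, assuming it keeps
    doubling every `doubling_months` months."""
    return current_hours * 2 ** (months_ahead / doubling_months)

# If agents can handle 1-hour tasks today, five doublings (35 months)
# would put week-long (~32-hour) tasks in reach:
print(projected_task_hours(1.0, 35))  # 32.0
```

Whether the trend actually holds that long is an open question, but it shows why extrapolations from this benchmark look so dramatic.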

From my personal experience, I know that each recent big model release (e.g. Sonnet 3.5, Sonnet 3.7, Sonnet 4.0) has made building agents easier. The LLMs are getting better at following instructions, at using tools, at planning, and just at showing agency.

It's hard to predict the future, but it's definitely possible that soon, a large majority of code written won't need human review and oversight.

Most work isn't 'Deep Work'

While my arguments above hold (essential complexity remains, decision fatigue is real), let's acknowledge that most of us don't do deep work 100% of the time.

A lot of time goes into glue work, into getting various subsystems to behave, dealing with broken tools, etc.

AI can be a huge accelerator for this kind of work. It's possible that this alone could be a 10x improvement for many organisations!

Quantity has its own Quality

If you're working in deep tech, if you're building something complex — AI coding allows you to try more approaches. You can build quick and dirty throwaway prototypes, and validate more ideas.

Quantity has a quality of its own. If you can do more iterations, you can get to better decisions. If you can actually build multiple candidate systems, you can make more informed choices — architecture / design decisions become easier with data.

I have personally seen this pay off: while building a zero-to-one product, I have been able to run many parallel experiments and actually test tens of ideas before deciding upon a plan. This has enabled me to make bold system design bets with high confidence.

Ambition & Moonshots

AI agents allow you to be more ambitious. On the margin, with higher productivity it's possible to devote more time and resources towards building something better than you would have previously.

You can think of a productivity gain as either:

  • Work 10x faster
  • Build something 10x better

Either option is fine! In fact, it may be the case that building something 10x better is actually more impactful.

I suspect one of the impacts of ubiquitous coding agents will be a rise in the baseline quality of software!

Tailwinds Summary

If you consider the facts that:

  • LLMs will continue to improve
  • We'll all get better at using AI agents
  • We'll be able to automate away low impact work
  • We'll be able to try many more iterations
  • We'll be able to build more impactful, more meaningful software

How can you doubt the impact that coding agents will have?

Conclusions

I've argued both sides of this. My position is:

  • We're radically underestimating coding agents
  • Most of us are not ready to adopt agents at scale

I think impact and adoption will not be uniform. People will have different lived experiences with AI tools, with some dismissing AI coding as a fad and others enthusiastically treating these agents as a panacea for all their problems.

I think coding agents will have a huge impact that is overrated in the short term, but underrated in the long term.

And I think that today, to get the most out of current generation agents, you have to really dive deep and uncover their limits yourself.