From a recent New York Times article:
Yanacek peered at the A.I.’s suggested fix, pondered for a moment, then hit “enter” to approve it. The A.I. took about eight minutes to figure things out, he told me. “By the time I’d opened my laptop, it’s ready.” One customer recently told him that Amazon’s A.I. agent fixed a problem in only 15 minutes; when a similar problem occurred months before, it had taken a full team of engineers eight hours to debug.
Think about that for a second. That company had eight hours to spend fixing the issue. Now, with a coding agent, dealing with a similar problem takes one thirty-second of the time. Beyond debugging, creating and updating software reportedly takes ten or even twenty times less time. [1]
That’s fine at first, but you’re setting yourself up for a situation in which you use these tools as much as possible – you have the time now, after all – and you bring to life a new reality where you have lots more code, you know it less well or not at all, and it’s probably less coherent.
Parkinson’s Law: work expands to fill the time available. The uncharitable reading is that make-work appears when there’s no real pressing work. But it’s equally true that with more time, more necessary work can get done. Whatever the case, we know we haven’t gone to a four-hour work week and aren’t likely to any time soon. If we’re sitting around managing agents instead of writing code, we’ll do it all day long. [2]
New problems that arise won’t feel bad – so long as you have access to your coding agents. For instance, imagine it takes two hours for the AI agent to untangle some new issue in your software and solve it. Feels all right. But taking seriously the AI-versus-human troubleshooting and patching time scales reported in the article, those two AI hours would correspond to 64 hours of work by multiple human engineers if your AI isn’t available. And that’s assuming you have access to the same number and quality of engineers you did before adopting AI.
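The arithmetic behind that 64-hour figure is simple enough to sketch out – the numbers below are just the ones reported in the article (15 minutes for the AI, 8 hours for the human team), not anything I’ve measured:

```python
# Back-of-the-envelope math using the article's reported numbers.
ai_fix_minutes = 15       # AI agent's reported fix time
human_fix_hours = 8       # reported fix time for a full team of engineers

# How many times faster the AI was: 480 minutes / 15 minutes = 32
speedup = (human_fix_hours * 60) / ai_fix_minutes
print(speedup)            # 32.0

# If a future problem takes the AI two hours, the human-equivalent
# cost of losing the AI is:
ai_hours = 2
print(ai_hours * speedup)  # 64.0 engineer-hours
```

The ratio is doing all the work here, which is exactly why footnote [1] hedges on it.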
There are all sorts of qualifications to that analysis: what’s hard for AI can be obvious for humans and vice versa, and time spent by an AI doesn’t always translate linearly into more code (the same goes for humans).
Still, to a first approximation, it looks like by relying on AI coding agents to troubleshoot and build new things, you’re probably locking yourself into always relying on them afterward. And that assumes no loss of skill by the human engineers. In reality they probably get rusty and are less nimble during debugging than they would have been in the non-AI alternative world, making the difference even greater.
All I’m really suggesting here is that we could get trapped into AI coding tool use much faster than a lot of people might expect.
[1] This is an empirical question, and this ratio probably isn’t exact; I’m just going with the claims made by the people in the article. AI coding tools may not be there yet, but they’re also getting more powerful almost monthly, so who really knows.
[2] Jevons paradox might apply equally well. From a certain point of view, software engineering will become more efficient, the way coal mining became more efficient: more gets used, so even though it’s cheaper per unit, total spending doesn’t go down. Instead, production goes up, and new, previously uneconomical uses now make sense.