This is the crux of the whole conversation. What percentage of software is "critical"? My guess is 50%. And AI will soon be able to play in that space as well. So in the future, maybe 25% of "critical" software will require real humans in the loop?
I agree with the other comment that measuring productivity is pointless, as there has never been a good way to do this.
But the closest answer I can give (without detailed examples of work projects) is that I can prototype things faster than my pre-AI, pre-Covid team of 5 devs + 1 BA + 1 manager could. The speed isn't just faster code generation; it's a fundamental paradigm shift away from the commonly accepted project management philosophies. Agile and scrum are (in my experience) meant to protect developers from "wasted work" or "throwaway code", and to placate the non-technical stakeholder fantasy that they know the product best and can micromanage their way to a predictable timeline.
I have effectively been working as a team of 1, and I have been able to prototype things in days or weeks that would have taken months before. 95% of the code Claude generates is throwaway, but the goal is to discover the real requirements faster. In the old model, every step and possible risk had to survive 3 meetings, and if the story points were arbitrarily high we had to split the tasks into more tasks.
Ironically, the obsession with quantifying productivity is what killed productivity. People who live in spreadsheets would rather have 10 units of measurable productivity than 50 units of unmeasurable productivity.
These kinds of comments are so spectacularly useless. It was almost impossible to measure productivity gains from _computers_ for nearly two decades after they started being deployed to offices in the 1980s.
There were articles as late as the late 1990s that suggested that investing in IT was a waste of money and had not improved productivity.
You will not see obvious productivity gains until the current generation of senior engineers retires and you have a generation of developers who have coded with AI ever since they were in school.
It was not impossible to measure them; it's just that you don't like the result of the measurement: early adopters often overpaid and ended up with less efficient processes for more money.
Eventually companies figured out how to use them effectively, and useful software was created. But at the start of the whole thing, there was a lot of waste.
Quite a lot of people are now paying a lot for AI that makes them produce less, and at lower quality, because it feels good and novel.
While engineering managers are enamored with the thing, this will go on. But agentic development will ruin multiple companies and become unfashionable. Bet on your skills, taste, and intuition. Don't fall into FOMO.