> The role change has been described by some as becoming a sort of software engineering manager, where one writes little or no code oneself but instead supervises a team of AI coding agents as if they are a team of human junior software engineers....
> In reality, though, the code review load for software engineers will gradually increase as fewer and fewer of them are expected to supervise an ever-growing number of coding agents, and they will inevitably learn to become complacent over time, out of pure necessity for their sanity. I’m a proponent of code review...but even I often consider it a slog to do my due diligence for a large code review (just because I think it’s important doesn’t mean I think it’s fun). If it’s your full-time job to review a swarm of agents’ work, and experience tells you they are good enough 95%+ of the time, you’re not going to pay as much attention as you should and bad changes will get through.
Another way to look at this is that AI coding agents take the fun out of a software engineer's job. The machine takes many of the fun parts and leaves the human with more of the unenjoyable parts.
Under our new ways of working, you are required to be excited and curious about this evolution three times per day.
Sounds a lot like "self-driving" cars - "they are good enough 95%+ of the time, you’re not going to pay as much attention as you should".
Same thing happens here: you get complacent and miss critical failures or problems.
It's also similar in that it "take[s away] many of the fun parts". When I can focus on simply driving it can be engaging and enjoyable - no matter the road or traffic or whatever.
>Sounds a lot like "self-driving" cars - "they are good enough 95%+ of the time, you’re not going to pay as much attention as you should".
That might be an issue for supervised "self-driving" cars (e.g. Tesla FSD), but not really applicable to self-driving cars as a whole. Waymo seems to be doing just fine, for instance.
>Aren’t Waymo‘s 5% covered by some people in the Philippines? [1]
If you look at Waymo's prior blog posts, you'll realize that the people in the Philippines aren't making the split-second decisions implied by "you’re not going to pay as much attention as you should".
Exactly why I put "self-driving" in quotes. Right now AI assisted coding might generally be at the equivalent of Level-2 or -3 self-driving. Getting to autonomous coding agents will be like the step change that is Level-4 or -5 driving.
I think it depends on what you find enjoyable. People who like the tinkering and the actual act of coding, debugging, etc. will find it less and less fun to be in this area, but people who like to look at the big picture and solve problems will find that they are now better both at getting an overview of larger and larger codebases, and that technical debt that was never feasible to address before can now be “outsourced” to LLMs.
I find that fun. I work in a 50-year-old IT company, with lots of legacy code and technical debt which we have never been able to address - suddenly it’s within reach to really get us to a better place.
The best way to have a big picture view of a project is to build a mental model of that project in your head. Coding with LLMs removes that ability, and replaces it with an illusion.
Well, if you have experience reviewing other people’s code, it is not that different: find an idea, ask Copilot to do it, and then review it just as if you had a ton of junior engineers writing code for you, who can also go too far in one direction before asking for feedback.
So how maintainable the code you get will be really depends on your reviewing ability. It takes a bit of effort to review something “you have done” as thoroughly as something a colleague has done. Somehow I still feel a sense of ownership even though the LLM did it.
I like reviewing using GitHub’s interface, so I often do a thorough review in that familiar interface while the PR is still draft, and before I have invited others to review. If I review my own code directly in my editor when the agent is done, my brain isn’t in the right context and can get distracted or skip over something.
Does the thing work like I want it in the end? Is it fast, reliable, enjoyable to use, maintainable, cheap, efficient, resilient, etc?
If so, I don't care if I wrote it by hand or with an LLM. People who think that building something with an LLM somehow dooms the something to mediocrity are engaging in magical thinking. I can simply use as much or as little LLM as will allow me to meet my quality criteria.
You listed "maintainable", but how do you know your project is maintainable, if you yourself have no understanding of the code base? Presumably the reason is that the AI has managed to maintain the project so far, so it follows that it will be able to do so in the future. But that's not a given. It's more of a prayer.
Exactly this. I use agents every day to either produce tests for code I've written according to the guidelines I set out for it, or to produce the boilerplate code (which is seldom enjoyable) before I get to add the cool stuff.
Furthermore, when I inevitably get stuck on a thornier section of new code, or revisiting a codebase which I've not investigated for some time, I can use the agent to provide ideas and suggestions of where/how to start/get unstuck.
Like any tool - it's how you apply it to the job at hand (and ensuring the job is relevant) that counts.
One way of framing this is that people who prefer to solve problems are actually bad at tinkering and writing good code. Hence the existence of terrible codebases written by devs that “liked to solve problems for the customers”.
It is not that clear-cut: it is not guaranteed that problem-solvers are also good at the tinkering part, nor that tinkerers don’t like to solve problems.
Two independent axes!
I'm as nerdy as they come (my current project is the fourth compiler I've worked on), and I absolutely love this new way of working. There's a lot more time spent in discussion with the agent (an extremely frustrating discussion, to be fair). All of a sudden, there's an extremely high payoff to investing in good fundamentals (namely, clarity of requirements, good tools, etc.), which are the things I want to invest in anyway! If you get these fundamentals right, you can let the agent rip and produce hundreds of PRs that are correct, create workflows that are actually not slop, or ship code that, while not yet as high quality as if you wrote it manually, is quite close, at easily five times the speed.
And throughout this, if I'm ever curious about how the ideas relate to some other topic, I can just ask the agent, "Are we designing XYZ right now? Categorically, is it this?" Lots of really cool discussions to be had.
I might be less enthusiastic if I was just shipping CSS changes and the like.
But isn't that what Linux admins said when Cloud and Platform Engineering became a "thing"?
Puppet, Chef, Ansible: they were taking the fun out of system administration; Cloud was going to take away my job. But what happened is that roles changed: SRE, Platform Engineering, dare I say DevOps engineers (whatever those are), all emerged.
Software engineering is going through the same transformation with the same knee-jerk reactions from both sides of the argument. On one side, AI runs rampant with no guardrails, like OpenClaw people burning thousands in tokens, OpenClaw calling them randomly at 2am, etc. On the other side, we have a petition asking the Node.js Technical Steering Committee to ban AI-assisted code from Node.js core.
AI is here, it's here to stay, I believe that those of us that will be successful will find the middle ground.
And even then - I still read the code it generates, and if I see a better way of doing something I just step in, write a partial solution, and then sketch out how the complete solution should work.
i could be wrong, but i'm pretty sure that end-users get upset when a change takes a long time or it ends up breaking something for them.
just because people are finding that agents or whatever are speeding changes up now doesn't necessarily mean they won't encounter a slow-down later when the codebase becomes an un-maintainable mess. technical debt is always a thing, even with machines doing the work (the agent/machine still has to parse a codebase to make changes).
What makes you think that AI couldn’t make the same changes without breaking it whether you modify the code or not? And you do have automated unit tests, don’t you?
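A minimal sketch of the kind of automated guard the comment is pointing at. The business rule and all names below are hypothetical, invented purely for illustration; any test runner works the same way.

```python
# Hypothetical business rule that an agent must not silently break when it
# edits surrounding code. Names are made up for this sketch.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount() -> None:
    # These assertions fail loudly if a generated change alters the rule.
    assert apply_discount(100.0, 25.0) == 75.0
    assert apply_discount(19.99, 0.0) == 19.99
```

Run under pytest (or any runner) after every agent change, this turns "did it break anything?" from a manual review question into an automatic check.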
Right now I have a 5000-line monolithic vibe-coded internal website that is at most going to be used by 3 people. It mixes Python, inline CSS and JavaScript with the API. I haven’t looked at a line of code. The IAM role for the Lambda runtime has limited permissions (meaning the code can’t do anything the permissions don’t allow). I used AWS Cognito for authorization, and I validated the security of the endpoints and the permissions of the database user.
Neither Claude nor Codex have any issues adding pages, features and API endpoints without breaking changes.
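The least-privilege setup described above can be illustrated with a policy document like the following. The actions, table name, and ARNs here are placeholders I invented for the sketch, not the commenter's actual configuration.

```python
# Hypothetical least-privilege execution-role policy for a Lambda runtime,
# expressed as the standard IAM JSON policy structure in a Python dict.
# Every action and resource ARN below is a placeholder for illustration.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Only the specific table the site needs, nothing else.
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/internal-site",
        },
        {
            # Allow the function to write its own logs.
            "Effect": "Allow",
            "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": "arn:aws:logs:*:*:*",
        },
    ],
}
# With only these permissions granted, the vibe-coded handler cannot touch
# any other AWS resource, no matter what the generated code tries to do.
```

The design point is that the blast radius is bounded by the role, not by trusting the generated code.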
By definition, coding agents are the worst they will ever be right now.
i have a rule of thumb based on past experience: circa 10k lines per developer involved, reducing as the codebase size increases.
> 5000 line
so that's currently half a developer according to my rule of thumb.
what happens when that gets to 20,000 lines...? that's over the line, in my experience, even for the human who wrote it. it takes longer to make changes. changes increasingly go out in a more and more broken state. more and more tests have to be written for each change to try and stop it going out broken. more work needs to be done for a feature of equal complexity compared to when we started, because now the rest of the codebase is what adds complexity to making changes. and that gets worse the more we add.
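the rule of thumb above can be sketched as a toy model. the 10k-lines-per-developer baseline comes from the comment itself; the decay factor modelling "reducing as the codebase size increases" is my own illustrative assumption, not part of the original claim.

```python
# Toy formalization of the rule of thumb. The 10k baseline is from the
# comment; the decay term (capacity shrinking as the codebase grows) is an
# assumed shape for illustration, not a validated model.
def maintainable_loc_per_dev(total_loc: int, base: int = 10_000) -> float:
    # assumed: per-developer capacity shrinks as the whole codebase grows
    return base / (1 + total_loc / 100_000)

def devs_needed(total_loc: int) -> float:
    return total_loc / maintainable_loc_per_dev(total_loc)

print(round(devs_needed(5_000), 2))   # roughly half a developer, as above
print(round(devs_needed(20_000), 2))  # past what one maintainer can handle
```

whatever exact shape you pick for the decay, the qualitative point is the same: maintenance load grows faster than linearly with codebase size.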
these agent things have a tendency to add more code, rather than the most maintainable code. it's why people have to review and edit the majority of generated code for features beyond CRUD webapp functionality (or similar boilerplate). so, given time and more features, 5k --> 10k --> 20k --> ... too much for a single human being if the agent tools are no longer available.
so let's take it to a bit of a hyperbolic conclusion ... what about agents and a 5,000,000 line codebase...? do you think these agents will take the same amount of time to make a change in a codebase of that size versus 5,000 lines? how much more expensive do you think it could get to run the agents at that size? how about increases in error rate when making changes? how many extra tests need to be added for each feature to ensure zero breakage?
do you see my point?
(fyi: the 5 million LoC is a thought experiment to get you to critically think about the problem of technical debt related to agents as codebase size increases; i'm not saying your website's code will get that big)
(also, sorry i basically wrote most of this over the 20 minutes or so since i first posted... my adhd is killing me today)
20K lines of code is well within the context window of any modern LLM. But just like no person tries to understand everything and keep the entire context in their brain, neither do modern LLMs.
Also documentation in the form of MD files becomes important to explain the why and the methodology.
Generally speaking, I try to ensure that the LLM is using core abstractions throughout the codebase in a consistent manner. This makes it easier for me to review any changes it makes.
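One concrete shape the "consistent core abstractions" point can take is a single sanctioned data-access seam. The class and method names below are hypothetical, purely to show the idea.

```python
# Hypothetical "core abstraction": one sanctioned data-access seam. If every
# feature (human- or agent-written) goes through this class, reviewing a
# change means checking calls against a small, familiar surface instead of
# ad-hoc queries scattered through the codebase.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str

class UserRepository:
    """The one sanctioned way to read and write users in this sketch."""
    def __init__(self) -> None:
        self._rows: dict[int, User] = {}

    def get(self, user_id: int) -> User | None:
        return self._rows.get(user_id)

    def save(self, user: User) -> None:
        self._rows[user.id] = user

repo = UserRepository()
repo.save(User(id=1, name="Ada"))
assert repo.get(1) is not None and repo.get(1).name == "Ada"
```

When the LLM routes every feature through abstractions like this, a reviewer only needs to verify the calls, not re-derive the data-access logic each time.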
Sort of a devil's advocate question: if you write and review your tests, and the functional and non-functional requirements and the human tests for usability pass, why does the code matter?
Non-functional requirements: performance, security, reliability, logging, etc.?
I like programming for fun, but professional software engineering has never been more than very occasionally fun to me. I do it because it pays well.
Most companies use some variant of the sprint/"agile" methodology, which means that you as the programmer are similar to an assembly line worker. You don't control the pace, you rarely get the chance to actually finish anything as much as you would like to so you don't get the satisfaction of a finished product, you get little downtime in between tickets or projects to feel a sense of satisfaction at what you have done before you move on to something else.
I totally understand why businesses operate this way. It's simple: if you try not to operate this way, you increase the likelihood that your competitors will release more rapidly and take all your market share. It's the basic evolutionary logic. If you can release a decent but buggy product six months faster than the competitor can release a better and less buggy product, there is a good chance that you will drive them out of business. It all makes sense, but it doesn't result in a pleasant experience for me as a programmer.
The job is also very sedentary and it puts stress on your eyes and your hands. Of course I'm not going to compare myself to a coal miner, but the fact remains that in its own ways the job is more rough on the body than some people might expect. Meanwhile the intellectual and constantly changing nature of the field means that you can never rest on your laurels - the job requires constant mental engagement. It's hard to do it well while multitasking and thinking about other interesting things at the same time.
If jobs in this field did not pay so well, I don't think I'd ever even consider doing this as a career. It just doesn't have nearly enough upsides for me. The only upside I can think of besides the money is that you get to spend time interacting with intelligent people. But one can get that in other places.
Coding with the help of AI is a big improvement for me because just automating away the boilerplate and greatly reducing the time that needs to be spent in doing reading, research and experimentation already takes away some of the main time-sinks of the job. And while I appreciate code's deterministic and logical elegance, I've written more than enough code in my life to not miss writing it directly. I also enjoy writing English, after all. It's refreshing for me to spend my time composing precise English instructions for the AI instead of writing in a programming language. My English skills are above average, so this also helps me to leverage one of my talents that had previously been underutilized. Well, my English skills always came in handy when I needed to talk to coworkers or managers, but I meant that they had been underutilized in the act of actually creating software.
> Another way to look at this is that AI coding agents take the fun out of a software engineer's job.
Completely backwards - the fun in the job should be to solve problems and come up with solutions. The fun in the job is not knowing where to place a semicolon.
>> Another way to look at this is that AI coding agents take the fun out of a software engineer's job.
> Completely backwards - the fun in the job should be to solve problems and come up with solutions.
Aren't the coding agents supposed to be doing that too? You give them the problem, they code up a solution, then the engineer is left to review it to see if it's good enough.
> The fun in the job is not knowing where to place a semicolon.
That's like such a minor and easy-to-do thing that I'm surprised you're even bringing it up.
Eh, that’s not at all how I do it. I like to design the architecture and spec and let them implement the code. That is a fun skill to exercise. Sometimes I give a little more leeway in letting them decide how to implement, but that can go off the rails.
imho “tell them what you want and let them come up with a solution” is a really naive way to use these tools nearly guaranteed to end up with slopware.
the more up-front design I’ve put thought into, the more accurate they usually are in delivering, to the point I don't need to spend very much time reviewing at all. and this is a step I would have had to do anyway if doing it by hand, so it feels natural, results in far more correct code more often than I could have produced on my own, and allows multitasking several projects at once, which would have been impossible before.
I think I'm going to let people decide for themselves what they enjoy in their job rather than pretending I know better than they do what they should and should not enjoy.
> the fun in the job should be to solve problems and come up with solutions
Who are you to tell anyone what the fun "should" be?
Personally, I find writing code very fun, because building the solution is also very gratifying.
Besides which, in my experience, until you actually write the code you haven't proven that you solved anything. It's so easy to think you have solved a problem when you haven't, but you won't figure that out until you actually try to apply your solution.
> The fun in the job is not knowing where to place a semicolon.
This can be solved with simple linters, no need for LLMs
Except you kind of do -- understanding data structures, understanding software engineering concepts, all of the things that you learn as a good engineer, those are ways that you help guide the LLM in its work.
I don't think kids are learning those things in 2026, they just ask an LLM.
Someone posted on here the other day about how they were taking a non-credit writing class in college so as to improve their writing; that was the reason the course existed. 90% of the class was kicked out because they were using LLMs to write for them, when the entire purpose of the class was to improve one's own writing.
Why do you think it will be any different with programming?
> Except you kind of do -- understanding data structures, understanding software engineering concepts, all of the things that you learn as a good engineer,
Companies aren't investing in AI because they want to solve the problem of semicolon placement. They want AI to solve problems and come up with solutions. Then they want to fire most of their programmers and force the rest to do nothing but check over and fix the slop their marketing departments are churning out.
I don't know why they'd stop at most programmers instead of all programmers. And the marketing department will also be AI. Companies want AI to remove the need for any labor so they can more directly gain money based on already having money.
They'll need at least a few programmers because AI doesn't actually work very well and fixes will be required. The marketing department may end up replaced by AI but so far marketers have convinced companies that they're so essential that even the most popular and well known brands in the world feel the need to spend billions on more and more marketing. If anyone can talk their way into staying employed it'll be marketers.
Exactly, the fun part is when the code works and does what you wanted it to do. Writing code itself is not fun. People forget this because they get small wins / dopamine hits along the way, a clever function, an elegant few lines of code, a bug fix, but the majority of that time coding is just a grind until the end where you get the big dopamine hit.
Fun is not measured objectively. Different people find different things fun. I enjoy writing code very much (in addition to solving big problems; one can enjoy both).