Hacker News

I continue to jump into these discussions because I feel like these upvoted posts completely miss what’s happening…

- guardrails are required to generate useful results from GenAI. This should include clear instructions on design patterns, testing depth, and iterative assessments.

- architecture decision records are one useful way to prevent GenAI from being overly positive.

- very large portions of code can be completely regenerated quickly when scope and requirements change. (skip debugging - just regenerate the whole thing with updated criteria)

- GenAI can write thorough functional and behavioral unit tests. This is no longer a weakness.

- You must suffer the questions and approvals. At no time can you let agents run for extended periods on progressive sets of work. You must watch what is generated. One thing that concerns me about the new 1M-token context on Claude Code is that many will double down on agent freedom. You can’t. You must watch the results and examine functionality regularly.

- No one should care about actual code ever again. It’s ephemeral. The role of software engineering is now molding features and requirements into functional results. Choosing Rust, C#, Java, or TypeScript might matter depending on the domain, but then you stop caring and focus on measuring success.

My experience is rolled up in https://devarch.ai/ and I know I get productive and testable results using it everyday on multiple projects.




> No one should care about actual code ever again. It’s ephemeral.

Caveat: it still works best in a codebase that is already good. So while any one line of code is ephemeral, how is the overall codebase trending? Towards a bramble, or towards a bonsai?

If the software is small and not mission critical, it doesn’t matter if it becomes a bramble, but not all software is like that.


It works great in codebases that are already good, but I think it degrades the quality of the codebase compared to what it was before.

What makes a codebase good depends on the business context; in my case it’s an agile one that can react to discovered business cases. I’ve written great typed helpers that effectively give me typed Mongo operators for most cases, which makes all operations really smooth. But AI keeps finding creative ways of avoiding my implementations, and over time there are more edge cases, thin wrappers, lint-ignore comments, and other funny exceptions, while I’m losing the guarantees I built...
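To be concrete about what I mean by typed helpers, here’s a rough sketch (hypothetical and simplified, not my actual code): a mapped type so Mongo-style operators only accept values matching the field’s declared type.

```typescript
// Hypothetical sketch of a typed Mongo-style filter helper.
// Operators like $gt/$in only accept values matching the document
// field's declared type, so `{ age: { $gt: "18" } }` fails to
// compile instead of failing at runtime.

type TypedFilter<T> = {
  [K in keyof T]?: T[K] | { $gt?: T[K]; $lt?: T[K]; $in?: T[K][] };
};

interface User {
  name: string;
  age: number;
}

// Runtime is just a pass-through; all the value is in the compile-time check.
function filter<T>(f: TypedFilter<T>): TypedFilter<T> {
  return f;
}

const adults = filter<User>({ age: { $gt: 18 } });       // ok
// const broken = filter<User>({ age: { $gt: "18" } });  // compile error
```

When the AI routes around a helper like this with raw collection calls, `any` casts, or lint-ignore comments, that compile-time guarantee quietly disappears.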


> No one should care about actual code ever again. It’s ephemeral.

> very large portions of code can be completely regenerated quickly when scope and requirements change.

This is complete and utter nonsense, coming from someone who hasn't stuck around maintaining a product built this way long enough to see the end result.

All of this advice sounds like it comes from experience instead of theoretical underpinning or reasoning from first principles. But this type of coding is barely a year old, so there's no way you could have enough experience to make these proclamations.

Based on what I can talk about from decades of experience and study:

No natural language specification or test suite is complete enough to allow you to regenerate very large swaths of code without changing thousands of observable behaviors that will be surfaced to users as churn, jank, and broken workflows. The code is the spec. Any spec detailed enough to allow 2 different teams (or 2 different models or prompts) to produce semantically equivalent output is going to be functionally equivalent to code. We as an industry have learned this lesson multiple times.

I'd bet $1,000 that there is no non-trivial commercial software in existence where you could randomly change 5% of the implementation while still keeping to the spec and it wouldn't result in a flood of bug reports.

The advantage of prompting in a natural language is that the AI fills in the gaps for you. It does this by making thousands of small decisions when implementing your prompt. That's fine for one-offs, and it's fine if you take the time to understand what those decisions are. But you can't just let the LLM change all of those decisions on a whim, which is the natural result of generating large swaths of code, ignoring it, and pretending it's ephemeral.


> - No one should care about actual code ever again. It’s ephemeral. The role of software engineering is now molding features and requirements into functional results. Choosing Rust, C#, Java, or TypeScript might matter depending on the domain, but then you stop caring and focus on measuring success.

I think this has always been the case. "Bad programmers worry about the code. Good programmers worry about data structures and their relationships." Perhaps you mean that they shouldn't worry about structures and relationships either, but I think that's a fool's errand. Although, to be fair, neither of those needs to be codified in the code itself, but ignore them at your own peril...


Data structures are still conversational items. I come from the DDD community and adamantly push back on data-first architectures. Modules, or Bounded Contexts, reveal their relationships and data over time.

Perhaps you can think of the modules or "Bounded Contexts" as a type of data structure and the relationships between them. Idk. I don't have a particularly great view of DDD fwiw.

The post is about using LOC as a metric when making any sort of point about AI. Nowhere do I suggest someone shouldn't use it, nor that they should expect negative results if they opt to.

No one I’ve worked with in 40 years has ever seriously used LOC as a measure of progress or success. I honestly don’t know where this comes from.

That’s the odd psychosis here. Everyone knew LOC was a terrible measure. But perhaps the instinctual pull was always there, and now that you can generate tens of thousands of halfway-coherent LOC in hours, our sensibilities are overwhelmed.

Yes, but it comes up in conversations of LLMs a lot. Thus, the rant in question. I think we are in agreement, or at least we lack disagreement, because that is the only stance I endeavored to take in the post.


