
In your experience, can you take the tech-debt-riddled code and ask Claude to come up with an entirely new version that fixes the tech debt/design issues you've identified? Presumably there's a set of tests you'd keep the same, but you could leverage the power of AI in greenfield scenarios to just do a rewrite (while letting it see the old code). I don't know how well this would work; I haven't gotten to the heavy tech debt stage in any of my projects as I do mostly prototyping. I'd be interested in others' thoughts.


I built an inventory tracking system as an exercise in "vibe coding" recently. I built a decent spec in conversation with Claude, then asked it to build it. It was kind of amazing - in 2 hours Claude churned out a credible looking app.

It looked really good, but as I got into the details the weirdness really started coming out. There are huge functions that interleave many concepts, and there are database queries everywhere. Huge amounts of duplication. It makes it very hard to change anything without breaking something else.

You can of course focus on getting the AI to simplify and condense. But that requires a good understanding of the codebase. Definitely no longer vibe-coded.

My enthusiasm for the technology has really gone in a wave. From "WOW" when it churned out 10k lines of credible looking code, to "Ohhhh" when I started getting into the weeds of the implementation and realising just how much of a mess it was. It's clearly very powerful for quick and dirty prototypes (and it seems to be particularly good at building decent CRUD frontends), but in software and user interaction the devil is in the details. And the details are a mess.


At the moment, good code structure for humans is good code structure for AIs and bad code structure for humans is still bad code structure for AIs too. At least to a first approximation.

I qualify that because, hey, someone may come back and read this 5 years from now, and I have no idea what you will be facing then. But at the moment this is still true.

The problem is, people see the AIs coding, I dunno, what, 100 times faster minimum in terms of churning out lines? And it just blows out their mental estimation models and they substitute an "infinity" for the capability of the models, either today or in the future. But they are not infinitely capable. They are finitely capable. As such they will still face many of the same challenges humans do... no matter how good they get in the future. Getting better will move the threshold, but it can never remove it.

There is no model coming that will be able to consume an arbitrarily large amount of code goop and integrate with it instantly. That's not a limitation of Artificial Intelligences, that's a limitation of finite intelligences. A model that makes what we humans would call subjectively better code is going to produce a code base that can do more and go farther than a model that just hyper-focuses on the short-term and slops something out that works today. That's a continuum, not a binary, so there will always be room for a better model that makes better code. We will never overwhelm bad code with infinite intelligence because we can't have the latter.

Today, in 2026, providing the guidance for better code is a human role. I'm not promising it will be forever, but it is today. If you're not doing that, you will pay the price of a bad code base. I say that without emotion, just as "tech debt" is not always necessarily bad. It's just a tradeoff you need to decide about, but I guarantee a lot of people are making poor ones today without realizing it, and will be paying for it for years to come no matter how good the future AIs may be. (If the rumors and guesses are true that Windows is nearly in collapse from AI code... how much larger an object lesson do you need? If that is their problem they're probably in even bigger trouble than they realize.)

I also don't guarantee that "good code for humans" and "good code for AIs" will remain as aligned as they are now, though it is my opinion we ought to strive for that to be the case. It hasn't been talked about as much lately, but it's still good for us to be able to figure out why a system did what it did and even if it costs us some percentage of efficiency, having the AIs write human-legible code into the indefinite future is probably still a valuable thing to do so we can examine things if necessary. (Personally I suspect that while there will be some efficiency gain for letting the AIs make their own programming languages that I doubt it'll ever be more than some more-or-less fixed percentage gain rather than some step-change in capability that we're missing out on... and if it is, maybe we should miss out on that step-change. As the moltbots prove that whatever fiction we may have told ourselves about keeping AIs in boxes is total garbage in a world where people will proactively let AIs out of the box for entertainment purposes.)


Perhaps it depends on the nature of the tech debt. A lot of the software we create has consequences beyond a particular codebase.

Published APIs cannot be changed without causing friction on the client's end, which may not be under our control. Even if the API is properly versioned, users will be unhappy if they are asked to adopt a completely changed version of the API on a regular basis.

Data that was created according to a previous version of the data model continues to exist in various places and may not be easy to migrate.

User interfaces cannot be radically changed too frequently without confusing the hell out of human users.


> ask claude to come up with an entirely new version that fixes the tech debt/design issues you've identified?

I haven't tried that yet, so not sure.

Once upon a time I was at a company where the PRD specified that the product needs to have a toggle to enable a certain feature temporarily. Engineering implemented it literally, and it worked perfectly. But it was vital to be able to disable the feature too, which should've been obvious to anyone. Since the PRD didn't mention that, it was not implemented.

In that case, it was done as a protest. But AI is kind of like that, though out of sheer dumbness rather than protest.

The story is meant to say that with AI it is imperative to be extremely prescriptive about everything, or things will go haywire. So a full rewrite will probably work well only if you manage to have very tight test coverage for absolutely everything. Which is pretty hard.


Take Claude Code itself. It's got access to an endless amount of tokens and many (hopefully smart) engineers working on it and they can't build a fucking TUI with it.

So, my answer would be no. Tech debt shows up even if every single change made the right decisions, and this type of holistic view of projects is something AIs absolutely suck at. They can't keep all that context in their heads, so they are forever stuck in local maxima. That has been my experience at least. Maybe it'll get better... any day now!


Would you mind expanding on what you do for anticorruption? It's something I've been thinking about and wanting to get into lately. It seems like complete poison to democracy, and more should be done to bring it to light wherever it occurs.


A good place to start is OSINT (open source intelligence) for your city/municipality because it requires little commitment, is scoped with regards to complexity and amount of information, and usually risk-free. Gather publicly available information about the companies in your area, who owns/runs them, your city council, any ongoing projects, the processes of funding stuff with public money and so on. Don't bother finding the best collection method or way to structure all the data, just start, you will figure things out on the way. Also be aware of your personal bias, which might make you dismiss important information or affect your judgement.

The next steps highly depend on where you live. Your HN profile says Australia, so at least safety-wise you are in a better spot. Connect to people in your area (preferably offline), for example by organizing a local meetup, maybe there is one already. Activities can range from exchanging ideas to spreading awareness in your community to actively going against corrupt affairs. Make sure you know what and who you are up against, or you will have a very bad time.

Anticorruption is a group effort because it requires a lot of work and often special knowledge (info tech, law, finance, opsec, public relations and propaganda, ...) and, more importantly, a group provides safety from corrupt actors. On your own you will not be able to deal with lawsuits, misinformation, character assassination and worse.


I'd argue that the trend - now deeply, deeply embedded - of alternative facts and straight lying is more important, as it opens the door to all manner of corruption.


Generally there is a "temperature" parameter that can be used to add some randomness or variety to an LLM's outputs by changing the likelihood of the next word being selected. This means you could just keep regenerating the same response and get different answers each time; each time it will give different plausible responses, and this is all from the same model. This doesn't mean it believes any of them, it just keeps hallucinating likely text, some of which will fit better than others. It is still very much the same brain (or set of trained parameters) playing with itself.
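
To make the mechanics concrete, here is a minimal sketch of temperature-scaled sampling (Python/NumPy, with a made-up 4-token vocabulary and hypothetical logits, not any particular model's API): the raw scores are divided by the temperature before the softmax, so a higher temperature flattens the distribution and regeneration produces more varied picks, while a temperature near 0 makes the top-scoring token win almost every time.

    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=None):
        # Divide logits by temperature: <1 sharpens, >1 flattens the distribution.
        rng = rng or np.random.default_rng()
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())  # softmax (numerically stable form)
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    # Hypothetical logits for a tiny 4-word vocabulary; resampling gives
    # different "plausible" picks each time, more so at higher temperature.
    logits = [2.0, 1.5, 0.3, -1.0]
    print([sample_next_token(logits, temperature=1.5) for _ in range(5)])
    print([sample_next_token(logits, temperature=0.2) for _ in range(5)])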


I wanted to play around with the temperature, but unfortunately o1 only supports '1' as the value.


I have used Kagi for quite a while now, and I use it pretty much exclusively. I was unhappy with Google ignoring many terms in my search queries and giving me results that I generally considered to be 'intro' pages and generic content, even when my searches were very specific. I have found Kagi much better. I don't use any of the advanced stuff like summarisation or AI features; I just want search results that have my keywords in them.


Google search is almost useless for anything but the most basic queries now. Anything technical and it ignores half of the search terms like you said.


Google Search has been in decline since they came up with Google+ and removed the Plus Operator from Google Search at the same time (and replaced it with quotes that don't do the same thing). About 13-14 years ago.


If you add minus signs in front of popular sites you get better results. However, you then end up with a search that looks something like:

   -google -twitter -reddit -amazon -youtube some search


intext: and quoting solves this…


I haven't found quoting helps much. I also feel like I shouldn't have to craft search queries with a lot of inurl or other tags or quoting. Kagi just seems to work better. It's worth $10 a month to me to not have to worry about it; I use search engines a lot.


and intext: ? I didn't say that quoting solves it


[flagged]


I have been using search engines for 30 years; my queries are not vague. I put as many keywords and "inurl"s and whatnot in as I can manage. I don't use Kagi blocklists. Google results for my specific queries are garbage. I am much happier with Kagi. If you are happy with Google, that's fine too. Perhaps we are just in different bubbles and mine is not well served by Google.


> I put as many keywords and "inurl"s and whatnot in as I can manage

More keywords does the opposite of narrowing the query. Unless you use quotes and/or other operators, it’s enough for the document to contain only one of your keywords (anywhere, including meta tags) to be a match. Hope this helps.


You’re paying Google via the profile they have of you. Remember, there's no such thing as a free lunch while you're feeling so superior.


If you use Kagi and visit any sites with google analytics etc. then you are paying Kagi with your bucks AND Google with your profile.

Oh and you can use Google completely anonymously. You can't do it with Kagi. All your searches go with your billing address (but you can trust Vlad they won't ever use it to profile you).

Maybe the aura of superiority from paying $10 per month for something 99% of people don't pay for blinds you to the above facts.


It costs money to run a service. If you are not paying for that service, there is an obvious incentive to monetize your data. If you are paying a reasonable price for a service, that business can sustain itself without using advertising.

Your billing address is not something advertisers can use to track you. Sure, if you use Kagi to commit crimes you may not be anonymous to the police. But there are a lot of people who do not want to be profiled by ad networks, yet do not consider providing their billing information to be a privacy issue.

Your comment about "the aura of superiority" is dismissive and a little confrontational. The commenter you were responding to was clear about why he likes Kagi - better results. I agree with him.

I find that searches for product reviews and similar commercial terms return much higher-quality results than on Google. I also find that I get better results when I am searching for errors or lines from logfiles. Even when quoted, I find that Google will often return results that partially match my quote while ignoring the important part, which makes the search useless.


> If you are not paying for that service, there is an obvious incentive to monetize your data.

And it goes away if you pay for it?

> Sure, if you use Kagi to commit crimes you may not be anonymous to the police.

The "nothing to hide" arguments, gotcha.

> But there are a lot of people who do not want to be profiled by ad networks

And using Kagi helps with that how? You know that if you open any website from results, you are still profiled by ad networks?

> Your comment about "the aura of superiority" is dismissive and a little confrontational

I was replying to someone who accused me of being superior (because I know how to use Google? lol). Garbage in, garbage out.

> Even when quoted, I find that Google will often return results that partially match my quote ignoring the important part

As I said, if there is no exact match I see "no results for your search". It is a daily occurrence. It smells of planted misinformation, sorry. Considering you are a newly created account as well.


Have you used Kagi? It’s easily 10x if not 100x better than Google.


I have and stopped. A few people said that domain blacklisting is the killer feature worth $10. Well, actually it felt like paying to do work and locking myself in. You need to maintain those lists, new spam websites appear daily, and it does not really matter if you use precise queries. Most people don't know that Google is capable of precise queries, and Kagi capitalizes on that.


I don't use the domain blacklisting at all, encounter spam ~never, and almost always get what I'm looking for in the top 5 results.

1) Google now requiring special skill to use is a totally legitimate way in which Google is inferior

2) I'm actually quite good at Googling and know much of the advanced syntax -- still sucks as of a few years ago!

Obviously people's mileage may vary, but if anyone else is reading this and hasn't tried Kagi: you should. It is unambiguously and inarguably worth trying.


> I don't use the domain blacklisting at all, encounter spam ~never, and almost always get what I'm looking for in the top 5 results.

I encountered spam with Kagi half a dozen times. It was labeled "trustworthy". It is 100x worse than just seeing spam. Having to pay for it just adds insult to injury.

If there is a search engine it will be gamed. What we need is not to trust the algorithm but to have a good query mechanism.

> Obviously people's mileage may vary, but if anyone else is reading this and hasn't tried Kagi: you should

If your searches are uncomplicated and you are okay relying on the algorithm without thinking, absolutely. Buy it for your grandma and set it as her default search? 100%. But if you are a power Google user, you will be disappointed.


I don’t even know what “trustworthy” label you’re referring to, but also no, literally no one is gaming a search engine with like 0.01% market share.


I thought you used Kagi. It's the label to the right of the search result. When I used Kagi it could change color depending on how good Kagi thinks the result is (I also remember a bar indicator under it, but maybe something changed again). You click on it and you can downrank/uprank the site. If I wanted to really have only good results I would have to do it every day. I'm sure it was good for Kagi, because they used it for their crowd-sourced ranking, but I didn't feel good doing it and paying for the privilege.

I left after I noticed that Google gets me mostly the same results. Plus I have more privacy and less tracking across all web if I don't need to leave private mode.

> literally no one is gaming a search engine with like 0.01% market share.

Kagi is a front-end for Google and other search engines. Gaming Google and Bing is literally gaming Kagi.


Are you at all confident that this isn't hallucinated? I'd never trust an answer like this from an LLM.


You need to break down the lignin without damaging the cellulose. Boiling and mashing doesn't do this very well; you still get bundles of fibres sticking together, and you really need them all to separate. The bonds holding lignins together are slightly different between grasses and wood. For grasses, sodium carbonate or hydroxide dissolves the lignin pretty well (I haven't tried, but you could probably get away with wood ash and some salt for this, to provide sodium and high pH), and allows you to make some pretty nice paper. This, as far as I know, includes things like papyrus, sugarcane etc. Wood is different: you need sodium hydroxide and sodium sulfide (the kraft or "sulfate" process, named for the sodium sulfate added as a makeup chemical) to dissolve the lignin. This contributes to the smell of the papermaking process where wood is used. I don't think the ancients knew about the use of sulfides, so their papers were mostly made of grasses or prepared animal skin (vellum).


Of course there has been 'climate change' and 'global warming' before, caused by things like large volcanic eruptions (the Deccan Traps) or meteorites. But they have pretty universally coincided with mass extinctions. And when we say global warming is anthropogenic, we mean that this time humans are causing the warming by emitting a lot of CO2 (and methane and N2O etc). It is fairly unequivocal at this point. This time it's entirely within our power to prevent another mass extinction, because this time we are causing it. Why wouldn't we try?


Australian houses aren't bought with American dollars though. If that were the case, wouldn't we see a big correlation between house prices and USD exchange rates? And why wouldn't the correlation be with other currencies, such as China's? Honest questions; it seems to me like massive immigration/low supply explains things pretty well.


What is the coconut coir for? How did you stick everything together? Does it all go in a bag of some sort? How does the price compare to retail mattresses? How comfortable is it?


I haven't tried it on iOS, but the Brave browser has an inbuilt ad blocker. Might be worth a try. I am quite happy with Brave and Kagi on Android.


