zdragnar's comments

> I think we all jumped on the AI mothership with our eyes closed

Oh no, there's plenty of us willing to say we told you so.

What's more interesting to me is what it's going to look like if big companies start removing "AI usage" from their performance metrics and cease compelling us to use it. More than anything else, that's been the dumbest thing to happen with this whole craze.


Where in the US are you? I was able to book a visit with my primary the very next day less than a month ago.

Not the person you replied to but I'm in North Texas and I just recently had to reschedule my physical. And yup, the next appointment is 2 months out.

I also had cancer in the past and you might think that that would mean I get faster appointments. I do not.

And I have a very, very, very good PPO plan.


> I also had cancer in the past and you might think that that would mean I get faster appointments. I do not.

Sadly you do not, maybe because lower life expectancy -> lower return on treatment "investment".


That was my thinking... even for specialists, I can generally get into a new one within a few weeks.

My SO is on state Medicaid (cancer) and does experience the kinds of waits mentioned above... so I guess it does follow similarly for government/state-backed healthcare, whereas I'm mostly out of pocket.

But even when I had relatively typical coverage, I didn't have issues getting into a doctor more often than not. I think getting my sleep study was the longest wait I had for anything, they were months backed up with appointments... but my kidney and retina specialists were somewhat easy to get started with.


As usual when people say "the US", we're papering over the fact that the United States is really 50 countries in a trench coat.

> the United States is really 50 countries in a trench coat.

Appropriate attire... when you're in a trench :)


> One of the few reliable barometers of an organisation (or their products) is the wtf/day exclaimed by new hires.

Eh, I don't think this is exactly as reliable as you'd expect.

My previous job had a fairly straightforward code base but fairly poor reliability for the few customers we had, and the WTF portions usually weren't the ones that caused downtime.

On the other hand, I'm currently working on a legacy system with daily WTFs from pretty much everyone, with a greater degree of complexity in a number of places, and yet we get fewer bug reports and at least an order of magnitude if not two more daily users.

With all of that said... I don't think I've used any of Microsoft's new software in years and thought to myself "this feels like it was well made."


The rapid decay of WTF/day over time applies to both new employees and new customers.

> currently working on a legacy system

"Legacy" is the magic word here! Those customers are pissed, trust me, but they've long ago given up trying to do anything about it. That's why you don't hear about it. Not because there are no bugs, but because nobody can be bothered to submit bug reports after learning long ago that doing so is futile.

I once read a paper claiming that for every major software incident (crash, data loss, outage, etc...) only somewhere between one in a thousand and one in ten thousand will be formally reported up to an engineer capable of fixing the issue.

I refused to believe that metric until I started collecting crash reports (and other stats) automatically on a legacy system and discovered to my horror that it was crashing multiple times per user per day, and required on average a backup restore once a week or so per user due to data corruption! We got about one support call per 4,500 such incidents.


The customers aren't pissed, we're doing demos to new departments and lining up customizations and expansion as quickly as we can. We're growing faster than ever within our largest customer.

I also didn't say there are no bugs or complaints, I said the system is more stable. But yes, there are fewer bugs and complaints, especially on the critical features.

I didn't use the word legacy to mean abandoned, just that it's been around a long time, we're maintaining it while also building newer features in newer tech, as opposed to my previous company which was a green field startup.


> But yes, there are fewer bugs and complaints

How do you know?

By that question I mean: Do you think there are fewer bugs because you hear fewer complaints from humans, or because you have a no-humans-involved mechanism for objectively evaluating the rate of bugs?

Even if you have a mechanical method for collecting bug reports, crash logs, or whatever, that can still obscure the true quality of the codebase.

One such example that I keep thinking about was the computer game Path of Exile. It has "super fans" who all have 10,000 hours of playtime and will swear up and down that it is one of the best games ever. When I first played it, I found so many little bugs and issues that I had more fun jotting them down than actually playing the game! I collected pages and pages of bullet points. None were crash bugs that would have been logged, and every one was the type of thing that players would eventually learn to work around by avoiding the scenarios that caused the issue. I.e.: "Don't click too fast after going through a door because your orientation will be random on the other side, so you might be sent back to where you came from", that kind of thing.

Honestly and objectively measuring the quality of a software application (or any product) is hard.


> since you couldn’t reliably stop it the computer itself and new teams with new computers come in.

Wifi connection settings in Windows have a "metered connection" setting, which disables automatically downloading updates. I don't recall exactly when this was introduced, but I had to use it for a year while I was stuck on satellite internet. You can even set data caps and such.

Of course, it's always off by default, and I have no idea if there's any way to provision the connection via enterprise admin to default to on for a particular network (I would assume not) so you'd be stuck hoping everyone that comes in does the right thing.


It's a good setting. I've found it gets reset sometimes from Windows updates, so you must remain vigilant.

So, does this ban all news related to politicians?

Newspapers and television programs sell time and space via advertisements, and there is more in the world than could conceivably fit.

Therefore, every inclusion is an editorial decision. Any positive or negative opinion, any review of a biography or book about a politician, every interview is now a contribution in kind - after all, the time and space have value, which are included in this law as "anything of value".

Basically, this is literally what the Citizens United decision boiled down to - a blatant infringement on free speech. People HATE Citizens United because it lets companies donate money, but this is the flip side of the equation.


"(b) The term does not include the distribution of bona fide news, commentary, or editorial content unless the publishing entity is owned or controlled by a political party, a political committee, or a candidate".

What’s “bona fide news?” Does it include the World Socialist Web Site? MS NOW? Newsmax? Russia Today?

Generally it’s not advisable for the government to have the power to ban political communication and decide on a case-by-case basis what communication falls into the banned classes.


Just like it says in the First Amendment! Congress shall make no law except…

If this thing passes it’s a dead letter to at least the current SCOTUS.


It stores plugins as strings in the database, then pulls those strings back and evals them as PHP on requests.

"Better coded" is very much a subjective assessment.


Thank you very much for sharing your research results!

I really appreciate your work, and even more that you took the time and risk of exposing your findings. I wish more people did this.


We're in the process of building new gigawatt datacenters for the sole purpose of doing this stuff. If we end up not needing them, there's gonna be a whole lot of capacity sitting around soaking up ongoing maintenance costs.

For example, of the five new data centers being planned in Wisconsin, the two I know of that have public energy consumption estimates will need more electricity, at 3.9 gigawatts, than all of the residential electric usage in Wisconsin combined.

https://www.wpr.org/news/data-centers-could-cost-wisconsins-...


All I know is I never want to hear another person talk about how my personal electrical usage is excessive after all the power usage needed for these data centers. My house should be able to feel comfortable in the summer if we're building this many data centers.

They don't make any of the documentation for those settings easy to find or understand because the support contracts make them so much money.

Before, that could create a moat.

Soon, it will be table stakes to put scattered internal communications, notes, documents into an AI’s knowledge base, where the information can no longer hide.

When that fails, the AI can read the code itself, so that the settings and how to change them are easily explained in simple terms. Actually, this is possibly even better than letting the scattered internal information serve as an intermediate layer.


That works for small customers who actually want to spend time customizing things themselves. Big customers love having to sign support contracts, because it gives them someone to blame when something goes wrong. Nobody else gets to touch any of the settings or knobs to avoid breaking anything.

Being big is the actual moat.


There are pretty much two usage patterns that come up all the time:

1- automatically add bearer tokens to requests rather than manually specifying them every single time

2- automatically dispatch some event or function when a 401 response is returned, to clear the stale user session and return the user to a login page.

There's no reason to repeat this logic in every single place you make an API call.

Likewise, every response I get is JSON. There's no reason to manually unwrap the response into JSON every time.

Finally, there are some nice mocking utilities for axios for unit testing different responses and error codes.

You're either going to copy/paste code everywhere, or you will write your own helper functions and never touch fetch directly. Axios... just works. No need to reinvent anything, and there's a ton of other handy features the GP mentioned as well that you may or may not find yourself needing.
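
For illustration, both patterns together look roughly like this with axios interceptors (getToken and redirectToLogin are stand-ins for whatever your app's session handling actually does):

    import axios from 'axios';

    const api = axios.create({ baseURL: '/api' });

    // 1- attach the bearer token to every outgoing request
    api.interceptors.request.use((config) => {
        config.headers.Authorization = `Bearer ${getToken()}`;
        return config;
    });

    // 2- clear the stale session on any 401 response
    api.interceptors.response.use(
        (res) => res,
        (err) => {
            if (err.response && err.response.status === 401) {
                redirectToLogin();
            }
            return Promise.reject(err);
        }
    );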


Interceptors are just wrappers in disguise.

    const myfetch = async (req, options = {}) => {
        // attach the auth header to every request (`token` assumed in scope)
        options.headers = options.headers || {};
        options.headers['Authorization'] = token;

        const res = await fetch(new Request(req, options));
        if (res.status === 401) {
            // do your thing: clear the session, redirect to login, etc.
            throw new Error("oh no");
        }
        return res;
    }
Convenience is a thing, but it doesn't require a massive library.

That fetch requires so many users to rewrite the same code - code that was already handled well by every existing node HTTP client - says something about the standards process.

It could also be trivially written for XMLHttpRequest or any node client if needed. Would be nice if they had always been the same, but oh well - having a server and client version isn't that bad.

Because it is so few lines, it is much more sensible to have everyone duplicate that little snippet manually than to import a library and write interceptors for it...

(Not only because the integration with the library would likely be more lines of code, but also because a library is a significant liability on several levels that must be justified by significant, not minor, recurring savings.)


> Because it is so few lines it is much more sensible to have everyone duplicate that little snippet manually

Mine's about 100 LOC. There's a lot you can get wrong. Having a way to use a known working version and update that rather than adding a hundred potentially unnecessary lines of code is a good thing. https://github.com/mikemaccana/fetch-unfucked/blob/master/sr...

> import a library and write interceptors for that...

What are you suggesting people would have to intercept? Just import a library you trust and use it.


Your wrapper does do a bunch of extra things that aren't necessary, but pulling in a library here is a far greater maintenance and security liability than writing those 100 lines of trivial code for the umpteenth time.

So yes, you should just write and keep those lines. The fact that you haven't touched that file in 3 years is a great anecdotal indicator of how little maintenance such a wrapper requires, and so the primary reason for using a library is non-existent. Not like the fetch API changes in any notable way, nor do the needs of the app making API calls, and as long as the wrapper is slim it won't get in the way of an app changing its demands of fetch.

Now, if we were dealing with constantly changing lines, several hundred or even thousand lines, etc., then it would be a different story.


But you said so yourself: they are necessary… otherwise you would just use fetch. This reasoning is going around in circles.

Why the 'but'? Where is the circular reasoning? What are you suggesting we have to intercept?

- Don't waste time rewriting and maintaining code unnecessarily. Install a package and use it.

- Have a minimum release age.

I do not know what the issue is.


but it does for massive DDoS :p

> Likewise, every response I get is JSON.

fetch responses have a .json() method. It's literally the first example in MDN: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/U...

It's literally easier than not using JSON, because otherwise I have to think about whether I want `response.text()` or `response.body()`.
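
e.g., with a placeholder endpoint:

    const data = await fetch('/api/items').then((res) => res.json());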


That's such a weak argument. You can write about 20 lines of code to do exactly this without requiring a third-party library.

Helper functions seem trivial and not like you’re reimplementing much.

Don't be silly, this is the JS ecosystem. Why use your brain for a minute and come up with a 50 byte helper function, if you can instead import a library with 3912726 dependencies and let the compiler spend 90 seconds on every build to tree shake 3912723 out again and give you a highly optimized bundle that's only 3 megabytes small?

> usage patterns

IMO interceptors are bad. They hide, at the place the API call is being made, what might get transformed along with it.

> Likewise, every response I get is JSON. There's no reason to manually unwrap the response into JSON every time.

This is not true unless you are only interfacing with your own backends. Even then, why not just make a helper that unwraps the response as JSON by default but can be passed an arg to parse it as something else?
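
For instance, a sketch of such a helper (error handling kept minimal):

    // unwrap as JSON by default; pass { parse: 'text' } to opt out
    const request = async (url, { parse = 'json', ...options } = {}) => {
        const res = await fetch(url, options);
        if (!res.ok) throw new Error(`HTTP ${res.status}`);
        return parse === 'json' ? res.json() : res.text();
    };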


One more use case for Axios is that it automatically follows redirects, forwarding headers and, more importantly, omitting or rewriting the headers that shouldn't be forwarded for security reasons.

fetch automatically follows redirects, and fetch will forward your headers; omitting or rewriting headers is how security breaks… now a scraper got through because it's masquerading as Chrome.

> And from a product standpoint, there's really only one reason to ship a native app

I have worked on several applications where the product managers wanted to make our web app something that could be installed through the app store, because that's how users expect to get apps.

I know people who don't even type search queries or URLs into a browser, they just tell the phone what they want to find and open whatever shows up in a search result.

I've tried pushing back against the native app argument and won once because customers actually reported liking that we had a website instead of an app, and other times because deploying an app through the stores was more work than anyone had time to take on. Otherwise, we would've been deploying through app stores for sure.

Marketing gets plenty of data from google analytics or whatever platform they're using anyway, so neither they nor product managers actually care about the data from native APIs.


> I know people who don't even type search queries or URLs into a browser, they just tell the phone what they want to find and open whatever shows up in a search result.

I don't know exactly what you are talking about here, but if I wanted to find a restaurant that is local, I definitely just type 'Miguels' into the browser, and then it searches Google for 'Miguels' automatically. It knows my location, so the first result is going to be their website and phone number, and I can load the website for the menu or just call if I know what my family wants.

However, even then, I'd rather have an app for them where I can enter the items I want to order. I've noticed apps tend to be more responsive. Maybe it's just the coding paradigm: applications tend to load all of the content up front, and the actions I take in the app just change what is displayed, whereas on a website every 'action' triggers an API call that requires a response before it moves on to the next page? This makes a big difference when my connection isn't great.

I also find it easier to swap between active apps than between tabs of a browser. If I want to check on the status of the order or whatnot, it's easier to swap to the app and have that refresh than it is to click the 'tab' button of the browser and find the correct tab the order was placed in.


>I definitely just type 'Miguels' into the browser

So you open Safari first. I think that’s a step further than what’s being described.

For many people it’s just “hey Siri, book a table at Miguel’s.” And then they click whatever app, web result, or native OS feature pops up.

It’s a chaotic crapshoot that I have never been able to stomach personally. For others, that’s just called using their phone.


This is pretty much what I meant. Even if the browser is what comes up, the fact is the user isn't interacting with the browser as a browser. They're interacting with their phone through an app (voice => search). They don't understand website URLs, or what search engines are doing. That makes it harder for them to return (engagement metrics!) than tapping the icon on their phone that opens up directly to the app.

It's also why so many websites try to offer push notifications or, back when it seemed like Apple wouldn't cripple it, the "add to home screen" CTA or whatever it was that would set the website as an icon. Anything that gives the user a fast path back to engaging, without having to deal with interacting with the browser itself, is what PMs and marketing want.


I want to be really clear that I'm not trying to argue with your experience, just to understand it... but:

> However even then, I'd rather have an app for them where I can enter in the items I want to order.

Really? You want to download a different app for every restaurant you order from?


I recently took a trip to Hawaii, particularly Maui. I've never been before, but I hit the weather lottery and got to experience the Kona low system that raked the island with copious rain. Anyway... What I found, in the areas where we were staying, was that there were a lot of food trucks that looked to have great coffee, poke, food in general. But with the weather, it was unclear if a given food truck was 1) accessible or 2) open, due to other weather issues.

What I found was that none of these food trucks (and even some relatively nice restaurants) had operational web pages. One had a domain but, for whatever reason, they posted the menu to <some-random-name>.azurewebsites.net. And that page just... didn't work. The rest got even worse. Most had listings on Google Maps, but the hours and availability did not reflect reality. We went to a coffee food truck that wasn't there, even though the day before they had commented on a review. Then we had others with a link to an Instagram page, some of which claimed to house their "current" hours and location, yet we tried going to two of them and neither was open.

It's 2026. If you have your business on Google Maps, you should be able to update hours and availability quickly. But beyond that, it costs almost nothing to host a simple availability page on a representative domain. And even if you don't want to deal with the responsibility of a domain, there are multitudes of other options. Now, I'm guessing that this isn't the norm for most of these vendors, or at least I hope not. But we weren't there during the worst of the rain; our timing hit the second low that went through. So while there was a significant amount of rain and some of the more treacherous switchback roads were closed, I'm talking about food trucks that were off of very accessible main roads & highways. My SO reached out via IG to about a half dozen vendors and only one responded, 2 days later.

Clearly, tech and simple services like easy-to-update availability and location are not accessible (or known) to these types of businesses. But it definitely does not require an app (nor should it). Having these simple "status" sites would have made the friction the weather caused significantly less than what we experienced.

I don't want an app when I'm trying to find out if a restaurant is open. I, personally, don't find apps any more responsive. In many cases, a lot of web sites are littered with far too many components that are not required. I've been doing a lot with Datastar and FastAPI recently, and some of the tools I've thrown together (that handle hundreds of MB of data in-browser) load instantly and are blazing fast. So much so that I've been asked how I "did that". It's amazing how fast a web app can be when it's not pulling data from 27 different sources and loading who knows what for JS.

