Hacker News: ebiederm's comments

But it happens successfully.

The code base is Xorg rather than XFree86 because of one such fork.

GCC went through the egcs fork.

OpenOffice became LibreOffice in a fork.

When the leadership of a project fails to keep the volunteers behind them, such forks happen.


It is not breaking userspace if there are no programs in userspace that care.

If you have a program that cares please report it.

The evidence is that no one has had a program that cares since 2016. A decade of holding on to dead code seems enough.


I don't have a program that uses it, so it's not my place to watch mailing lists for it getting deprecated. Basic searching of GitHub, however (what I'd have expected them to do before removing it), reveals 20k files that contain IPPROTO_UDPLITE, and many projects that use it directly. Probably the most renowned is FFmpeg (!!!).

https://github.com/FFmpeg/FFmpeg/blob/master/libavformat/udp...

https://github.com/search?q=IPPROTO_UDPLITE&type=code

https://github.com/tormol/udplite

https://github.com/nplab/packetdrill

https://github.com/python/cpython/blob/4e96282ee42ab51cf325b...

https://github.com/search?q=repo%3Aviveris%2Frohc%20udplite&...
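For reference, this is roughly what a program that cares looks like. A sketch assuming Linux, where Python has exposed the constant since 3.9 (the IANA protocol number for UDP-Lite, per RFC 3828, is 136):

```python
import socket

# Fall back to the raw IANA number where the constant is missing.
IPPROTO_UDPLITE = getattr(socket, "IPPROTO_UDPLITE", 136)
print(IPPROTO_UDPLITE)  # 136

# A program using UDP-Lite asks for the protocol explicitly; this needs
# kernel support, hence the defensive wrapper.
try:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, IPPROTO_UDPLITE)
    s.close()
    print("UDP-Lite socket created")
except OSError as exc:
    print("no UDP-Lite support:", exc)
```

If the try branch fails on a current kernel, that is exactly the userspace breakage being discussed.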


Chuck Moore of Forth fame.


When Wirth talks about modules and abstraction, I believe he was talking about what was known as the software crisis: in particular, the observation that as program size increases, the number of possible interactions between program components grows quadratically.

Modules in particular, and good abstractions in general, make the number of interactions between components tractable.

The N^2 component interaction problem is real and it continues to cause problems.
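The quadratic growth is easy to make concrete: N components have N*(N-1)/2 possible pairwise interactions. A tiny sketch (the module arithmetic at the end is an illustrative simplification, assuming components interact only within their module plus one interface per module):

```python
def pairwise_interactions(n: int) -> int:
    """Possible pairwise interactions among n components."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, pairwise_interactions(n))
# 10 -> 45, 100 -> 4950, 1000 -> 499500: the component count grows
# 100x while the interaction count grows ~10,000x.

# If the 1000 components are split into 20 modules of 50 that interact
# only through their module interfaces, the count drops dramatically:
modular = 20 * pairwise_interactions(50) + pairwise_interactions(20)
print(modular)  # 24690 vs 499500
```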

Even with our best solutions there is room for improvement.

Last I paid attention, there was a whole culture that had developed around the administration of Cisco routers, because things that should be unrelated affecting each other is a real-world problem for the administrators of those routers.

Any time something changes in software and something unrelated is affected, this general problem is making its appearance.

There is also a long-term tension between abstractions and whole-system simplicity. The wrong abstraction, or an abstraction poorly implemented, can make things worse.


Wirth was talking about "modular programming", a term that isn't as well-known today as it once was. It's where Modula got its name, and there were entire conferences and journals that arose in response to the term's coinage. Ultimately the label "object-oriented" got a lot more mindshare, even to describe concepts that can be accurately described as modular programming and aren't terribly accurately described as object-oriented (generally lacking one or more of the necessary message-passing and "extreme late-binding" criteria required for O-O).


Message-ID is a requirement for Usenet, where it came from.

It is a requirement for being able to reply to messages and in general for email threading.

Message-ID is a requirement to archive email.

Practically every email client has included Message-ID since dial-up internet was fast and fashionable.

Given all of the above I am amazed more places don't drop email without a Message-ID.

Not including a Message-ID seems to be saying you don't want replies and you don't want your message to be archived. That seems very shady to me.
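For what it's worth, generating a conformant Message-ID is a one-liner in most languages, so there is little excuse for omitting it. A sketch using Python's standard library (email.utils.make_msgid):

```python
from email.message import EmailMessage
from email.utils import make_msgid

# make_msgid() returns an RFC 5322 Message-ID of the form "<unique@host>".
msg = EmailMessage()
msg["Message-ID"] = make_msgid()
msg["Subject"] = "test"
msg.set_content("hello")

mid = msg["Message-ID"]
print(mid)

# Replies then thread by referencing it, which is why omitting the
# header breaks threading and archiving:
reply = EmailMessage()
reply["In-Reply-To"] = mid
reply["References"] = mid
```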


My EV is absolutely terrible range-wise in cold weather. It is EPA rated at 220 miles of range. I only see that when the temperature is at or above 80F.

Most of the winter it tells me I can only do between 100 and 120 miles. It is definitely half the EPA range with climate controls disabled at 0F. (Ask me how I know).

I love driving it in the winter. I don't have a pressing need to go long distances, so that is not a current concern. Not having to stand outside in the bitter cold to fuel up is absolutely awesome.

There are EVs on the market that do much, much better than mine in cool weather and I now know what to look for.

To really penetrate the Midwest, it will take a car that can realistically do a road trip to Florida from, say, Duluth, MN or Michigan's UP in the winter.

Because not only do folks in the Midwest drive long distances without a second thought, they sometimes do it in the cold of winter so they can get a break from the snow.

So yes still getting 90% of the range at -40C does sound attractive.


> EPA rated at 220 miles of range

That right there is a big problem to begin with. The headline EPA number only reflects reality if you have a mix of city and highway driving. The problem is that people only care about range when driving 75mph. I think the headline EPA number should reflect that reality.


You are right - very few are doing 200 miles of city only driving between charges.


Having moved between states and taken a lot of driver's tests, I can say the exact rules vary between states and over time, including how they are taught.

My first driver's test was yield to the right. Later it was FIFO order of who made it to the stop.

My running interpretation is FIFO order, with yielding to the right in case of ambiguity.
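That interpretation reads like a small algorithm, so here is a hypothetical sketch of it. The direction labels, arrival times, and the next_to_go helper are all my own illustration, not any state's actual law:

```python
# Approaches listed clockwise. For a car entering from direction d, the
# approach on its right is the previous one clockwise: a car coming from
# the north faces south, so its right-hand side is the west approach.
CLOCKWISE = ["N", "E", "S", "W"]

def right_of(d: str) -> str:
    return CLOCKWISE[(CLOCKWISE.index(d) - 1) % 4]

def next_to_go(waiting):
    """waiting: (arrival_time, direction) pairs; FIFO, ties yield right."""
    earliest = min(t for t, _ in waiting)
    tied = [d for t, d in waiting if t == earliest]
    if len(tied) == 1:
        return tied[0]
    # Ambiguity: a simultaneous arrival with no tied car on its right goes.
    for d in tied:
        if right_of(d) not in tied:
            return d
    return tied[0]  # all four arrived at once: the rule gives no answer

print(next_to_go([(0, "N"), (1, "E")]))  # N arrived first -> N
print(next_to_go([(0, "N"), (0, "W")]))  # tie; W is on N's right -> W
```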


Does the 2026 Nissan Leaf meet your criteria for a dumb car?

All its connected features appear to come from Android Auto or Apple CarPlay, a.k.a. from a connection to your phone.

I like the looks of it because it appears to be a serious EV unlike too many which are just some company getting their toes wet.


Did the new Leaf get dumber? I have an old 2019 model and it’s connected. In the mobile app I see its location, turn on AC etc.


Does Nissan still not put telematics in the base model in 2026?


Looking at the specs page, the base model includes "dual 12.3-inch widescreen displays." Why? What the hell is wrong with modern cars?


Lots of (most?) cars are going to LCDs for the instrument panel. The second screen is the infotainment.


My previous car had its infotainment system reboot several times while I was on the expressway. The idea of my instrument panel, or other more critical systems, crashing and rebooting while driving terrifies me.


The infotainment is not connected to the ECU and other car control electronics, at least not on my Tesla or my F-150 Lightning. You can reboot them to your heart's content while driving down the road.


Yes, but it is still rather unnerving when part of the car goes dark. It also makes me question the QA on this stuff. If that is crashing, will the other systems be crashing at some point as well? Is there redundancy? These are the questions that went through my mind while hoping the screen would come back on before I missed my exit. Even knowing the systems are completely separate, it spoke to overall quality.


I agree that it is unnerving, but I expect it to be normal in the future. They save a bunch of time by being able to push out a 90% product with low risk of catastrophe and just push updates later to fix it up. As a bonus, they can market the frequent updates as a benefit rather than cleaning up technical debt they would have had to iron out before shipping the first car.


I've had multiple vehicles have instrument cluster failures while operating them. None of those have been screens. "Analog" gauges have not actually been analog for a while. They're all digital controls being read by a computer.

Even a carbureted motorcycle I owned from the early 2000s had "analog" gauges with values given to it from a computer!


> from the early 2000s

For sure, and even earlier -- I had a 1995 Mustang with faux analog gauges, it has definitely been a Thing for decades now.


Backup cameras are an enormous safety improvement. Plus touchscreens are much cheaper than buttons and knobs.


> Backup cameras are an enormous safety improvement.

Sure, however....

> Plus touchscreens are much cheaper than buttons and knobs.

And how much LESS safe is using a touchscreen while operating a motor vehicle? It's literally no different from using an iPad.


There are large implementation differences in touch screens. In my wife's car it takes several seconds: turn the radio on, wait for the splash screen, press the driver's heat control, wait for it to appear (100s of ms, long enough to notice), then find the button in the middle of the screen before I can finally change the heated seats. My car always has that button at the bottom of the screen in the same place, so it takes milliseconds to look and see.


You still lose the tactile feedback of the button though. It's much harder to hit it while not looking versus a physical knob.

There's a reason Euro NCAP requires physical climate controls for car models to get a five-star safety rating starting in 2026.


> Backup cameras are an enormous safety improvement.

You know that a backup camera can be added to practically any car, right? My ~2002 Toyota has a Pioneer deck from around 2007 (I guess?) that supports reversing-camera input. My wife's 2012 Toyota hybrid has a reversing camera using some POS cheap Chinese deck that's so shit it doesn't even support Bluetooth audio.

No part of reversing cameras are dependent on any of the "modern" trends in cars that are being discussed here.


I responded to a comment about screens.


You don't need 'dual 12.3" touch screens' for a reversing camera.


I should have mentioned a digital dashboard is also cheaper than a traditional one, I guess. But isn't that obvious?


What's that got to do with reversing cameras?


Dual screens. One for infotainment, including the backup camera, the other for the dash.

Have you never seen a newer model car?


I feel like you're deliberately missing the point.

You don't need them to have a reversing camera. Literally millions of cars over the past two decades have perfectly fine reversing cameras using the screen of a regular double-DIN deck (or fold-out single-DIN deck).


I, too, felt you were being intentionally dense in this thread. We've just been talking past each other.

I don't see a meaningful distinction between a screen on a DIN unit and an integrated screen.

With Android Auto or the iOS equivalent -- a hard requirement for most car buyers today -- a touchscreen is basically required.

Other "smart" features aren't required, but I'm not surprised car companies want to try to extract value from in-car tech. It's got nothing to do with providing value to consumers.


> I don't see a meaningful distinction between a screen on a DIN unit and an integrated screen.

Someone questioned why a car needs two 12" touch screens.

To which you replied

> Backup cameras are an enormous safety improvement.

My entire point is, that there's zero relationship between having a backup camera, and needing a 12" touchscreen, or a touch screen of any kind.

If your backup camera needs a touch screen, you've already failed. The entire point is that it activates automatically and deactivates automatically.

They've been available for literally decades - Toyota had a production model with a reversing camera in the fucking 80s.

Nothing else you've said since is related to your claim "Backup cameras are an enormous safety improvement" and that claim is completely unrelated to OP's question about why a car needs not one but two 12" touch screens.


Does Nissan still air cool their batteries or have they wised up?


The 2026 redesign has put in a proper liquid cooling loop.

(Battery heating is inexplicably an extra $300 option, and not available on the base trim AFAICS?)


I appreciate that I am not the only one seeing the connection between property based testing and proofs.

I will quibble a little with their characterization of proofs as being more computationally impractical.

Proof verification is cheap. On a good day it is as cheap as type checking; type checking is itself a kind of proof verification. That said, writing proofs can be tricky.

I am still figuring out what writing proofs requires. Anything beyond what your type system can express currently requires a different set of tools (Rocq, Lean, etc.) than writing asserts and ordinary programs. Plus, writing proofs tends to involve lots of mundane details that are tedious to write.

So while I agree proofs seem impractical, I won't agree that the reason is computational cost.
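The connection can be made concrete without any proof assistant. Below is a minimal sketch of a property-based test in plain Python (no external library, just random sampling): the property is a universally quantified claim, and the test samples the quantifier that a proof would discharge for all inputs. The sort property is my own illustrative choice, not anything from the article:

```python
import random
from collections import Counter

def prop_sort(xs: list[int]) -> bool:
    """The 'theorem': sorted(xs) is ordered and is a permutation of xs."""
    ys = sorted(xs)
    ordered = all(a <= b for a, b in zip(ys, ys[1:]))
    permutation = Counter(ys) == Counter(xs)
    return ordered and permutation

# A proof would establish prop_sort for *all* lists; a property-based
# test settles for a random sample of the same quantifier.
random.seed(0)
for _ in range(1000):
    n = random.randint(0, 20)
    xs = [random.randint(-100, 100) for _ in range(n)]
    assert prop_sort(xs)
print("1000 sampled cases passed")
```

A type checker or proof checker verifies such a claim once and for all; the sampling loop is what makes property testing cheap to adopt but weaker in guarantee.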


There is a tradeoff between the compute required to generate a proof and the compute required to check it. Fully generic methods such as SMT solvers require compute exponential in the number of variables in scope and lines of code in a single function. Breaking the exponential requires (and is perhaps equivalent to) understanding the code in sufficient detail (cf https://arxiv.org/abs/2406.11779). In practice, the computational cost of semi-automated proof generation is a significant bottleneck, cf https://dspace.mit.edu/handle/1721.1/130763?show=full and https://jasongross.github.io/papers/2022-superlinear-slownes... .


I've been working on this thing where the proofs (using the esbmc library) check the safety properties and the unit tests check the correctness, so the state space doesn't explode and the verification doesn't take a year to run. It's been working out pretty well so far (aside from spending more time tracking down esbmc bugs than working on my own code) and has found some real issues, mostly integer overflow errors but other ones too.

Kind of loosely based on the paper "A New Era in Software Security: Towards Self-Healing Software via Large Language Models and Formal Verification" (https://arxiv.org/abs/2305.14752) which, I believe, was posted to HN not too long ago.


I don't know if this is realistic, but as a general rule, if I was contracting with someone so that my business would have higher reliability, I would ask for a service-level agreement with an agreed-upon amount the vendor will pay you for every unit of time their service is not up.

At least then your pain is their pain, and they are incentivized to prevent problems and fix them quickly.


Usually those agreements either just give you credits for the same service, pay way less than you lost, or declare that basically everything falls under force majeure.

If it works for you that's great, but when the actual shit hits the fan I don't think you should expect actual compensation.


At our scale I doubt we can get any cloud provider to write custom contracts. But if I had negotiating power, I completely agree.


Nobody that uses Kubernetes and random shit from GitHub would sign such an agreement if they actually had to pay out and could not weasel their way out of it. That would be signing up for near-unlimited liability and business suicide.

Let's assume an incident costs you (the customer) ~$5k, just counting the time it takes to get a professional on very short notice to debug (since the whole promise of managed services is that you no longer need technical staff at all). That's also ignoring the actual cost to your business (lost sales, reputational risk, or missing your own SLAs).

For the provider to be willing to pay out something like this they'd need to charge you monthly several times that amount (otherwise just one incident and they're forever underwater on the LTV). Yet such a monthly amount would make the service unaffordable to all but the most deep-pocketed customers... for whom the impact of an outage on their business would cost even more meaning they'd want the payouts to be even bigger, leading to a catch-22.

High-availability good enough for the provider to put 5-figure sums on the line is actually really hard (there's a reason actual critical stuff like stock exchange order processing or card transactions don't run on the "cloud", nor on Kubernetes for that matter), so the next best thing is make-believe "high availability" where everyone (except the occasional poor soul like you that actually believed the marketing) understands the charade and plays along (because their own SLAs are often make-believe too).

See also: the recent Cloudflare or AWS outages.

