The problem is that the code unconditionally dereferences the pointer, which would be UB if it were null. That makes it legal to optimize out any code path that relies on the pointer being null, even one that occurs earlier in program order.
Shouldn't control flow diverge if the assert is triggered when NDEBUG is not defined? Pretty sure assert is defined to call abort when triggered and that is tagged [[noreturn]].
Right, so strictly speaking C++ could do anything here when passed a null pointer: even though assert terminates the program, the C++ compiler cannot see that, and there is then undefined behaviour in that case.
> because even though assert terminates the program, the C++ compiler cannot see that
I think it should be able to. I'm pretty sure assert is defined to call abort when triggered and abort is tagged with [[noreturn]], so the compiler knows control flow isn't coming back.
I'm sorry, but what exactly is the problem with the code? I've been staring at it for quite a while now and still don't see what is counterintuitive about it.
Depends on where you're coming from, but some people would expect it to enforce that the pointer is non-null, then proceed. Which would actually give you a guaranteed crash in case it is null. But that's not what it does in C++, and I could see it not being entirely obvious.
A lot of compilers will optimize out a NULL pointer check because dereferencing a NULL pointer is UB.
Because assert will not run the following code in the NULL-pointer case, this exact code is AFAIK still defined behavior. But if some other code dereferenced the pointer beforehand, the check would be optimized out - there are corner cases here that aren't obvious on the surface.
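To make that corner case concrete, here's a hypothetical sketch (not the snippet under discussion): the pointer is dereferenced before the check, so the compiler may treat the check as dead code.

    /* Hypothetical sketch: p is dereferenced before the null check,
     * so the optimizer may assume p != NULL and delete the check
     * as dead code. */
    int first_value(const int *p) {
        int v = *p;        /* UB if p == NULL */
        if (p == NULL)     /* may be silently removed */
            return -1;
        return v;
    }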
This kind of thing was always theoretically allowed, but really started to become insidious within the past 5-10 years. It's probably one of the more surprising UB things that bites people in the field.
GCC has a flag "-fno-delete-null-pointer-checks" to specifically turn off this behavior.
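For reference, it's passed like any other codegen flag (the file name here is a placeholder); the Linux kernel builds with it for exactly this reason:

    gcc -O2 -fno-delete-null-pointer-checks main.c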
There is an actual Linux kernel exploit caused by this behavior (the 2009 tun driver bug), where the compiler optimized out code that checked for a NULL pointer and returned an error.
Sure, but none of that is relevant to just the code snippet that was posted. The compiler can exploit UB in other code to do weird things, but that's just C being C. There's nothing unexpected in the snippet posted.
The issue is caused by C declaring that dereferencing a null pointer is UB. It's not really an issue with assertions.
You can get the same optimisation-removes-code for any UB.
> There's nothing unexpected in the snippet posted.
> The issue is caused by C declaring that dereferencing a null pointer is UB. It's not really an issue with assertions.
> You can get the same optimisation-removes-code for any UB.
I disagree - it's a 4-line toy example, but in a 30-40 line function these things are not always clear. The actual problem is that if you compile with NDEBUG defined, the assert (and with it the nullptr check) is removed, and the optimiser can (and will, currently) do unexpected things.
The printf sample above is a good example of the side effects.
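For illustration, a sketch of that kind of sample (a reconstruction, not the exact snippet from upthread):

    #include <assert.h>
    #include <stdio.h>

    /* With -DNDEBUG the assert expands to nothing, the dereference
     * becomes unconditional, and the compiler may assume p != NULL
     * from then on. */
    void log_value(const int *p) {
        assert(p != NULL);          /* compiled out under NDEBUG */
        printf("value: %d\n", *p);  /* UB if p == NULL */
    }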
> Analyzing CI failures overnight and surfacing summaries
What does that look like, on EC2 with Python? Because with Claude it's just that prompt, while with your solution it's infra + security groups + multiple APIs + whatever code you actually write.
I would suggest the prompt is an example of garbage in that's going to produce garbage out. Sitting down to confront the problem you're solving will show this, while Claude is going to happily spit out what looks like a plausibly functional system.
So, for example, the only "analysis" of CI failures is which systems failed and who/what committed the changes to those things. The only way AI would help me here is if the system was so janky that the sole primitive I can use is textual analysis of log files. Which, granted, is probably real for a lot of software firms, but I really hope I have better build and test infrastructure than that.
> I would suggest the prompt is an example of garbage in that's going to produce garbage out. Sitting down to confront the problem you're solving will show this, while Claude is going to happily spit out what looks like a plausibly functional system.
I think this shows the value.
> Which granted is probably real for a lot of software firms
Here's the rub, though: for many, many people it's a huge improvement over what they have right now.
Firewire user here! I have an old-but-very-functional rack mixer (Presonus) that will cost £700+ to replace, _plus_ I have to configure and set up the new one. I have a 2007 Macbook Pro that I keep around just for interfacing with it.
Same: a single StudioMix mixer with 2x FP10’s in the racks. This setup is just so lovely and functional I don’t want to upgrade it really, it just plain works. I have an old iMac as the DAW for the job, but the idea of replacing it with an ARM-based system, if it works, is so very appealing…
> The computer is a Qotom Q305p 3205u.
> Total price with power adaptor: $60
I'd love to have a handful of tiny computers that I could do fun stuff on, but there is absolutely nothing available in the UK for less than double that price.
> Then I found out it was broken. I contributed a fix. The fix was ignored and there was never any release since November 2024.
This seems like a pretty good reason to fork to me.
> Sending HTTP requests is a basic capability in the modern world, the standard library should include a friendly, fully-featured, battle-tested, async-ready client. But not in Python,
Or JavaScript (well, Node), or Go (net/http's client is _worse_ than urllib IMO), Rust, Java (HttpURLConnection is the same as Python's), even dotnet's HttpClient is... fine.
Honestly, the thing that consistently surprises me is that requests hasn't been standardised and brought into the standard library.
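For comparison, the stdlib route versus requests today (URLs are placeholders; requests is third-party):

    import json
    import urllib.request

    # stdlib: build a Request, open it, parse the body yourself
    req = urllib.request.Request(
        "https://api.example.com/data",
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=20) as resp:
        data = json.load(resp)

    # third-party requests: one line
    # import requests
    # data = requests.get("https://api.example.com/data", timeout=20).json()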
For the record, you're most likely not even interacting with that API directly if you're using any current framework, because most just provide automagically generated clients and you only define the interface with some annotations.
To me what makes this very "Java" is the arguments being passed, and all the OOP stuff that isn't providing any benefit and isn't really modeling real-world-ish objects (which IMHO is where OOP shines).

.version(Version.HTTP_1_1) and .followRedirects(Redirect.NORMAL) I can sort of accept, but it requires knowing what class and value to pass, which is lookups/documentation reference. These are spread out over a bunch of classes. But we start getting so "Java" with the next ones.

.connectTimeout(Duration.ofSeconds(20)) (why can't I just pass 20 or 20_000 or something? Do we really need another class and method here?)

.proxy(ProxySelector.of(new InetSocketAddress("proxy.example.com", 80))), geez that's complex.

.authenticator(Authenticator.getDefault()), why not just pass bearer token or something? Now I have to look up this Authenticator class, initialize it, figure out where it's getting the credentials, how it's inserting them, how I put the credentials in the right place, etc.

The important details are hidden/obscured behind needless abstraction layers IMHO.
I think Java is a good language, but most modern Java patterns can get ludicrous with the abstractions. When I was writing lots of Java, I was constantly setting up an ncat listener to hit so I could see what it was actually writing, then having to hunt down where a certain thing was being done and figure out the right way to get it to behave correctly. Contrast with a typical TypeScript HTTP request, where you can mostly tell just from reading the snippet what the actual HTTP request is going to look like.
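For reference, the builder being picked apart above, assembled in one place (this mirrors the example from the java.net.http.HttpClient javadoc, which appears to be where the quoted calls come from):

    import java.net.Authenticator;
    import java.net.InetSocketAddress;
    import java.net.ProxySelector;
    import java.net.http.HttpClient;
    import java.time.Duration;

    class ClientSetup {
        public static void main(String[] args) {
            HttpClient client = HttpClient.newBuilder()
                    .version(HttpClient.Version.HTTP_1_1)
                    .followRedirects(HttpClient.Redirect.NORMAL)
                    .connectTimeout(Duration.ofSeconds(20))
                    .proxy(ProxySelector.of(new InetSocketAddress("proxy.example.com", 80)))
                    .authenticator(Authenticator.getDefault())
                    .build();
        }
    }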
> but it requires knowing what class and value to pass
Unless you use a text editor without any coding capabilities, your IDE should show you which values you can pass. The alternative is to have more methods, I guess?
> why can't I just pass 20 or 20_000 or something
20 what? Milliseconds? Seconds? Minutes? While I wouldn't write the full Duration.ofSeconds(20) (you can save the "Duration."), I don't understand how one could prefer a version that makes you guess the unit.
Yes it is, can't add anything here. There's a tradeoff between "do the simple thing" and "make all things possible", and Java chooses the second here.
> .authenticator(Authenticator.getDefault()), why not just pass bearer token or something?
Because this Authenticator is meant for prompting a user interactively. I concur that this is very confusing, but if you want a Bearer token, just set the header.
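I.e. something like this (the URI and token are placeholders):

    import java.net.URI;
    import java.net.http.HttpRequest;

    String token = "...";  // placeholder credential
    HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://api.example.com/data"))
            .header("Authorization", "Bearer " + token)
            .build();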
> Unless you use a text editor without any coding capabilities, your IDE should show you which values you can pass. The alternative is to have more methods, I guess?
Fair enough, as much as I don't like it, in Java world it's safe to assume everyone is using an IDE. And when your language is (essentially) dependent on an IDE, this becomes a non-issue (actually I might argue it's even a nice feature since it's very type safe).
> 20 what? Milliseconds? Seconds? Minutes? While I wouldn't write the full Duration.ofSeconds(20) (you can save the "Duration."), I don't understand how one could prefer a version that makes you guess the unit.
I would assume milliseconds and would probably have it in the method name, like timeoutMs(...) or something. I will say it's very readable, but if I was writing it I'd find it annoying. But optimizing for readability is a reasonable decision, especially since 80% of coding is reading rather than writing (on average).
I didn't mention IHttpClientFactory - just HttpClient. I will concede that ASP manages to be confusing quite often. As for the latter, guidelines are not requirements any more than "RTFM" is; you can use HttpClient without reading the guidelines and be just fine.
Yeah this is all over Rust codebases too for good reason. The argument is that default params obfuscate behaviour and passing in a struct (in Rust) with defaults kneecaps your ability to validate parameters at compile time.
Your HTTP client setup is over-complicated. You certainly don't need `.proxy` if you are not using a proxy or if you are using the system default proxy, nor do you need `.authenticator` if you are not doing HTTP authentication. Nor do you need `.version`, since there is already a fallback to HTTP/1.1.
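The minimal setup is one call, with sane defaults (HTTP/2 with HTTP/1.1 fallback, no proxy, no authenticator):

    import java.net.http.HttpClient;

    HttpClient client = HttpClient.newHttpClient();  // all defaults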
I mean, don't get me wrong, I work with Java basically 8 hours per day.

I also get _why_ the API is as it is - it essentially boils down to the massive Inversion of Control fetish the Java ecosystem has.

It does enable code that "hides" implementation very well; the quoted example's authentication API lets you authenticate in any way you can imagine, as in literally any way imaginable.

It's incredibly flexible. Want to only send the request out after you've touched a file, sent off a message through a message broker, and then maybe flexed by waiting for the response of that async communication and using it as a custom attribute in the payload, in addition to a dynamically negotiated header set according to the response of a DNS query? Yeah, we can do that! And the caller doesn't have to know any of it... at least as long as it works as intended.

Same with the proxy layer: the client is _entirely_ extensible. That is what Inversion of Control enables.

It just comes with the unfortunate side-effect of forcing the dev to be extremely fluent in enterprisey patterns. I don't mind it anymore, myself. The other day I even implemented a custom "dependency injection"-inspired system for data in a very dynamic application at my day job. I did that so the caller won't even need to know what data he needs! It just gets automatically resolved through the abstraction. But I strongly suspect that if a junior developer who hasn't gotten used to the Java ecosystem comes across it, he'll be completely out of his depth as to how the grander system works - even though a dev that's used to it will likely understand the system within a couple of moments.

Like everything in software, it has advantages and disadvantages. And Java has just historically always tried to "hide complexity", which in practice paradoxically multiplies complexity _if you're not already used to the pattern being used_.
Thanks for the thoughtful response, I appreciate it.
Yeah, I remember the first time I encountered a spring project (well before boot was out) and just about lost my shit with how much magic was happening.
It is productive once you know a whole lot about it though, and I already had to make that investment so might as well reap the rewards.
Go's net/http Client is built for functionality and complete support of the protocol, including even such corner cases as trailer headers (https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/...), which for a lot of people reading this message is probably the first time they've heard of them.
It is not built for convenience. It has no methods for simply posting JSON, no automatic marshaling of a JSON response from a body, no "fluent" interface, no automatic handling of querystring parameters in a URL, no direct integration with any particular authentication/authorization scheme (other than Basic Authentication, which is part of the protocol). It only accepts streams for request bodies and only yields streams for response bodies; while this is absolutely correct for a low-level library, and any "request" library that mandates strings with no ability to stream in either direction is objectively wrong, string handling is a rather nice convenience to have available when you know the request or response is going to be small. And so on and so on.
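To make that concrete, a sketch of posting JSON with nothing but net/http (URL and payload are placeholders):

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
    )

    func main() {
        // No PostJSON helper: marshal and set the content type by hand.
        payload, err := json.Marshal(map[string]string{"name": "gopher"})
        if err != nil {
            panic(err)
        }
        resp, err := http.Post("https://api.example.com/users",
            "application/json", bytes.NewReader(payload))
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status)
    }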
There's a lot of libraries you can grab that will fix this, if you care, everything from clones of the request library, to libraries designed explicitly to handle scraping cases, and so on. And that is in some sense also exactly why the net/http client is designed the way it is. It's designed to be in the standard library, where it can be indefinitely supported because it just reflects the protocol as directly as possible, and whatever whims of fate or fashion roll through the developer community as to the best way to make web requests may be now or in the future, those things can build on the solid foundation of net/http's Request and Response values.
Python is in fact a pretty good demonstration of the risks of trying to go too "high level" in such a client in the standard library.
The comment I replied to was talking about sending HTTP requests. Go's server-side net/http is excellent; the client side is clunky, verbose, and suffers from many of the problems that Python's urllib does.
>The Requests package is recommended for a higher-level HTTP client interface.
Which was fine when requests was the de-facto-standard, only player in town, but at some point modern problems (async, HTTP/2) required modern solutions (httpx), and thus ecosystem fragmentation began.
Well, the reason for all the fragmentation is that the Python stdlib doesn't have the core building blocks for an async HTTP or HTTP/2 client in the way requests could build on urllib.
The h11, h2, httpcore stack is probably the closest thing to what the Python stdlib should look like to end the fragmentation but it would be a huge undertaking for the core devs.
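To make the layering concrete, here's a sans-io sketch with h11 (written from memory, so treat details as approximate): the library only turns protocol events into bytes and bytes into events, and the caller owns all the actual I/O.

    import socket
    import h11

    # h11 is sans-io: it converts protocol events to bytes and back,
    # and the caller owns the socket.
    conn = h11.Connection(our_role=h11.CLIENT)
    sock = socket.create_connection(("example.com", 80))

    request = h11.Request(
        method="GET",
        target="/",
        headers=[("Host", "example.com"), ("Connection", "close")],
    )
    sock.sendall(conn.send(request))
    sock.sendall(conn.send(h11.EndOfMessage()))

    while True:
        event = conn.next_event()
        if event is h11.NEED_DATA:
            conn.receive_data(sock.recv(4096))
        elif isinstance(event, h11.Response):
            print("status:", event.status_code)
        elif isinstance(event, h11.Data):
            pass  # body bytes arrive here, chunk by chunk
        elif isinstance(event, h11.EndOfMessage):
            break

    sock.close()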
> but it would be a huge undertaking for the core devs.
More importantly, it would be massively breaking to remove the existing functionality (and everyone would ignore a deprecation), and confusing not to (much like it was when 2.x had both "urllib" and "urllib2").
It'd be nice to have something high level in the standard library based on urllib primitives. Offering competition to those, not so much.
It's fine, but it's sharp-edged, in that it's recommended to use IHttpClientFactory to avoid the dual problem of socket exhaustion (if creating/destroying lots of HttpClients) versus DNS caching outliving DNS changes (if using a very long-lived singleton HttpClient).
And while this article [1] says "It's been around for a while", it was only added in .NET Framework 4.5, which shows it took a while for the API to stabilise. There were other ways to make web requests before that of course, and also part of the standard library, and it's never been "difficult" to do so, but there is a history prior to HttpClient of changing ways to do requests.
For modern dotnet however it's all pretty much a solved problem, and there's only ever been HttpClient and a fairly consistent story of how to use it.
Python’s urllib2 (now urllib.request) started out in the year 2000 [0].
.NET’s WebRequest was available in .NET Framework 1.1 in 2003 [1].
But since then, Microsoft noticed the issues with WebRequest and came up with HttpClient in 2012. It has some issues and footguns, like those related to HttpClient lifetime, but it's a solid library. On the other hand, the requests library for Python started in 2011 [2], but the stdlib hasn't seen many improvements.
I don’t know what’s worse - in 2026 someone genuinely suggesting Jenkins as a viable GHA alternative, or me agreeing with that.
Jenkins has possibly the worst user experience of any piece of software I've had to use in the last few years. It's slow, brittle, somehow both heavyweight and featureless, littered with security vulns due to its architecture, impossible to navigate, and has absolutely no standardisation of usage.
Yep, I went through the exact same pains. We desperately wanted to move to something else and I kept steering us away from Jenkins as I'd experienced its pains at a previous role. We evaluated tons of different options and begrudgingly settled on Jenkins too.
[0] https://dev.epicgames.com/documentation/en-us/unreal-engine/...