Hacker News | ameliaquining's comments

It's not that adversaries can directly see the domain name; this doesn't have anything to do with domain fronting. The issue is that ECH doesn't hide the server's IP address, so it's mostly useless for privacy if that IP address uniquely identifies that server. The situation where it helps is if the server shares that IP address with lots of other people, i.e., if it's behind a big cloud CDN that supports ECH (AFAIK that's currently just Cloudflare). But if that's the case, it doesn't matter whether Nginx or whatever other web server you run supports ECH, because your users' TLS negotiations aren't with that server, they're with Cloudflare.
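For concreteness, here's a hedged sketch of the DNS side of ECH: the server publishes an ECHConfigList in an HTTPS record alongside address hints, so the client can encrypt the SNI, but the connection still goes to the advertised IP. The domain, addresses, and ech payload below are placeholders, not real values.

```dns
; Hypothetical HTTPS record for an ECH-enabled site behind a shared CDN
; (RFC 9460 presentation format; the ech value is a placeholder, not a
; real base64 ECHConfigList).
example.com.  300  IN  HTTPS  1 . alpn="h2,h3" ipv4hint=203.0.113.10 ech="AEb...placeholder..."
```

Even with the SNI encrypted, an observer still sees the connection go to 203.0.113.10, which is why this only helps when many unrelated sites share that address.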

I can't speak for anyone else, but I think I can work around that by moving the site between VPS nodes from time to time. I get bored with my silly hobby sites all the time and nuke the VMs, then fire them up later, which gives them a new IP. I don't know what others might do, if anything.

If I had a long-running site I could do the same thing by having multiple front-end caching nodes using HAProxy or Nginx that come and go, though I acknowledge others may not have the time to do that and most probably would not.


That's not quite it. The issue is that there's no other traffic bound to that IP - ECH doesn't buy you any security, because an observer doesn't even need to look at the content of the traffic to know where it's headed.

Maybe it will be more useful for outbound connections from Nginx or HAProxy to the origin server using ECH, so the destination ISP has no idea what sites are on the origin, assuming that traffic isn't already passing over a VPN.

Anyone who wants to track your users can just follow the IP changes as they occur in real time.
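To illustrate how cheap that tracking is, here's a small self-contained sketch. The resolver is a pluggable stand-in so the logic can be demonstrated offline; a real observer would pass `socket.gethostbyname` (or passive-DNS data). All names and addresses here are illustrative.

```python
# Sketch of the "just follow the IP changes" point: an observer that
# periodically resolves a hostname can log every address change.
from typing import Callable, List, Tuple

def track_ip_changes(resolve: Callable[[], str], polls: int) -> List[Tuple[str, str]]:
    """Return a list of (old_ip, new_ip) transitions seen over `polls` lookups."""
    changes = []
    last = None
    for _ in range(polls):
        ip = resolve()
        if last is not None and ip != last:
            changes.append((last, ip))
        last = ip
    return changes

# Offline demo with a fake resolver standing in for a hopping VPS:
ips = iter(["198.51.100.7", "198.51.100.7", "203.0.113.42"])
print(track_ip_changes(lambda: next(ips), 3))  # [('198.51.100.7', '203.0.113.42')]
```

The point being: nuking and recreating VMs only buys privacy against an observer who never re-resolves the name.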


That's cool. I only make my own mini-CDNs.

There is always the option to put sites on a .onion domain but I don't host anything nearly exciting or controversial enough. For text that's probably a good option. I don't know if Tor is fast enough for binary or streaming sites yet. No idea how many here even know how to access a .onion site.
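For the curious, serving an existing local site as a v3 onion service is mostly a config change; a minimal torrc sketch, assuming a stock Tor install (paths and port illustrative):

```
# Expose a site already listening on localhost:8080 as an onion service.
HiddenServiceDir /var/lib/tor/mysite/
HiddenServicePort 80 127.0.0.1:8080
```

After a restart, Tor writes the generated .onion hostname into the file `hostname` inside HiddenServiceDir.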

I will test out your theory and see if anyone bothers to track my IP addresses and does anything with them. I probably need to come up with something edgy that people would want to block. Ideas for something edgy?


Tor is completely usable at reasonable speeds, even by normies, via Brave.

That's kind of what I suspected, but I have not kept up with it.

Doesn't matter; I (not OP, but also operating a VPS) still want to support this, so that clients can eventually assume all correctly configured servers support it.

It's in private preview. Probably they'll put it in the main docs and such once it's open to everyone.

The tooling for that already exists, since a PR can consist of multiple Git commits and you can look at them separately in the UI. I don't know whether agents are any good at navigating that, but if not, they won't do any better with stacked PRs. Stacked PRs do create some new affordances for the review process, but that seems different from what you're looking for.

Looking at multiple commits is not a good workflow:

* It amounts to doing N code reviews at once rather than a few small reviews which can be done individually

* GitHub doesn't have any good UI to move between commits or to look at multiple at once. I have to find them, open them in separate tabs, etc.

* GitHub's overall UX for reviewing changes, quickly seeing a list of all comments, etc. is just awful. Gerrit is miles ahead. Microsoft's internal tooling was better 16 years ago.

* The more commits you have to read through at once the harder it is to keep track of the state of things.


>It amounts to doing N code reviews at once rather than a few small reviews which can be done individually

I truly do not comprehend this view. How is reviewing N commits in one PR any different from reviewing N separate pull requests? It's the same amount of work either way.


Small reviews allow moving faster for both the author and reviewer.

A chain of commits:

* Does not go out for review until the author has written all of them

* Cannot be submitted even in partial form until the reviewer has read all of them

Reviewing a chain of commits, as the reviewer I have to review them all. For 10 commits, this means setting aside an hour or whatever - something I will put off until there's a gap in my schedule.

For stacked commits, they can go out for review as each commit is ready. I can review a small CL very quickly and will generally do so almost as soon as I get the notification. The author is immediately unblocked, and any feedback I have can be addressed before the author keeps building on top of it.


Let's compare 2 approaches to delivering commits A, B, C.

Single PR with commits A, B, C: You must merge all commits or no commits. If you don't approve of all the commits, then none of the commits are approved.

3 stacked PRs: I approve PRs A and B, and request changes on PR C. The developer of this stack is on vacation. We can incrementally deliver value by merging PRs A and B, since those particular changes are blocking some other engineer's work, and we can wait until the dev is back to fix PR C.


> You must merge all commits or no commits

This seems to be the root of the problem. Nothing stops a reviewer from merging some commits of a PR, except a desire to avoid the Git CLI tooling (or your IDE's support, or...). The central model used in a lot of companies requires the reviewee to do the final merge, but this has never been how Git was meant to be used, and it doesn't have to be used that way. The reviewer can also do merges. Merge (of whichever commits) = approval, in that model.
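To make the "reviewer merges some of the commits" model concrete, here's a minimal sketch using plain Git in a throwaway repo (branch and commit names are illustrative): the reviewer cherry-picks the approved commits A and B onto main and leaves C unmerged.

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q -b main          # -b needs git >= 2.28
git config user.email reviewer@example.com
git config user.name Reviewer
echo base > base.txt && git add base.txt && git commit -qm base
git checkout -qb feature
for c in A B C; do echo "$c" > "$c.txt" && git add "$c.txt" && git commit -qm "$c"; done
git checkout -q main
# Approve A and B only: feature~2 is commit A, feature~1 is commit B.
git cherry-pick feature~2 feature~1
git log --format=%s          # main now has B, A, base; commit C stays on feature
```

The cherry-picked commits get new hashes, which is the usual objection to this workflow, but functionally it's exactly "merge only what was approved."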


Yes, the root of the problem is the workflow of the company being centered around GitHub instead of Git itself.

This feature helps improve GitHub so it's useful for companies that work this way.

At our company, only admin users can actually git push directly to main/master. Everything else HAS to be merged via GitHub and pass through the merge queue.

So this stacked PRs feature will be very helpful for us.


It's crazy that you're getting downvoted for this take.

This isn't reddit people. You're not supposed to downvote just because you disagree. Downvotes are for people who are being assholes, spamming, etc...

If you disagree with a take, reply with a rebuttal. Don't just click downvote.


Historically, HN etiquette is that it's fine to downvote for disagreement. This came from pg himself.

That said, while he hasn't posted here for a long time, this is still in the guidelines:

> Please don't post comments saying that HN is turning into Reddit. It's a semi-noob illusion, as old as the hills.

https://news.ycombinator.com/newsguidelines.html


Well, I stand corrected.

How would that work? Commits in different repos aren't ordered relative to one another. I suppose you could have a "don't let me merge this PR until after this other PR is merged" feature, but you could do that with a GitHub Action; it doesn't really need dedicated backend or UI support.
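As a sketch of the GitHub Action approach just mentioned (a hypothetical workflow, not an official GitHub feature): a required check that fails as long as the PR description still says "Depends on #N" for a PR that hasn't been merged. The workflow name and the "Depends on" convention are made up for illustration; `actions/github-script` and the `pulls.get` API are real.

```yaml
name: pr-dependency-gate
on:
  pull_request:
    types: [opened, edited, synchronize]
jobs:
  gate:
    runs-on: ubuntu-latest
    steps:
      - name: Block while a "Depends on #N" line names an unmerged PR
        uses: actions/github-script@v7
        with:
          script: |
            const body = context.payload.pull_request.body || "";
            for (const [, n] of body.matchAll(/Depends on #(\d+)/g)) {
              const { data: pr } = await github.rest.pulls.get({
                ...context.repo, pull_number: Number(n) });
              if (!pr.merged) core.setFailed(`PR #${n} is not merged yet`);
            }
```

Marking the check as required in branch protection then gives you "don't let me merge this until that other PR lands" without any dedicated backend support.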

> How would that work?

In practical terms: I manually write a list of PRs, and maintain that list in the description of each of the PRs. Massive duplication. But it clearly shows the merge train.



I must have missed that. Amazing! From a reviewer's POV, this will be so nice to at the very least remove diff noise for PRs built on top of another PR. I usually refrain from reviewing child PRs until the parent is merged and the child can be rebased, for the sole reason that the diffs are hard to review with respect to what came from where.

damn, I missed it as well

presenting only CLI commands in the announcement wasn't a good choice


Discourse is self-hostable; they can't require their users to use a filesystem that supports deduplication. (Or, well, they could, but it would greatly complicate installation and maintenance and whatnot, and also there would need to be some kind of story for existing installations.)
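For what it's worth, dedup can also be done above the filesystem. A minimal userspace sketch, assuming byte-identical files can be replaced with hard links on the same filesystem (this is an illustration of the technique, not what Discourse actually does):

```python
# Hash files under a directory and replace byte-identical copies with
# hard links. Works on any POSIX filesystem, no dedup support required.
import hashlib
import os

def dedupe_hardlink(root: str) -> int:
    """Hard-link identical files under root; return the number of files linked."""
    seen = {}   # sha256 digest -> path of the first (canonical) copy
    linked = 0
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            if digest in seen:
                os.remove(path)
                os.link(seen[digest], path)  # point at the first copy
                linked += 1
            else:
                seen[digest] = path
    return linked
```

This trades block-level granularity for portability: it only collapses whole-file duplicates, whereas a dedup-aware filesystem can share individual blocks.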

Fair, I am/was confused by the hosting model and presentation. This is a nice user preparation/consideration, I guess. I still maintain that a backup filesystem unaware of deduplication at the block level is a mistake.

I completely overlooked the shipping of tarballs. Links make sense here. I had unpacked and relatively local data in mind. I absolutely would not go so far as to suggest their scheme pick up 'zfs {send,receive}' or equivalent, lol.


They do also offer it as multi-tenant hosted SaaS, and the post is about their experience running backups on that. But whatever solution they use has to also work with the self-hosted version, which imposes some constraints.

Sweet

The timeline here is for when major governments have access to CRQCs. It will be much longer than that (barring an AI singularity or something) before you have access to one.

Is there any particular reason why this is its own bespoke test runner, instead of a library that plugs into existing ones?


Do you mean test runners like JUnit, pytest, etc? Or browser test runners specifically?


The latter. For example, the concept here seems like it ought to work with the Playwright test runner for Node.js (and then you'd have compatibility with the Playwright ecosystem).


OK. Yeah, so I made the decision early to go one level below Playwright and target CDP directly, because I believe I need the tight connection with Chrome/Chromium to make it as fast as possible. I also really value the distribution aspects of writing this in Rust and compiling to a single executable with everything in it.


Would something else that wasn't a VC-funded startup really work better? The technical problem seems fundamental.


Yes, the technical problem is fundamental. But if Deno had managed to be a truly great runtime that solved a lot of people's gripes with Node and made ES modules etc. the price of admission for using it, there would have been momentum to create a new module ecosystem.

But once you add that npm compatibility layer, the incentives shift: it just isn't worth anyone's while to create new, modern modules when the old ones work well enough.

It all feels similar to the Python 2 vs 3 dilemma. They went the other way and hey, it was a years long quagmire. But the ecosystem came out of it in a much better place in the end.


It wasn't worth recreating packages every time you needed something that Deno lacked. If you ended up needing something and there was already something on npm for it, it was easier to just switch back to using Node than to adapt/maintain a fork or alternative to an npm package. I think the lack of npm compatibility earlier on led to a lot of churn. Deno would probably be dead if they never improved the npm compatibility, especially considering the rise of Bun, which promised performance improvements like Deno but with better Node compatibility at the time.


The big difference is that Python 3 was still CPython going forward, there was no one left to fork CPython 2 into an incompatible direction.

Or like Wayland and X.Org Server.

Quite different from an alternative that comes out of nowhere, expecting users to migrate.


I believe that's the joke, yes.

