Hacker News | fl0ki's comments

> I think it would be great to have the ability to easily reorder/modify commits while in active development

Take a look at `git rebase --interactive`.


That's not "easily". Easily would be: you drag your commit(s) from one place to another or copy/paste to achieve the same

If that's the kind of UX you prefer, please consider filing a feature request against your git UI of choice. My point is that git itself already has the core capability, and how convenient it is to use usually depends on your editor. (e.g. in vim, dd to cut a line and p to paste it in a new position is a very quick way to reorder)
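For what it's worth, the reorder doesn't even require touching the editor by hand — `GIT_SEQUENCE_EDITOR` lets you script the todo list. A minimal sketch, assuming GNU sed (macOS sed would need `-i ''`) and a clean working tree; the repo contents below are purely illustrative:

```shell
# Scripted, non-interactive reorder of the last two commits.
# GIT_SEQUENCE_EDITOR stands in for the interactive editor; the sed
# program uses the hold space (h, d, G) to swap the first two "pick"
# lines of the rebase todo, i.e. to swap the last two commits.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
echo a > a.txt && git add a.txt && git commit -qm base
echo b > b.txt && git add b.txt && git commit -qm first
echo c > c.txt && git add c.txt && git commit -qm second
GIT_SEQUENCE_EDITOR='sed -i "1{h;d};2{G}"' git rebase -q -i HEAD~2
git log --format=%s -3   # "second" and "first" have traded places
```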

And my point is that all this 'core capability' stuff is not relevant to a discussion of good UI; similarly, the fact that GitHub has Pull Requests doesn't help when the UI is bad enough that "stacks" had to be reinvented.

Case in point:

> dd to cut a line and p to paste it in a new position is a very quick way to reorder

It isn't quick, you're just sweeping the whole issue under the rug - first, you need a whole separate interface, but more importantly, this new interface is very primitive: you see close to no context, only some commit subjects, so it's not quick to work out what to move and where, because the content for those decisions is in a different place. Sure, you could add some vim plugin that expands it with per-commit info (what, you want to view the diff for all 3 commits you selected and dd'd? Tough luck, you don't see the lines anymore! And even if you did, that's the plugin, not git), but then it's not your `--interactive` git "core" that provides the convenience


Like I said, if you prefer an integrated graphical UI, you can file feature requests against the one you prefer. What git itself does makes a lot of sense for the canonical CLI tool to do, though even then you can propose or prototype changes if you have ideas. This is how projects like jj started in the first place.

How does bad UI make a lot of sense?

I only agree if you have a bounded dataset size that you know will never grow. If it can grow in future (and if you're not sure, you should assume it can), not only will many data structures and algorithms scale poorly along the way, they will also grow to dominate as the bottleneck. By the time it no longer meets requirements and you get a trouble ticket, you're now under time pressure to develop, qualify, and deploy a new solution. You're much more likely to encounter regressions when doing this under time pressure.

If you've been monitoring properly, you buy yourself time before it becomes a problem as such, but in my experience most developers who don't anticipate load scaling also don't monitor properly.

I've seen a "senior software engineer with 20 years of industry experience" put code into production that ended up needing 30-minute timeouts for an HTTP response only 2 years after initial deployment. That is not a typo, 30 minutes. I had to take over and rewrite their "simple" code to stop the VP-level escalations our org received because of this engineering philosophy.


> You're much more likely to encounter regressions when doing this under time pressure.

Nothing says you should wait until you're under pressure to optimize, only that you should optimize after you have measured. Benchmark tests are still best written during the development cycle, not while running hot in production.

Starting with the naive solution helps quickly ensure that your API is sensible and that your testing/benchmarking is in good shape before you start poking at the hard bits where you are much more likely to screw things up, all while offering a baseline score to prove that your optimizations are actually necessary and an improvement.


Also there's no reason why a horizontally scalable service can't be stress tested with 2x or 10x of prod load in non-prod environments.


> Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants.

I get where he's coming from, but I've seen people get this very wrong in practice. They use an algorithm that's indeed faster for small n, which doesn't matter because anything was going to be fast enough for small n, meanwhile their algorithm is so slow for large n that it ends up becoming a production crisis just a year later. They prematurely optimized after all, but for an n that did not need optimization, while prematurely pessimizing for an n that ultimately did need optimization.
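A sketch of that failure mode, with purely illustrative names and sizes: a plain list scan "wins" at tiny n, where anything would have been fast enough anyway, but degrades quadratically once n grows.

```python
import timeit

def count_hits_list(xs, probes):
    # "Simple" version: O(n) scan per probe, O(n^2) in total
    return sum(1 for p in probes if p in xs)

def count_hits_set(xs, probes):
    s = set(xs)                                # one O(n) build...
    return sum(1 for p in probes if p in s)    # ...then O(1) per probe

for n in (10, 5_000):
    xs = list(range(n))
    probes = list(range(0, 2 * n, 2))          # half hits, half misses
    t_list = timeit.timeit(lambda: count_hits_list(xs, probes), number=1)
    t_set = timeit.timeit(lambda: count_hits_set(xs, probes), number=1)
    print(f"n={n}: list={t_list:.5f}s set={t_set:.5f}s")
```

At n=10 the difference is noise either way; the point is that the gap at the larger n is the one that eventually files the trouble ticket.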


The error here is not understanding the data being transformed.

You won't get it right either way if you don't /know/ how big n is going to be.

If you can't know, why not? Should you even be coding this at all?

Maybe there should be a rule zero.

0) Understand the data on which your data transformation is going to operate.

The extent to which you don't is the extent to which the endeavour is doomed.


My best professional relationships are between people who are confident enough to take direct feedback and appreciate it rather than resent it.

However, my worst professional relationships are with people who will rebuff your feedback whether you Crocker it or not. If you're direct, they'll say you should have been more diplomatic about it, but if you're diplomatic, they'll say you're being dishonest and should have been direct. There is no right way to approach it; these people will always find a way to criticize the delivery, and to delegitimize the feedback because of it.


This seems as good a thread as any to mention that the gzhttp package in klauspost/compress for Go now supports zstd on both server handlers and client transports. Strangely, this was added in a patch version rather than a minor version, despite both expanding the API surface and changing default behavior.

https://github.com/klauspost/compress/releases/tag/v1.18.4


About the versioning, glad you spotted it anyway. There isn't as much use of the gzhttp package compared to the other ones, so the bar is a bit higher for that one.

Also making good progress on getting a slimmer version of zstd into the stdlib and improving the stdlib deflate.


Yeah, I make it a habit to read the changelogs of every update to every direct dependency. I was anticipating this change for years, thanks for doing it!


> Also making good progress on getting a slimmer version of zstd into the stdlib

Awesome! Please let me know if there is anything I can do to help



Try Sikarugir for PC gaming on macOS. It runs everything I've cared to try, with little or no tweaking.

https://github.com/Sikarugir-App/Sikarugir


I think the big difference is that if you just want to optimize for some objective, it's usually very clear how to do that from Apple's options, so there's not much research to be done. It can still be challenging to choose what's the best value when it's your own money, but at least you know what you're getting, and the quality hasn't been a concern for years.


"Former Googlers" were probably used to using protobuf so they could get from a function call straight out to a struct of the right schema. It's one level of abstraction higher and near-universal in Google, especially in internal-to-internal communication edges.

I don't think it's a strong hiring signal if they weren't already familiar with APIs for (de)serialization in between, because if they're worth anything then they'll just pick that up from documentation and be done with it.


I've been using a work-issued one since 2018, and my only complaint in 2026 is that some of its rear USB ports are failing.


For those who don't already know, you can get a lot of PC gaming performance out of these machines using Sikarugir. You can install all of Steam via winetricks and go from there, or launch DRM-free games directly.

https://github.com/Sikarugir-App/Sikarugir


oh, you mean there is a free alternative to crossover? :O


There has been for a long time. It used to be called Wineskin — Sikarugir is the successor to that project. There's also Porting Kit, which helps with setting up and installing the wrappers.

Underneath it all is Wine, the open source compatibility layer that CrossOver contributes to.

