If someone uses git commits like the save function of their editor and doesn't write messages intended to be read by anyone else, it makes sense to want to hide them.
For other cases, you lose the information about why things are this way. It's too verbose to //-comment every line with how it came to be this way, but on (non-rare in total, but rare per line) occasion it's useful to see what the change was that made the line be like this, or even just who to potentially ask for help (when >1 person worked on a feature branch, which I'd say is common).
> If someone uses git commits like the save function of their editor
I use it like that too and yet the reviewers don't get to see these commits. Git has very powerful tools for manipulating the commit graph that many people just don't bother to learn. Imagine if I sent a patchset to the Linux Kernel Mailing List containing such "fix typo", "please work now", "wtf" patches - my shamelessness has its limits!
Seems like a lot of extra effort (save, add, commit, come up with some message even if it's a prayer to work now) only to undo it again later and create a patch or alternate history out of the final version. Why bother with the intermediate commits if you're not planning for it to be part of the history?
Git is a version control system. It does not care about what it versions.
When I work on something, I commit often and use the commit graph as an undo tool on steroids. I can see what I tried, I can cherry-pick or revert stuff while experimenting, I can leave promising but unfinished stuff to look at later, or I can just commit to have a simple way to send stuff to CI, or a remote backup synced between machines.
Once I'm done working on something, it's time to take a step back, look at the big picture, and see how many changes my experiments have actually yielded. Then I separate them, describe them, and decide whether they go to review together or split in some way: sometimes working on a single thing requires multiple distinct changes (one PR with multiple commits), but sometimes working in a single session yields fixes for multiple unrelated issues (several PRs). Only then does it get presented to the reviewer.
It just happens that I can do both these distinct jobs with a single tool.
Because I might want to go back to this current messy state but I don't want to commit it like this (hardcoded test strings, debug logs, cut corners to see if something works, you name it).
I simply commit something like "WIP: testing xy", and once it's working and properly implemented I can squash/rebase/edit the commit message and force push it to my feature branch.
Using a Git client like Gitkraken makes this incredibly easy, takes seconds.
This way I can leverage version control without committing bogus states to the final PR.
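The same WIP-then-squash flow works without any GUI client; a minimal sketch with plain git (the `feature/xy` branch, file names, and messages are made up, and the throwaway repo exists only so the snippet is self-contained):

```shell
set -e
# throwaway repo just for the sketch; in practice you're in your real checkout
cd "$(mktemp -d)" && git init -q . 
git config user.email you@example.com && git config user.name you
git commit -q --allow-empty -m "init" && git branch -m main

git switch -qc feature/xy
echo "debug hack" > xy.txt && git add xy.txt
git commit -qm "WIP: testing xy"            # messy checkpoint, message is a prayer
echo "proper code" > xy.txt
git commit -qam "WIP: please work now"      # another checkpoint

# once it works: collapse the WIP commits into one clean commit
git reset --soft main                       # keep the working tree, drop the WIP history
git commit -qm "Implement xy properly"
git log --oneline main..HEAD                # a single tidy commit remains
```

On a shared remote you'd finish with `git push --force-with-lease origin feature/xy`, which rewrites your feature branch but refuses to clobber commits someone else pushed in the meantime.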
If the team is using a PR workflow, the PR is a working place to produce one single commit. The individual commits are just timestamped changes and comments. Think of it as the equivalent of annotated diff in mailing list conversation.
So far I haven't had much concrete reason for my family to switch away from Windows. The updates, maybe; needing to pay for a new license and the UI changes are like pulling the chair out from under them, especially as they get older. Windows 7 was hard for my grandma; thankfully they left 10 mostly alone, but 11 is quite different again, so she's currently staying on 10 (not that her hardware supports 11 anyway, but that's fixable). So it's either learning the new Windows UI, let's say ten story points of newness, or learning some Linux desktop environment. Even Mint, which is similar to 7/XP, isn't quite the same either and is probably 15 story points at minimum, even if then you're done for much longer.
But if OSes are being locked down and software has trouble distributing security updates through official repositories for Windows... that's a good reason to finally make the switch. Same as why my family is on Android: I can install F-Droid, disable the Google store, and don't have to worry about them installing malware / spyware / adware.
There are different degrees of openness. Android until 2026 was an acceptable compromise (let's see how it goes forwards). Windows is also on the decline with their account policy; not sure about this certificate revocation thing (thankfully I haven't had to deal with it yet; I'm not a user myself), but it sounds like they're moving to a walled garden also.
When the degree changes and gets even less open, yeah you can say "well of course, they were never truly open, they're commercial" but it's still a change and might lead people to alter their choices
You'll find that people who are not computer experts will take to modern Linux with much more ease than those who have complex needs; for 90% of people these days, access to the Web satisfies all their needs.
Moving from Windows 7 to 11 will probably be as traumatic as moving from Windows 11 to KDE, so it's an investment worth doing in my opinion.
While I agree entirely that Linux in 2026 has never been more usable… how much actual work is being put into Office and 365 tooling native on Linux?
Like none. Literally the best Office you MIGHT KIND OF be able to run is 2016, but probably more like 2013.
Valve focused on games, that is awesome and really helpful…
But there are 10,000 distros, and instead of putting real resources into building even rickety bridges over MS's moat, no sorry, this team is making duplication-of-effort distro 10,001, which is now identical to thousands of others except the taskbar is in the middle of the screen.
The people working on Linux are consistently uninterested in the things people would need to drop Windows.
> Is it really inconceivable to you that the thing could have bugs?
Or user error, or hardware setups where the docs didn't say "don't do that". If zfs is somehow better in any of those three areas, that would result in fewer corruption stories as well. Hard to know without being able to control for popularity though
Seems really weird to me to assume people make up stories to promote their favorite filesystem. Of course I have one to share as well (opened the thread without knowing it was about btrfs to begin with, I'm not brigading...)
---
I tried btrfs once in my life. I wanted to (1) mirror two disks so a routine disk failure doesn't mean I lose X hours of updates since the last off-site backup, and (2) detect bit rot. And of course it resulted in a giant headache:
The disks got out of sync and put themselves in read-only mode with different data on each (which one has the latest data? Do they both have new fragments?). I eventually figured out which one had the latest data, and then I mixed up the source and destination device in the recovery command. Iirc the latter was caused by me stopping reading the man page once I found the info I was after, instead of reading the whole thing carefully; subsequent text would have clued me in.
The recovery mess-up is user error but if this happens to people on btrfs more often than zfs, maybe zfs is more recommendable anyway. But I've not tried zfs so that's not a statement I can make
I'm back to ext4. Will just use backups and hope for the best. This constant risk of full-filesystem corruption isn't worth it to catch the few files that changed in the last hours, or the few bytes that will rot over my lifetime. On my todo list is writing a little tool that just stores sha2sum+mtime for each file and alerts me if the former changed without the latter, then I can retrieve it from backup and perhaps swap out the disk
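That todo-list tool fits in a short script. A rough sketch of the idea (the manifest format is my own invention, and as a toy it breaks on paths containing whitespace):

```shell
# Record "<sha256> <mtime> <path>" per file; on later runs, flag files whose
# hash changed while the mtime did not (content rotted rather than was edited).
scan() {  # usage: scan DIR MANIFEST
  new=$(mktemp)
  find "$1" -type f | sort | while read -r f; do
    printf '%s %s %s\n' "$(sha256sum "$f" | awk '{print $1}')" \
                        "$(stat -c %Y "$f")" "$f"
  done > "$new"
  if [ -f "$2" ]; then
    # same path, same mtime, different hash => bytes changed behind our back
    awk 'NR==FNR { h[$3]=$1; t[$3]=$2; next }
         ($3 in h) && h[$3] != $1 && t[$3] == $2 { print "ROT? " $3 }' \
        "$2" "$new"
  fi
  mv "$new" "$2"
}

# tiny demo: the second scan flags a file whose bytes changed but mtime did not
cd "$(mktemp -d)" && mkdir data
echo hello > data/a.txt && touch -t 202001010000 data/a.txt
scan data manifest                      # first run just records the baseline
printf rotted > data/a.txt && touch -t 202001010000 data/a.txt
scan data manifest                      # prints: ROT? data/a.txt
```

A real version would also want to handle deleted/renamed files and escape paths properly, but the core check really is that small.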
Note that phones also have NFC readers. Instead of requiring everyone to have a locked-down phone, they could let you use said phone to read the chip, or use any other (USB) reader you like. I believe there's a German government app that already does this, Ausweisapp2 iirc. As someone with a different nationality who lives in Germany, I don't know more than that.
Are you saying there's a threshold percentage somewhere below which you're happy to
A: exclude these people from society or force them to switch to big tech, and
B: accept the consequence that a single other country holds access to everyone's identity information for convenience reasons (because it works for the 99% who are too tech-illiterate to install software that they control instead of the other way around)?
There's a difference between needing to lock down the whole OS and just the secure element. The secure hardware component can sign a challenge and prove possession of a private key without you being able to extract it. Smartcards have done this for decades (most people here will know an implementation under the name Yubikey).
Conveying authentic information across untrusted channels (your phone screen, say) has been a solved problem since asymmetric cryptography was invented back before I was born
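The challenge-response idea can be sketched with plain openssl; here a PEM file stands in for the secure element, which in reality signs internally and never reveals the private key (file names are made up):

```shell
set -e
cd "$(mktemp -d)"
# "secure element": holds a private key that is never exported (a file, in this toy)
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out card.pem 2>/dev/null
openssl pkey -in card.pem -pubout -out card.pub   # public half, known to the verifier

# verifier sends a fresh random challenge (so old signatures can't be replayed)
head -c 32 /dev/urandom > challenge.bin

# the element signs the challenge, proving possession of the key...
openssl dgst -sha256 -sign card.pem -out sig.bin challenge.bin

# ...and anyone holding the public key can check it over an untrusted channel
openssl dgst -sha256 -verify card.pub -signature sig.bin challenge.bin   # prints "Verified OK"
```

A smartcard does the `-sign` step on-chip; everything else can run on completely untrusted hardware.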
> when they tested good chess players on random board positions they were just as good as people that did not play chess.
Doesn't that prove the opposite of the statement in the first paragraph, if they were only as good as non-players? I assume there's a typo in there somewhere, because I would expect the original thesis to be true. My gf would squarely beat me at chess960 just because she sees the relations between the pieces a million times faster. She can walk into a room and look at the board I've been 'rearranging' (playing on) for 45 minutes and still know what I should do faster than me.
It sounds like they're recalling a study where they looked at brain activation and accuracy when trying to memorize random positions vs “real” positions, which is a very different thing.