Hacker News | hungryhobbit's comments

Wait, so a company shared their work with the public for however long, then decided to leave what was shared up ... but stop sharing ... and you're upset?!?

They did everything properly by the rules of OSS, decided it wasn't in their best interest to keep doing OSS, and left all their code available, as required by OSS. They were a textbook good participant.

Meanwhile, 99% of companies never open source anything: why aren't you complaining about how "unethical" they are?


> and left all their code available, as required by OSS.

IANAL, and I don't have a horse in this race, but I don't think that's required by OSS, not by the spirit of "the law", and (at least) not by GPL, MIT, and other similar mainstream licenses.

The spirit of open source is: you buy (or just download for free) a binary, you get the 4 rights. Whatever happens when the developer/company stops distributing (whether at a cost or free as in beer) that binary is completely outside the scope of the license.


You only have the right to modify if you can access the source.

If you got (a snapshot of) the source along with the binary, that's fine, there's no need to keep hosting the source anywhere.

But if the company said "for source, see: our github", then that github has to stay up/public, for all the people who downloaded the binary a long time ago and are only getting around to exercising their right to modify today.

They don't need to post new versions of their software to it, of course. But they need to continue to make the source available somehow to people who were granted a right that can only be exercised if the source is made available to them.

(IIRC, some very early versions of this required you to send a physical letter to the company to get a copy of the source back on CD. That would be fine too. But they'd also have to advertise this somewhere, e.g. by stubbing the github repo and replacing it with a note that you can do that.)


Under the GPL, a written offer of source has to stay valid for at least 3 years, but that only binds distributors who aren't the copyright holder; the copyright holder isn't bound by their own license.

In MIT, a.k.a. "the fuck you license", there is no such requirement; they don't even have to give you source code at all.


> a company shared their work with the public for however long, then decided to leave what was shared up

More like a company took advantage of a community that expected its freely offered labor never to be commercialized without the resulting work also being made available in a fully free form, since that's an implicit expectation behind "open source".


> … took advantage of a community…

It would be helpful for everyone if that community would pause before contributing to code bases with licenses which allow for that. MIT, BSD, Apache, …

It would be helpful for them because they’ll know what they’re getting into. For us because we won’t have to see this tragedy unfold time and time again. And for all open source users because more efforts will be directed towards programs with licenses that protect end users. GPL, AGPL, …

It will be a little worse for companies seeking free labor. A price I’m willing to pay.


It looks like it's Apache licensed, so this was the expected and intended outcome for contributors. If they wanted their work to remain free and not become proprietary, they should have only contributed under perma-free licenses like GPL.


Donating software to the world does not create an expectation that nobody will use that software to make money or build proprietary products on top of it.

Not all f/oss contributors are anticapitalist zealots like the FSF, as evidenced by the huge popularity of permissive licenses such as MIT.

There’s nothing implicit about it. The licenses are explicit legal documents.


> anticapitalist zealots like the FSF

In what way are they?

'The term "free" is used in the sense of "free speech", not "free of charge"'

https://en.wikipedia.org/wiki/The_Free_Software_Definition


The GPL protects against this.


Naive fools…

Companies stand to turn a profit. OSS is here to help enable that or push the goal posts. It’s not a charity unless the org feels charitable. Sure, non-profits exist but they were never one of those.


I think the comment on corpos is good, but calling the naive people fools might be unnecessary - it’s probably not their fault nobody told them about this sort of thing before and learning that lesson is probably disappointing enough already.

It’s unfortunate that this keeps happening to projects like MinIO and others too.


We should return to the HN guidelines, and read it as charitably as possible.

I'm interpreting it as closer to pity, rather than genuine criticism =)


Sure! Slightly edited the tone, but I’m noticing that often people have idealistic attitudes about FOSS until they get burnt by bad faith actors or even just indifferent corps that have to keep the lights on. Quite unfortunate, definitely not their fault. Pity is correct.


It’s definitely pity. It’s a hard pill to swallow when you were led to believe a certain world view of an entity only to find out they were milking your data.


They are about to learn the same lesson Elastic learned with OpenSearch...


I can't think of any free or open license that requires you to leave your code available for any specific period of time if you are not simultaneously distributing binaries.


Because this thread isn't about those other companies.


How can people still not understand that OSS can be abused?

It doesn't matter that the previous code is still available. Nobody can technically delete it from the internet, so that's hardly something they did "right".

The original maintainers are gone, and users will have to rely on someone else to pick up the work, or maintain it themselves. All of this creates friction, and fragments the community.

And are you not familiar with the concept of OSS rugpulls? It's when a company uses OSS as a marketing tool, and when they deem it's not profitable enough, they start cutting corners, prioritizing their commercial product, or, as in this case, shut down the OSS project altogether. None of this is being a "textbook good participant".

> Meanwhile, 99% of companies never open source anything: why aren't you complaining about how "unethical" they are?

Frankly, there are many companies with proprietary products that behave more ethically and have more respect for their users than this. The fact that a project is released as OSS doesn't make it inherently better. Seeing OSS as a "free gift" is a terrible way of looking at it.


> It doesn't matter that the previous code is still available…The original maintainers are gone, and users will have to rely on someone else to pick up the work, or maintain it themselves.

It does matter: popular products have been forked or the open-source component was reused. E.g. Terraform and OpenTofu, Redis and Redict, Docker and Colima (partly MinIO and RustFS; the latter is a full rewrite, but since the former was FOSS and it’s a “drop-in binary replacement”, I’m sure they looked at the code for reference…)

If your environment doesn't have API changes and vulnerabilities, forking requires practically zero effort. If it does, the alternative to maintaining it yourself or convincing someone to maintain it for you (e.g. with donations) is having the original maintainers keep working for free.

Although this specific product may be mostly closed source (they’ve had commercial addons before the announcement). If so, the problem here is thinking it was open in the first place.


To be clear, Colima isn't a fork of Docker. It's just the Lima VM with the Docker OCI runtime + CLI, which is FOSS and always has been. Docker Desktop is the pile of garbage you can kinda sorta replace it with, but Podman and Podman Desktop are closer to a clone of Docker than Colima is. Colima _is_ Docker.


I thought Valkey was the blessed fork of Redis. Is Redict better in some way?


No


https://en.wikipedia.org/wiki/Cognitive_dissonance

You might want to get your arguments in order. In one sentence you're calling OSS rugpulls a problem, and then in another you're claiming that proprietary products behave more ethically.

So which is it? Is it less-ethical to have provided software as open source, and then later become a proprietary product? Why? I see having source code, even for an old/unmaintained product be strictly superior to having never provided the source code no matter how much "respect" the company has for their users today.


You might want to think about my argument a bit more.

> Is it less-ethical to have provided software as open source, and then later become a proprietary product? Why?

Because usually these companies use OSS as a marketing gimmick, not because they believe in it, or want to contribute to a public good. So, yes, this dishonesty is user hostile, and some companies with proprietary products do have more respect for their users. The freedoms provided by free software are a value add on top of essential values that any developer/company should have for the users of their software. OSS projects are not inherently better simply because the code is free to use, share, and modify.

To be fair, I don't think a developer/company should be expected to maintain an OSS project indefinitely. Priorities change, life happens. But being a good OSS steward means making this transition gradually, trying to find a new maintainer, etc., to avoid impacting your existing user base. Archiving the project and demanding payment is the epitome of hostile behavior.


It seems like you’re trying to build a system of ethics around being annoyed by OSS maintainers not working for free in perpetuity.

Having access to Apache licensed code that you can build off of is better than never having access to any code at all. Anything else about values or respect has to be inferred or imagined and has no bearing on the software itself.

Edit: Like who cares if they “wanted” to contribute to the public good? Did they actually contribute to the public good? It seems like they did and the code that did so is right there. If “life happens” then why are they obligated to do a smooth transition?

I love free stuff as much as the next person, hell, free stuff is my favorite kind of stuff. Is it annoying when there’s less free stuff? Yes. Does my personal irritation constitute a violation of a lofty set of ideals that just coincidentally dictates that nobody annoy me? No.

I would love to live in a world where it just so happens that it’s ethically wrong to bother me though. That would be sweet.


That's what they always do: it always comes down to a sense of perpetual entitlement over the work of others, work they themselves would never do.

I've had the same discussion for years now on HN. It is not unethical to decide to stop supporting something especially if you played by all the rules the entire time.

No one is owed perpetual labor, and they completely disregard that LocalStack has been OSS for something like 10 years at this point. Just celebrate that it had a good run; fork and maintain it yourself if you need it that badly.

It is incredibly weird to think something that was maintained as OSS for 10 years is a rugpull. That's just called life; circumstances change.


> I've had the same discussion for years now on HN. It is not unethical to decide to stop supporting something especially if you played by all the rules the entire time.

What's unethical is taking the fruits of other people's work private: everything from code contributions to bug reports and evangelism.

Companies are never honest about how they intend to use CLAs, and pretend it's for the furtherance of the open source ethos. Thankfully, there's an innate right to fork entire projects after rug pulls, which makes them calculated gambles and not a quick heist.


> What's unethical is taking the fruits of other people's work private: everything from code contributions to bug reports and evangelism.

First, if it's open source, then the contributions are still there for everyone to use.

Second, if the license allows it, then the license allows it.

Now, if the contributions were made with a contribution license to prevent it, you've got a solid argument. Otherwise you're applying your own morals in a situation where they're irrelevant.


I agree, along with the child comment. I think the issue is that if there wasn't some kind of ability to "rug pull," that we would see far fewer open source contributions in the first place.

I hate that a company can take a fully open-source project, and then turn it into a commercial offering, dropping support for the project's open source model. I am fine with a project's maintainers stopping support for a project because they have other things to deal with, or just are burnt out. I understand that both of these things are allowed under the specific license you choose, and still believe you should have the freedom to do what was done here (although not agreeing with the idea of what was done, I still think it should be allowed). If you want to guarantee your code is allowed to live on as fully open, you pick that license. If you don't, but want to contribute as a means to selling your talent, I still think the world would have far less software if this was discouraged. The source is still legal from before the license was changed, and I feel that even if the project doesn't get forked, it is still there for others to learn from.

With that said I'm wondering if there has ever been a legal case where source was previously fully open source, then became closed source, and someone was taken to court over using portions of the code that was previously open. It seems like it would be cut and dry about the case being thrown out, but what if the code was referenced, and then rewritten? What if there was code in the open source version that obviously needed to be rewritten, but the authors closed the source, and then someone did the obvious rewrite? This is more of a thought experiment than anything, but I wonder if there's any precedent for this, or if you'd just have to put up the money for attorneys to prove that it was an obvious change?


> Second, if the license allows it, then the license allows it.

I'm not arguing the legality. One can be a jerk while complying with the letter of the license.

I stopped signing CLAs, and I feel bad for those suckered into signing CLAs, based on a deliberate lie that they are joining a "community", when the rug pull is inevitably attempted. I hate that "open source as a growth hack" has metastasized into rug-pull long cons.

> Otherwise you're applying your own morals in a situation where they're irrelevant.

Sharing my opinion on an HN thread about an open source rug-pull is extremely relevant.


The ethical problem is the bait-and-switch. A project that begins open and remains open is no problem; a project that begins closed and remains closed is no problem (ethically); a project that begins closed and becomes open is no ethical problem either. But a project that begins open, advertises their openness to the world, uses their openness to attract lots of community interest and then suddenly becomes closed is pulling a bait-and-switch, or rugpull.


> a project that begins open, advertises their openness to the world, uses their openness to attract lots of community interest and then suddenly becomes closed

Do you have any examples of that happening? When I click on the link at the top of this thread it takes me to a GitHub repo with a bunch of Apache licensed code that is open to anyone that wants to use or modify or build off of however they want. Heck, with permissive licensing like that you or I could fork it and put any part/all of that code into a proprietary product and make money off of it if we wanted to, and that would be entirely in keeping with the spirit and practice of FOSS.

This project seems perfectly open from what I can see, looks like the original devs stopped working on it though


Precisely.

It's remarkable that people think releasing a project as OSS is a license to disrespect users. This isn't even related to OSS. Software authors should have basic decency and respect for the users of their software. This relationship starts with that.

Publishing a project as OSS doesn't relinquish you from this responsibility. It doesn't give you the right to be an asshole.

And yet we fall for this trap time and time again, and there are always those who somehow defend this behavior.

I think it's an inherent conflict with the entrepreneurship mindset and those who visit this forum. Their primary goal is to profit from software. OSS is seen as a "gift" and an act of philanthropy, rather than a social movement to collaborate on building public goods. That's silly communism, after all. I'm demanding that people work for free for my benefit! Unbelievable.


Wow.

"Software authors should have basic decency and respect for the users of their software." Why? Not at all.

"Publishing a project as OSS doesn't relinquish you from this responsibility. It doesn't give you the right to be an asshole." You are free to be asshole and it's nobody's business.

Actually, it's exactly the opposite. The feeling of superiority and privilege, the idea that just because you use some software you have any right to command its author, is the very definition of being an asshole.

"I'm demanding that people work for free for my benefit! Unbelievable." Yes, that's unbelievable.


> "Software authors should have basic decency and respect for the users of their software." Why? Not at all.

Because that's the core reason why we build software in the first place. We solve problems for people. Software doesn't exist in a void. There's an inherent relationship created between software authors and its users. This exists for any good software, at least. If you think software accomplishes its purpose by just being published, regardless of its license, you've failed at the most fundamental principle of software development.

> you have any right to command its author is the very definition of being an asshole.

Hah. I'm not "commanding" anyone anything. I'm simply calling out asshole behavior. The fact is that software from authors who behave like this rarely amounts to anything. It either dies in obscurity, or is picked up by someone who does care about their users.

> "I'm demanding that people work for free for my benefit! Unbelievable." Yes, that's unbelievable.

Clearly sarcasm goes over your head, since I'm mimicking what you and others think I'm saying. But feel free to continue to think I'm coming from a place of moral superiority and privilege.


If you want software to be free for everyone except for the authors to use, modify, distribute, and sell without restriction I am sure you could work with a lawyer to draft a new “Apache for everybody on earth other than the maintainers, who permanently waive all rights” license.

If that’s what all good maintainers do, and intend to do, there’s really no reason for maintainers to tempt themselves by using awful “open” licenses that allow them the loophole of doing what they want with the software they create. Plus who wouldn’t want to codify that they’re not an asshole?

It shouldn’t be hard to get maintainers that intend for their software to “amount to something” to adopt it, and it would bring a sense of comfort to the people that rely on the software that you write when you announce that it’s the new default license for everything in your repos.


I have no idea what you're talking about.


Your argument is about some sort of covenant between the developers/maintainers and the users. That’s what a license is. That is the agreement between the parties. In that sense your problem isn’t with individual developers, it’s with permissive licensing.

If you don’t like it when OSS maintainers pivot to proprietary software, why not just create a license that precludes that from happening? The maintainers could waive their rights to pivot or later reuse the code that they wrote in any proprietary software, and that way people could just choose to only create and use NoRugPullForeverEver-licensed software and avoid the headaches altogether.


It's a matter of honesty and trust. A company that has never provided source code is more honest and trustworthy than one that provides source code, extracts community labor (by accepting issues and/or PRs) and then makes off with said labor (even if they left a frozen version available) at a future point.


Does the amount of labor that was provided by a community make a difference? What if it was minimal? Where do you draw the line (any piece of code accepted, or a "large portion" of code)?

I didn't downvote you, but I suspect combining PRs with issues is what most people have an issue with. Issues obviously help to improve software, but only indirectly, through someone then fixing or writing code.

Maybe I'm in the minority, but I also think that if it were a requirement to never close-source your project after it's already been open sourced, we'd have far fewer projects available that are open source. Often a project is created on a company's dime, and open sourced, to draw attention to the developer skills and ability to solve a problem. If the code were legally disallowed from being closed-sourced in the future, we might see far less code available universally. A working repository of code is potentially a reference for another developer to learn something new. I don't have any examples, but I know for a fact that I've read code that had been open source, and later closed-sourced, and learned something from the open source version (even if it was out of date for the latest libraries/platform).


Open Source Software doesn't mean maintenance free.

The code is all there mate.

Their time and efforts and ongoing contributions to the project are not.

OSS is not about fairness and free work from people. It's just putting the code out there in public.


> The original maintainers are gone, and users will have to rely on someone else to pick up the work,

That’s a risk that no license, open source or not, can protect against. Priorities may change, causing maintainers to stop maintaining, or maintainers (companies or people) may cease to exist.

OSS licenses also do not promise that development will continue forever, will continue in a direction you like or anything like that.

The only thing open source licenses say is “here’s a specific set of source code that you can use under these limitations”. The expectation that there will be maintenance is a matter of trust that you may or may not have in the developers.

> or maintain it themselves.

With open source, at least you have that option.

> And are you not familiar with the concept of OSS rugpulls? It's when a company uses OSS as a marketing tool, and when they deem it's not profitable enough, they start cutting corners, prioritizing their commercial product, or, as in this case, shut down the OSS project altogether.

Companies have to live. It's not nice if something like that happens to a tool you depend on, but you can't deny companies the right to stop development altogether.

In this case, you have something better, as, in addition to picking up maintenance on the existing open source version, you have the choice to pay for a version maintained by the original developers.


So basically businesses should go bankrupt because making money is "unethical"


They address this; it's not that they don't fail, in practice...

The key insight is that changes should be flagged as conflicting when they touch each other, giving you informative conflict presentation on top of a system which never actually fails.


Isn't that how the current systems work though? Git inserts conflict markers in the file, and then emacs (or whatever editor) highlights them

The big red block seems the same as "flagged", unless I'm misunderstanding something
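For concreteness, git's textual conflict markers follow a fixed shape that editors key off of. A minimal Python sketch (the sample file body is invented for illustration, and this ignores diff3-style `|||||||` markers):

```python
# Detect git-style merge-conflict markers in file text.
CONFLICT_OPEN = "<<<<<<< "   # followed by one side's label, e.g. HEAD
CONFLICT_SEP = "======="     # separates the two conflicting versions
CONFLICT_CLOSE = ">>>>>>> "  # followed by the other side's label

def has_conflict(text: str) -> bool:
    """True if the text still contains an unresolved conflict block."""
    lines = text.splitlines()
    return (any(l.startswith(CONFLICT_OPEN) for l in lines)
            and CONFLICT_SEP in lines
            and any(l.startswith(CONFLICT_CLOSE) for l in lines))

sample = """\
<<<<<<< HEAD
greeting = "hello"
=======
greeting = "hi there"
>>>>>>> feature-branch
"""
print(has_conflict(sample))            # → True
print(has_conflict('greeting = "x"'))  # → False
```

This fixed shape is roughly what an editor's "big red block" highlighting keys off of.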


With git, conflicts interrupt the merge/rebase. And if you end up in a situation with multiple rebases/merges/both, it's easy to get a "bad" state, or be forced to resolve redundant conflict(s) over and over.

In Jujutsu and Pijul, for example, conflicts are recorded by default but marked as conflict commits/changes. You can continue to make commits/changes on top. Once you resolve the conflict of A+B, no future merges or rebases would cause the same conflict again.


Thank you for the explanation, I did misunderstand! Seems like a neat feature.


Linux has been able to serve most non-gaming use cases for over a decade now (source: I've been running the OS longer than that). The one thing it used to not be able to do was play games ... and now it does that.


For non-technical users, it simply isn't true.

The happy path has improved a lot. When Linux is working it's reasonably usable. But once something breaks it breaks HARD and recovery is still miserable.

For reference I've been using Linux since Red Hat 5.2 circa 2000. I cut my teeth debugging problems without internet access. I ran an LTSP lab at my high school. I remember the hell that was XF86Config (I was there, Gandalf, I was there 3000 years ago).

....and like the previous commenter I run Windows on my personal machines because I want to spend my free time using them, not debugging them.


The only app non-technical users use anymore is a web browser. And since Linux has the same web browsers, non techies don't care. Also, it isn't spying on them or putting ads on their desktop, or breaking the mic randomly like Windows does. Big differences to most people.


I'm halfway between a technical and non-technical user. And half my time is spent in Chrome. But my other half time is in Excel, Dropbox, and Everything. Do those run on Linux? Maybe there are equivalents but I don't have the time to investigate. Access crashes too frequently these days but I couldn't find a GUI equivalent for PostgreSQL. Spying/ads/breaking the mic aren't in my top 10 Windows issues.


> And since Linux has the same web browsers, non techies don't care.

....which is why Chromebooks took over the consumer market. Oh, wait.


I dunno, I thought about this before switching to Linux, when I gave my wife a Linux box I had sitting around in a pinch during the pandemic laptop shortage—a lot of people these days just need a browser, and there’s not really much to go wrong with that. If something does go wrong you can just nuke the whole thing and start over pretty easily.

I’ve certainly run into some odd situations on my desktop Linux machine over the past 6 years since I started using it full time, but I think most of them were related to the nature of how I use the machine more than inherent instability. I think I’ve spent many more hours of my life unwinding piles of malware and bloat from non-technical folks’ Windows machines than debugging this one.


My parents used Linux as their home computer for three years, regularly updating it and doing basic document writing with open office, as well as all of their banking etc

They don’t know what Linux is, and know nothing about tech, they just know that we had a 30 minute lesson on “here’s Firefox, this icon means you need to install updates, here’s how you print”.

Oh and this was Linux Mint back in ~2016

Things have only gotten easier since then


I'm now also comfortable having my family members use Linux on a day-to-day basis. To the point where I use Linux to revive 10-year-old MacBooks, since the Apple hardware is just solid. I don't dual boot, I just fully flash with Linux.

When something breaks,

Fixing Linux means running some commands that LLM suggests.

Fixing Windows means downloading some shady .exe that may or may not fix the problem and it may or may not backdoor your machine.

Fixing MacOS means paying $5 for some app that maybe does the thing.


I don't get this. People would put up with absolute nonsense on Windows. But when it comes to Linux, they want to experiment, mess around with the configs, copy/paste random commands from the internet, and basically turn into l33t haxers, and then when stuff breaks, it's Linux's fault. Like how? Install Fedora, don't add any extra repos, don't install anything not in the Software Center, and let us see how many times your system breaks.

I have been using Linux since the 2000s as well. I do remember the rpm hell, dealing with X config issues, etc. It is NOT the same experience nowadays. I don't have the time or inclination to mess around, so I use Fedora + KDE and that basically stays out of my way. I don't rice my desktop or do any hacking around beyond basic automation, and I have had zero instances of the system just breaking.


Examples from the last 3 years:

* I wanted to update a Raspberry Pi from Ubuntu LTS 22 to LTS 24. Turns out this is basically impossible. Ubuntu themselves tell you not to do it and their recommended solution is to wipe the system and try again. I ignored them and tried to do it anyway and my Pi ended up refusing to boot. Great.

* I needed to update a Raspberry Pi to change the list of WiFi networks it knew about. Except apparently there are two different networking stacks for Linux with different config files and I edited the wrong one.

* I built a new TrueNAS server. Turns out that you absolutely cannot configure the networking from the GUI. There's a section there, sure, but every time it refuses to save the information until you "test the changes" and that fails to reconnect every single time. You have to locally plug a monitor into the machine, boot it, and log in with a keyboard to get to the config there.

* Not strictly a bug, but I installed Debian in WSL and it doesn't include `man` by default. So I get a command line and no help for it. Brilliant.
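On the Wi-Fi bullet above: recent Ubuntu images usually route Wi-Fi configuration through netplan, which then renders to either systemd-networkd or NetworkManager underneath, and that split is one way to end up editing a file the active stack ignores. A hedged netplan sketch; the filename, SSID, and passphrase are invented placeholders:

```yaml
# /etc/netplan/50-wifi.yaml  (example filename; images ship their own)
network:
  version: 2
  renderer: networkd          # or NetworkManager, depending on the image
  wifis:
    wlan0:
      dhcp4: true
      access-points:
        "ExampleSSID":
          password: "example-passphrase"
```

After editing, `sudo netplan apply` (or `netplan try`, which rolls back on failure) activates the change.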


But, a Raspberry Pi isn't supposed to be a replacement for your desktop; it is meant as a device for experimentation.


The Raspberry Pi 500, essentially the Pi 5 inside a keyboard, is sold as a "refined personal computer". "A fast, powerful computer built into a high-quality keyboard, for the ultimate compact PC experience."

https://www.raspberrypi.com/products/raspberry-pi-500/


Then marketing struck again. Anyway, that isn't a device the average user would buy, so I'm not concerned about Ubuntu failing to upgrade on such a platform. I would take the complaint as valid if the issue existed on a consumer laptop, but this isn't the case.


Right. I'd like to see them do the Windows 11 upgrade on the same hardware...


Oh, and from a few days ago:

* I want to install jj

* Its docs say to use cargo-binstall

* How do I get that? With cargo, so sudo apt install cargo

* `cargo binstall --strategies crate-meta-data jj-cli` -> `error: no such command: `binstall``

* `cargo install binstall` -> `error: cannot install package `cargo-binstall 1.17.7`, it requires rustc 1.79.0 or newer, while the currently active rustc version is 1.75.0`

* `sudo apt install rust` -> E: Unable to locate package rust

* `sudo apt install rustc` -> `rustc is already the newest version (1.75.0+dfsg0ubuntu1~bpo0-0ubuntu0.22.04).`

Apparently the guidance is to manage your rust versions with a tool other than apt that you install with `curl ... | sh` because no one ever learns anything about security

.....yep, just as user friendly as I remember.


This would not be any easier on Windows?


Yeah like... on Windows that's the exact same steps you would need to take if you insisted on using binstall? You might have slightly different steps for installing rustup for Windows (e.g. you need to install Visual Studio).

The other path I can see (looking at https://docs.jj-vcs.dev/latest/install-and-setup/#windows) is that you could maybe instead use winget directly.

Though honestly IMO this is more of a failure on the jj devs to not provide something that can be installed straight using apt, I guess (looking at https://docs.jj-vcs.dev/latest/install-and-setup/#linux). For Arch for example you just install it from the official repos.


The way Windows programs used to install was you insert a CD or download an .exe, doubleclick it, and then repeatedly press "Next" until "Finished".


> * I wanted to update a Raspberry Pi from Ubuntu LTS 22 to LTS 24. Turns out this is basically impossible. Ubuntu themselves tell you not to do it and their recommended solution is to wipe the system and try again. I ignored them and tried to do it anyway and my Pi ended up refusing to boot. Great.

"Ubuntu themselves tell you not to do it" - you do see that, right? I doubt you would forgive Windows if you broke it by ignoring Microsoft's advice, yet you blame Ubuntu anyway when it breaks.

> * I needed to update a Raspberry Pi to change the list of WiFi networks it knew about. Except apparently there are two different networking stacks for Linux with different config files and I edited the wrong one.

Why? Why not connect it to the network you want so that it just connects to that going forward?
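For anyone hitting the same wall: the two stacks in question are typically wpa_supplicant (often driven by dhcpcd on older Raspberry Pi images) and NetworkManager, and each reads its own file. A rough sketch of where a known network lives in each (the SSID, passphrase, and exact paths here are illustrative and vary by distro):

```
# Stack 1: wpa_supplicant -- /etc/wpa_supplicant/wpa_supplicant.conf
network={
    ssid="HomeWiFi"
    psk="example-passphrase"
}

# Stack 2: NetworkManager keyfile --
# /etc/NetworkManager/system-connections/HomeWiFi.nmconnection
[connection]
id=HomeWiFi
type=wifi

[wifi]
ssid=HomeWiFi

[wifi-security]
key-mgmt=wpa-psk
psk=example-passphrase
```

Editing the file for the stack that isn't actually managing the interface silently does nothing, which matches the failure mode described above.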

> * I built a new TrueNAS server. Turns out that you absolutely cannot configure the networking from the GUI. There's a section there, sure, but every time it refuses to save the information until you "test the changes" and that fails to reconnect every single time. You have to locally plug a monitor into the machine, boot it, and log in with a keyboard to get to the config there.

And TrueNAS's shortcomings are somehow Linux's fault, just like every Windows third-party software issue is Windows' fault?

> * I want to install jj * Its docs say to use cargo-binstall

No, they don't ask that as the first choice - this is what they say in https://docs.jj-vcs.dev/latest/install-and-setup/:

    Installation¶
    Download pre-built binaries for a release¶
    There are pre-built binaries of the last released version of jj for Windows, Mac, or Linux (the "musl" version should work on all distributions).

    Cargo Binstall¶
    If you use cargo-binstall, ....
You could have just used the pre-built binaries as per their advice. But if you didn't, you should have at least bothered to click on that cargo-binstall link to see that it is an add-on which has its own instructions - it is not bundled with cargo by default. Unlike you, I did follow the steps and was able to install jj without issues:

    $ > curl -L --proto '=https' --tlsv1.2 -sSf https://raw.githubusercontent.com/cargo-bins/cargo-binstall/main/install-from-binstall-release.sh | bash
    + set -o pipefail
    + set -o pipefail
    + case "${BINSTALL_VERSION:-}" in
    ++ mktemp -d
    + cd /tmp/tmp.8IdPJtQBlE
    + '[' -z '' ']'
    ...
    + case ":$PATH:" in
    + '[' -n '' ']'
    $ > cargo binstall --strategies crate-meta-data jj-cli
    INFO the current QuickInstall statistics endpoint url="https://cargo-quickinstall-stats-server.fly.dev/record-install"

    Binstall would like to collect install statistics for the QuickInstall project
    to help inform which packages should be included in its index in the future.
    If you agree, please type 'yes'. If you disagree, telemetry will not be sent.
    ...
    INFO resolve: Resolving package: 'jj-cli'
    WARN resolve: When resolving jj-cli bin fake-bisector is not found. But since it requires features test-fakes, this bin is ignored.
    WARN resolve: When resolving jj-cli bin fake-diff-editor is not found. But since it requires features test-fakes, this bin is ignored.
    WARN resolve: When resolving jj-cli bin fake-echo is not found. But since it requires features test-fakes, this bin is ignored.
    WARN resolve: When resolving jj-cli bin fake-editor is not found. But since it requires features test-fakes, this bin is ignored.
    WARN resolve: When resolving jj-cli bin fake-formatter is not found. But since it requires features test-fakes, this bin is ignored.
    WARN The package jj-cli v0.39.0 (x86_64-unknown-linux-musl) has been downloaded from github.com
    INFO This will install the following binaries:
    INFO   - jj => /home/xxxxx/.cargo/bin/jj
    Do you wish to continue? [yes]/no yes
    INFO Installing binaries...
    INFO Done in 7.549505679s

    $ > jj version
    jj 0.39.0-d9689cd9b51b4139d2842fcf6c30f65f4eed8cd1
    $ > 
Again: a) this is third-party, b) just because you don't know how to follow the instructions doesn't make the OS bad. Hell, it doesn't even make cargo-binstall or jj look bad. By now, you should see that years of experience != knowing how to use things.

Having said all that, none of the stuff you mentioned even remotely resembles the workflow of an average user who just uses their computer for listening to music and browsing the internet, with some occasional document editing thrown in. Despite its warts and shortcomings, Linux does a much better job today than it used to.


> "Ubuntu themselves tell you not to do it" - you do see it right? Let us see how you forgive Windows for breaking things by ignoring Microsoft's advice and blame them anyway when it breaks.

Not giving a supported upgrade path between version N and N+1 of your operating system is unacceptable, user hostile, and not something a home user could deal with. "Install from scratch, wipe all your files, and set everything up again" is not OK. You can upgrade Windows from 1.0 through 11 without Microsoft saying "nah, this is impossible": https://www.youtube.com/watch?v=cwXX5FQEl88

> No, they don't ask that as the first choice - this is what they say in https://docs.jj-vcs.dev/latest/install-and-setup/:

"the binaries" are a tarball whose instructions refer back to the previous document, whose "Install > Linux" section starts "from source" and says "go obtain Rust > 1.88", so all of the previous problems still apply.


> "the binaries" are a tarball whose instructions refer back to the previous document, whose "Install > Linux" section starts "from source" and says "go obtain Rust > 1.88", so all of the previous problems still apply.

Again with the assertions without checking things. This is the path of "the binaries":

https://github.com/jj-vcs/jj/releases/tag/v0.39.0

I downloaded the file myself and extracted to see:

    $ > ls -l jj-v0.39.0-x86_64-unknown-linux-musl.tar.gz 
    -rw-r--r--. 1 xxxx xxxx 10373711 Mar xx xx:xx jj-v0.39.0-x86_64-unknown-linux-musl.tar.gz
    $ > tar xzvf jj-v0.39.0-x86_64-unknown-linux-musl.tar.gz 
    ./
    ./README.md
    ./LICENSE
    ./jj
    $ > ls -l jj
    -rwxr-xr-x. 1 xxxx xxxx 27122184 Mar  5 02:33 jj
    $ > file jj
    jj: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), static-pie linked, BuildID[sha1]=70d48428bc2100069e6813aff97e3dce8d2bb4a0, not stripped
    $ > ./jj version
    jj 0.39.0-d9689cd9b51b4139d2842fcf6c30f65f4eed8cd1
    $ > 
It is overconfident low-skill users like you that bring a bad name to Linux.


From your link, at the very top: "See the installation instructions to get started".

Not "figure out how to extract a tarball, find somewhere unspecified on your path to put things blah blah" but "to get started go read this doc whose first step is to install rust, which your package manager isn't capable of".

This is a fairly standard Linux experience, not one reserved for developer tools.

On Windows, if you're not going through an app store you get an EXE or MSI installer that you double click and it does everything else necessary. Every time.
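To be fair to both sides: the tarball route is only a couple of commands once you already know the convention. A sketch, using a stand-in tarball so it runs anywhere (the real one comes from the jj releases page, and `~/.local/bin` being on your PATH is an assumption, not a given on every distro):

```shell
#!/bin/sh
set -e

# Build a stand-in for the release tarball (the real one would be
# downloaded from the jj releases page).
workdir=$(mktemp -d)
cd "$workdir"
mkdir payload
printf '#!/bin/sh\necho "jj 0.39.0 (stand-in)"\n' > payload/jj
chmod +x payload/jj
tar czf jj-release.tar.gz -C payload jj

# The steps the instructions leave unspecified boil down to:
mkdir -p "$HOME/.local/bin"                  # conventional per-user bin dir
tar xzf jj-release.tar.gz -C "$HOME/.local/bin"
"$HOME/.local/bin/jj"                        # with ~/.local/bin on PATH, just `jj`
```

Which rather proves the point being argued here: trivial if you already know the `~/.local/bin` convention, opaque if you don't.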


Yeah. Maybe just stop using Linux. You'll never be happy with it anyway. Most its-never-my-fault people aren't.


And this is why Linux desktop remains a ~1% marketshare OS, despite all of the vocal complaints about the corporate enshittification of Windows. Countless people say they're going to switch out of frustration, and then quickly meet reality and understand how good they actually have it with Windows when they try Linux, not at all helped by encountering the snobby community who will deride anyone for not knowing everything they know. The Linux ecosystem very much assumes you already have the knowledge of having always used Linux. For somebody who just started using it, "following the install instructions at the top of the page" is a perfectly reasonable thing to be doing. It is not the user's fault if those instructions are bad and you could totally get it working more easily if only you already knew what you were doing.

I note you also dropped the line of argument about the OS updating, where you were chiding them, saying they did need to follow instructions in that case. Of course, the instructions in that case are indefensible - you cannot seriously suggest an OS is production-ready for the real world if the instructions are "this cannot be updated. Seriously, don't even try.".


> The Linux ecosystem very much assumes you already have the knowledge of having always used Linux.

Yes, because as per the poster, they are not a novice:

> For reference I've been using Linux since Red Hat 5.2 circa 2000. I cut my teeth debugging problems without internet access. I ran an LTSP lab at my high school. I remember the hell that was XF86Config (I was there, Gandalf, I was there 3000 years ago).

No one is expecting a novice to know how to run curl, untar and compile. This is not that situation by the very admission above.

> For somebody who just started using it, "following the install instructions at the top of the page" is a perfectly reasonable thing to be doing. It is not the user's fault if those instructions are bad and you could totally get it working more easily if only you already knew what you were doing.

Did you actually go to jj's github which the poster mentioned? This is what is literally the top of the Installation page:

    Installation and setup¶
    Installation¶
    Download pre-built binaries for a release¶
    There are pre-built binaries of the last released version of jj for Windows, Mac, or Linux (the "musl" version should work on all distributions).
I demonstrated in this thread that if you download and untar the pre-built binary, it works perfectly. No curl command or compilation necessary. Again, I don't expect a novice to know this, but for someone proclaiming to have wrestled with XF86Config, this should be par for the course.

> I note you also dropped the line of argument about the OS updating, where you were chiding them, saying they did need to follow instructions in that case. Of course, the instructions in that case are indefensible - you cannot seriously suggest an OS is production-ready for the real world if the instructions are "this cannot be updated. Seriously, don't even try.".

I admit that I was shallow on this point. I researched further, and the Raspberry Pi situation isn't great when it comes to upgrades. Most people are using separate SD cards to host the OS and doing a hard upgrade. I apologise to @Arainach for ignoring this point without checking further.

Edit: I guess today was the day I couldn't ignore Linux bashing from an experienced user and got somewhat carried away. My tone could and should have been softer.


> You can upgrade Windows from 1.0 through 11 without Microsoft saying "nah, this is impossible"

Have you tried that lately? It was probably true for Windows 10, but not 11. There is no supported path to install 11 if you don't have the Microsoft-approved hardware with TPM etc, which would certainly include Raspberry Pis. Installing Windows 11 on non-Microsoft-approved hardware seems to require levels of jank at least as bad as anything I've seen in Linux. Advice is all over the place, usually involving full reinstalls, setting random registry keys, running Powershell scripts downloaded from a random Github repo as Admin, or something along those lines. And no telling which if any work at any particular time, since Microsoft is constantly fighting them apparently.


That's a lot of words to say "I didn't click on the linked video of someone doing it on physical hardware".


> Why? Why not connect [the Raspberry Pi] to the network you want so that it just connects to that going forward?

I'm not the guy who wrote that, but I had the same use-case myself. (Except that I happened to choose the correct networking stack so I didn't have a problem). I wanted to set up a Raspberry Pi in my parents' house that would run Tailscale so I could use it as an exit node. (With my parents' full knowledge and permission). I wanted to pre-configure it with their WiFi password so that when I showed up for Christmas, I didn't have to spend any time configuring the device, just plug it in and go have dinner. (Then they changed ISPs, got a new router with a new WiFi password, and I had to ask them to plug it into the wired network so I could connect to it remotely and change the WiFi password again, so I had to do that work twice. But thankfully, I didn't have to walk them through the steps, just say "Hey, please plug it into the router with an Ethernet cable until you get an email from me telling you I've reconfigured the WiFi".)


I think you are missing the point. Windows programs would install with "Next", "Next", "Next", "Finished". Just look at what you posted and compare the user experience.


Define normal. I would argue at least 75% of the US population has zero interest in learning how to install a new OS, let alone actually do so themselves.

I say this as a decades-long Linux user (who has tried to evangelize it many times).


Gamers are one of the few demographics still buying new Windows PCs, and there are now so many Discord servers and subreddits filled with people discussing which Linux distro to use.

Honestly, for your average home consumer, there isn't much need for a Windows PC nowadays.


I can't drive stick.

This doesn't mean if someone gave me a manual car I wouldn't try to learn.

If you're around a bunch of car people, then it's much easier to overestimate how many people will want to drive stick.


I would argue it's close to 99% of the population. Technical people like us usually live in a bubble.


No we don't... right guys?!



lmaoo


> has zero interest in learning

Well I can agree with that, but that's not the same thing as being incapable of doing it. Both of my parents could easily install Linux, it's infantilizing to argue that they can't fill out a user wizard and select a drive to wipe.


> and select a drive to wipe

You are vastly overestimating the percentage of the population that knows what a "drive" is. Not saying that's a good thing, but it's the reality.


You don't have to know. The Calamares installer annotates your partitions and explains what will happen in natural language. If you can order a pizza online, you can install Linux.


Yeah. If ordering a pizza also regularly involves entering BIOS setup to change boot device ordering, change SATA mode from RAID to AHCI and disable secure boot, depending on your distro.


> change SATA mode from RAID to AHCI

This is funny. I have an HP PC that has an option in the BIOS to "prepare for RAID" or some such. I wondered what that was, so I turned it on. I had Linux on it at the time, and nothing happened. I shrugged and just forgot about it.

Fast forward a few months later, when I gave this PC to my dad. He installed Windows on it, then started thinking the PC was somehow borked: "the installer sees the drive, installs, reboots, then it fails to boot". I was shocked, that PC worked perfectly.

Then I remembered about that setting, told him to untick the box in the BIOS, and he was off to the races.


Yeah, support the company that promised to help your government illegally mass surveil and mass kill people, because they support a use case slightly better than the non-mass-murdering option.


Both of them promised to help their government illegally mass surveil and mass kill people. One of them just didn't want it done to US citizens.

I'm not a US citizen, so both companies are the same, as far as I'm concerned.


You are absolutely correct that both are evil ... as are most corporations.

Still, I feel like "will commit illegal mass murder against their own citizens" is a significant enough degree more evil. I think lots of corporations will help their government murder citizens of other countries, but very few would go so far as to agree to murder their own (fellow) citizens ... just to get a juicy contract.


I see your viewpoint but, to me, "both will happily murder you but one is better because they won't murder ME!" isn't very compelling. Like, I get it, but also it changes nothing for me. They're both bad.


It's not about "won't murder me" it's about "won't murder their own tribe". Humans are very tribal creatures, and we have all sorts of built-in societal taboos about betraying our tribe.

We also have taboos against betraying/murdering/whatever people of other tribes, but those taboos are much weaker and get relaxed sometimes (eg. in war). My point is, it takes significantly more anti-social (ie. evil) behavior to betray your own tribe, in the deepest way possible, than it does to do horrible things to other tribes.

This is just as much true for Russians murdering Ukrainians as Ukrainians murdering Russians, or any other conflict group: almost all Russians would consider a Russian who helps kill Russians to be more evil than a Russian who kills Ukrainians (and vice versa).


Right, but I consider someone who'll murder exclusively other tribes to be infinitely closer to someone who'll murder their own tribe than to someone who won't murder anyone.


watching trump get elected twice; you can see why americanos have no problemos with mental backflips when choosing.

But you're still choosing evil when you could try local models


Will you send me an H100?


Are you doing something that actually demands it? Have you tried local models on either the mac or AMD395+?


I will be able to do something that demands it once I have it ;)


Most people who win the lottery are poor again within the decade.


Will you send me an AMD395+ or a new Mac that can handle the local models? That would probably be enough for me.



That's a gross exaggeration. But to your point, I could say the same for almost any product I use from Big Tech, every laptop company I buy my hardware from, etc. I'm sure the same applies to you. I can't fight every vendor all the time. For now I pick what works best for my use case.


>Feel like we’re back at the Adobe Dreamweaver release and everyone is claiming that web development jobs are dead

I truly believe so much of the anti-AI sentiment is the same as the Luddites.

They're often used as a meme now, but they were very real people, faced with a real and present risk to their livelihoods. They acted out of fear, but not just irrational fear.

AI is the same: it's unquestionably (to anyone evaluating it fairly) a huge boost to productivity ... and also, unquestionably, a threat to programmer jobs.

Maybe the OP is right about waiting, but to me whenever new tech is disrupting jobs, that seems like the best time to learn it. If you don't, it's not just FOMO as the author suggests ... it's failing to keep up on the skills that keep you employed.


> it's failing to keep up on the skills that keep you employed.

I judge "failing to keep up" by my ability to "catch up". Right now if I search for paid courses on AI-assisted coding, I get a royal bunch for anything between $3 and about $25. These are distilled and converging observations by people who have had more time playing around with these toys than me. Most are less than 10 hours (usually 3 to 5). I also find countless free ones on YouTube popping up every week that can catch me up to a decent bouquet of current practices in an hour or two. They all also more or less fall out of relevancy after a few months (e.g. I've recently deleted my numerous bookmarks on MCP).

Don't get me wrong, LLM-assisted coding is disruptive, but when practice becomes obsolete after a few months it's not really what's keeping you employed. If after you've spent much time and effort to live near that edge, the gap that truly separates you from me in any meaningful way can be covered in a few hours to catch up, you're not really leaving me behind.


The burden of proof lies with the one who makes grand claims. My counterargument in the face of your lack of evidence is: “Where are all the improvements to my daily life? Where are the disrupting geniuses who go to market 100x faster than their Luddite counterparts?”

To paraphrase another analogy that I enjoyed, it’s a bit like when 3d printing became a thing and hype con artists claimed that no one would buy anything anymore, you could just 3d print it.


You don’t need 100x productivity to be disruptive. In business a 10% gain can be quite enormous. My senior engineers are estimating 25-50% gains. That is a far cry from your 10,000% gain, but very real and meaningful.


The last study that came out on this showed that engineers were significantly overestimating their own productivity gains.

If a stat like that is not accurately measured, it's useless.


The study from last July or is there something new?


This is a completely different claim than the commenter made that I was responding to


I have found that maximising AI coding is a skill on its own. There is a lot of context switching. There is making sure agents are running in loops. Keeping the quality high is also important, as they often take shortcuts. And finally you need somewhat of an architectural vision to ensure agents don’t just work in a single file.

This is all very tiring and difficult. You can be significantly better than other people at this skill.


This is not an argument for its revolutionary utility. Balancing rocks on the beach is very tiring and difficult for some people, and you can be significantly better at it. Not really bringing anything to the immediate conversation with that insight.


  AI is the same: it's unquestionably (to anyone evaluating it fairly) a huge boost to productivity .
And yet, the only research that tries to evaluate this in a controlled, scientific way does not actually show this. Critics then say those studies aren’t valid because of X, Y or Z but don’t provide anything stronger than anecdotes in rebuttal.

It’s ridiculous double standard and poisons any reasonable discussion to assert something is a fact and anyone who disagrees is a hysterical Luddite based on no actual evidence.


On what basis are you assuming that Anthropic committed greater copyright theft than Meta, OpenAI, and Google (not to mention many lesser-known options)?


Legally speaking they were found to have by a court and the others weren’t


When did that happen? Did they admit guilt in the big settlement, or was there a different case?


I remember when my dev team included some people using Emacs, some using Eclipse (this was pre-VS Code), and some using IntelliJ.

Developers will always disagree on the best tool for X ... but we should all fear the Luddites who refuse to even try new tools, like AI. That personality type doesn't at all mesh with my idea of a "good programmer".


I will try anything reasonable. And have tried LLM tools for programming. But there's no way I would use it daily. It's too inefficient, too error prone, and will actively make me a worse programmer (as I will be writing less code and making fewer decisions. I will also understand less of the systems I'm building).

All the excellent developers around me are _not_ using AI except for very small, contained tasks.


Are you implying that someone who prefers Eclipse is more likely to be a good software engineer than someone who prefers Emacs? If so, that is so hilariously backwards that I can't even begin to understand the types of experiences that you must've had.

I am sure that you're objectively wrong if that is what you're saying.


I'm reading it as: those unwilling to try both and make an honest evaluation and instead have preconceived notions and bigotry tend to make bad programmers. That preferences are fine, but dogmatism should be avoided.


Nowadays most people try VSCode or JetBrains "by default" in school or at a first job. It's Emacs that's for explorers who actually try alternatives


I went to a James Gosling talk where he excoriated the Emacs users in his audience for clinging to outdated technology and not using a state-of-the-art IDE.

But the IDE he was hawking wasn't Eclipse. I think it was Sun Studio.


Flat out wrong. The most impressive engineers I've met in my career did not care for fancy tools with bells and whistles.


Sure, I bet they didn't outright dismiss them as useless to the entire field though! I'm sure they still understood the value those fancy tools provided to their peers.


Unless someone is trolling, it’s rare for people to deem it as “useless”. Most counterpoints have been about ethics and issues that surround LLM usage. Things like licensing, coding vs review time, correctness and maintainability of the generated code, etc… Unless you believe we’re in a software engineering utopia, I think it’s fair to call those out.


I think it’s fair to call those things out 100% of the time regardless of the LLM usage. Not sure what gave you the impression otherwise.


a sculptor or a painter are doing something inherently different from someone who describes the outcome and have it created by someone else.

in my world they are called product managers or product owners (scrum) but they are not programmers. prompting an LLM is producing a product but it is not programming.

i refuse to use AI because i want to remain a programmer, and not become a manager.


>You've been fooled into thinking that being victimized is a moral failure of the victim.

And you seem to have been fooled into thinking all victims are powerless.


I strongly disagree. There's always been two camps ... on everything!

Emacs vs. vi. Command-line editor vs. IDE. IntelliJ vs. VS Code. I could do like twenty more of these: dev teams have always split on technology choices.

But, all of those were rational separations. Emacs and vi, IntelliJ and VS Code ... they're all viable options, so they boil down to subjective preference. By definition, anything subjective will vary between different humans.

What makes AI different to me is the fear. Nobody decided not to use emacs because they were afraid it was going to take their job ... but a huge portion of the anti-AI crowd is motivated by irrational fear, related to that concern.


For the sake of argument let's assume we have a common goal: produce a software product that does its job and is maintainable (emphasis on the latter).

Now given that LLMs are known to not produce 100% correct code, you should review every single line. Now the production rate of LLMs is so high that it becomes very hard to really read and understand every line of the output. While at the same time you are gradually losing the ability to understand everything, because you stopped actively coding. And at the same time there are others in your team who aren't that diligent, adding more to the crufty code base.

What is this if not a recipe for disaster?


I think differences in the business determine whether the maintenance/understanding aspect is important. If developing an MVP for a pitch or testing markets, then any negatives aren't much of a consideration. If working in a mature competitive or highly regulated domain, then yeah, it's important.


A huge portion of the pro-AI crowd is motivated by irrational hype and delusion.

LLMs are not a tool like an editor or an IDE, they make up code in an unpredictable way; I can't see how anyone who enjoyed software development could like that.


Pretty much anyone who's not you will make code in an unpredictable way. I review other people's code and I go 'really, you decided to do it that way?' quite often, especially with coders with fewer years of experience than me.

That's kind of how this is starting to feel to me, like I'm turning more into a project manager that has to review code from a team of juniors, when it comes to A.I. Although those juniors are now starting to show that they're more capable than even I am, especially when it comes to speed.


Certainly those of us who maintain and administer it don't like that


what about the fear is irrational?


It doesn’t help that the CEOs of these companies are hyping up the fear. It’s no wonder people are afraid when the people making the products are spouting prophecies of doom.

