Hacker News | sho_hn's comments

Note it's only a freebie if you are on Windows/Mac, too. If you're on Linux, it works terribly under Wine, so you have to use the browser version, and then you need a $95/month subscription.

Yeah, I have a Windows gaming box, so I use it on that. I wouldn't use a 3D design tool in the browser anyway.

It's been improving rapidly. The upcoming (imminent) 1.1 brings a large number of modern UI affordances, such as on-canvas gizmos that at times are actually easier to use than e.g. the Fusion ones. I'm a heavy Fusion user, but for me FreeCAD is nearly there now, and the improvement over 1.0.x is massive.

There's a lot more to do, but my feeling is the project is taking UI/UX design much more seriously than it has in the past, with the ramp-up of an internal design-focused team etc. I get that feeling from reading the weekly progress updates and MR discussions.

I'm very optimistic for the future of FreeCAD personally. I think it's a great time to contribute if you are interested in making UI/UX better as well because there's much higher interest in that kind of work now. I think it's close to having its own Blender/KiCAD moment.


> I never understood the Wayland push and I still don’t.

What happened is basically this:

- X11 was fairly complex, carried a lot of dead-weight code, and had a whole bunch of fundamental issues that were building up to warrant an "X12", i.e. breaking the core protocols to fix them.

- Alongside this, at some point in the relatively distant past, the XFree86 implementation (the most used one at the time, which later begat Xorg) was massively fat and did a huge amount of stuff, including driver-level work - think "PCI in userspace".

- Over the years, more and more of the work moved out of the X server and into e.g. the Linux kernel. Drivers, mode setting, input event handling, higher-level input event handling (libinput). Xorg also got a bit cleaner and modularized, making some of the remaining hard bits available in library form (e.g. xkb).

- With almost everything you need for a windowing system now cleanly factored out and no longer in X, trying out a wholly new windowing system suddenly became a tractable problem. This enabled the original Wayland author to attempt it and submit it for others to appraise.

- A lot of the active X and GUI developers liked what they saw, and it managed to catch on.

- A lot of the people involved were in fact not naive, and did not ignore the classic "should you iterate or start over" conundrum. Wayland had a fairly strong backward compat story very early on in the form of Xwayland, which was created almost immediately, and convinced a lot of people.

In the end this is a very classic software engineering story. Would it have been possible to iterate X11 toward what Wayland is now in-place? Maybe. Not sure. Would the end result look a lot like Wayland today? Probably, the Wayland design is still quite good and modern.

It's a lot like Python 2.x vs. 3.x in the end.


> Why is Wayland so complicated?

It's not particularly complicated, and certainly a lot simpler and cleaner than X11 in almost every way.

The reality of the situation is that there's sort of a hateful half-knowledge mob dynamic around the topic, where people run into a bug, do some online search, run into the existing mob, copy over some of its "arguments" and the whole thing keeps rolling and snowballing.

Sometimes this is innocent, like OP discovering that UIs are actually non-trivial and that there are different types of windows for different things (as in really any production-grade windowing system). So they share their new-found knowledge in the form of a list. Then the mob comes along and goes "look at this! they have a list of things, and it's probably too long!" and in the next discussion it's "Did you know that Wayland has a LONG LIST OF THINGS?!", and so on and so forth.

It's like politics, and it's cyclic. One day the narrative will shift again.

The mob will not believe me either, for that matter, but FWIW, I've worked on the Linux desktop for over 20 years, and as the lead developer of KDE Plasma's taskbar and a semi-regular contributor to its window manager, I'm exposed to the complexity of these systems in a way that only relatively few people in the world are. And I'd rather keep the Wayland code than the X11 one, which I wrote a lot of.


> The reality of the situation is that there's sort of a hateful half-knowledge mob dynamic around the topic, where people run into a bug, do some online search, run into the existing mob, copy over some of its "arguments" and the whole thing keeps rolling and snowballing.

Most other Linux projects "just work" without any drama (usually those not originating at Red Hat?). Makes you wonder why Wayland is so special (or maybe it is something special about the Red Hat company culture?).

Sometimes a badly designed system is simply a badly designed system, and the main forces behind Wayland seem to be exceptionally tone deaf and defensive when it comes to feedback both from users and application developers (e.g. there seems to be a general "we know better what's good for you" attitude).


Wayland introduced unwanted and unnecessary fermentation that even Xorg users will have to suffer from.

Well, bloated just got a whole new meaning! Hopefully the gases are temporary only!

Feels like the world religions that doubled down on reincarnation/rebirth/cyclic narratives were, literally, ahead of their time.

Cherish it if the Great RNG In The Sky gave your simulation cycle a good seed.


Scott Aaronson wrote a bit about the following thought [0]. If copying a brain and simulating reality à la The Matrix is possible at all, then if you get your brain copied you live one biological life, but your copies have an unbounded number of existences (millions? billions? trillions?)

So, if copying brains is possible, and you don't know which version of you you are, your odds of living your first, biological life might be, say, 1 in 1 trillion.

Which is to say, if copying brains is possible, you are likely to be running in a simulation already.
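The arithmetic behind that claim can be sketched in a few lines (purely illustrative; the copy count is a made-up assumption):

```python
# Simulation-argument odds as described above: if there are N simulated
# instances of you plus 1 biological original, and you can't tell which
# instance you are, your chance of being the original is 1 / (1 + N).

def odds_of_being_biological(simulated_instances: int) -> float:
    """Probability of being the one biological instance among all instances."""
    return 1 / (1 + simulated_instances)

# With a trillion simulated copies, the odds are vanishingly small.
print(odds_of_being_biological(10**12))
```

With zero copies the odds are 1, and they fall as 1/(N+1) from there, which is all the argument really needs.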

[0] There are multiple links and I can't find where I first read it, but I found this one from 2024, https://scottaaronson.blog/?p=7774 and uhh.. turns out the argument isn't from him personally (and he doesn't even believe in it), and is best presented here https://simulation-argument.com/ (though it's presented very differently so idk)


I often refer to it as RNGesus.

There are actually cases where this has happened (e.g. re-releases using ScummVM under the hood, id basing products on community source ports, etc.), but it's not always that simple.

Chris Sawyer as creator for example is known to have particular opinions on this as I recall, and if you e.g. look over to film making there's also a hot debate over preserving original artistic intent and original creations over later remasters. OpenTTD is more than a maintenance upgrade, it's a continuation and a different game.

Honestly I think what Atari has done here is probably just fine. Monetizing the original assets is well within its rights both legally and morally (especially considering e.g. royalties to Chris), OpenTTD remains available everywhere, they're monetarily supporting OpenTTD, and gamers will find it.

Note that once a commercial company decides to ship a FOSS project, they also are much more invested in potentially controlling its direction to different ends. This setup keeps OpenTTD community-run and independent, free to make decisions independent of a commercial agenda. This also feels worth protecting.


Another example is Heroes III with VCMI and HotA and other similar things. Some are attempts to do a bug-for-bug "vanilla" recreation, others expand on it in defined ways, still others add new features "in the spirit" of the original.

When you get to the last, you can definitely see how the original creator/artists could disagree.


Your stance is aggressive and provocative, but no less so than the challenge AI poses to software developers in general. I think what you say should be seriously entertained.

And as someone who loves Python and has written a lot of it, I tend to agree. It's increasingly clear that the way to be productive with AI coding, and the way to make it reliable, is to make sure AI works within strong guardrails, with test suites etc. that combat and corral the inherent indeterminism and problems like prompt injection as much as possible.

Getting help from the language - having the static tooling be as strict and uncompromising as possible, and delegating the pain of dealing with it to AI - seems the right way.
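A hypothetical illustration of the kind of guardrail meant here (the function and names are made up, not from the comment): exhaustive type annotations let a checker such as `mypy --strict` reject a whole class of AI-generated mistakes, like passing a string price, before any test even runs.

```python
# A fully annotated function: under `mypy --strict`, a generated call
# site that passes the wrong type (e.g. a str price) fails the static
# check before the code ever runs, shrinking the space of silent bugs.
from decimal import Decimal

def apply_discount(price: Decimal, percent: int) -> Decimal:
    """Return price reduced by `percent` percent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be in 0..100")
    return price * (Decimal(100 - percent) / Decimal(100))

print(apply_discount(Decimal("200.00"), 25))  # prints 150.0000
```

The runtime range check catches what the type system can't express; everything else is pushed to the static layer, which is exactly the division of labor the comment argues for.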


On the other hand, AI being very good at everything while select humans may only be very good at some things is likely also a quality we want to retain (or, well, achieve).

Honestly, a Pi 5 is powerful enough to run a full desktop very comfortably. It's not a low-powered computer anymore by any means.

Indeed, after adding an NVMe SSD and installing Ubuntu on the drive, it's my daily driver.

My anecdotal experience is rather different.

I write a lot of C++ and QML code. Codex 5.3, only released in Feb, is the first model I've used that regularly generates code passing my 25-year expert smell test, and it has turned generative coding from a time sink/nuisance into a tool I can somewhat rely on not to set me back.

Claude still wasn't quite there at the time, but I haven't tried 4.6 yet.

QML is a declarative-first markup language that is a superset of the JavaScript syntax. It's niche and doesn't have a giant amount of training data in the corpus. Codex 5.3 is the first model that doesn't badly botch it or prefer to write reams of procedural JS embeds (yes, even after steering). Much reduced is also the tendency to go overboard spamming everything with clouds of helper functions/methods in both C++ and QML. It knows when to stop, so to speak, and is either trained or able to reason toward a more idiomatic ideal, with far less explicit instruction / AGENTS.md wrangling.

It's a huge difference. It might be the result of very specific optimization, or perhaps simultaneous advancements in the harness play a bigger role, but in my book, my neck of the woods (or place on the long tail) only really came online in 2026 as far as LLMs are concerned.


As a Qt C++ and QML developer myself[1], Opus 4.6 thinking is much better than any other model I've tested (Codex 5.3/GPT 5.4/Gemini 3.1 Pro).

[1] https://rubymamistvalove.com/block-editor

