>They haven't been following international law since 1979.
History doesn't start in 1979. Why not go back 26 years further to when the US and UK overthrew Iran's democratically elected leader? International law doesn't allow that either.
Not sure about the stats, but it does feel like there are fewer. From what I know, encryption and sending filesystem state have had bugs in ZFS.
And on btrfs, anything above raid1 (raid5, raid6, etc.) has had very serious bugs. I read an opinion somewhere (don't remember where) that raid5/6 on btrfs cannot work reliably because the on-disk format is simply a bad fit for the case. I guess that's why raid1c3/c4 is being promoted and worked on now?
I love ZFS, but have been corrected a couple of times when I said it was bomb proof. Can't remember the details, but it has served me faithfully for 10 years or so? Plus the bugs were pretty niche if I recall correctly.
Edit: found some comments below:
ZFS on Linux has had many bugs over the years, notably with ZFS-native encryption and especially sending/receiving encrypted volumes. Another issue is that using swap on ZFS is still guaranteed to hang the kernel in low memory scenarios, because ZFS needs to allocate memory to write to swap.
There will always be a legal shield for those in power, i.e. the presidency, but it will have a bigger chilling effect on freedom of speech for those the conservatives wish to silence.
Platforms will be required to police any dissenting voices and shut down any content that the “current” administration deems unsuitable. They have levers they can pull to keep media companies in line.
The issue isn’t what plebs are posting but that the media companies ALLOW these “undesirable” posts to exist (thanks to Section 230). Without that protection, they will either have to moderate aggressively or disable open comments and communication, which is a form of censorship. Exactly what the conservatives want.
Runtime-wise, we use more garbage-collected languages now. Java and the like are great and can be very high-performance; the real cost, though, is memory. GC languages need much more memory for bookkeeping, but they also need much more memory to be performant. Realistically, a Java app needs 10x the memory of a similar C++ application to get good performance. That's because GC languages only perform well when most of their heap is unused.
As a side note, that's also how GC languages can perform so well in benchmarks. If you run benchmarks that generate huge amounts of garbage or consistently run the heap at 90%+ usage, that's when you'll see an order-of-magnitude slowdown.
Oh, also containers: there are lots more containerized applications on modern Linux desktops.
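The garbage-churn effect described above can be sketched with a toy benchmark. This is an illustration, not a measurement from the thread: the class name, 1 KiB buffer size, and iteration count are all arbitrary, and how many collections actually fire depends entirely on the JVM, collector, and heap size (try running with a tight `-Xmx` vs. a generous one).

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Sketch: allocate lots of short-lived objects and count how many GC
// cycles that churn triggers. With little heap headroom (small -Xmx)
// the collector runs constantly; with lots of headroom it barely runs.
public class GcChurn {
    static volatile long sink; // keeps allocations observable to the JIT

    static long totalGcCount() {
        long n = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long c = gc.getCollectionCount();
            if (c > 0) n += c; // getCollectionCount() may return -1 if unsupported
        }
        return n;
    }

    static long churn(int iterations) {
        long before = totalGcCount();
        for (int i = 0; i < iterations; i++) {
            byte[] garbage = new byte[1024]; // short-lived: dead by the next iteration
            sink += garbage.length;
        }
        return totalGcCount() - before; // collections triggered during the churn
    }

    public static void main(String[] args) {
        System.out.println("collections during churn: " + churn(1_000_000));
    }
}
```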
Programs that manually allocate and deallocate memory to store "huge amounts of garbage" can easily incur more memory management overhead than programs using garbage collection to do the same.
If a Java application requires an order of magnitude more memory than a similar C++ application, it's probably only superficially similar, and not only "because GC".
Well, no. In a manual-memory-management language, if you allocate one object, destroy it, and then allocate another, you've used one object's worth of memory.
In Java, that's two objects, and one will be collected later.
What this means is that a C++ application running at 90% memory usage does about the same amount of work per allocation/deallocation as it would at 10% usage. The same IS NOT true for GC languages.
At 90% usage, each allocation and deallocation will be much more work, and will trigger collections and compactions.
It is absolutely true that GC languages can make allocations cheaper than manual languages. But this is only true at low heap usage; the closer you get to 100% heap usage, the less true it becomes. At 90% heap usage, you're looking at an order-of-magnitude slowdown. And that's when you get those crazy statistics like 50% of program runtime being spent in GC.
So, GC languages really run best with more memory. Which is why both C# and Java pre-allocate much more memory than they need.
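That pre-allocation is visible from inside the JVM. A minimal sketch, assuming a standard HotSpot runtime (the class name and the suggested flag values are illustrative, not from the thread):

```java
// Sketch: the JVM commits heap up front. Run with e.g.
//   java -Xms512m -Xmx2g HeapHeadroom
// and totalMemory() (currently committed heap) starts near -Xms, while
// maxMemory() reflects -Xmx -- both typically far above what's in use.
// Exact values depend on the JVM, its defaults, and the collector.
public class HeapHeadroom {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long usedBytes = rt.totalMemory() - rt.freeMemory();
        System.out.println("committed heap: " + rt.totalMemory() / (1024 * 1024) + " MiB");
        System.out.println("max heap:       " + rt.maxMemory() / (1024 * 1024) + " MiB");
        System.out.println("actually used:  " + usedBytes / (1024 * 1024) + " MiB");
    }
}
```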
And keep in mind I'm only referring to the allocation itself, not the allocation strategy. GC languages also have very poor allocation strategies, particularly Java, where everything is boxed and stored separately.
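The boxing point can be made concrete with back-of-the-envelope numbers. The constants below assume a typical 64-bit HotSpot layout with compressed references (a `java.lang.Integer` occupying 16 bytes: 12-byte header plus the 4-byte value, padded; 4 bytes per reference) — these are common defaults, not guarantees, and the class and method names are hypothetical:

```java
// Sketch: rough per-element memory for boxed vs. primitive ints.
// An int[] stores values inline; an Integer[] (or ArrayList<Integer>)
// stores a reference per slot, each pointing at a separate heap object.
public class BoxingCost {
    static final long INTEGER_OBJECT = 16; // 12-byte header + 4-byte int, padded (assumed)
    static final long REFERENCE = 4;       // compressed oop (assumed)

    static long primitiveBytes(int n) {
        return 4L * n;                     // 4 bytes per element, stored inline
    }

    static long boxedBytes(int n) {
        return (REFERENCE + INTEGER_OBJECT) * n; // reference + separate Integer object
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        System.out.println("int[]     ~" + primitiveBytes(n) + " bytes");
        System.out.println("Integer[] ~" + boxedBytes(n) + " bytes");
    }
}
```

On this estimate a million ints cost roughly 5x more boxed than primitive, before even counting the extra allocation and GC traffic, and with none of the cache locality of a contiguous `int[]`.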
Given that efficiency is one of Linux's most touted advantages, what in the world is Ubuntu's PR department thinking? Ubuntu isn't providing any more functionality than when its memory requirement was 4GB. What is hogging all that extra RAM?
> what in the world is Ubuntu's PR department thinking?
The same as any other corporate PR department: "At least now when people run it with N GB of RAM, we can just point to the system requirements and say 'This is what we support' rather than end up in a back-and-forth"
If you expect them to have any sort of long-term outlook on "Let's be careful with how developers view our organization", I think you're about a decade too late for Canonical.
No official reason has been given, so the tech press is basically speculating (if someone finds a source that does a teardown, please share; I can't seem to locate one). My favorite piece of speculation is that it reflects an anticipated modern workload: using the OS as a vector to launch a web browser and open multiple tabs in it, which is simply going to be a memory hog for most Ubuntu users.
Besides the correct answer that Canonical sucks, I would argue that “efficiency” is not a selling point to get someone to use a desktop operating system.
Mainstream users and business organizations don’t really understand that concept and would prefer to see how the operating system enables their use cases and workflows.
Apple under-speccing their machines like they’ve been doing since the dawn of time is not some kind of indicator of any trends. You can buy $350 PC laptops that come with 12GB of RAM (example: https://www.staples.com/asus-vivobook-x1404-14-laptop-intel-...)
RAM shortages will be quite temporary. Making predictions based on individual component shortages has never been a winning strategy in the history of the industry. Next you’ll tell me that graphics cards will be impossible to get because of blockchain.
I think the OP is asking why Apple is enclosing macs in a walled garden when that concept is generally associated with iPhones, not general-purpose computers.
Yup, just checked. Right now I have "com.adobe.acc.installer.v2" running as root on two threads. The other three background processes (at least those with Adobe in the name) run under my user. The whole stack uses about 75MB of RAM at all times. You kill the processes, they restart. You delete the files from launchd, they come back the next time you open Adobe software.
> But when all of the Epstein thing happened, I genuinely thought that US media which moved the headlines faster than I can think about the issues for, would actually slow down given the severity and we as a society could think about it.
Not to worry since the public face of the Epstein files coverup is back in the news.
It's the combination of both factors that counts. Even if Google Play has a lower malware rate, a user is still far more likely to try to install apps through Google Play given the sheer size of its catalog and its prominent, default placement on people's devices.