What about a reflex camera with a CCD? It’s mechanical in that it moves the mirror out of the way to expose the digital sensor. I’d call that a digital camera because the output is digital.
ETA: AAMOF, we called them Digital Cameras when they first arrived.
It appears that they put an actual file system in front of S3 (AWS EFS, basically) and then perform transparent syncing. The blog post discusses a lot of caveats (consistency, for example) and object naming (inconsistencies are emitted as events to customers).
Having used S3 for such a long time, I'm really a fan of the design. It's a good compromise, and kudos to whoever managed to push it through.
Because people will use it as a filesystem regardless of the original intent, since it's a very convenient abstraction. So they might as well do it in an optimal and supported way, I guess?
They found a way to make money on it by putting a cache in front of it. Less load for them, better performance for you. Maybe you save money, maybe you don't.
People (and by people I mean architects and lead devs at big-account orgs ($$$)) have been using S3 as a filesystem as one of the backbones of their usually wacky, mega-complex projects.
So there has always been pressure on AWS to make it work like that. I suspect the volume of support tickets AWS receives ("My S3-backed project is slow / fails sometimes / runs into AWS limits, like the max number of buckets per account") and the "Why don't you..." questions in the design phase, where AWS people are often in the room, served as enough long-applied pressure to overcome the technical limitations of S3.
I'm not a fan of this type of "let's put a fresh coat of paint on top of it and pretend it's something it fundamentally is not" abstraction. But I suspect this is a case of social pressure turbocharged by $$$.
I think it opens them up to a huge customer base of less technically apt people who just downloaded some random "S3asYourFS.exe" program, but it also opens them up to needing to support that functionality and field support calls from those same people. I don't know if that business decision makes sense (AWS already lacks the customer-service infrastructure to deal even with professional clients), but the idea of getting everyone and their brother paying monthly fees to AWS is likely too tempting a fruit to pass up.
Because without significant engineering effort (see the blog post), the mismatch between object store semantics and file semantics means you will probably Have A Bad Time. In much earlier eras of S3, there were also implementation specifics like throughput limits based on key prefixes (that one vanished circa 2016) that made it even worse to use for hierarchical directory shapes.
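To make the semantics mismatch concrete, here's a toy sketch (plain Python, not real S3/boto3 code): object stores have a flat key namespace, so a "directory rename" that is a single atomic metadata operation on a real filesystem turns into a copy-plus-delete for every object under the prefix.

```python
# Toy model of a flat object store: keys are plain strings, and
# "directories" are just shared key prefixes.

def rename_prefix(store: dict, old: str, new: str) -> int:
    """Simulate 'mv old/ new/' on a flat key/value object store.

    Each object must be individually copied to the new key and
    deleted from the old one -- cost is O(objects), not O(1).
    """
    moved = 0
    for key in [k for k in store if k.startswith(old)]:
        store[new + key[len(old):]] = store.pop(key)  # copy, then delete
        moved += 1
    return moved

store = {"logs/a.txt": b"1", "logs/b.txt": b"2", "img/c.png": b"3"}
n = rename_prefix(store, "logs/", "archive/")
# n == 2: every object under logs/ had to be rewritten
```

A POSIX filesystem would do the same rename by updating one directory entry, which is why naive "treat S3 as a disk" tooling falls over on large trees.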
It's not a subsidy. It's predatory pricing, and it should be illegal: I offer you a service at a loss to remove competition, then increase prices once you're stuck with it.
1. Phone storage wasn't priced at an absurd premium. Sometimes the option with just higher storage can cost $300 more.
2. High speed Internet was available cheaply everywhere.
If I'm in a town in the middle of nowhere, I'm not going to use my expensive data plan (because in the US mobile data is extremely expensive compared to the EU) to download a 500MB app, which will take 5 minutes because the Internet is slow, just to pay for parking.
For some reason, app sizes seem to have exploded, especially on iPhones. Maybe it's because cheap Androids are still widely used, but I was surprised to find how many 50MB Android apps were 200MB iPhone apps.
When it took ages to download the same app to my work iPhone that I was downloading to my regular Android, I thought something was wrong with the iPhone at first, but it was literally using five times the data to download what seemed to be an identical app.
There's something to be said for downloading a 50MiB app to save yourself from downloading 1MiB every time you pull up the website, but with modern app sizes, things are getting ridiculous.
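A quick back-of-envelope using the rough figures above (50MiB one-time app download vs. ~1MiB of web assets per visit; both numbers are just the comment's examples, not measurements):

```python
# Break-even point: visits after which a one-time app download
# costs less total data than re-fetching the website each time.
app_mib = 50        # one-time app download (example figure)
per_visit_mib = 1   # web assets fetched per visit (example figure)

break_even_visits = app_mib / per_visit_mib  # 50 visits

# At a 500 MiB app and the same 1 MiB page, break-even
# jumps to 500 visits -- hence "getting ridiculous".
```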
How many times are you redeploying your homelab stuff? I also run LXC containers and have thought about automating deployments, but in my one year running Proxmox I have only deployed each container once. If anything breaks, I have PBS running to recover a previous backup. I don't see myself repeating this process more than once or twice.
It's less about how many times and more about being used to automating everything: spend less time doing boring things and more time doing fun stuff.
For example, when I first deployed a Jellyfin LXC container with GPU passthrough and whatnot: the container itself hosts nothing; Proxmox mounts the NFS shares from TrueNAS into it, and it uses a local NVMe for transcoding.
And yet, novice me picked a small storage size, 5GB or something, because I only run Debian netinst, which uses 200MB of RAM and 0.00001% CPU. Debian netinst itself requires, what, 1-2GB of disk?
Back to your question: I had to redeploy another Jellyfin container because it ran out of disk space, with:
1. the GPU passthrough
2. all the NFS shares mounted once the LXC is up
3. the transcode folder
4. an rsync from TrueNAS restoring the metadata, with all the movies and whatnot
Had I planned to do it? Nope.
One command later, I have a brand-new Jellyfin LXC with much bigger storage, working like nothing happened, fully automated from my PC via Ansible.
Not sure I understand the problem. Are people just letting the AI do anything? I use Claude Code and it asks for permission to run commands, edit files, etc. No need for a sandbox.
Yes, people very much are, and that's exactly the problem! People run `claude --dangerously-skip-permissions` and `codex --yolo` all the time. And I think one of the appeals of opencode (besides cross-model, which is huge) is that the permissions are looser by default. These options are presumably intended for VM or container environments, but people are running them outside. And of course it works fine the first 100 times people do it, which drives them to take bigger and bigger risks.
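The permission prompt boils down to gating what the model can execute. A toy illustration (this is not how Claude Code or codex are actually implemented, just the general shape of allowlist-style gating that the skip-permissions flags bypass):

```python
# Toy allowlist gate: only let a model-suggested shell command run
# if its first word is on an approved list. The command strings and
# allowlist contents here are made-up examples.
ALLOWED = {"ls", "cat", "git", "grep"}

def gate(command: str) -> bool:
    """Return True only if the command's first word is allowlisted."""
    words = command.split()
    return bool(words) and words[0] in ALLOWED

ok = gate("git status")          # allowlisted -> would run
blocked = gate("rm -rf /")       # not allowlisted -> needs a human
# Skip-permissions / yolo modes effectively replace gate() with
# "return True", which is fine in a throwaway VM and scary on a laptop.
```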