It can be any number of things. From spending an hour or two just writing requirements, to giving an example of existing curated code from another project you wrote and would like to emulate, to rewriting existing apps in a different language/architecture (sort of like translating), to serving as a QA agent or reviewer for the LLM agent, or vice versa.
I kinda like how you can just use it for anything you like. I have a bazillion personal projects I can now get help with, polish up, simplify, or build UIs for, and it's nice. Anything from reverse engineering, to data extraction, to playing with FPGAs, is just so much less tedious, and I can focus on the fun parts.
Czechia has a very dense public transport network, and if you want to walk, a very nice network of marked tourist tracks. Not that different from 1989, except that an explicit cycling network has been marked since then.
The NY Times doesn't even know what NATO is while writing a full-page article about it, so yeah, this kind of ignorance about where the USA has its hands and guns is not surprising from some random on the internet. ;)
Primarily, countries should prosecute their own criminals. That's the whole sovereignty thing. If you don't, and these are international criminals, your country as a whole is what we call a state supporter of terrorism, or some such, if those international crimes have political goals and are directed against other countries and their people as a whole (and don't meet the high bar of self-defense). If the crimes are done by those in power, it's just state terrorism.
I'll rephrase my previous post for you, to make it clearer:
Lack of prosecution of high-level war criminals makes your country a state supporter of terrorism. (the claim in the post)
Because that's what US war criminal leaders do. They terrorize an entire nation by threatening the population's survival via the destruction of all their power plants, which I assume includes nuclear fallout from their nuclear power plant.
Russia needs its energy sources for its own war, too. Energy getting more expensive globally, while UA reduces the supply by targeting RU production, is a double-edged sword. RU is now putting bans on the export of some fuels, etc. Whether the EU turning into a defense alliance with a sole focus on RU, while taking in all the lessons from the UA war (without having to deal with US pressure to buy its expensive state-of-the-art military HW, which may not be all that effective in a potential drone war), is great for Russia is also questionable.
Based on the FIDO2 spec, I used it to write a reasonably compliant security token implementation that runs on top of the Linux USB gadget subsystem (except for attestation, because that's completely useless anyway). It also extracted tests from a messy proprietary Electron-based compliance test suite that the FIDO Alliance uses and rewrote them in clean and much more understandable C, without the shitton of dependencies that Electron mess pulls in. Without any dependencies but openssl libcrypto, for that matter.
In like 4 hours. (And most of that was me copy-pasting things around to feed it reasonable chunks of information, feature by feature.)
It also wrote a real-time passive DTLS-SRTP decryptor in C in like 1 hour total, based on just the DTLS-SRTP RFC and sample code showing how I write suckless things in C.
I mean, people can believe whatever they want. But I believe LLMs can write reasonably fine C.
I believe that coding LLMs are particularly nice for people who are into C and suckless.
LLMs are great at C, probably because C is historically the most popular language in the world, by far. It has only declined slightly very recently. But there's an insane amount of code written in it.
Docker is about containerization/sandboxing; you don't need to duplicate the OS. You can run your app as the init process for the sandbox, with nothing else running in the background.
That makes docker entirely useless if you use it just for sandboxing. Systemd services can do all of that just fine, without all the complexity of docker.
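To make that concrete, a sketch of the kind of sandboxing I mean (the app name and path are hypothetical; the directives are real systemd options documented in systemd.exec(5)):

```ini
# /etc/systemd/system/myapp.service (hypothetical app)
[Unit]
Description=Sandboxed app, no docker needed

[Service]
ExecStart=/usr/local/bin/myapp
DynamicUser=yes          ; ephemeral UID, no account to manage
PrivateTmp=yes           ; private /tmp
ProtectSystem=strict     ; read-only filesystem except allowed paths
ProtectHome=yes
PrivateDevices=yes
NoNewPrivileges=yes
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
MemoryMax=256M           ; cgroup resource limit

[Install]
WantedBy=multi-user.target
```

Namespaces, cgroup limits, and filesystem isolation, with the binary still running straight off the host OS and sharing its libraries.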
I think that on Linux docker is not nearly as resource intensive as on Mac. Not sure of the actual (for example) memory pressure due to things like not sharing shared libs between processes, granted.
Any node server app will be ~50-100 MiB (because that's roughly the size of the node binary + shared deps + some runtime state for your app). If you fail to optimize things correctly, and you're storing and working with lots of data in the node process itself instead of it serving as a thin intermediary between an http service and a database/other backend services, you may get spikes of memory use well above that, but that should be avoided in any case, for multiple reasons.
And most of that 50-100 MiB will be shared if you run multiple node services on the same machine the old way. So you can run 6 node app servers this way, and they'll consume e.g. 150 MiB of RAM total.
With docker, it's anyone's guess how much running 6 node backend apps will consume, because it depends on how much can be shared in RAM, and usually it will be nothing.
Only Java qualifies under your arbitrary rules, and even then I imagine it's trying to catch up to .NET (after all, Blu-ray players execute Java), which can run on embedded systems: https://nanoframework.net/
I listed some popular languages that web applications I happened to run dockerised are using. They are not arbitrary.
If you run normal web applications, they often take many hundreds of megabytes if they are built with some of the popular languages I happened to list off the top of my head. That is a fact.
Comparing that to cut down frameworks with many limitations meant for embedded devices isn't a valid comparison.
"just as well"? lmao sure i guess i could just manually set up the environment and have differences from what im hoping to use in production
> 1GiB machine can run a lot of server software,
this is naive
it really depends if you're crapping out some basic web app versus doing something that's actually complicated and has a need for higher performance than synchronous web calls :)
in addition, my mq pays attention to memory pressure and tunes its flow control based on that. so i have a test harness that tests both conditions to ensure that some of my backoff logic works
> if RAM is not wasted on having duplicate OSes on one machine.
Yes, that's exactly how docker works if you use it where it matters for a hobbyist - which is where you're installing random third-party apps/containers that you want to run on your SBC locally.
I don't know why people instantly forget the context of the discussion, when their favorite way of doing things gets threatened. :)
Context is hobbyists and the SBC market (mostly various ARM boards). Maybe I'm weird, but I really don't care about minor differences between my arch linux workstation and my arch linux arm SBCs, because 1) they're completely different architectures, so I can't avoid the differences anyway, 2) it's a hobby, I have at most one instance of any service, 3) most hobbyist-run services will not work with a shitton of data or have to handle 1000s of parallel clients.
> Yes, it's exactly how docker works if you use it for where it matters for a hobbyist
What you described is exactly the opposite of how it works. There is no reasonable scenario in which that is how it works. In fact, what you're saying is the opposite of the whole point of containers versus using a VM.
> when their favorite way of doing things gets threatened
No, it's when someone (like you) thinks they have an absolute answer without knowing the context.
And by the way, in my scenario, container overhead is in the range of under a hundred MiB total. The thing I'm working on HAPPENS to require a fair amount of RAM.
But you confidently asserted that a "1GiB machine can run a lot of server software". And that's true for many people (like you), but not true for a lot of other people (like me).
> most hobbyist run services will not work with a shitton of data or have to handle 1000s of parallel clients
neither of these are true for me but you need to take a step back and maybe stop making absolute statements about what people are doing or working on :)
you dont get to define "where it matters" for a hobbyist
> which is where you are installing random third-party apps/containers that you want to run on your SBC locally
this is such a consoomer take. for those of us who actually build software, we have actual valid reasons for using it during development
> they're completely different architectures, so I can't avoid the differences anyway
ironically this is one of the side benefits that make modern containers useful
i think you have a fundamental misunderstanding of how containers work and why theyre useful for software development. your other posts in this thread only make me more sure of that. im not saying containers/etc are a perfect solution or always the right solution, but your misconceptions are separate from that
No, I don't have a fundamental misunderstanding. In the entire thread I'm talking about docker, not "containers" in general. You seem to have a misunderstanding, apparently.
I've been working with "containers" since before docker existed, and I also wrote several applications that use the basic Linux technologies that so-called "docker containers" are built on. You can use these technologies (various namespaces, etc.) in a way that does not waste RAM. That will not happen for common docker use, where you don't control the apps and base OS completely. You can, if you try hard, make it efficient, but you have to have a lot of control. The moment you start pulling random dockerfiles from random sources, you'll be wasting colossal amounts of resources compared to just installing packages on your host OS, which shares the maximum amount of resources.
And for all these "let's have just a big static binary and put it into a container" containers, which don't really have or need a real full OS userspace under them, there's barely any difference deployment-wise from just running them without docker. In fact, docker in this case is just a very complicated, duplicated additional layer over what systemd does, which most people already have on their OS. So that's another RAM waste and additional overhead from what is now reduced to a service manager in this use case.
> No I don't have a fundamental misunderstanding. In the entire thread I'm talking about docker, not "containers" in general. You seem to have a misunderstanding apparently.
i said modern containers. and you do have a FUNDAMENTAL MISUNDERSTANDING. you are repeating falsehoods throughout this entire thread.
> That will not happen for common docker use
again you are asserting a "common" use of software, when the people youre replying to are clearly using it for development
> where you don't control the apps and base OS completely
stop saying "you" to me. id tell you to speak for yourself but you seem incapable of doing that
> And for all these "let's have just a big static binary and put it into a container" containers, that don't really have/or need a real full OS userspace under them, there's barely any difference deployment wise from just running them without docker.
ironically enough it does have differences, glaring big differences. the deployment differences are about the only reason to use docker in this situation
another stark example of you popping off with incorrect assertions. and yes there are reasons not to use docker for this as well but it depends on multiple factors
> In fact docker is just a very complicated additional duplicated layer in this case for what systemd does, that most people already have on their OS. So that's another RAM waste and additional overhead from what is now reduced to a service manager in this use case scenario.
there are so many misconceptions in there asserted as if theyre the entire truth. yes people can use docker containers poorly but its not everyone.
> The moment you start pulling random dockerfiles from random sources, you'll be wasting colossal amounts of resources compared to just installing packages on your host OS, to share maximum amount of resources.
its a good thing that I'm not doing that! ive already stated that im using them to build software, not just "pulling random dockerfiles from random sources"
you are digging your heels in and you are now trying to assert a set of conditions and situation in which youre correct, even though youre dead wrong for the use cases that the people youre replying to are describing
you have repeated falsehoods as fact repeatedly and seem unable to adjust to people telling you "im not doing that thing youre complaining about"
frankly, i think youre out of your depth on this subject and youre trying to do anything you can to justify your original claim that 1GiB is enough, or whatever
TLDR
feel free to have the last word, im sure youll have lots of them. maybe youll get lucky and a few will be correct. im exiting this conversation
there are no real deployment differences, eg. systemd has portable services, full containers via nspawn, etc., and there are many other ways to achieve what docker does, with or without containers (eg. what yandex does internally by just packaging their internal software and parts of its configuration into debian packages, and managing reproducibility that way)
and you don't provide any other technical arguments
what remains is you strongly telling me something I already acknowledged in the previous post (that you can perhaps make efficient use of docker, but it's hard to keep it from wasting resources in the general use case)
A few, until their current stocks run out. Orange Pi has already increased prices (their boards are now similarly priced or more expensive than equivalent Pis), and Radxa seems to just stop selling certain models (at least in NA) once they run out of stock.
Arduino has one of the cheapest 4GB boards now, but I wonder if that's just because they made a ton and demand for their strange board has been low?