I think developers are drowning in tools to make things "easy", when in truth many problems are already easy with the most basic stuff in our tool belt (a compiler, some bash scripts, and some libraries). You can always build up from there.
This tooling fetish hurts both companies and developers.
It's that, plus the fact that precious few people seem to understand fundamentals anymore, which is itself fed by the desire to outsource everything to third parties. You can build an entire stack where the only thing you've actually made is the core application, and even that is likely to be influenced, if not built, by AI.
The other troubling thing is that if you do invest time into learning fundamentals, you'll be penalized for it because it won't be what you're interviewed on and probably won't be what you're expected to do on the job.
Yeah; IMO Docker was our last universal improvement to productivity, back in 2013, and very little we've invented since then can be said to have had such a wide-ranging positive impact with so few drawbacks. Some systems are helpful for some companies, but then get applied to other companies where they don't make sense, and things fall apart or productivity suffers. Cloudflare and others are trying to make V8 isolates a thing, and while they are awesome for some workloads, people want them to be the "next Docker", and they aren't.
The model "give me docker image, we put it on internet" is staggeringly powerful. It'll probably still be the most OP way to host applications in 2040.
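Part of why the model is so powerful is how little it asks of you. A minimal sketch for a hypothetical small Python web app (the base image, module name, and port here are illustrative assumptions, not anything from the thread):

```dockerfile
# Hypothetical minimal image for a small Python web app.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
```

Hand that image to basically any host built in the last decade and it runs, which is the whole pitch.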
Docker + IaC* for me; git ops, immutable servers, immutable code, immutable config, (nearly) immutable infrastructure means I haven't had to drop to the command line on a server since 2015. If something is wrong you restart the container, if that doesn't work you restart the host it's running on. The "downside" to this is my "admin" shell skills outside of personal dev laptop commands have gotten rusty.
> If something is wrong you restart the container, if that doesn't work you restart the host it's running on
Haha, lucky you. If only the world were this beautiful :) I regularly shell into Kubernetes nodes to debug memory leaks from pods without memory limits, or to check strange network issues.
I'm working on a project right now that should be two or three services running on a VM. Instead we have 40+ services spread across a K8s cluster with all the Helm Chart, ArgoCD, CICD pipeline fun that comes with it.
It drives me absolutely nuts. But hey if the company wants to pay me to add all that stuff to my resume, I guess I shouldn't complain.
Yeah, the previous company I worked for started with a Django monolith that someone had come in and taken an axe to essentially at random until there were 20 Django "microservices" that had to constantly talk to each other in order to do any operation while trying to maintain consistency across a gigantic k8s cluster. They were even all still connected to the same original database that had served the monolith!
Unfortunately my campaign of "what if we stuck all the django back together and just had one big server" got cut short by being laid off because they'd spent too much money on AWS and couldn't afford employees any more.
I had to chuckle at how ironic this is...
I worked on a project where they had 6 microservices with about 2-3 endpoints each, and some of them would internally call other microservices to sync and join data. That was for 20 users tops, managed by one team. The cloud bill was exciting to look at!
Agreed, but not Bash. Bash should not be used for anything except interactive use. It's just way too error-prone and janky otherwise.
I am not at all a fan of Python, but even so, any script that you write in Bash would be better in Python (except things like installation scripts, where you want as few dependencies as possible).
If it's worth saving in a file, it's worth not using Bash.
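For a concrete example of the jankiness: in Bash, an unset variable silently turns `rm -rf "$LOG_DIR"/*.log` into a glob from the wrong place, while the Python equivalent fails loudly. A sketch of that equivalent (the paths and age threshold are made up for illustration):

```python
import time
from pathlib import Path

def prune_old_logs(log_dir: str, max_age_days: int = 7) -> list[str]:
    """Delete *.log files older than max_age_days; return the names removed."""
    root = Path(log_dir)
    if not root.is_dir():
        # A misspelled or unset path raises instead of silently deleting nothing
        # (or, in the Bash version, deleting the wrong thing).
        raise FileNotFoundError(f"log directory does not exist: {log_dir!r}")
    cutoff = time.time() - max_age_days * 86_400
    removed = []
    for path in root.glob("*.log"):
        if path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path.name)
    return removed
```

No word splitting, no quoting rules, and the error handling is explicit instead of depending on `set -euo pipefail` being remembered.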
I find myself exhausted after a time, when I have to switch between 2-3 apps and many more tabs trying to co-ordinate things or when debugging issues with a teammate. And this is with me working professionally for only ~3 years.
I think the tools are nice to use early on but quickly become tough to manage as I get caught up with work, and can't keep up with the best way to manage them. Takes a lot of mental effort and context switching to manage updates or track things everywhere.
I once started working at a company that sold one of those visual programming things. During training I was tasked with making a simple program, and I was a bit overwhelmed by the number of bugs and the lack of tools for basic features, so I made a prototype of the application I wanted in Python, planning to port it later. I got it working in a couple of days.
The tool developers weren't keen on the idea. They told me, "Yeah, I can solve the problem with a script too; the challenge is to do it with our tool." I thought it was kind of funny how they admitted that the premise of the tool didn't work.
It's like a holy grail panacea that arises 20 times every month: developers want to invent something that will avoid the work of actually developing, so they sunk-cost-fallacy themselves into a deep hole they can only escape by admitting that they are the ones tasked with automating, that they cannot meta-automate themselves away, and that they will have to, gasp, do some things manually and repeatedly, like the rest of the working class.
And that is actually the advantage of serverless, in my mind. For some low-traffic workloads, you can host for next to nothing. Per invocation, it is expensive, but if you only have a few invocations of a workload that isn't very latency sensitive, you can run an entirely serverless architecture for pennies per month.
Where people get burned is moving high traffic volumes to serverless... then they look at their bill and go, "Oh my god, what have I done!?" Or they try to throw all sorts of duct tape at serverless to make it highly performant, which is a fool's errand.
Exactly. I've always found that how people want to use lambda is the exact opposite of how to use it cost effectively.
I've seen a lot of people want to use lambdas as rest endpoints and effectively replace their entire API with a cluster of lambdas.
But that's about the most expensive way to use a Lambda: one request, one invocation.
Where these things are useful is when you say, "I have this daily data pull and ETL that I need to do." Then all of a sudden the cost is pretty dang competitive.
The number of 0s in the price per second is mesmerizing, but multiply it by 24 hours and 30 days and you are well within the price range of a beefier EC2 instance with much better performance; plus you can process 1000 req/s instead of 1 req/s for the same price.
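That multiplication is easy to sketch. The rates below are illustrative ballpark figures in the neighborhood of AWS's published Lambda pricing (check the current price list before relying on them), and the $0.17/hour VM is a hypothetical comparison point:

```python
# Back-of-the-envelope Lambda-vs-EC2 cost comparison.
# All prices are illustrative assumptions, not authoritative.
PRICE_PER_REQUEST = 2.0e-7        # ~ $0.20 per million requests (assumed)
PRICE_PER_GB_SECOND = 1.66667e-5  # ~ $0.0000167 per GB-second (assumed)
SECONDS_PER_MONTH = 86_400 * 30

def lambda_monthly_cost(req_per_s: float, ms_per_req: float, mem_gb: float) -> float:
    """Monthly cost of serving a steady request rate on a Lambda-style service."""
    requests = req_per_s * SECONDS_PER_MONTH
    gb_seconds = requests * (ms_per_req / 1000) * mem_gb
    return requests * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# A steady 100 req/s at 100 ms and 1 GB per invocation:
steady = lambda_monthly_cost(100, 100, 1.0)   # roughly $484/month
# versus one always-on VM at a hypothetical $0.17/hour:
vm = 0.17 * 24 * 30                           # roughly $122/month
```

Run the same function at one invocation a day and it rounds to fractions of a cent, which is exactly the low-traffic sweet spot the comments above describe.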
"Cheap" is only relevant if you are talking about a workload that is one-off and doesn't run continuously. A lot of people use serverless to run a 24/7 service, which sort of defeats the purpose; it doesn't stay cheap.
Serverless is good if you have one-off tasks that run intermittently with inconsistent load.