They're talking about people still running ES3 browser engines, like IE8, which was released 15+ years ago and went EOL 10+ years ago. The author could have done a better job clarifying this, but they're not pushing for a world with 2y device lifetimes.
Indeed, they're talking about the opposite extreme from the usual problem we all bemoan in here, which is JS devs being determined to use the newest shiniest thing as soon as it's been announced, instead of being willing to continue to use what they've always used and to wait until the new stuff works across all browsers. This article really surprised me, in how far some are apparently going in the opposite direction. I'm very surprised the baseline mentioned is ES3 rather than ES5 or 6.
The GP's comment - that we have to upgrade our hardware because devs are "anorexically obsessed with lean code, and find complex dependencies too confusing/bothersome" - is surely the exact opposite of reality? We have to upgrade to faster hardware because the bloat slows everything down!
Fair, but personally I’d absolutely prefer slower bloated code with twice the lifespan to faster code that forces me to buy new hardware I can’t afford. But I’m a nearly extinct type of consumer who happily clings to pre-subscription-era software (e.g., Photoshop 7, Sketchup 2017). I understand and begrudgingly accept that businesses couldn’t survive by tending to the desires of folks like me.
Thanks for the clarification. I did not understand.
My knee-jerk reaction to the author’s blithe exhortation to upgrade stems from the pain of watching my prized workhorse (a 2015 MacBook) die in my arms despite its magnificently healthy and powerful body.
This is not correct. A business this big would definitely be using accrual accounting (not cash), which generally means you count the revenue when ownership actually transfers to the buyer. Since the truck was operated by the seller, the transfer of ownership almost certainly counts as happening when the buyer receives the goods.
Honestly my impression was the “nines” of reliability just means how many nines your reliability starts with, as a decimal. I never thought much about it though.
I will also say it’s amusing that the debate is between one and two nines. Neither is objectively great. If you built a system with >3.65 days of downtime in a year that wouldn’t be something you’d brag about in an interview.
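The arithmetic behind "nines" can be sketched quickly (a minimal illustration, not from the thread; the function name is made up):

```python
# Hypothetical helper: yearly downtime implied by "N nines" of availability,
# assuming a 365-day year. One nine = 90% uptime, two nines = 99%, and so on.
def downtime_days(nines: int, days_per_year: float = 365.0) -> float:
    """Days of allowed downtime per year at 1 - 10**-nines availability."""
    return days_per_year * 10 ** -nines

# One nine allows ~36.5 days of downtime a year; two nines ~3.65 days;
# three nines ~0.365 days (about 8.8 hours).
print(downtime_days(1))
print(downtime_days(2))
```

This matches the comment above: a system down more than 3.65 days a year hasn't even reached two nines.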
I used a first-gen Eee PC with Linux in college. I didn't have any problems with speed for normal use, though I ssh'd into servers for anything more intensive than running a browser.
Thanks for replying - so it's used as a generic catch-all term internally? Did previous DoD secretaries use it in speeches? I thought they used bureaucratic terms like "service member". I guess that doesn't work in casual conversation...
A "world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human" would be a world in which people could easily create humanity-ending bioweapons. I would love to live in a less vulnerable world, and am working full time to bring about such a world, but in the meantime what you describe would likely be a disaster.
I think it is much more likely they will be (and are) generating photorealistic images of their favourite person (real or fictional) with cat ears. Never underestimate what adding cat ears does.
OK, maybe someone will build a bioweapon that does that for real. :P
There are plenty of physical and legal barriers to creating a bioweapon and that's not going to change if everyone becomes smarter with AI. And even if we really somehow end up in a world where everyone has a lab at home and people can easily create viruses, they can also easily create vaccines and anti-virals. The advancements in medicine will outpace bioweapons by a lot because most people are afraid of bioweapons.
Intelligence itself is not dangerous unless only a few orgs control it and it's aligned to those orgs' values rather than human values. The safety narrative is just "intelligence for me, but not for thee" in disguise.
There mostly aren't physical barriers. Unlike nukes, where you need specific materials and equipment that we can try to keep tabs on, bioweapons can be made entirely with materials and equipment that would not be out of place in an academic or commercial lab. The largest limitation is knowledge, and the barriers there are falling quickly.
Symmetry is not guaranteed. If someone creates a deadly pathogen with a long pre-symptomatic period (which we know is possible, since HIV works this way) it could infect essentially everyone before discovery. Yes, powerful AI would likely rapidly speed up the process of responding to the threat after detection, especially in designing countermeasures, but if we don't learn about the threat in time we lose.
There are people today who could create such a pathogen, but not many. Widespread access to powerful AI risks lowering the bar enough that we get overlap between "people who want to kill us all" and "people able to kill us all".
This is not a gotcha argument, this is what I work full time on preventing: https://naobservatory.org The world must be in a position to detect attacks early enough that they won't succeed, and we're not there yet.
For every person thinking about creating an HIV-like deadly pathogen, there will be millions more thinking about how to defend people against such a pathogen, how to detect it faster before symptoms arise, how to put up barriers to creating it, and possibly even how to modify our bodies to be naturally resilient to all similar pathogens. Just like what you're doing here. I don't think we should mark knowledge or intelligence itself as the problem. If that's true, then we should be making everyone dumber.
We were woefully underprepared for COVID despite many people predicting that very event. At the very least, we should have had stockpiles of PPE from the beginning.
It's not enough for a handful of people to predict something. You have to get the entire nation onboard to defend against it.
This is just not thinking clearly. There are bad things that are asymmetric in character, dramatically easier to do than to mitigate. There’s no antidote or vaccine to nuclear weapons.
This is exactly the thinking that has characterized responses to new sources of power through history, and has been consistently used to excuse hoarding of that power. In the end, enlightenment thinking has largely won out in the western world, and society has prospered as a result.
Centralizing power is dangerous and leads to power struggles and instability.
It is not easy to create weapons. Why do you think the physical and legal barriers that exist today that prevent you from acquiring equipment and creating nuclear weapons will go away when everyone becomes smarter?
I'm really very sure that wasn't one of the conditions. I didn't remember it from 2012, and looking now, it wasn't included in the merger agreement. They did write:
> We believe these are different experiences that complement each other. But in order to do this well, we need to be mindful about keeping and building on Instagram’s strengths and features rather than just trying to integrate everything into Facebook.
> That’s why we’re committed to building and growing Instagram independently. Millions of people around the world love the Instagram app and the brand associated with it, and our goal is to help spread this app and brand to even more people. -- https://about.fb.com/news/2012/04/facebook-to-acquire-instag...
I can see the argument: if you’re familiar with poetry terms, then of course that naming makes sense. But I think proper names occupy a different part of the brain for people, which inhibits the ability to make that connection. Also, the jump from sonnet to opus is not as big as the jump from haiku to sonnet, even though the names might imply otherwise (17 syllables -> 14 lines -> multi-page masterpiece does not capture the difference between the models).
> I can see the argument if you’re familiar with poetry terms,
I think they mean "if you're familiar with Anthropic's family of models". They've had the same opus > sonnet > haiku line of models for a couple of years now. It's assumed that people already know where sonnet 4.6 lands in the scheme of things. Because they've had that in 4.5, and 4.1 before it, and 4 before it, and 3.7 before it, etc.