Everybody acts so surprised, as if nobody (around here of all places!) read the sama tweet in which he was hiring a Head of Preparedness... in December.
Aside from the fact that I don't read X, what does this arbitrary random tweet have to do with Anthropic, or with the YT talk about the Opus quality jump, finding exploits no one else was able to find so far?
A theoretical random tweet and a clear demonstration are two different things.
If you don't need to switch versions at runtime (ABR), you don't even need to chunk it manually. Your server has to support range requests, and then the browser does the reasonable thing automatically.
The simplest option is to use some basic object storage service, and it'll usually work well out of the box (I use DO Spaces with the built-in CDN, and that's basically it).
Yes, serving an MP4 file directly into a <video> tag is the simplest possible thing you can do that works. With one important caveat: you need to move the "MOOV" metadata to the front of the file. There are various utilities for doing that.
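For example, a remux with ffmpeg (no re-encoding) is enough to move that metadata; the file names here are placeholders:

```shell
# Copy the streams as-is; +faststart rewrites the file with the moov atom
# at the front, so the browser can start playback before the download finishes.
ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4
```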
Yea, honestly you probably just don't understand. FE frameworks solve a specific problem, and they don't make sense unless you understand that problem. That TSoding video is a prime example of this: it picks a trivial instance of the problem and then acts like the whole problem space is trivial.
To be fair, React is an especially wasteful way to solve that problem. If you want to look at the state of the art, something like Solid makes a lot more sense.
It's much easier to appreciate that problem if you actually try to build a complex interactive UI with vanilla JS (or something like jQuery). Once you have a complex state dependency graph and DOM state to preserve between rerenders, it becomes pretty clear.
One of my projects does have a complex UI and is built with zero runtime dependencies on the front end. It doesn't require JS at all for most of its functionality.
I just render as much as possible on the server and return commands like "hide the element with that ID" or "insert this HTML after the element with that ID" in response to some AJAX requests. Outside of some very specific interactive components, I avoid client-side rendering.
I agree with you. It’s baffling to see websites (not web apps) refusing to show anything if you disable JS. And a lot of such web apps don’t need to be SPAs (GitHub, …)
SPAs were meant for UIs that rely mostly on client state, not on server data (Figma and other kinds of online editors).
> Most integration tests are not thread safe and make assumptions about running against an empty database. Which if you think about it, is exactly how no user except your first user will ever use your system.
Dangling state is useful for debugging when a test fails; you don't want to clean that up.
This has been a super useful practice in my experience. I really like being able to run tests regardless of my application state. It's faster, and over time it helps you hit and fix up various issues that you only encounter after you fill the database with enough data.
> It feels a little tricky to square these up sometimes.
In my experience, this heavily depends on the task, and there's a massive chasm between the tasks where it's a good fit and the ones where it's a bad fit. I can definitely imagine people working only on one side of this chasm and being perplexed by the other side.
> There must be a really good reason for this, such as Rust doesn’t interop well with C++
Yea, I'd bet it's that. Ideally, you'd want to stop writing C++ and continue with Rust for all new code, but Rust has stricter semantics, so the interop is somewhat "easy" in one direction, but very hairy in the other.
This means that in practice, you want to start porting from leaf components and slowly grow toward the root, which stays in C++ for quite some time and just calls into Rust through a C API (or something close to it).
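As an illustration of that leaf-first direction, here's a minimal sketch (all names and the checksum logic are hypothetical, not from any real codebase) of a Rust leaf component exposed over a C ABI, so a C++ root could declare `extern "C" uint32_t leaf_checksum(const uint8_t*, size_t);` and link against it:

```rust
// A leaf component: pure logic, no calls back into C++.
fn checksum_impl(data: &[u8]) -> u32 {
    data.iter()
        .fold(0u32, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u32))
}

// The C-ABI surface the C++ side links against.
#[no_mangle]
pub extern "C" fn leaf_checksum(ptr: *const u8, len: usize) -> u32 {
    // Safety contract: the C++ caller guarantees `ptr` points to `len` valid bytes.
    let data = unsafe { std::slice::from_raw_parts(ptr, len) };
    checksum_impl(data)
}

fn main() {
    // Exercising the same entry point from Rust, just for illustration.
    let bytes = b"hello";
    println!("{}", leaf_checksum(bytes.as_ptr(), bytes.len()));
}
```

Going the other way (C++ calling into rich Rust types, or Rust holding C++ objects with their move/copy semantics) is where it gets hairy, which is why the root tends to stay C++ the longest.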
If you're curious about the topic, there's an interop library called Zngur (https://hkalbasi.github.io/zngur/), which is built on this assumption. They have a pretty good explanation of the concrete problems on the homepage.
A lot of DOM APIs are like that. You have methods like element.[parent|children](), which imply a circular structure, and then you have APIs like element.click(), which emits a click event that bubbles through the DOM - which means the element has to hold some mutable reference to the DOM state. Or even element.remove(), which seems like a super weird API to have on an element of a collection, from a Rust API design point of view.
You can model these with reference counting, but this turned out to be unfeasible in browsers. There's a great talk from when Blink (Chrome) transitioned from reference counting to GC, which gives a lot more detail about these problems in practice: https://www.youtube.com/watch?v=_uxmEyd6uxo
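For a feel of what the reference-counting approach looks like in Rust (a toy sketch, not how any real browser does it): children hold strong `Rc`s, the parent link has to be a `Weak` to avoid leaking the cycle, and every mutation goes through `RefCell`'s runtime borrow checks - which hints at why something like `element.remove()` is awkward to express.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    name: String,
    parent: RefCell<Weak<Node>>,        // weak, or parent<->child cycles leak
    children: RefCell<Vec<Rc<Node>>>,   // strong refs own the children
}

fn new_node(name: &str) -> Rc<Node> {
    Rc::new(Node {
        name: name.to_string(),
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(Vec::new()),
    })
}

fn append_child(parent: &Rc<Node>, child: Rc<Node>) {
    *child.parent.borrow_mut() = Rc::downgrade(parent);
    parent.children.borrow_mut().push(child);
}

// element.remove() analogue: a node detaching itself from its parent.
fn remove(node: &Rc<Node>) {
    if let Some(parent) = node.parent.borrow().upgrade() {
        parent.children.borrow_mut().retain(|c| !Rc::ptr_eq(c, node));
    }
    *node.parent.borrow_mut() = Weak::new();
}

fn main() {
    let body = new_node("body");
    let div = new_node("div");
    append_child(&body, div.clone());
    println!("{} has {} children", body.name, body.children.borrow().len());
    remove(&div);
    println!("{} has {} children", body.name, body.children.borrow().len());
}
```

Even this toy version needs runtime checks everywhere, and it still can't express things like event bubbling without more shared mutable state - the problems the talk describes at scale.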
> I'd get it if they were writing the UI in this, but the rest of this post is about the JS engine.
I think this might be the reason they started with the JS engine and not with some more fundamental browser structures. The JS object model has these problems too, but the engine has to solve them in a more generic way: all JS objects can just be modeled as some JSObject class/struct where this is handled at the engine level.
The DOM and other browser structures are different because the engine has to understand them, so the browser developers have to interact with the GC manually, and if you watch the talk above, you'll see that it's quite involved even in C++, let alone in Rust, which puts a bunch of restrictions on top of that.
> Modern C++ pretty much solves the safety issues.
I always wonder how one can come to such a conclusion. Modern C++ has no way to enforce relationships between two objects in memory, nor the shared-xor-mutable rule, which means it can't even do the basic checks that are the foundation of Rust's safety features.
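Concretely, shared-xor-mutable means the compiler rejects any mutation while a shared borrow is live - a tiny sketch of the rule that C++ has no way to check:

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    let first = &v[0];   // shared borrow into the vec's buffer
    // v.push(4);        // compile error: push may reallocate the buffer and
    //                   // invalidate `first` -- shared xor mutable
    println!("first = {}", first);

    v.push(4);           // fine: the shared borrow has ended by now
    println!("len = {}", v.len());
}
```

The equivalent C++ (`auto* first = &v[0]; v.push_back(4); use(first);`) compiles silently and is a classic iterator/pointer invalidation bug.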
Of course, this statement is also trivially debunked by the reality of any major C++ program with the complexity and attack surface of something like a browser. Modern C++ certainly didn't save Chrome from CVEs. They ban a bunch of C++ features, enforce the Rule of 2, and do a bunch of hardening and fuzzing on top of that, and they still aren't spared from safety issues.
FWIW, Chrome includes third-party libraries like FreeType, and lots of the bugs are in JavaScript. I imagine defensive checks in JavaScript would be controversial, since JavaScript performance is something web devs care about, not just the browser.
Note that Chrome is replacing[1] FreeType with Skrifa[2], which is a Rust-based library that can handle a lot of the things FreeType is being used for in Chrome. A lot of Chrome's dependencies are being rewritten in Rust.
I don't think the idea is visual tools; I've never even heard anybody talk about it like that. The plan is to target enterprise customers with advanced features. I feel like you should just go watch some interviews where they talk about their plan; Evan You was recently on a few podcasts mentioning their plans.
Also, the paradox isn't really even there. The JS ecosystem largely gave up on JS-based tools a long time ago. Pretty much all major build tools are migrating to native code, or have already migrated, at least partially. This has been going on for the last 4 years or so.
But the key to all of this is that most of these tools still support JS plugins. Rolldown/Vite is compatible with Rollup JS plugins, and OXLint has an ESLint-compatible API (it's in preview atm). So it's not really even a bet at all.
https://www.youtube.com/watch?v=1sd26pWhfmg
Looks like LLMs are getting good at finding and exploiting these.