Not all of the functionality is in the firmware though. You can put stuff in the silicon itself that allows backdoors.
It's very difficult to inspect a laid-out chip for nefarious elements - there's too much of it to check manually. Having a secure supply chain is probably the best way to prevent that from happening.
Which is not to say that I support this rule - it sounds like another import weapon Trump can swing against people who aren't his friends.
Right, but access to those keys will then be available in an unhardened location? Otherwise you're serving encrypted data. So if the system accessing the data and using the keys is compromised - which we can assume is the case if the data is compromised - then access to the keys is as well?
Maybe I'm being an idiot, but it seems like a lot of extra complexity to protect against what are really only physical attacks, where someone directly steals the data storage.
Aren't we legislating the wrong problem here, then? I'd argue that prioritising the physical security of your drives over encrypting them is a better aim for services: if someone can physically steal your drives, they've still DoSed your system even if they cannot access the content.
It is VERY different. One company now has complete control of the activities of the team developing these tools. Contributing to Python (money or time) gets you some influence, but doesn't allow you to dictate anything - there's still a team making the decisions.
I'm amazed no-one has used the term "Regulatory Drawbridge". It's a classic thing that happens in a number of industries - the big players push for more and more regulation. It costs them money and time, but it creates a massive barrier for new entrants who don't have the cash flow and manpower to work through the regulatory process.
Medicine is the classic example, but it's happening in the tech industry too. The FAANGs of the world took advantage of an unregulated landscape, but now that they're in the castle they're pulling up the drawbridge behind them.
(sidenote - this is why regulation like the Digital Markets Act in the EU should be great. It's only a cost to larger businesses. In practice we're not yet seeing the changes that it should create).
What you're describing is the everyday reality, but what you WANT is that if your implementation has a race condition, a test detects it 100% of the time (rather than 1% of the time).
If your test can deterministically trigger a race condition 100% of the time, is it still a race condition? Assuming that we're talking about a unit test here, and not a race detector (which is not foolproof).
> Assuming that we're talking about a unit test here
I think the categorisation of tests is sometimes counterproductive and moves the discussion away from what's important: What groups of tests do I need in order to be confident that my code works in the real world?
I want to be confident that my code doesn't have race conditions in it. This isn't easy to do, but it's something I want. If that's the case then your unit test might pass sometimes and fail sometimes, but your CI run should always be red because the race test (however it works) is failing.
This also hints at a limitation of unit tests, and why we shouldn't be over-reliant on them - often unit tests won't show a race. In my experience, it's two independent modules interacting that causes the race. The same can be true of a memory bug caused by a mismatch over who owns an object and who should free it, or any of the other issues caused by interactions between modules.
> I think the categorisation of tests is sometimes counterproductive
"Unit test" refers to documentation for software-based systems that has automatic verification. Used to differentiate that kind of testing from, say, what you wrote in school with a pencil. It is true that the categorization is technically unnecessary here due to the established context, but counterproductive is a stretch. It would be useful if used in another context, like, say: "We did testing in CS class". "We did unit testing in CS class" would help clarify that you aren't referring to exams.
Yeah, Kent Beck argues that "unit test" needs to carry a bit more nuance: that it is a test that operates in isolation. However, who the hell is purposefully writing tests that are not isolated? In reality, that's a distinction without a difference. It is safe to ignore the old man yelling at clouds.
But a race detector isn't rooted in providing verifiable documentation. It only observes. That is what the parent was trying to separate.
> I want to be confident that my code doesn't have race conditions in it.
Then what you really WANT is something like TLA+. Testing is often much more pragmatic, but pragmatism ultimately means giving up what you want.
> often unit tests won't show a race.
That entirely depends on what behaviour your test is trying to document and validate. A test validating properties unrelated to race conditions often won't consistently show a race, but that isn't its intent, so there would be no expectation of it validating something unrelated. A test that is validating that there isn't a race condition will show the race if there is one.
You can use deterministic simulation testing to reproduce a real-world race condition 100% of the time while under test.
But that's not the kind of test that will expose a race condition 1% of the time. The kinds of tests that are inadvertently finding race conditions 1% of the time are focused on other concerns.
So it is still not a case of a flaky test, but maybe a case of a missing test.
Your logic is circular though. You are saying that there won't be much speedup for the sort of things people already do in WASM - but the reason they're doing them in WASM is because they're not slowed down too much.
What you don't get much of is people doing standard SPA DOM-manipulation apps in WASM (e.g. the TodoMVC that they benchmarked), because the slowdown is large. By fixing that performance issue you enable new use cases.
IMHO the bang-for-the-buck ratio isn't right. The core problem is slow string marshalling in the JS shim (because it needs to create a JS string object from an ArrayBuffer slice, where the ArrayBuffer is the WASM heap).
Integrating the component model into browsers just for faster string marshalling is 'using cannons to shoot sparrows' as the German saying goes.
If there were a general fast path for creating short-lived string objects from ArrayBuffer slices, the entire web ecosystem would benefit, not just WASM code.
Claude Code Enterprise: you pay per token