This article claims that these are somewhat open questions, but they're not and have not been for a long time.
#1 You sign a blob and you don't touch it before verifying the signature (aka "The Cryptographic Doom Principle"). #2 Signatures are bound to a context which is _not_ transmitted but is instead used to derive the key or mixed into the MAC or what have you. This is called the Horton Principle. It ensures that signer and verifier must cryptographically agree on which context the message is intended for. You essentially cannot implement this incorrectly, because if you do, all signatures will fail to verify.
The article actually proposes to violate principle #2 (by embedding some magic numbers into the protocol headers and presuming that someone will check them), which is an incorrect design and will result in bad things if history is any indication.
Principles #1 and #2 are well-established cryptographic design principles for just a handful of decades each.
You’re right, but I think the commenter you’re replying to is also right.
The OP is using unreadable hex strings in a way that obscures what’s actually going on. If you turn those strings into functionally equivalent text, then the signatures are computed over:
(serialized object, “This is a TreeRoot”)
and the verifier calls the API:
func Verify(key Key, sig []byte, obj VerifiableObjecter) error
(I assume they meant Object not Objector.)
This API is wrong, full stop. Do not use this design. Sure, it might catch one specific screwup, but it will not catch subtler errors like confusing a TreeRoot that the signer trusts with a TreeRoot that means something else entirely. And it requires canonical encodings, which serves no purpose here. And it forces the verifier to deserialize unverified data, which is a big mistake.
The right solution is to have the sender sign a message, where:
(a) At the time of verification, the message is just bytes, and
(b) The message is structured such that it contains all the information needed to interpret it correctly.
So the message might be a serialization of a union where one element is “I trust this TreeRoot” and another is “I revoke this key”, etc. and the verification API verifies bytes.
If you want to get fancy and make domain separation and forward-and-backward-compatibility easier, then build a mini deserializer into the verifier that deserializes tuples of bytes, or at most UUIDs or similar. So you could sign (UUID indicating protocol v1 message type Foo, serialization of a Foo). And you make that explicit to the caller. And the verifier (a) takes bytes as input and (b) does not even try to parse them into a tuple until after verifying the signature.
P.S. Any protocol that uses the OP’s design must be quite tortured. How exactly is there a sensible protocol where you receive a message, read enough of it to figure out what type (in the protobuf sense) it contains such that there is more than one possible choice, then verify the data of that type? Are they expecting that you have a message containing a oneof and you sign only the oneof instead of the entire message? Why?
No, they propose just concatenating it with the data received from the network:
> it makes a concatenation of the domain separator (@0x92880d38b74de9fb) and the serialization of the object, and then feeds the byte stream into the signing primitive. Similarly, verification of an object verifies this same reconstructed concatenation against the supplied signature.
> Note that the domain separator does not appear in the eventual serialization (which would waste bytes), since both signer and receiver agree on it via this shared protocol specification. Encrypt, HMAC, and hash work the same way
You are, of course, right. And this distinction is important for this chain of comments.
Though, in fairness, that is /kind of/ like transmitting it, in the sense that it impacts the message that is returned. It's more akin to sending a checksum of the magic number, rather than the magic number itself. But conceptually, that is just an optimization. The desire is for the client to ensure the server is using the same magic number; we just so happen to be able to overload the signature to encode this data without increasing the message size.
> Note that the domain separator does not appear in the eventual serialization (which would waste bytes), since both signer and receiver agree on it via this shared protocol specification.
But saying it's about wasting bytes is a little confusing; as you observe, that isn't really the point.
Hmmmm. I agree that an ad-hoc implementation with protobufs can go wrong. But presumably a single canonical encoding per signing key constitutes the Horton Principle?
It seems like the Horton Principle just says "all messages have ≤1 meaning". If a message signed by key X must be parsed using the canonical encoding, then aren't we done?
There is still room for danger. e.g., You send `GetUserPermissionLevel(user:"Alice")` and server responds with `UserNicknameIs(user:"Alice", value:"admin")`. If you fail to check the message type, you might get tricked.
Maybe it would be nice if it were mathematically impossible to validate the signature without first providing your assumptions. e.g., the subroutine to validate message `UserNicknameIs(user:"Alice", value:"admin")` requires `ServerKey × ExpectedMessageType`. But "ExpectedMessageType" isn't the only assumption being made, is it?
You might get back `UserPermissionLevel(user:"Bob", value:"admin")` or `UserPermissionLevel(user:"Alice", value:"admin", timestamp:"<3d old>")`. Will we expect the MAC to somehow accept a "user" value? And then what do we do about "timestamp"?
Maybe we implement `ClientMessage(msgUuid: UUID, requestData:...)` and `ServerResponse(clientMsgUuid: UUID, responseData:...)`, but now the UUID is a secret, vulnerable to MITM attack unless data is encrypted.
It seems like you simply must write validation code to ensure that you don't misinterpret the message that is signed. There simply isn't any magic bullet. Having multiple interpretations for a sequence of bytes is a non-starter (addressed in the post). But once you have a single interpretation for a sequence of bytes, isn't it up to the developer to define a schema + validation logic that supports their use case? Maybe there are good off-the-shelf patterns, but--again--no magic bullets?
No, it’s completely wrong. It’s a very minor refinement of a terrible yet sadly common design that merely mitigates one specific way that the terrible design can fail.
See my other comment here. By the time you call the OP’s proposed verify API you have already screwed up as a precondition of calling the API.
What if (and this is perhaps too big an if) you only ever serialize and deserialize with code generated from the IDL, which always checks the magic numbers (returning a typed object)?
It's a big if because the threat model normally includes "bad guys can forge messages". Which means that the input is untrusted and you want to generate your own domain separation bytes for the hash function, not let your attacker choose them.
In my experience agents tend to (counterintuitively) perform better when the business language is not English / does not match the code's language. I'm assuming the increased attention mitigates the higher "cognitive" load.
This actually sounds a bit like a C/C++ argument. Roughly: Yes, you can easily write incorrect code but when some basic coding conventions are followed, UAF/double free/buffer overflows/... are just not a problem. After all, some of the world's most complex software is built with C / C++. If you couldn't write software reliably with C / C++, that could never be the case.
I.e. just because teams manage to do something with a tool does not mean the tool didn't impede (or vice versa, enable) the result. It just says that it's possible. A qualitative comparison with other tools cannot be established on that basis.
While VHDL makes a fun academic toy language, it has always been Verilog in commercial settings. Both languages can develop hard-to-trace bugs when the optimizer decides to simply remove things it thinks are unused. =3
How does this compare to Chisel [1]? I never could get around the whole Scala tooling; it seemed a bit over the top.
Though I guess it is a bit more mature and probably more enterprisey.
> i never could get around the whole scala tooling
Scala is popular in places like Alphabet, which apparently allows Go & Scala projects in production.
However, I agree that while Scala is very powerful in some ways, it just doesn't have a fun aesthetic. If one has to go spelunking for scalable hardware accelerators, a vendor's Linux DMA LLVM C/C++ API is probably less fragile.
For my simple projects, one Zynq 7020 per node is way more than we should ever need. =3
> While VHDL makes a fun academic toy language, ...
I spent the first half of my career working at some of the largest companies at the time on huge communication ASICs that were all written in VHDL; there was no Verilog in sight.
As much as I prefer to write Verilog now, VHDL is without question a more robust and better specified language, with features that Verilog only gained a decade later through SystemVerilog.
There's a reason why almost all major EDA tools support VHDL just as well as Verilog.
I disagree. We've produced numerous complex chips with VHDL over the last 30 years. Most of the vendor models we have to integrate with are Verilog, so perhaps it is more popular, but that's no problem for us. We've found plenty of bugs for both VHDL and Verilog in the commercial tooling we use, neither is particularly worse (providing you're happy to steer clear of the more recent VHDL language features).
VHDL still dominates in medical, military, avionics, space etc. and it's generally considered the safer RTL language, any industry that requires functional safety seems to prefer it.
It's also the most used language for FPGA in Europe but that's probably mostly cultural.
I wouldn't be surprised if e.g. all these paper-thin synthetic (plastic) disposable parts and fabrics used in labs shed microplastics way more than e.g. synthetic fabrics designed to survive a machine wash a few dozen times, or upholstery meant to withstand tens of thousands of sitting cycles, never mind solid plastics (e.g. reusable food containers, furniture surfaces).
Sure, but then the issue would be in the kind of content, not the medium. There are plenty of non-violent video games, and plenty of violent hobbies that aren't video games.
I don't see any consistent argument to single out video games.
A car's assumed lifecycle is around 15-20 years. Practical suburban EVs have been around for around half that, practical ICE-replacement EVs for about a third. Consequently, EVs have not yet arrived in the econo-shitbox segment of the used car market, and it will still take some time for them to get there. This is simply a lifecycle question and not a "new product introduction" question (which most of the press gets wrong for obvious incentives).
That being said, there's an argument that even basic EVs are often much more pleasant to drive and less hassle overall, which could be a reason for them to command a sustained premium on the used market.
How are EVs going to get to econobox/shitbox levels when the batteries go bad in less than half the time you mentioned and it costs ~£5000 for a new one?
I saw a Nissan Note EV around here for £600; the battery is good for around 24 miles, which exceeds what I'd do in a day on the school run, gym run and shopping.
I would need to pay for a home charging point, but that would be a long term investment.
For me that Note would likely do me another 4 years of easy and cheap driving. An ICE car of the same price would have more to go wrong, and I'd be lucky to get 2 years' driving from it. We are getting to the usable 2nd hand market already, and it is only going to get better.
> An ICE car of the same price would have more to go wrong and I'd be lucky to get 2 years driving from it. We are getting to the usable 2nd hand market already, and it is only going to get better.
This is a conception primarily based around the Nissan Leaf battery, which combined a poor BMS, a badly chosen chemistry and no thermal management. (People sometimes claim that the battery lifetime is because they're passively cooled, but there are other, similarly old EVs with passively cooled batteries that have nowhere near the battery degradation that the Nissan EVs had.)
Because newer batteries are not degrading as fast due to better thermal and load management. Because newer cars use newer chemistries that are less prone to degradation.
Moreover, just like some cars are good enough for people now, the cars with some degraded batteries will be good enough for some second hand buyers.
It would be really interesting to know what's so special about these UK units that they can be "damaged" by being fed from the "wrong" side (as per some other article), considering that the only place where these behave like that is an island north of France.
These are not just circuit breakers/MCBs; they are RCBOs, which combine an MCB + RCD in a single unit. RCDs traditionally only measure, and protect against, current flow in one direction, so if you are using them for solar you need a bi-directional unit for full protection. The device will not be damaged, it just won't protect you.
However in the case of a UK home, where you may have a single ring circuit connecting all the sockets on the whole floor, what's in the breaker panel isn't going to protect you with plug-in solar anyway. Better hope what you are plugging in meets UK standards and isn't just some Chinese rubbish that claims it does.
Outside the UK, neither RCDs nor RCBOs (type A/AC) are generally distinguished by bidirectionality (all search results about this being .co.uk), since the RCD part of these devices is just a current transformer driving a trip solenoid; there is nothing in it that's powered by the line, nor something which could sense net power flow direction. The situation is different for AFDDs or type B RCDs, since those have active, powered electronics in them which need to be fed from the line side.
After some research the main reason seems to be two-fold:
Answer #1: Many UK RCDs/RCBOs are actually single-pole devices and don't disconnect the neutral. In the simplest case, this means pressing the test button might burn out the test resistor when backfed. I don't imagine this to be a problem in practice, since grid-tie inverters shut down very quickly if the grid disappears under them, especially plug-in inverters. RCDs/RCBOs elsewhere virtually always disconnect the neutral, so they don't care about this.
Answer #2: It looks like some/many one-module wide UK RCBOs _do have_ electronics in them, even if type A, because they're actively driving the trip solenoid of the MCB part, and if you sketch this out and do it in a very cheap way it's easy to see how you could burn that out if backfed (i.e. powering the trip solenoid during a fault is assumed to disconnect in a very short amount of time, but if backfed for longer than the disconnect time that might be enough to toast the solenoid or the driver).
Notably neither of these has anything to do with the direction of power flow.
> Answer #1: Many UK RCDs/RCBOs are actually single-pole devices and don't disconnect the neutral.
This is not correct; all type AC and type A RCDs used in British consumer units disconnect the neutral as well. Some RCBOs do not disconnect the neutral and this is a problem in some circumstances. The datasheet I linked for Wylex NHXS1 RCBOs explains that these ones do disconnect the neutral.
> Answer #2: It looks like some/many one-module wide UK RCBOs _do have_ electronics in them [...] but if backfed for longer than the disconnect time that might be enough to toast the solenoid or the driver
This is correct. For an example of this construction in an RCBO, see [1]. This illustrates that if the supply is connected to the "To Load" part of the schematic (toward the end of the video), as it would be if the supply is a solar PV inverter with battery storage, then it can continue powering the electronics and be shunted out by the thyristor after it is supposed to have tripped, very quickly burning itself out.
Bidirectional RCBOs are not designed in this manner. They have more complicated circuitry that makes them more expensive to manufacture, but are absolutely required in situations like this if you don't want your protective devices to burn and/or explode when they operate.
> Notably neither of these has anything to do with the direction of power flow.
Yes it does, because if the power is flowing backwards relative to how they designed it, that is backfeeding it, keeping its circuitry powered after it should have been disconnected.
The situation in Germany is essentially the same, but that's why net supply by these is limited to 800 W. I don't think anything changes w.r.t. earth leakage; why would the presence of the solar supply change anything from the RCD and fault points of view, respectively?
Not an expert, but one difference is that in Germany the standard wiring is radial circuits with 16A MCBs, while in the UK it's ring wiring with 32A MCBs.
So in the UK we have 2.5mm² wires in a ring on a 32A MCB... Of course a 2.5mm² wire is rated ~20A, so any issue with the ring (sockets still work since they're connected via the other branch) can burn the wire before the MCB trips...
The "standard" wiring is 1.5mm² on 16A MCBs which are rated to trip at 1.13-1.45x nominal current (so 18-23 A). So this is already mildly improper because you can pull elevated currents continuously and dramatically shorten the life of the insulation.
We would call it "a serious code violation." It's prohibited in the NEC and always has been, it's objectively less safe.
From what I understand the UK allowed it because of a severe postwar copper shortage and it persists to this day because it's allowed and a bit cheaper.
> From what I understand the UK allowed it because ...
I'd say "severe post-WWII money shortage". After wartime expansion, the global copper industry could physically meet peacetime demands. But the UK was very close to national bankruptcy. And the Luftwaffe had turned an awful lot of their prewar housing into rubble. So - any cost that could be cut, was.
If your generator is plugged into its own circuit, it wouldn't change much.
If you plug it into an overloaded ring final (which is not uncommon in the UK - half our house's sockets are on a single ring), you have to rely on the generator being able to detect faults to protect that circuit.
You could also overload that circuit's wiring. If you have a 16A Ecoflow and plug it into a 32A ring, you could draw 48A before tripping the grid circuit breaker, potentially causing significant heat in the wires. Dinky 3A generators won't do that, but I don't think they're the limit our government is talking about.