This is a bit like saying sound waves have no attribution layer for music.
The Internet is a transport medium. It sounds like you are asking whether it is possible to (somehow) associate universal, intrinsic, and immutable attribution metadata with some or all (not sure what "viral" distinguishes in this context) Internet _content_ and have all receivers of that content accept the implications of that attribution metadata.
I think the failure of pretty much all the various digital rights management efforts (applied on a MUCH smaller scale, to infinitesimal subsets of the content types now being schlepped across the Internet) would suggest that no, there are no technical approaches that would realistically work.
And music, film, and television ownership/credit being tracked carefully? In certain law-abiding environments, it's possible the owners/creators get a small fraction of what they believe they are entitled to, but in the majority of the world, not so much.
That’s a fair point, and I agree DRM-style control over content distribution hasn’t worked very well.
The distinction I’m thinking about is less about restricting the movement of content and more about tracking provenance. In other words, not trying to prevent copying or remixing, but making it easier to identify where something first appeared and who originally created it.
Music is an imperfect example, but the infrastructure around identifiers, registries, and rights databases at least creates a shared reference point for attribution. Internet-native media doesn’t really have anything comparable yet.
So the question I’m curious about is whether a similar kind of reference layer could exist for images, memes, and short-form media, even if the content itself continues to move freely.
Was wondering how long it'd take you to come in and trash talk DNSSEC. And now with added FUD ("and once you press that button it's much less likely that you're going to leave your provider").
This is a topic I obviously pay a lot of attention to. Wouldn't it be weirder if I came here with a different take? What do you expect?
I don't think I'm out on a limb suggesting that random small domains should not enable DNSSEC. There's basically zero upside to it for them. I think there's basically never a good argument to enable it, but at least large, heavily targeted sites have a colorable argument.
Actually I think it probably is suspicious to have the exact same opinion after studying something over a long period of time. My opinions are more likely to remain consistent, rather than growing more nuanced or sophisticated, if all I've done is trot out the same responses over a longer period of time.
I've struggled to think of an especially unexamined example, because by their nature they tend to sit outside conscious recall. The best I can do is probably that my favourite comic book character is Miracleman's daughter, Winter Moran. That's a consistent belief I've held for decades without spending a great deal of time thinking about it. But it's not entirely satisfactory, and there probably is some introduced nuance, particularly from when I re-examined the contrast between what Winter says about the humans to her father and what her step-sister Mist later says about them to her (human) mother, because I was writing an essay during lockdown.
> Actually I think it probably is suspicious to have the exact same opinion after studying something over a long period of time.
This seems really odd, probably fundamentally incorrect. "Believing something over time means it is less likely that you are engaging in good faith"? Totally insane take.
On the contrary it's suspicious if I happened to guess exactly right with much less data and so have the same conclusion after learning more. I suggest that the more likely reason is that I didn't learn anything at all.
> On the contrary it's suspicious if I happened to guess exactly right with much less data and so have the same conclusion after learning more.
No it isn't? If I guess what time it is and then look and see that it's around sunset, which is evidence towards my initial guess being right, it is not "suspicious". This is just a fundamentally broken model of evidence.
> I don't think I'm out on a limb suggesting that random small domains should not enable DNSSEC. There's basically zero upside to it for them.
DNSSEC is great for super tiny sites. I only run a single server, but it's strongly recommended that every domain has at least two independent nameservers, ideally with anycasted IPs. DNSSEC lets me fully self-host my DNS, while also letting me add secondary mirrors to get the additional independent nameservers.
Of course, you can add secondary mirrors without DNSSEC (and this is still quite common), but DNSSEC means that I don't have to trust these mirrors [0], since DNSSEC means that they can't forge invalid responses without my private key. I'd almost argue that if you're using secondary mirrors without DNSSEC enabled, then you're not "really" self-hosting, since you're completely reliant on the third-party mirrors being trustworthy.
For larger sites that can afford multiple independent nameservers or for anyone who wants to use a hosted DNS service, then DNSSEC probably offers fewer benefits, since in those cases you're presumably able to trust all your nameservers.
[0]: Well, I still need to trust them a little bit for non-DNSSEC-supporting clients, but most of the major resolvers support DNSSEC these days. And even then, this makes an attack much more detectable than it would be otherwise.
The vast majority of Let's Encrypt installations don't use CAA records or anything in DNS. Or they host the DNS along with the HTTPS servers.
So if the router between the web server and the Internet is compromised, it can just get trusted certs for all the HTTPS traffic going through it, enabling transparent MITM to inject its payload.
"The web server"? Which web server? Are the HTTP flows with executable content going to the web server or coming from it? I'm sorry, you haven't really cleared this up.
Any web server. Just imagine a worm getting onto a company's router and starting to transparently MITM traffic. Jabber.ru experienced such an attack, apparently.
I touched on this in the parallel comment where you linked this, but worth noting that DNSSEC does not solve this threat model, because re-routing the destination of legitimate IP addresses does not rely on modifying DNS responses.
It does solve it. Unless you know my private key, you can't fake the DNSSEC signatures. The linking DS records in the TLD are presumably out of your control and in future can be audited through something like Certificate Transparency logs.
So even if you fully control the network path, you will somehow have to get access to my private key material.
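To make that chain concrete, here's a hypothetical zone-file sketch (all names, key tags, and key material are invented placeholders): the parent zone publishes a DS record committing to the child's key, and the child's records carry signatures made with the private key, so a secondary mirror serving the zone can't alter anything without breaking validation.

```
; In the parent (TLD) zone, out of the attacker's control:
mydomain.example.     IN DS     12345 13 2 ( <SHA-256 digest of the DNSKEY below> )

; In mydomain.example, signed offline with the zone's private key:
mydomain.example.     IN DNSKEY 257 3 13 ( <base64 public key> )
www.mydomain.example. IN A      192.0.2.10
www.mydomain.example. IN RRSIG  A 13 3 3600 ( <signature over the A record> )
```

A validating resolver walks from the root trust anchor through each DS/DNSKEY link down to the leaf RRSIG, so an attacker who fully controls the network path can deny service but can't forge answers.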
It would make them more secure and less vulnerable to attacks. But lazy sysadmins and large providers are too scared to do anything, in no small part due to your ... incorrect arguments against it.
No it wouldn't? How exactly would it make them more secure? It makes availability drastically more precarious and defends against a rare, exotic attack none of them actually face and which in the main is conducted by state-level adversaries for whom DNSSEC is literally a key escrow system. People are not thinking this through.
That entire post argues that you should enable DNSSEC because it's "more secure", and that there are no reasons not to.
"More secure" begs the question "against what?", which the blog post doesn't seem to want to go into. Maybe it's secure from hidden tigers.
My favourite DNSSEC "lolwut" is about how people argue that it's something "NIST recommends", whilst at the same time the most recent major DNSSEC outage was......... time.nist.gov! (https://ianix.com/pub/dnssec-outages.html)
You keep waving this blog post from 2015 at me. Not only have we discussed it before, but it was a top-level HN post with 79 comments, many of them from me.
Please don't stealth-edit your posts after I respond to them. If you need to edit, just leave a little note in your comment that you edited it.
Yes, it did hit HN, and you just said, "I stand by what I wrote," and then complained about buggy implementations and downtime connected to DNSSEC. As if that isn't true for all technologies, let alone /insecure/ DNS. DNS is connected to a lot of downtime because it undergirds the whole internet. Making the distributed database that delegates domain authority cryptographically secure makes everything above it more secure too.
I rebutted your arguments point-by-point. You don't update your blog post to reflect those arguments nor recent developments, like larger key sizes.
So: I wrote a blog post in January of 2015, and 7 months later you wrote a blog post responding to it in August of 2015, and 10 years later you're still angry that I didn't update my blog post to point to the post that you wrote?
I write things people disagree with all the time. I can't recall ever having been mad that people didn't cite me for things we disagree about. Should I have expected all the people who hated coding agents to update their articles when I wrote "My AI Skeptic Friends Are All Nuts"? I didn't realize I was supposed to be complaining about that.
I advocate for DNSSEC in my personal life and you happen to jump on every DNSSEC HN submission and repeat your claims. So I post a link to my article debunking them. You won't engage with the substantive points here but insist that you have in the past and that you stand by your post. So I suggest you update your post to address my critiques.
I'm frustrated that you seem to blow me off and insult me when I try to engage in good faith discussion, but I'm not angry at you. I just ran into this post while procrastinating at work and here we are, in the same loop.
I think we are both trying to make the internet a safer place. It's sad we can't seem to have a productive conversation on the matter.
I advocate against DNSSEC in my personal life. I write about DNSSEC on HN because I write on HN a lot, and because this is a topic I have invested a lot of time in, going back long before the existence of HN itself. You can find stuff about it from me on NANOG in the 1990s. Your frustration seems like a "you" problem.
That doesn't make it correct. Imagine if someone had said, "We don't need to secure HTTP, we'll just rely on E2E encryption and trust-on-first-use". I would really like it if we had a way to automatically cryptographically verify non-web protocols when they connect.
But there is no money in making that a solution and a TON of money in selling you BS HTTPS certs. There are a lot of people spreading FUD about it. It's a shame.
Mark Shuttleworth paid for his ride to the space station by selling HTTPS certs.
The sad thing is that Mozilla and others have to spend millions bankrolling Let's Encrypt instead of using the free, high assurance PKI that is native to the internet!
It's not really free, though. Rather, the costs are distributed rather than centralized, but running DNSSEC and keeping it working incurs new operational costs for the domain holders, who need to manage keys and DNSSEC signing, etc. And of course there are additional marginal costs to the registrars of managing customer DNSSEC, both building automation and providing customer service when it fails.
It's of course possible that the total numbers are lower than the costs of the WebPKI -- I haven't run them -- but I don't think free is the right word.
I mean, I guess the costs are paid for by the domain name fee. But at least it doesn't have to be a charitable activity covered by non-profits. The early HTTPS certs were especially worthless and price-gouging.
> But at least it doesn't have to be a charitable activity covered by non-profits.
LE isn't primarily funded by non-profits, as you can see from the sponsor list here: https://isrg.org/sponsors/
Anyway, I think there's a reasonable case that it would be better to have the costs distributed the way DNSSEC does, but my point is just that it's not free. Rather, you're moving the costs around. Like I said, it may be cheaper in aggregate, but I think you'd need to make that case.
> LE isn't primarily funded by non-profits, as you can see from the sponsor list here: https://isrg.org/sponsors/
I mean, Mozilla got the ball rolling and it's still run on donations (even if they come from private actors).
> Like I said, it may be cheaper in aggregate, but I think you'd need to make that case.
The PKI is already there: we have 7 people who can do a multisig for new root keys. There is a signing ceremony in a secure bunker somewhere that gets live streamed. The HSMs and servers are already paid for. Cert transparency/monitoring is nice but now it's hard-coded to HTTPS instead of being done more generically. There's a lot of duplicated effort.
> > LE isn't primarily funded by non-profits, as you can see from the sponsor list here: https://isrg.org/sponsors/
>
> I mean, Mozilla got the ball rolling
Among others:
Let’s Encrypt was created through the merging of two simultaneous efforts to build a fully automated certificate authority. In 2012, a group led by Alex Halderman at the University of Michigan and Peter Eckersley at EFF was developing a protocol for automatically issuing and renewing certificates. Simultaneously, a team at Mozilla led by Josh Aas and Eric Rescorla was working on creating a free and automated certificate authority. The groups learned of each other’s efforts and joined forces in May 2013.

...

Initially, ISRG was funded almost entirely through large donations from technology companies. In late 2014, it secured financial commitments from Akamai, Cisco, EFF, and Mozilla, allowing the organization to purchase equipment, secure hosting contracts, and pay initial staff. Today, ISRG has more diverse funding sources; in 2018 it received 83% of its funding from corporate sponsors, 14% from grants and major gifts, and 3% from individual giving.
Except for the period before the launch, when Mozilla and EFF were paying people's salaries (including mine), it was never really the case that Let's Encrypt was primarily funded by non-profits.
> and it's still run on donations (even if they come from private actors).
I agree, but I think it's important to be precise about what's happening here, and like I said, it's never been the case that LE was really funded by non-profits.
> > Like I said, it may be cheaper in aggregate, but I think you'd need to make that case.
>
> The PKI is already there: we have 7 people who can do a multisig for new root keys. There is a signing ceremony in a secure bunker somewhere that gets live streamed. The HSMs and servers are already paid for. Cert transparency/monitoring is nice but now it's hard-coded to HTTPS instead of being done more generically. There's a lot of duplicated effort.
I think this is a category error. The main operational cost for DNSSEC is not really the root, which is comparatively low load, but rather the distributed operations for every registry/registrar, and the servers to register keys, sign domains, etc.

One way to think about this is that running a TLD with DNSSEC is conceptually similar to operating a CA in that you have to take in everyone's keys and sign them. It's true you don't need to validate their domains, but that's not the expensive part. Operating this machinery isn't free, especially when you have to handle exceptional cases like people who screw up their domains and need manual help to recover. Now, it's possible that it's a marginal incremental cost, but I doubt it's zero. Upthread, you suggested that people are already paying for this in their domain registrations, but that just means that the TLD operator is going to have to absorb the incremental cost.
That's fair! My primary gripe was about the need for non-profits to step in to begin with. Sorry if I didn't communicate that well.
However, I don't feel sorry for registrars or TLDs. Verisign selling HTTPS certs while running the root TLDs is a conflict of interest and I believe the perverse incentives are a big part of the reason why DNSSEC and DANE are stalled out. TLDs are a monopoly business and ICANN is a quasi-commercial entity that should never have been a for-profit business.
I certainly think it is fair to ask them to pay for all this.
I actually agree with you that in an abstract architectural sense a DNSSEC-style solution for authenticating the keys for endpoints is better. The problem from my perspective is that for a number of reasons that we've explored elsewhere in this thread, there is no practical way to get there from here.
To put this more sharply: in the world as it presently is with ubiquitous WebPKI deployment, the marginal benefit of DNSSEC strikes me as quite modest, even if it were universally deployed. Worse yet, the incremental benefit to any specific actor of deploying DNSSEC is even lower, which makes it very hard to get to universal deployment.
> However, I don't feel sorry for registrars or TLDs. Verisign selling HTTPS certs while running the root TLDs is a conflict of interest and I believe the perverse incentives are a big part of the reason why DNSSEC and DANE are stalled out. TLDs are a monopoly business and ICANN is a quasi-commercial entity that should never have been a for-profit business.
>
>I certainly think it is fair to ask them to pay for all this.
I also do not feel sorry for registrars. However, it's also not clear to me that if somehow they were forced to incur incremental cost X per domain name, they would not find a way to pass it onto us. With that said, I also don't think that's really why DNSSEC and DANE are stalled out; rather I think that it's the deployment incentives I mentioned above.
Note that despite the confusing naming and the fact that VeriSign was once a CA, they no longer are and have not been since 2010, as described in the second paragraph of their Wikipedia page. https://en.wikipedia.org/wiki/Verisign. In fact, in my experience VeriSign is very pro-DNSSEC.
You're not providing any explanation for why I wouldn't trust OP on DNSSEC. And the FUD is pretty reasonable if you've had a lot of experience setting up certificate chains, because the chain of trust can fail for a lot of reasons that have nothing to do with your certificate and are sometimes outside of your control. It would really suck to turn it on, have some third-party provider not implement a feature you're relying on for your DNSSEC implementation, and then suddenly nobody can resolve your website anymore. I've had so many wonky experiences with different features in, e.g., X.509 that I've come to really mistrust CA-based systems I'm not in control of. When you get down to interoperability between different software implementations, it gets even rougher.
Which is exactly what happened to Slack, and took them offline for most of a business day for a huge fraction of their customers. This is such a big problem that there's actually a subsidiary DNSSEC mechanism, NTAs (negative trust anchors), that addresses it: tactically disabling DNSSEC at major resolvers for the inevitable cases where something breaks.
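For reference, here's roughly what that escape hatch looks like on the resolver side; the directive below is Unbound's (the domain name is a placeholder), and BIND exposes the same idea at runtime via `rndc nta`:

```
# unbound.conf: treat this zone as insecure, bypassing its broken DNSSEC chain
server:
    domain-insecure: "broken-zone.example"
```

The point being that when a big domain's DNSSEC breaks, resolver operators manually punch a hole in validation rather than let the domain stay down.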
As if DNS isn't a major contributor to A LOT of downtime already. That doesn't mean it's not worth doing, or not worth investing in making deployment more seamless and less error-prone.
> As if DNS isn't a major contributor to A LOT of downtime already. That doesn't mean it's not worth doing, or not worth investing in making deployment more seamless and less error-prone.
Ah yes. Let's take something that's prone to causing service issues and strap more footguns to it.
It's not worth it, because the cost is extremely quantifiable and visible, whereas the benefits struggle to be coherent.
DNS underlies domain authority and the validity of every connection to every domain name ultimately traces back to DNS records. The amount of infra needed to shore up HTTPS is huge and thus SSH and other protocols rely on trust-on-first-use (unless you manually hard-code public keys yourself - which doesn't happen). DNS offers a standard, delegable PKI that is available to all clients regardless of the transport protocol.
With DNSSEC, a host with control over a domain's DNS records could use that to issue verifiable public keys without having to contact a third party.
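DANE's TLSA records are the concrete form of this idea: the record is just a few parameters plus a locally computed hash of your certificate or public key, with no third party involved. A minimal sketch (the `fake_spki` byte string is a stand-in for a real DER-encoded SubjectPublicKeyInfo, and the domain name is invented):

```python
import hashlib

def tlsa_rdata(spki_der: bytes) -> str:
    """Build the RDATA for a DANE-EE TLSA record: certificate usage 3
    (end-entity), selector 1 (SubjectPublicKeyInfo), matching type 1
    (SHA-256 of the selected data)."""
    digest = hashlib.sha256(spki_der).hexdigest()
    return f"3 1 1 {digest}"

# Stand-in for the DER bytes of a server's public key:
fake_spki = b"\x30\x2a\x30\x05\x06\x03\x2b\x65\x70\x03\x21\x00" + bytes(32)
print(f"_443._tcp.example.com. IN TLSA {tlsa_rdata(fake_spki)}")
```

Publish that record in a signed zone and a DANE-aware client can pin your key straight from DNS; the signing itself happens entirely on hardware you control.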
I ran into this while working on decentralized web technologies and building a parallel to WebPKI just wasn't feasible. Whereas we could totally feed clients DNSSEC validated certs, but it wasn't supported.
Thanks for the explanation. It seems like there are two cases here:
1. Things that use TLS and hence the WebPKI
2. Other things.
None of what you've written here applies to the TLS and WebPKI case, so I'm going to take it that you're not arguing that DNSSEC validation by clients provides a security improvement in that case.
That leaves us with the non-WebPKI cases like SSH. I think you've got a somewhat stronger case there, but not much of one, because those cases can also basically go back to the WebPKI, either directly, by using WebPKI-based certificates, or indirectly, by hosting fingerprints on a Web server.
> None of what you've written here applies to the TLS and WebPKI case, so I'm going to take it that you're not arguing that DNSSEC validation by clients provides a security improvement in that case.
It would benefit the likes of Wikileaks. You could do all the crypto in your basement with an HSM without involving anyone else.
> That leaves us with the non-WebPKI cases like SSH. I think you've got a somewhat stronger case there, but not much of one, because those cases can also basically go back to the WebPKI, either directly, by using WebPKI-based certificates, or indirectly, by hosting fingerprints on a Web server.
But do they? That requires adding support for another protocol.
I would like to live in a world where I don't have to copy/paste SSH keys from an AWS console just to have the piece-of-mind that my SSH connection hasn't been hijacked.
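On that note, SSHFP records (checked when `VerifyHostKeyDNS yes` is set in ssh_config) are the existing mechanism for exactly this, and the fingerprint is simple to compute yourself. A sketch, assuming an OpenSSH-format public key line; the key below is a dummy, all-zero blob constructed inline, not a real key:

```python
import base64, hashlib, struct

# SSHFP algorithm numbers (RFC 4255, RFC 6594, RFC 7479)
SSHFP_ALG = {"ssh-rsa": 1, "ssh-dss": 2, "ecdsa-sha2-nistp256": 3, "ssh-ed25519": 4}

def sshfp_rdata(pubkey_line: str) -> str:
    """Build SSHFP RDATA (fingerprint type 2 = SHA-256) from an
    OpenSSH public key line like 'ssh-ed25519 AAAA... comment'."""
    keytype, blob_b64 = pubkey_line.split()[:2]
    blob = base64.b64decode(blob_b64)
    return f"{SSHFP_ALG[keytype]} 2 {hashlib.sha256(blob).hexdigest()}"

# Construct a dummy ed25519 key blob: length-prefixed type string + 32 zero bytes
blob = struct.pack(">I", 11) + b"ssh-ed25519" + struct.pack(">I", 32) + bytes(32)
key_line = "ssh-ed25519 " + base64.b64encode(blob).decode()
print(f"host.example.com. IN SSHFP {sshfp_rdata(key_line)}")
```

With a record like that published in a DNSSEC-signed zone, the client can verify the host key against DNS instead of prompting on first use; without DNSSEC, though, the record is no more trustworthy than the network path.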
In practice, fleet operators run their own PKIs for SSH, so tying them to the DNSSEC PKI is a strict step backwards for SSH security.
There may be other applications where a global public PKI makes sense; presumably those applications will be characterized by the need to make frequent introductions between unrelated parties, which is distinctly not an attribute of the SSH problem.
And for everyone else that just wants to connect to an SSH session without having to setup PKI themselves? Tying that to the records used to find the domain seems like the obvious place to put that information to me!
DNSSEC lets you delegate a subtree in the namespace to a given public key. You can hardcode your DNSSEC signing key for clients too.
Don't get me started on how badly VPN PKI is handled....
Yes, modern fleetwide SSH PKIs all do this; what you're describing is table stakes and doesn't involve anybody delegating any part of their security to a global PKI run by other organizations.
The WebPKI and DNSSEC run global PKIs because they routinely introduce untrusting strangers to each other. That's precisely not the SSH problem. Anything you do to bring up a new physical (or virtual) machine involves installing trust anchors on it; if you're in that position already, it actually harms security to have it trust a global public PKI.
The arguments for things like SSHFP and SSH-via-DNSSEC are really telling. It's like arguing that code signing certificates should be in the DNS PKI.
No, we run a fleet with thousands of physicals and hundreds of thousands of virtuals, of course we don't hardcode keys in our SSH configuration. Like presumably every other large fleet operator, we solve this problem with an internal SSH CA.
Further, I haven't "moved on to another argument". Can you answer the question I just asked? If I have an existing internal PKI for my fleet, what security value is a trust relationship with DNSSEC adding? Please try to be specific, because I'm having trouble coming up with any value at all.
We also have thousands of devices accessible over SSH and we maintain our own PKI for this purpose as well. We also use mTLS with a private CA and chain of trust, for what it's worth.
Actually, does it? Yes, the obvious upside when I type in slack.com instead of 123.45.56.67 is very good. Does this same upside apply to addresses I don't type in? What's actually the advantage of addressing one of foobarcorp's infinitude of servers using the string "123-45-57-78.slp05.mus.foobar.com" instead of "123.45.57.78"? It seems to just waste bytes. And most communication is of the latter sort: an app talking to its own servers managed by the same company.
BGP can be hijacked. Anycast IPs exist. Rolling out a new release when one of your IPs is unavailable could be a severe challenge. SVCB records are actually kinda neat.
All of that's a problem with DNS too, even updating the IP. You could still use it to get the initial entry point if you wanted. But when you serve a webpage with an automatically generated pointer to image3.yourdomain, the only reason not to make that an IP is HTTPS, and LE just started issuing IP address certificates. Think about it - it saves a few round trips.
I signed up for a Pro account yesterday, now when I try to access it, I get shuffled off to the 'create a new account' page.
Trying to access a human in support to understand wtf has been going on, perhaps unsurprisingly, has been a study in infuriation. Their AI support bot has been as useless as most other AI support bots.
Sending email to their support email address triggers the same AI support bot. It suggested waiting 24 hours and trying to access my account again. And then closed the issue after 4 hours.
Probably not related to the email delivery issue (I keep getting the link via email, it just redirects to the 'create a new account' page), but perhaps indicative of something seriously broken and a lack of interest in actual support (even for paying customers).
As others have pointed out, using 'tmptest' works until someone buys tmptest -- unlikely, but people will buy anything these days.
I always use the ISO-3166 "user-assigned" 2-letter codes (AA, QM-QZ, XA-XZ, ZZ), with the theory being that ISO-3166 Maintenance Agency getting international consensus to move those codes back to regular country codes will take longer than the heat death of the universe, so using them for internal domains is probably safe.
> The UN is structurally designed to give China and Russia outsized influence.
An interesting assertion. I presume you are implying outsized influence over the US (or do you mean every other country?). I'm honestly curious: can you describe this structural design?
The thing that jumps to mind is the Security Council, which they can parley into diplomatic favours from other people. And the whole point of the UN is that it was the victors of WWII explaining to the rest of the world how international affairs were going to work, so I'd be pleasantly surprised if the special privileges stopped there.
And even without that, the UN isn't really set up to handle technical matters. It is a diplomatic club. The point is to give people a seat at the table without considering their competence.
The Security Council is controlled by the US and its allies (3 out of 5 permanent seats). And the Security Council does not decide on matters of public health like the WHO does. The WHO is staffed by very competent people, certainly more competent than RFK.
The UN has handled several technical matters successfully, including global vaccination programs.
> Is the presence of a human driver keeping you from using Uber/Lyft/taxis more than you currently are?
Yep. A couple of bad experiences with Uber/Lyft drivers put me off using them. Waymo is honestly more comfortable/less stressful for me. Similarly, I just read an article discussing parents making use of Waymo to schlep their kids to sportball practice/friend's house/wherever kids hang out these days, even though it is against Waymo's terms of service. The article indicated those parents didn't trust their kids to be alone in a car with a strange human, but were OK with an automated system (and violating the ToS of that system).
> please explain how exactly our city landscapes, namely parking lots, will be revolutionized in any way, shape, or form other than zombie lots occupied Waymos
Today parking tends to be located near the shop/restaurant/office people want to go to. If people no longer need to park to go to where they want to go, parking (for charging) can relocate and be concentrated, thereby freeing up the parking spaces for other uses.
Thanks for the reply. The perception of safety in attended ride shares is masking the larger economic constraint. So let's assume for the sake of conversation that your safety concerns are warranted. I'd ask you to consider how much additional money you're willing to spend on ride shares. The urban utopia of autonomous vehicles is often championed, yet its economics go largely unexamined in a capitalist regime. How much additional money do you expect most Americans to spend on ride shares, to the degree that they abandon vehicle ownership? What degree of broad behavior and spending change do you expect to occur as a result of unattended ride shares?