I love Hetzner. That said, their IPv6 support is poor. A server gets only a /64; if you want a /56 (allowing 256 container networks) you have to pay €15. As for virtual networks: they only support IPv4!
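To make the subnetting arithmetic concrete, here's a sketch with Python's `ipaddress` module (the prefix is hypothetical, from the RFC 3849 documentation range):

```python
import ipaddress

# Hypothetical /56 allocation (documentation range, RFC 3849).
prefix = ipaddress.ip_network("2001:db8:0:ff00::/56")

# A /56 splits into 2^(64-56) = 256 /64 networks, one per container network.
subnets = list(prefix.subnets(new_prefix=64))
print(len(subnets))    # 256
print(subnets[0])      # 2001:db8:0:ff00::/64
print(subnets[-1])     # 2001:db8:0:ffff::/64
```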
In practice this can be made to work, but a networking expert can probably explain better than I can why splitting a prefix into chunks smaller than a /64 and assigning them to virtual networks within a host is a bad idea.
In Hetzner's specific case: they won't give me one or more additional /72s: only a /56 if I pay for it. Per server.
Splitting things out into a prefix smaller than a /64 breaks a couple of things.
SLAAC will not work, and SLAAC is actually a really neat use case for containers.
Not having the overhead of DHCP for container addressing is neat.
Also, blocks smaller than a /64 usually break things like prefix delegation from a provider.
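A quick sketch of why the /64 boundary matters for SLAAC (Python `ipaddress`; RFC 4862 SLAAC builds an address from the advertised prefix plus a 64-bit interface identifier, so a longer prefix leaves too few bits):

```python
import ipaddress

# SLAAC (RFC 4862) combines the on-link prefix with a 64-bit interface
# identifier, so the prefix must be exactly /64 for it to work.
def slaac_capable(net: ipaddress.IPv6Network) -> bool:
    interface_id_bits = 128 - net.prefixlen
    return interface_id_bits == 64

print(slaac_capable(ipaddress.ip_network("2001:db8::/64")))  # True
print(slaac_capable(ipaddress.ip_network("2001:db8::/72")))  # False: only 56 bits left for the interface ID
```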
A container should absolutely not need even a /72. The traditional reason for a /64 is SLAAC, but you most certainly don't need that for one container (if at all, honestly).
Indeed, a host should be able to request a /64 via DHCPv6-PD and split that between millions of container networks. But you can't do that on Hetzner (or anywhere else).
Yeah, that obviously only works with a /56 and above, because networks should be a minimum of /64. I use k3s and each host has a /64; Cilium just gives each pod a /80 and the host does NDP and such. Works fine, no need for DHCPv6.
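The /64-per-host, /80-per-pod carving described above is just subnetting math; here's a sketch (hypothetical documentation-range prefix, not actual Cilium configuration):

```python
import ipaddress

# Hypothetical per-host /64 (documentation range, RFC 3849); the scheme
# described above carves one /80 per pod out of it.
host_net = ipaddress.ip_network("2001:db8:1:2::/64")

print(2 ** (80 - 64))  # 65536 possible /80 pod prefixes per host

pods = host_net.subnets(new_prefix=80)
print(next(pods))  # 2001:db8:1:2::/80
print(next(pods))  # 2001:db8:1:2:1::/80
```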
Why do you need IPv6 on your internal network? Is 10/8 really not enough, or does it overlap with something? For 99.99% of people it's fine for the internal interfaces, and if anything it actually simplifies configuration.
The purpose of a network is to allow any two consenting parties to communicate. IPv4 cannot deliver that if either party has an RFC1918 address. NAT is a foul perversion of this foundational principle of the Internet Protocol.
On your *internal* network, e.g. the thing between your postgres VM and your webserver (or whatever). I'm not arguing against it on the public/WAN connection.
For a lot of use cases a major advantage of IPv6 is to get away from ambiguous rfc1918 addressing.
You can then just put an allow rule between arbitrary v6 addresses anywhere on the internet when you need connectivity, without any other hacks like proxies or NAT and without the associated complexity and addressing ambiguity/context dependence of rfc1918 addresses.
So, for example, you can just curl or ssh to mycontainer.mydomain.net, or you can put an allow rule from mycontainer.mydomain.net to a VM or laptop on your home network.
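The "allow rule between arbitrary v6 addresses" boils down to an unambiguous membership test, because global unicast addresses are unique. A sketch with hypothetical documentation-range addresses (a real rule would live in your firewall, not Python):

```python
import ipaddress

# Global unicast addresses mean the same thing from any vantage point,
# unlike 10.0.0.0/8, which can exist independently on both sides.
# Addresses below are hypothetical (documentation range, RFC 3849).
container = ipaddress.ip_address("2001:db8:aa::5")      # stands in for mycontainer.mydomain.net
allowed_src = ipaddress.ip_network("2001:db8:aa::/48")  # stands in for your server's allocation

# The essence of the allow rule: a plain membership test, no NAT context needed.
print(container in allowed_src)                               # True
print(ipaddress.ip_address("2001:db8:bb::1") in allowed_src)  # False
```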
The context in the GP comment was generally getting v6 connectivity for containers.
"Internal" is a context dependent term that you introduced. But to give a use case for that, for example you might want to have (maybe at a future date) two hosts on your networks on AWS and Hetzner talk to each other, still without allowing public connectivity.
If your containers have a Global Unicast Address, then it's possible to look at connection logs and figure out which container made a particular request, for instance.
It doesn't impede observability, for goodness' sake. It does, however, impede accidentally opening up your internal network because you don't really understand your firewall/virtual router/whatever.
Of course it impedes observability. With IPv6, I can see the IP addresses of the containers that connect to a service. With IPv4, I get (at best) the IP address of the container host, thanks to NAT.
Are you also afraid of port forwarding? Have you considered that your ISP could choose to send your router packets destined for RFC1918 addresses?
At least they're not as bad as Azure... :)