
My problem with K8s: the network abstraction layer just feels _wrong_.

It's an attempt to replicate the old "hard exterior, gooey interior" model of corporate networks.

I would very much prefer if K8s used publicly routable IPv6 for traffic delivery, and then simply provided an authenticated overlay on top of it.




> My problem with K8s: the network abstraction layer just feels _wrong_.

> I would very much prefer if K8s used publicly routable IPv6 for traffic delivery

shudder... nothing could feel more wrong to me than publicly routable IPv6, yuck.


Publicly routable is wonderful. My first job was at a company that had somehow acquired a class B, so all our computers just had normal, real addresses. They always had the same address whether you were on a VPN, a home network, or whatever, and remoting into the company network just worked.


Same! It was incredibly easy to obtain address space in the 80's and 90's. I have a /24 ("class C") routed to my home!


Why? It neatly separates concerns. Routing and reachability should be handled by the network. The upper layers should handle authorization and discovery.

Public IPs also definitely don't need to be accessible from the wider Internet. Border firewalls are still a thing.
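
For what it's worth, "the upper layers should handle authorization" doesn't need anything exotic. Mutual TLS on every connection already gets you there, while the network just routes packets to a plain public IPv6 address. Rough Python sketch (the cert paths, port, and addresses here are made up):

    # The network only delivers packets to a routable IPv6 address;
    # authorization happens in the application layer via mutual TLS.
    import socket, ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.crt", "server.key")  # this workload's identity
    ctx.load_verify_locations("internal-ca.pem")     # CA that signs workload certs
    ctx.verify_mode = ssl.CERT_REQUIRED              # reject unauthenticated peers

    with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as sock:
        sock.bind(("::", 8443))  # publicly routable; the border firewall is policy, not NAT
        sock.listen()
        with ctx.wrap_socket(sock, server_side=True) as tls:
            conn, addr = tls.accept()
            print("authenticated peer:", addr[0], conn.getpeercert()["subject"])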



It's still an overlay network.


How do you suggest talking to IPv4-only internet hosts and supporting IPv4-only containers?


Via the border load balancers, just like we do it now.


So basically I need to run another piece of infra that does NAT64 and DNS64, and it limits my deployment options quite a bit (can't do DSR)? Totally unnecessary in the cloud... Not sure how that's better for users, but probably better for vendors ;)

Btw, overlay is not the only option for CNI - Calico, Cilium, and a few others can do it via L3 by integrating with your equipment. It's even possible in the cloud, but with serious scale limitations...


No, you misunderstand me. My dream infrastructure would run IPv6 with publicly routable IP addresses for the internal network, for everything.

IPv4 is needed only for the external IPv4 clients, and for the server code to reach any external resources that are IPv4-only. The clients are simply going to connect via the border load balancers, just as usual.

For the external IPv4-only resources, you'll need to use DNS64. But this is not any different from the status quo. Regular K8s nodes can only reach external resources through NAT anyway.
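
(For the curious: DNS64 just synthesizes AAAA records by embedding the IPv4 address in a NAT64 prefix, usually the well-known 64:ff9b::/96. Toy Python illustration, using a documentation-range IPv4 address:)

    # What a DNS64 resolver effectively does for an IPv4-only name:
    # embed the A record's address in the NAT64 well-known prefix 64:ff9b::/96.
    import ipaddress

    prefix = ipaddress.IPv6Network("64:ff9b::/96")
    v4 = ipaddress.IPv4Address("192.0.2.10")  # example address (TEST-NET-1)

    aaaa = ipaddress.IPv6Address(int(prefix.network_address) | int(v4))
    print(aaaa)  # 64:ff9b::c000:20a -- the NAT64 gateway maps this back to 192.0.2.10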

I'm actually trialing this infrastructure for my current company. We got an IPv6 assignment from ARIN, so we can use consistent blocks in US West and US East locations. We don't use K8s, though. AWS ECS works pretty great for us right now.

> Btw, overlay is not the only option for CNI - Calico, Cilium, and a few others can do it via L3 by integrating with your equipment. It's even possible in the cloud, but with serious scale limitations...

It's still an overlay network, just in hardware.


> It's still an overlay network, just in hardware.

It really isn't, at least not in the commonly understood sense. See [0] for an example - you can use this with dual-stack and route everything natively, even IPv4 using RFC 1918 CIDRs. No IPIP/GRE/VXLAN tunneling required. It does require setting up BGP peering on your routers.

[0] - https://cloudnativelabs.github.io/post/2017-05-22-kube-pod-n...
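
To make the "no tunneling" point concrete: each node just advertises its pod CIDR over BGP, so peers end up with ordinary next-hop routes instead of an encapsulated overlay. Illustrative Python, all addresses made up:

    # What the fabric learns when pod CIDRs are advertised natively over BGP
    # (Calico-style, no IPIP/GRE/VXLAN). Hypothetical nodes and prefixes.
    nodes = {
        "node-a": {"node_ip": "2001:db8:0:1::1", "pod_cidr": "2001:db8:100:a::/64"},
        "node-b": {"node_ip": "2001:db8:0:1::2", "pod_cidr": "2001:db8:100:b::/64"},
    }

    # Every BGP peer (other nodes, top-of-rack routers) installs plain routes:
    for name, n in nodes.items():
        print(f"ip -6 route add {n['pod_cidr']} via {n['node_ip']}  # learned from {name}")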


There are ways to unbolt the native networking stack and roll your own. Tons of options available: https://github.com/containernetworking/cni

I don’t agree with your approach (curious as to why you would want this) but I believe it’s possible.


I've felt similarly. Possibly because I was online pretty early, pre-NAT... there was public IPv4 everywhere.


How would that work with load balancing and horizontal scaling?


Just like it works currently. Either via dedicated load balancers or by using individual service endpoints.
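
For example, with a headless service, DNS already hands the client one record per endpoint, and the client picks one itself. Quick sketch (the service name and port are hypothetical):

    # Client-side balancing over individual service endpoints:
    # a headless Service resolves to one AAAA record per pod.
    import random, socket

    infos = socket.getaddrinfo("my-svc.my-ns.svc.cluster.local", 8080,
                               family=socket.AF_INET6, type=socket.SOCK_STREAM)
    endpoints = [sockaddr for *_, sockaddr in infos]

    host, port = random.choice(endpoints)[:2]  # trivial policy; round-robin works too
    with socket.create_connection((host, port)) as conn:
        conn.sendall(b"ping")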



