> the end user could be MITM'd already with a root certificate maliciously installed on their device

If a malicious root certificate is installed, then the user’s system is already compromised and signature validation won’t help.


Not in the strict sense if it's state-mandated MITM (you are forced to install a specific root certificate to legally connect to the internet).

But even in the other case, not all is lost: not every piece of malware can (or even tries to) defend itself against every antivirus product in existence. The machine might be compromised, but being able to retrieve the correct update for a malware scanner the infection doesn’t tamper with can still give you the signal that your machine is infected and should be reinstalled.


The signature would use asymmetric cryptography, so unless the attacker had access to the signing key, it would be computationally infeasible for them to produce a valid signature for a modified payload.

EDIT: I see what you mean. radicaldreamer’s point was that if a malicious root certificate is installed, signature validation won’t help there. But it will still help when downloading from mirrors or over plain HTTP.
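
As a rough sketch of the kind of check I mean (assuming an Ed25519 detached signature and the Python "cryptography" package; names are hypothetical):

    # Sketch: verify a downloaded payload against a detached Ed25519 signature
    # using a public key that ships with the client, not one fetched over the
    # same channel as the payload.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    def payload_is_authentic(payload: bytes, signature: bytes, pinned_pubkey: bytes) -> bool:
        public_key = ed25519.Ed25519PublicKey.from_public_bytes(pinned_pubkey)
        try:
            public_key.verify(signature, payload)  # raises InvalidSignature if tampered
            return True
        except InvalidSignature:
            return False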


One major use of service meshes that I’ve come across is transparent L7 load balancing. gRPC, which is now very common, uses long-running connections to multiplex messages. This breaks load balancing because new gRPC calls within a single connection will not automatically be distributed to new endpoints. There is the option of DNS polling, but DNS caching can interfere. So the L7 service mesh proxy is used to load balance individual gRPC calls without modifying the services.

https://learn.microsoft.com/en-us/aspnet/core/grpc/loadbalan...
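
For comparison, client-side load balancing is possible without a mesh, but it depends on DNS re-resolution behaving well. A rough sketch with the Python grpcio client (the target name is hypothetical):

    # Client-side round-robin across the addresses returned by DNS resolution.
    # Still subject to DNS caching / re-resolution timing, which is why the
    # transparent L7 proxy approach is often preferred.
    import grpc

    channel = grpc.insecure_channel(
        "dns:///my-grpc-service.internal:50051",  # hypothetical headless service name
        options=[("grpc.lb_policy_name", "round_robin")],
    )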


That is false. Creating threads/processes is still possible within a container. Docker runs the entrypoint as PID 1 in the container’s PID namespace, and that process can create as many subprocesses as it wants. If you’re talking about cores, Docker uses Linux cgroups, which permit the use of multiple cores. You can limit the CPU quota and pin the container to specific cores. See https://docs.docker.com/config/containers/resource_constrain...
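
For example, a rough sketch with the Docker SDK for Python (values illustrative):

    # Limit the container to the equivalent of 2 CPUs of quota and pin it to
    # cores 0 and 1; nproc reflects the cpuset and prints 2.
    import docker

    client = docker.from_env()
    container = client.containers.run(
        "debian:stable-slim",
        command="nproc",
        nano_cpus=2_000_000_000,   # CPU quota in units of 1e-9 CPUs (= 2 CPUs)
        cpuset_cpus="0,1",         # pin to cores 0 and 1
        detach=True,
    )
    container.wait()
    print(container.logs())  # b'2\n'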


But, they do. Any electromagnetic wave can refract.

https://en.wikipedia.org/wiki/X-ray_optics#Compound_refracti...

3Blue1Brown has a very nice series on optics: https://www.youtube.com/watch?v=Cz4Q4QOuoo8
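
The catch is that the refractive index of materials for X-rays is only very slightly below 1, so a single surface barely bends the beam, which is why the lenses in that article are "compound" (many surfaces stacked). Back-of-the-envelope sketch (the delta value is just an order-of-magnitude illustration):

    # Snell's law with n = 1 - delta, delta ~ 1e-6: the per-surface deflection
    # is on the order of microdegrees, so many surfaces are stacked in series.
    import math

    delta = 1e-6
    n_vacuum, n_material = 1.0, 1.0 - delta

    theta_i = math.radians(10.0)  # angle of incidence
    theta_t = math.asin(n_vacuum / n_material * math.sin(theta_i))
    print(f"deflection per surface: {math.degrees(theta_t - theta_i):.2e} degrees")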


Not a very usable color picker. It’s limited to a slice of the HSV color space with V fixed at 1. Also, the outputs are not copyable.

The HSL/HSV color spaces also aren’t great for generating color combinations, because perceived lightness varies across hues. A much better option is the OKLCH color space, which has perceptually uniform lightness. The downside is that it isn’t perfectly cylindrical, so some coordinate combinations have no displayable color. It also supports P3 and Rec2020 colors.

I think something like https://oklch.com/ with color combinations would be more useful.

https://atmos.style/ is the most useful color tool I’ve used. Unfortunately, it’s not FOSS.


How does this handle multiple containers for a Pod? In container-runtime-based Kubernetes, containers within a pod share the same network namespace (same localhost) and optionally the PID namespace.

The press release maps pods to machines, but provides no mapping of pod containers to a Fly.io concept.

Are multiple containers allowed? Do they share the same network namespace? Is sharing PID namespace optional?

Having multiple containers per pod is a core functionality of Kubernetes.
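
For reference, this is the kind of spec I mean (a sketch via the Kubernetes Python client; names and images are just examples):

    # One Pod, two containers. With a standard container runtime they share a
    # network namespace, so "app" can reach "proxy" on localhost.
    from kubernetes import client, config

    config.load_kube_config()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="two-container-pod"),
        spec=client.V1PodSpec(containers=[
            client.V1Container(name="app", image="my-app:latest"),
            client.V1Container(name="proxy", image="nginx:alpine",
                               ports=[client.V1ContainerPort(container_port=8080)]),
        ]),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)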


You can use mount namespaces, or even containers in your VM. Maybe that's how?


Fly.io claims it’s “just a VM”. But Fly.io Machines are an abstraction over Firecracker microVMs, and the FKS implementation is a further abstraction on top of Fly.io Machines. So what I’m asking is how, if at all, the FKS implementation supports multiple containers for a pod. With FKS, the abstraction is no longer just a VM.

It seems that Fly.io Machines support multiple processes from a single container image, but not multiple containers per Machine [0]. This means one container image per Machine and thus no shared network namespace across multiple containers.

[0] https://community.fly.io/t/multi-process-machines/8375


You can run Docker on a Fly Machine, and run arbitrary numbers of containers inside of it. Or you can run lots of small Fly Machines. FKS is just one model for deploying things.


Right, but what is the point of FKS then? It’s no longer Kubernetes if it doesn’t support a core behavior of Kubernetes.

If you only support deploying single containers with single processes on FKS, then you might as well use flyctl.

It’s a solvable issue of course. The virtual-kubelet implementation would need to create a Machine running a container runtime image that would then run the pod containers to match the pod configuration.

I think disclaimers about the limitations of FKS compared to standard Kubernetes should be present and highly visible.


We're actively working on the ability to run multiple processes with different images because it's something people using our platform want and it just happens to also be something needed for us to make FKS a more standardized Kubernetes offering.


Seems like Fly.io Machines are trying to reimplement Kata Containers with the Firecracker backend [0], but also abstracting away the host hypervisor machine infrastructure.

Kata has a guest image and guest agent to run multiple isolated containers [1].

[0] https://katacontainers.io/

[1] https://github.com/kata-containers/kata-containers/blob/main...


The article discusses what you get by using K8s alongside Fly.io. If you want to bin-pack containers onto Fly Machines, you can of course just boot up your own K8s cluster here; that has always been an option.


It should discuss what you don’t get compared to the standard behaviors of Kubernetes.


The problem is not the need to bin-pack, the problem is completeness. Sidecars and multiple containers are used for logging, backups, etc. Not to mention that if you grab a manifest or chart for an app, it is going to have pods with multiple containers (whether that's strictly needed or not), and those won't work on fly.io.

This is a critical feature available in every Kubernetes offering, and as such people rely on it. Trying to say that maybe you can do without is missing the forest for the trees.

Like saying that your C compiler doesn't need arrays because it has pointers: sure, maybe, but now good luck compiling any existing code. Maybe don't call it a C compiler if no C program will work on it unmodified.


Again: Fly Machines are just Linux VMs, and you have root on them.


I'm discussing the capabilities of Fly Kubernetes, not Fly Machines. This is good news though, it means they might get there in the future.


Why would you do this? Sounds like an antipattern to me


It’s widely used in the Kubernetes world; these are known as sidecars [0].

[0] https://kubernetes.io/blog/2023/08/25/native-sidecar-contain...


Widely? I’ve lived in this Helm, Kubernetes, Pulumi world for the past 4 years and we followed the simple rule of one service/container per pod. Why add complexity where it’s not needed? Like running a DB and a service in the same Docker container - a no-go for me and many.


Kubernetes official documentation states, "A Pod is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers."

Running more than one container in a pod is a fundamental concept of Kubernetes. Init containers and sidecars allow for a separation of concerns, which is essential for non-cloud-native workloads. Logging and telemetry are just a couple of features which may be designed and built into cloud-native applications, but legacy applications need this flexibility without modifying the application itself.
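
To make that concrete, a minimal sketch with the Kubernetes Python client (names and images illustrative): a legacy app left unmodified, an init container that prepares state, and a log-shipping sidecar reading a shared volume.

    # Init container + sidecar around an unmodified "legacy" container.
    from kubernetes import client

    logs_volume = client.V1Volume(name="logs", empty_dir=client.V1EmptyDirVolumeSource())
    logs_mount = client.V1VolumeMount(name="logs", mount_path="/var/log/app")

    pod_spec = client.V1PodSpec(
        init_containers=[
            client.V1Container(name="prepare", image="busybox:1.36",
                               command=["sh", "-c", "echo ready > /var/log/app/init"],
                               volume_mounts=[logs_mount]),
        ],
        containers=[
            client.V1Container(name="legacy-app", image="legacy-app:1.0",
                               volume_mounts=[logs_mount]),
            client.V1Container(name="log-shipper", image="fluent/fluent-bit:2.2",
                               volume_mounts=[logs_mount]),
        ],
        volumes=[logs_volume],
    )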

The fact that you ran K8s for four years without it demonstrates only that your workload doesn't require it -- not that it is "unnecessary complexity" or an "anti-pattern."


The thing is you might not be adding the sidecars yourself.

Kubernetes has mutating admission webhooks (configured via a MutatingWebhookConfiguration resource) that allow the Pod object to be mutated before creation.

I think the most common use case is for service meshes. Your Deployment might not have any sidecars, but the service mesh controller will automatically add a network proxy container to your pod via the admission webhook.

Another use case would be OpenTelemetry or some other observability service sidecar injection for auto-instrumentation.
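
The injection itself is just a JSONPatch returned by the webhook. Roughly (a sketch, not a complete server; the image name is illustrative):

    # Build an AdmissionReview response that appends a proxy sidecar to the
    # incoming Pod via a base64-encoded JSONPatch.
    import base64, json

    def mutate(admission_review: dict) -> dict:
        patch = [{
            "op": "add",
            "path": "/spec/containers/-",
            "value": {"name": "injected-proxy", "image": "example/proxy:latest"},
        }]
        return {
            "apiVersion": "admission.k8s.io/v1",
            "kind": "AdmissionReview",
            "response": {
                "uid": admission_review["request"]["uid"],
                "allowed": True,
                "patchType": "JSONPatch",
                "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
            },
        }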


Ok, that was something I was missing on my end. Thanks for all the explanations, and sorry for being wrong.


Nobody would suggest this. But what about a metrics scraper for your db pod? That’s where sidecars come in.


Like helm, it's a widely used anti-pattern


It’s because you have multiple processes (containers) that work together in their little pod. You could stick them all in a single image somehow, but that would be much more work and less flexible.


I think there’s potential here.

It is Kubernetes, since they are running k3s as the control-plane. It’s not just an implementation of the Pod API, it’s an implementation of the kubelet, which handles the logs/exec/etc. APIs. The rest of the Kubernetes API is part of the control-plane on k3s.

The only major issue I see is persistent volume support, but persistent volumes in Kubernetes were always a bit flaky and I’ve always preferred to use an externally managed DB or storage solution.


DX might be better I suppose, since you don’t have to fiddle with node sizing, cluster autoscalers, etc.

Someone else linked GKE Autopilot which manages all of that for you. So if you’re using GKE I don’t see much improvement, since you lose out on k8s features like persistent volumes and DaemonSets.


You wouldn’t use affinity rules anymore. The pods are scheduled on a single virtual-kubelet node, so if you use anti-affinity, scheduling would fail.


> You wouldn’t use affinity rules anymore

Point being: what if I wanted to do this? How could I achieve making sure services were running according to the antiaffinity rules I provided? E.g. not on same physical machine; not on same VM; not in same datacentre; not in same region; etc.


If there were a virtual kubelet per unit of granularity (datacenter, in their case?) then you would be able to use affinity rules just fine.


Right. Though the virtual-kubelets can actually be running on the same machine. They just need to be configured with different node names.

The press release states that your k8s API is actually running on a single machine with k3s and a virtual-kubelet. So, I’m not sure if it’s one “cluster” per region, or one “cluster” with multiple virtual-kubelets for regions.

Either way, your FKS cluster control-plane would sit in a single region.


How do you forbid running two instances of the same service on one node without anti-affinity?


Traditionally, each node is its own machine. virtual-kubelet creates a virtual node that is a proxy to some other pod infrastructure. In the case of FKS, each pod on the virtual node is a Machine (a node in the traditional sense), so it’s equivalent to having an anti-affinity rule on all pods with an infinite node pool.
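
For reference, the usual "one replica per node" rule looks something like this (Kubernetes Python client sketch; the label is illustrative). With everything scheduled onto a single virtual-kubelet node, a required rule like this could only ever place one matching pod:

    # Typical pod anti-affinity keyed on the node hostname.
    from kubernetes import client

    anti_affinity = client.V1Affinity(
        pod_anti_affinity=client.V1PodAntiAffinity(
            required_during_scheduling_ignored_during_execution=[
                client.V1PodAffinityTerm(
                    topology_key="kubernetes.io/hostname",
                    label_selector=client.V1LabelSelector(match_labels={"app": "my-service"}),
                )
            ]
        )
    )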

