mrAssHat's comments | Hacker News

Integrate that with kubernetes and I'm sold.


There's NVIDIA's Kata Containers integration: https://docs.nvidia.com/datacenter/cloud-native/gpu-operator... . I'm not sure whether you need physical GPUs to run them, though. Most likely not.

I'm wondering, though, what value Kubernetes would add besides integrating with existing (presumably Kubernetes-based) infrastructure? At least, that's my understanding of the rationale for Kata containers. Other than that, it seems like it'd just be getting in the way...



I believe this work originated at Intel as "Clear Containers" (which, I believe, started life from an acquisition, but I could be mixing this up... my memory isn't what it used to be). Either way, it's great that they're being used like this and at NVIDIA (I know Alibaba Cloud also uses this tech).


Yes, Kata started as Clear Containers. And yes, the main purpose is compatibility with containers -- though generally speaking, adding layers to the cloud stack never helps make a deployment more efficient. On kraft.cloud we use Dockerfiles to specify the app/filesystem, but then at deploy time we automatically and transparently convert that to a specialized VM/unikernel for best performance.



I think it's from the AWS team; they made Firecracker (a microVM). So it does exist.

Funnily enough, that's what Fly does: take your container, uncompress it into a full microVM, and run it on their infra.


fly.io uses Firecracker. Firecracker is open source under an Apache 2.0 license. It's faster than LightVM, which is mentioned in the post.

Firecracker also has containerd support (https://github.com/firecracker-microvm/firecracker-container...).

There are a few ways to run Kubernetes with Firecracker, including FireKube.
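For a sense of what that looks like under the hood, here's a minimal sketch of driving Firecracker's REST API over its unix socket from Python. It assumes a `firecracker --api-sock /tmp/firecracker.socket` process is already running and that the kernel/rootfs paths are placeholders you supply yourself; it's not how fly.io or FireKube actually wire things up, just the raw API calls.

    # Sketch: configure and start a Firecracker microVM via its unix-socket API.
    # Assumes firecracker is already listening on /tmp/firecracker.socket and
    # that vmlinux.bin / rootfs.ext4 are placeholder paths.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP client over a unix domain socket (Firecracker's API endpoint)."""

        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    def put(path, body, socket_path="/tmp/firecracker.socket"):
        conn = UnixHTTPConnection(socket_path)
        try:
            conn.request("PUT", path, body=json.dumps(body),
                         headers={"Content-Type": "application/json"})
            resp = conn.getresponse()
            resp.read()
            assert resp.status in (200, 204), f"{path}: HTTP {resp.status}"
        finally:
            conn.close()

    put("/boot-source", {"kernel_image_path": "vmlinux.bin",
                         "boot_args": "console=ttyS0 reboot=k panic=1 pci=off"})
    put("/drives/rootfs", {"drive_id": "rootfs", "path_on_host": "rootfs.ext4",
                           "is_root_device": True, "is_read_only": False})
    put("/actions", {"action_type": "InstanceStart"})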


Is it really faster? I thought Firecracker boot times were something like 100ms. LightVM claims 2.3ms?


Back when we did the paper, Firecracker wasn't mainstream, so we ended up doing a (much hackier) version of a fast VMM by modifying Xen's VMM; but yeah, a few millis was totally feasible back then, and still is now (the evolution of that paper is Unikraft, an LF OSS project at www.unikraft.org).

(Cold) boot times are determined by a chain of components, including (1) the controller (e.g., k8s/Borg), (2) the VMM (Firecracker, QEMU, Cloud Hypervisor), (3) the VM's OS (e.g., Linux or Windows), (4) any initialization of processes, libs, etc., and finally (5) the app itself.

With Unikraft we build extremely specialized VMs (unikernels) in order to minimize the overhead of (3) and (4). On KraftCloud, which leverages Unikraft/unikernels, we additionally use a custom controller to optimize (1) and Firecracker to optimize (2). What's left is (5), the app, which hopefully the developers can optimize if needed.


LightVM is stating a VM creation time of 2.3ms, while Firecracker states 125ms from VM creation to a working user space. So this is comparing apples and oranges.


I know it's cool to talk about these insane numbers, but from what I can tell people have AWS Lambdas that boot so much slower than this that they send warmup calls just to be sure. What exactly warrants the ability to start a VM this quickly?


The 125ms is using Linux. Using a unikernel and tweaking Firecracker a bit (on KraftCloud), we can get, for example, 20ms cold starts for NGINX, and we have features on the way to reduce this further.
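If you want to sanity-check numbers like these from the outside, one rough (and admittedly crude) approach is to time the first request against a scaled-to-zero service; that measures cold start plus network and app time, not the VM boot in isolation. The URL below is a placeholder:

    # Rough sketch: approximate perceived cold-start latency by timing the first
    # request to a scale-to-zero service (placeholder URL; includes network time).
    import time
    import urllib.request

    start = time.perf_counter()
    with urllib.request.urlopen("https://my-nginx.example.com/") as resp:
        resp.read()
    print(f"first-request latency: {(time.perf_counter() - start) * 1000:.1f} ms")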


We ended up doing this over at https://unikraft.io :-)


Hacking all those things together feels empowering, like a complex construct that can be built from simple things we are already used to. This article has a very "hacky" spirit, love it!


Why not have both? A single server/service AND easy-to-install modules providing atomic functionality? Just make the install/uninstall process easy (like just downloading and optionally unpacking a module's archive into an installed_modules dir)?


Yes! Since these will be easy to add and used _as needed_, the plan is to have them enabled by this `.env` var: https://github.com/bewcloud/bewcloud/blob/2d70a3817de1fd6108... -- no download/install/uninstall required, and no bloat either, since they won't be loaded if they're not enabled.
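Roughly the idea (an illustrative Python sketch, not bewcloud's actual code; the env var and module names here are made up): one env var lists the enabled modules, and anything not listed simply never gets registered.

    # Illustrative sketch only (not bewcloud's actual code): enable optional
    # modules via a single comma-separated env var instead of install/uninstall.
    import os

    AVAILABLE_MODULES = {"files", "notes", "photos", "news"}  # hypothetical names

    def enabled_modules() -> set[str]:
        raw = os.environ.get("ENABLED_MODULES", "")  # hypothetical var name
        requested = {m.strip() for m in raw.split(",") if m.strip()}
        return requested & AVAILABLE_MODULES

    for module in enabled_modules():
        print(f"registering module: {module}")  # real code would mount its routes here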


With the exception of a handful of core modules, isn't that exactly what Nextcloud does?


Yes! Their "problem" is that they have to support a lot of legacy and extensibility, thus even just the "barebones" Nextcloud uses a lot of resources (CPU/Memory/IO).


On the contrary, I didn't find the Syncthing setup very easy: I don't want to depend on external resources when syncing over LAN, so I had to set up a coordinator myself, and it's a bit confusing with all those long tokens I needed for some reason.


Thanks for sharing your experience! Is there something else you use right now?


It uses direct UDP peer discovery on LAN though?



Thanks for the reference! Seems like a really good resource. I disagree with the reasoning about pipefail, though. If I expect a command to return a non-zero exit code, I'd rather be explicit about it.
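To illustrate what I mean by being explicit (in Python rather than shell, and not a snippet from the linked guide): fail loudly by default, and opt specific commands out only where a non-zero exit is genuinely expected.

    # Illustration of the principle (Python, not shell): fail loudly by default,
    # and handle expected non-zero exits explicitly instead of ignoring them all.
    import subprocess

    subprocess.run(["make", "build"], check=True)  # any non-zero exit raises

    # grep exits 1 when nothing matches, which is fine here; anything else is an error.
    result = subprocess.run(["grep", "-rq", "TODO", "src/"])
    if result.returncode not in (0, 1):
        raise RuntimeError(f"grep failed unexpectedly with exit code {result.returncode}")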


Testcontainers isn't even compatible with Kubernetes; it's a tool from the past.


We use [kubedock](https://github.com/joyrex2001/kubedock) to run testcontainers in kubernetes clusters. As long as you're only pulling the images, not building or loading them (explicitly not supported by kubedock), it works pretty well.
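The nice part is that the tests themselves don't change: you point the Docker host at kubedock's Docker-compatible API and Testcontainers keeps working, as long as images are only pulled. A rough Python sketch (the endpoint is a placeholder for wherever kubedock runs in your cluster):

    # Rough sketch: point testcontainers at kubedock's Docker-compatible API via
    # DOCKER_HOST (placeholder endpoint); the test code itself stays unchanged.
    import os
    os.environ.setdefault("DOCKER_HOST", "tcp://kubedock.test-infra.svc:2475")

    from testcontainers.postgres import PostgresContainer

    with PostgresContainer("postgres:16") as pg:  # image is pulled, not built
        print(pg.get_connection_url())            # run the integration tests against this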


Why would you run them in Kubernetes? It seems like extreme overkill for launching a short-lived container for an integration test. What could Kubernetes possibly add to that?


Because we are a big company and would like to utilize resources better.

We also want homogeneity in tech when possible (we already use Kubernetes heavily and don't want to keep Docker hosts around anymore).

Teams of testers need to be accounted for in terms of resource quotas and RBAC.

What exactly do you see as overkill in wanting to run short-lived containers in Kubernetes rather than in Docker (given that we already have Kubernetes and "cook" it ourselves)?


That reasoning seems more like policy/cargo cult than reasoning specific to your org. For something short-lived and meant to be isolated, I wouldn't want to subject it to even more infrastructural dependencies outside its control.


Better resource utilization definitely sounds like cargo cult, riiight.


It's overkill because these containers typically have a lifetime counted in single-digit seconds, and it takes Kubernetes not only more time but also more compute resources to decide where to allocate the pod than to actually just run the thing.


ROAC "runs on any computer", which k8s does not.


Another one? Why?


...cough, htop, cough...


...cough, ps aux, cough...


...cough, pneumonia, cough...


I hate animations like the one in the description; they're too fast to comprehend.

Looks like just an fzf that can traverse across branches, but it's not obvious which branch the selected file is in until you switch to it.


Thanks for the feedback on the animations. I didn't want to keep too large a .gif in the repo, but that's the only format that's straightforward enough to display inline, so I had to keep the animation terse (and admittedly a bit too fast).

It is indeed using fzf in a specialized way so that it only searches for git worktrees and nothing more. The use case I have is that I have dozens or hundreds of repos in various locations on disk and this tool makes it easy to instantly jump into any of them.

That's why it's called "gcd" -- git + cd.


More than what?

