osigurdson's comments | Hacker News

Perhaps I misunderstand your comment, but when you run docker / podman on Windows, you are using WSL / Hyper-V.

Running docker on Windows runs docker in WSL (which is a Hyper-V guest). The project offers running docker images as a WSL instance (which is a Hyper-V guest), no docker involved.

You said - "A bit more overhead since they now run as a VM instead of a container"

To which osigurdson seemed to be noting that WSL2 itself is a VM (meaning whether you launch 1 or 100 WSL2 instances, a single Linux VM is spun up), and when you run docker with the WSL2 backend (now the default), it runs using exactly that same VM.

Can you clarify what you meant by "A bit more overhead"? Running a container via docker or directly via WSL2 will use the same underlying VM, and there will only be that one VM regardless of the number of WSL2 or docker instances.


For a single container the difference depends on your exact setup. I typically run docker in the same WSL instance as my other wsl stuff, so starting a single docker container adds 0 additional VMs, while starting the container as a wsl instance will add one VM. If you use the "docker for windows" package you may be adding a VM just for running docker, depending on your setup.

Once you start the second container the difference becomes more obvious: running $N containers in docker uses one VM with one linux kernel, no matter how many containers you add. Running $N containers as separate WSL instances runs $N VMs and $N linux kernels. That's the "bit more overhead" I was referring to


"Running $N containers as separate WSL instances runs $N VMs and $N linux kernels."

But it doesn't, and this is what I'm disagreeing with.

If you instantiate WSL2, it launches a Linux VM. A single Linux VM. If you then run Docker with WSL2 integration (the default and hugely recommended), it uses that Linux VM as its VM as well, so you're still at 1 VM.

If you run 100 WSL2 instances, they will all use that single Linux VM, each doing namespacing for isolation with their own filesystems. If you run 100 Docker instances, they will all use that single Linux VM.

If you run 100 WSL instances, and 100 Docker instances (assuming, again, WSL2 integration which is the default), they will all be using that single Linux VM.
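One way to convince yourself of this from a Windows terminal is to compare what different WSL2 distros report (a sketch; "Ubuntu" and "Debian" are placeholder distro names, and the exact kernel string depends on your WSL version):

```shell
# List installed distros and their WSL version; every version-2 distro
# runs inside the same single utility VM
wsl.exe --list --verbose

# Both distros report the same kernel and the same boot time, because
# they share that one VM (distro names here are placeholders)
wsl.exe -d Ubuntu -- uname -r
wsl.exe -d Debian -- uname -r
wsl.exe -d Ubuntu -- uptime -s
wsl.exe -d Debian -- uptime -s
```

If the two distros were separate VMs, the boot timestamps would differ.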


The linked project includes a very different way to launch docker containers.

I'm not sure if this is what you mean, but in some ways it would be nice to have tighter coupling with a registry. Docker build is kind of like a multiplexer: pull from here or there, build locally, then tag and push somewhere else. Most of the time all pulls are from public registries, the push goes to a single private one, and the local image is never used at all.
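A sketch of that typical flow (the registry host and tags are made up):

```shell
# Typical flow: base layers are pulled from public registries, the image
# is built locally, then pushed to a single private registry
# (registry.example.com and the tag are hypothetical)
docker build -t registry.example.com/team/app:1.0 .
docker push registry.example.com/team/app:1.0

# buildx can collapse the build-tag-push steps into one, so the local
# image store is barely involved at all
docker buildx build --push -t registry.example.com/team/app:1.0 .
```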

It seems overly orthogonal for the typical use case but perhaps just not enough of an annoyance for anyone to change it.


In some situations, yes; in others, no. For instance, if you want to control memory or CPU, using a container makes sense (unless you want to use cgroups directly). Also, if you're running Kubernetes, a container is needed.

You have to differentiate between container images and "runtime" containers. You can have the former without the latter, and vice versa. They are entirely orthogonal things.

E.g. systemd exposes a lot of resource control as well as sandboxing options, to the point that I would argue a systemd service can be very similar to a "traditional" runtime container, without any image involved.
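A sketch of what that looks like in a unit file (the service name and binary path are hypothetical; the directives are standard systemd options):

```ini
# /etc/systemd/system/myapp.service (hypothetical service)
[Service]
ExecStart=/usr/local/bin/myapp

# Resource control (implemented via cgroups under the hood)
MemoryMax=512M
CPUQuota=50%

# Sandboxing, with no container image involved
ProtectSystem=strict
PrivateTmp=yes
NoNewPrivileges=yes
```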


Well, I did mention "or use cgroups" above.

And what I've said is that there are more options: you don't have to use cgroups directly; there are other tools abstracting over them (e.g. systemd) that aren't also container runtimes.
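For one-off commands, systemd-run is an example of such an abstraction (a sketch; `./my-batch-job` is a placeholder):

```shell
# Run a one-off command in a transient cgroup scope with resource limits,
# no container runtime or image involved ("./my-batch-job" is hypothetical)
systemd-run --user --scope -p MemoryMax=256M -p CPUQuota=25% ./my-batch-job
```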

Macs were already much better value than other laptops in my opinion. I just wish it were easier to run Linux on them.

Hi all. I'll see if I can get this added to the list of search tools above, but here is a tool that allows you to search (text and semantic / tags) as well as chat. Some people found this useful in previous months, so I'm posting it again.

Basic search: https://nthesis.ai/public/hn-who-is-hiring

Chat: Click the "Chat" button or use the following link (https://nthesis.ai/public/b28f0eb9-f5ac-4152-9a4c-152253d698...)


Right now, I'd say the best language for AI is the one that you can review the fastest and that rarely changes. Go is fairly readable imo and almost never changes, so it's probably a good contender. But I can't see any reason for anyone to learn it if they don't feel like it. The same goes for other "language X is good for AI" type posts.

"Waterfall" was primarily a strawman that the agile salesmen made up. Sure, it existed in some form, but it was not widely practiced.

I think we might as well just go all in at this point: "LGTM, LLM". The industry always overshoots and then self-corrects later. Therefore, maybe the right way to help it reach a more sane equilibrium is to forget about the code altogether and focus on other ways to constrain it / ensure correctness, and/or determine better ways to know when comprehension is needed vs optional.

What I don't like is the impossible middle ground where people are asked to 20X their output while taking full responsibility for 100% of the code at the same time. That is the kind of magical thinking that I am certain the market will eventually delete. You have to either give up on comprehension or accept a modest productivity boost, 20% at best.


There is a famous piece of advice for balancing in game design from Sid Meier: "double it, or cut it in half", and I think it fits here.

https://www.benguttmann.com/blog/double-it-or-cut-it-in-half...


The productivity boost entirely depends on the way the software was written.

Brownfield legacy projects with god classes and millions of lines of code that need to behave coherently across multiple channels, without anything in the written code actually linking them? That shit is not even gonna get a 20% boost; you'll almost always be quicker on your own. What you do get is a fatigue bonus, by which I mean you'll invest yourself less for the same amount of output, while getting slightly slower, because nobody I've ever interacted with is able to keep such code bases in their mind well enough to branch out to multiple agents.

On projects that have been architected to be owned by an LLM? A modular monolith with hints linking all channels together, etc.? Yeah, you're gonna get a massive productivity boost, and you'll also be using your brain a shitton, actually reasoning out how to get the LLM to work on the project beyond silly weekend-toy-project scope (100k-MM LOC).

But let's be real here, most employees are working with codebases like the former.

And I'm still learning how to do the second. While I've significantly improved since I started one year ago, I wouldn't consider myself a master at it yet. I continue to try things out, and frequently try things that I ultimately decide to revert or (best case) discard before merging to main, simply because I notice significant impediments to modifying/adding features with a given architecture.

Seriously, this is currently bleeding edge. Things have not even begun to settle yet.

We're way too early for the industry to have normalized around LLMs yet.


While I too am only seeing a boost on the order of 20% so far, I think there are more creative applications of LLMs beyond writing code that can unlock multiples of net productivity in delivering product end to end. People are discovering these today and blogging about them, but the noise about dark factories and agents supervising agents supervising agents, etc., is drowning out their voices.

Every one of us is a pioneer if we choose to be. We have only scratched the surface as an industry.


Yes, exactly. Of all the use cases for LLMs, "writing code" is easily my least favorite. There are so many other cool things for "stochastic contextual orchestrators".

The problem with this is when something breaks and your manager says “why haven’t you figured it out yet” as you spend hours digging into the 200 PRs of vibe slop that landed in the past day.

Now you could say that expectation has to change but I don’t see how—the people paying you expect you to produce working software. And we’ve always been biased in favor of short term shipping over longer term maintainability.


yep, as is always the case, it has to break before you can fix it. Band-aiding something along just makes it more painful for longer.

I personally prefer the Podman CLI, however, as you don't need a daemon running in the background, and I prefer Kubernetes-like YAMLs for local development. I definitely don't need a polished desktop GUI that shows me how many images I have, though; I've never understood the use case for that.
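That workflow looks roughly like this (a sketch; "pod.yaml" and "mypod" are placeholder names):

```shell
# Run a Kubernetes-style pod manifest directly, no daemon required
# ("pod.yaml" is a placeholder for your own manifest)
podman kube play pod.yaml

# Going the other way: generate such a manifest from an existing pod
# or container to use as a starting point
podman kube generate mypod > pod.yaml
```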

Same. I switched to podman just so I don't have to troubleshoot why the docker daemon isn't running again.

Used docker for over a decade, never ran into this "docker daemon intermittently stops running" issue.

This is hindsight, but managing your domain separately from your cloud provider might be a good idea.


