Docker as an Integrated Development Environment (2019) (medium.com/ls12styler)
108 points by albertoCaroM 7 days ago | 91 comments





Personally I'd prefer going with NixOS to achieve the same result. That way you don't even need a Docker installation. As a bonus, you can actually install the Nix package manager on macOS if you're not into Linux (and this way there is no need for virtualization if you're on a Mac).

More info: https://nixos.org/


As an added bonus, Nix can still easily output a container image for use with Docker, containing the same shell environment.
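
Roughly, a minimal sketch of what that looks like (the package picks here are just examples):

    # default.nix
    { pkgs ? import <nixpkgs> {} }:

    pkgs.dockerTools.buildImage {
      name = "dev-shell";
      tag = "latest";
      # the same packages you'd put in your shell environment
      contents = with pkgs; [ bashInteractive coreutils git rustc cargo ];
      config = {
        Cmd = [ "/bin/bash" ];
        WorkingDir = "/workspace";
      };
    }

nix-build then spits out a tarball you can feed straight into docker load (docker load < result).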

I love the proposition, but frankly it seems to take too much time to learn even the basics. It would be awesome for someone to build a Docker-like experience on top of Nix, though.

Such is the life of a professional.

If you want to reach the areas where you can really improve your productivity, you're gonna have to take weeks or even months to learn something from the bottom up. There is no way around it, and it's the same in many industries. There is no "30 minutes to get more productive than anyone else" in reality, only hard work, understanding and application of your knowledge in the real world.


I’ve been working with Nix for years and I still find it to be disappointing. I don’t like Docker and I really love the Nix concept, but the execution seems poor. As an example, I spent half of a Saturday unsuccessfully working with people on the Nix Discord to get VS Code configured with some Rust plugins on MacOS. Similarly, if you need to write your own package, it might be easy or it might be a bottomless rabbit hole tar pit in which you have to package the entire dependency tree including a bunch of obscure C libraries with no build docs. More often than not it seems to be the latter. Further still, nixpkgs is horribly documented and dynamically typed and poorly organized, so every time you’re looking at a definition for a certain package and want to know what “shape” its dependencies have, you have to grep around for another package that uses it, then try to work out what that package is passing in as the dependency; however, if the dependency object is returned from another function defined in some other file you’ll just have to rinse and repeat until you come across the original definition. This is well below the bar for “professional work”. Problems like these arise every single time I try to work with Nix, and it becomes an enormous time sink. Docker is pretty shit, but there’s a pretty clear upper bound to the time I’ll spend struggling with it. Not so with Nix.

I agree actually, you describe the issues I've had with Nix quite head-on. I've not run into such issues every single time I try it, but I have hit upon a good deal of issues of the kind you describe. Nix is by no means perfect, and if I were smarter I'd try to remake it with a stronger type system, but the basic premise is useful enough on its own to warrant usage IMO.

I’ve thought a lot about a type system for Nix. Typing the Nix expression language would improve some things, but since so much of what people do with the Nix expression language is writing scripts that take in files and transform them into other files, you would really want a type system for the derivations which could describe the “shape” of the input and output files. I’m not aware of any such type system, but it would be a really interesting area of research. Would love to know if others have thought about this or not.

Can you link the Nix Discord?

As far as I know, the only official chat channels are #nixos (and its #nixos-* friends) on Freenode (https://nixos.wiki/wiki/Get_In_Touch). There is an unofficial Discord, but I'd advise you to join the IRC channels instead; help is fast and the people contributing to Nix actually hang around there as well.

I've actually had terrible success getting help through the official channels: it's why I asked.

So you just shifted your dependency from Docker to Nix. It might be more fun and an interesting learning experience, but it's also more complicated (or at least it's not as widely used as Docker is).

> It might be more fun and an interesting learning experience, but it's also more complicated

I can't imagine anyone using both and thinking Nix is more complicated than Docker. And it's not close.

> at least it's not as widely used as Docker is

"Which has more users" would not enter the top ten reasons I'd choose between tools like this.


I'm a major user of both and I can definitely say that Docker's basics are easier to learn and teach than Nix's basics. Maybe it has to do with the simple nature of the Dockerfile, with just lines of commands, compared to Nix, where you first need to learn the Nix language, then the idioms, then the package manager, and only then can you understand the OS. There are a couple of layers before things start making sense, while a Dockerfile is almost like a line-by-line shell script.

At least that's my experience. Although I do prefer Nix over Docker any day, I'd definitely say the learning curve is steeper with Nix, but it's so worth it long-term.


>"Which has more users" would not enter the top ten reasons I'd choose between tools like this.

The size of the community, especially if it's a quality one, means you've increased your surface area for getting help with issues you encounter. Which is a very non-trivial concern in a work environment.


> "Which has more users" would not enter the top ten reasons I'd choose between tools like this.

Why not? A big community means that you don't have to solve problems on your own, because someone else has probably solved the same problem before you.

If anything, it should be the *first* factor when making these decisions.


> I can't imagine anyone using both and thinking Nix is more complicated than Docker. And it's not close.

Well, for starters, Nix has a wider syntax because it covers much more than Docker + docker-compose. Which is perfectly fine, because they are two different tools that intersect in only a few use cases.

Syntax to install a few packages:

environment.systemPackages = with pkgs; [ wget vim nano zsh file ];

Including the trailing semicolon. The Dockerfile equivalent on Alpine:

RUN apk add vim nano zsh file

I mean, maybe I'm too used to the latter but I really struggle to find the former simpler.


It means the ecosystem for Docker is more robust than for Nix, which is patently true (see my comment here: https://news.ycombinator.com/item?id=26689378). I want to like Nix (and indeed I use it for some things), but it’s wildly immature, and maturity is a fine reason to choose one tool over another.

Also “imagine thinking...” is just vapid snark. If you must snark, at least make it substantial.


Yes, learning Nix is definitely an investment, but it's more powerful than Docker, allowing you to create your entire setup, including drivers, desktop environment, and any userspace apps you might need, all declaratively and contained in a config file (or several if you prefer).
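
As a rough illustration (heavily abbreviated, and the hardware/desktop choices are just examples; exact option names shift a bit between releases):

    # /etc/nixos/configuration.nix
    { config, pkgs, ... }:
    {
      boot.loader.systemd-boot.enable = true;

      # drivers and desktop environment, declared like any other option
      services.xserver.enable = true;
      services.xserver.videoDrivers = [ "nvidia" ];
      services.xserver.desktopManager.plasma5.enable = true;
      hardware.pulseaudio.enable = true;
      virtualisation.docker.enable = true;

      users.users.dev = {
        isNormalUser = true;
        extraGroups = [ "wheel" "docker" ];
      };

      # userspace apps
      environment.systemPackages = with pkgs; [ git vim firefox docker-compose ];
    }

nixos-rebuild switch then applies the whole thing in one go.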

> So you just shifted your dependency from Docker to Nix

OTOH you don't have to write all this boilerplate code that's suggested in the article, and your Nix environment is truly reproducible, whereas rebuilding a Docker image might not reproduce it faithfully. (Try running `apt-get install` in an Ubuntu container without running `apt-get update` first.) On top of that, if you've ever had to use more than two languages (+ toolchains) inside a single project, maybe even on multiple architectures[0], you'll appreciate Nix taking care of the entire bootstrap procedure.

[0]: Lots of dependencies that are easy to install on x86 need to be installed/compiled by hand on arm64.
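
For the multi-toolchain case, a sketch of the sort of shell.nix that replaces the boilerplate (the toolchain picks are illustrative):

    { pkgs ? import <nixpkgs> {} }:

    pkgs.mkShell {
      buildInputs = with pkgs; [
        python39      # Python toolchain
        nodejs        # Node toolchain
        rustc cargo   # Rust toolchain
        openssl       # the kind of native dep that's painful to bootstrap by hand
      ];
    }

nix-shell then drops you into an environment with all of them on PATH.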


It looks impressive but it isn't clear how much you have to pay for services from them. It isn't free and you aren't in control. Your snapshots and abilities to rollback etc are likely to be dependent on their storage servers.

They certainly should monetise, but not making it clear is what I object to. I've raised an issue for clarification in their community wiki.

https://github.com/nix-community/wiki/issues/34


Mmmm, no. Nix is not a SaaS that builds your snapshots that you have to pay for (or that "they need to monetise", what's wrong with people?), it's a whole ecosystem ranging from a full OS (NixOS) down to a package manager (the Nix package manager) and a programming language (just "Nix").

Not sure what needs clarification here, it's pretty up-front about its mission and features already.

> Your snapshots and abilities to rollback etc are likely to be dependent on their storage servers

Not sure where you get this from. Snapshots are stored locally unless you specify otherwise, and then you get to choose whatever storage servers you want to use. Absolutely no "hidden" costs with Nix, as it's an MIT-licensed project and I don't think they even offer any "value-added" services or paid customer support.

Edit: reading the issue you just created (https://github.com/nix-community/wiki/issues/34), I'm even more confused. "Given the need to monetise" is coming from where? Nowhere do they say that they have to monetise but don't know how/where, so where do you get this from? Not everything is about money.


I'm really impressed with it and am ONLY seeking clarity. I'll be extremely happy if I can use it standalone. Copyright, licensing are all I'm looking at.

e.g. Something like AGPL is considered copyleft and not compatible with the 'open source' ethos. Still 'free', but the additional non-compete cloud service clause is a sensible move, yet also something I'd just like to understand.

Your statement of 'what's wrong with people' is what is getting me. Borderline defensive. I'm definitely not knocking the product, quite the reverse. Its so good I want to embrace it wholeheartedly.

I have no problem paying for things, contributing voluntarily to a great product.

It's good to see that various organisations are committed to funding the infrastructure costs etc. (which negates my comment about storage servers).

As to monetisation, kind of irrelevant, but I was referring to the paid services at the bottom of https://nixos.wiki/wiki/Nix_Ecosystem


> e.g. Something like AGPL is considered copyleft and not compatible with 'open source' ethos. Still 'free'

I’ve had to read this several times and still don’t understand what you’re talking about. AGPL is a Free Software license. All Free Software licenses (to my knowledge) provide source access. AGPL is also OSI-approved, so it is also an open source license.

It’s unclear what point you’re trying to make?


Issue 34 is now closed. My thanks for the patience of the people there, who have clarified and improved the visibility of licensing.

As noted below, my initial wording was unhelpful and misplaced.

The infrastructure costs are covered by sponsors. The 'they' in 'they should monetise' I now understand to be external organisations rather than NixOS themselves.


Hmm, I don’t think you can compare IntelliJ, which is a fully featured IDE with refactoring functionality, full-text search, a debugger and so on, to a vim setup with plugins in a Docker container, just because they both edit text. This is what the author did at the end of the write-up. It’s like comparing jQuery to NodeJS: yeah, they both generally are for JavaScript, but they serve a different purpose.

I agree, the title of the article is a bit misleading. In general, you’d still need an IDE installed on the host machine, which can then connect to a runtime on the container. With VS Code and remote containers, it’s quite easy.
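
The setup mostly boils down to a .devcontainer/devcontainer.json checked into the repo; a rough sketch (the image and extension IDs are just examples):

    {
      "name": "my-project",
      "image": "rust:1.51",
      "extensions": ["rust-lang.rust"],
      "forwardPorts": [8080],
      "mounts": ["source=cargo-cache,target=/usr/local/cargo/registry,type=volume"]
    }

VS Code then runs its server and the listed extensions inside the container, while the UI stays on the host.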

Alternatively, maybe it’d be possible to have the container expose an IDE over http (possibly vscode through the browser?).


VNC/RDP via Guacamole or some other VDI solution can provide this with relatively little effort. But it sort of defeats the point. Using your local IDE to connect to a container allows for a snappier, more responsive development experience than a VDI can provide. Plus remote containers is basically just ssh. So a lot less bandwidth and general overhead than a full on VDI solution, and you don't need to have a full VNC server set up on your containers.

The same concept still works; they could have installed IntelliJ and copied over all of the plugins and settings?

vim, gdb, grep, cscope, ...

Just because you are not familiar with his setup doesn't mean it's not at feature parity and more.

Hell, he might even be running intellij in docker if he wishes.

Saying this as I have a similar setup with emacs.


For me what you just listed are a couple of tools that fit a specific workflow - seems like C or C++ development.

IDE is an Integrated Development Environment. So I think strapping a few tools together with no real influence over one another doesn't seem to constitute an IDE.

Unless you wanna argue that putting these tools into a Docker image makes that container an IDE, then I don't know, maybe? Whatever floats your boat in the end.

Using a JetBrains product you pretty much just pay for the support and "works out of the box" features. You can roll your own LSP, AST Analyzers, shell scripts that bundle it all together and call it an IDE, but I would still be on the side saying it's just a bunch of tools and they're not "integrated"


I am suggesting - vim + plugins or emacs + libraries that use those tools. All integrated as you say.

Your IDE also uses gdb or similar for debugging, common compilers like llvm for compilation / code indexing, common analyzers.

You will have references, definitions, code search, debug, completion, etc all within the editor (in my case emacs)

It all works great and much faster than out-of-the-box IDEs, and it works within a terminal too... no X11 needed.

That still requires some fiddling initially to get it working, but once you have your setup it's a matter of just pushing it to Docker.

Now... the drawback is the time spent to get there, but what I'm saying is don't be dismissive; try and imagine what's possible with old tools.

New tools are also nice in that they avoid that setup work, you can push these too in the docker container and use them the same way.


"Choosing a base image can be quite daunting. I’m always a fan of Alpine Linux for my application containers, so that’s what I chose."

Just be aware that means the musl libc, which is often fine, but not always. Software that expects glibc can crash or have unpredictable behavior. The JVM is a good example, unless you get a JVM that was originally built against musl.

And sometimes also issues with busybox, where it differs from other implementations of the same tools.


Case in point: "Using Alpine can make Python Docker builds 50× slower", https://pythonspeed.com/articles/alpine-docker-python/

That article should be titled, “Most Python packages do not upload an Alpine-compatible wheel to PyPI.”

If you know where to find Alpine-compatible wheels, or host your own, Alpine has no build-speed penalty.
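
For illustration, the "host your own" route is just an extra package index; the index URL below is hypothetical:

    FROM python:3.9-alpine
    # point pip at a self-hosted index serving musl wheels;
    # without it, pip falls back to compiling these packages from source
    RUN pip install --extra-index-url https://wheels.example.org/alpine/simple \
        numpy pandas cryptography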


Where do you find Alpine-compatible wheels? Many wheels uploaded to PyPI aren't, because e.g. both GitHub Actions [0] and Azure Pipelines [1] don't have Alpine images. Is it reasonable to expect maintainers of small, open-source libraries to write and maintain their own Alpine runners?

I guess if you know your company's workflow depends on Alpine, you can build and cache them yourself. But the wider community doesn't benefit from that.

[0] https://docs.github.com/en/actions/using-github-hosted-runne...

[1] https://docs.microsoft.com/en-us/azure/devops/pipelines/agen...


I was not aware of prior work so I maintain Alpine wheels for packages I use personally in a GitHub org.

https://github.com/alpine-wheels/index

Besides my own needs, only one other person has requested additional packages, which I was happy to add. Maintenance is minimal, mostly just approving pull requests created by Dependabot.


That's pretty slick! Thanks

Exactly zero packages upload Alpine-compatible wheels to PyPI.

The manylinux specification assumes glibc, so there is no valid distribution format which would allow the publication of non-glibc wheels.

However there is an in-progress PEP to support it: https://www.python.org/dev/peps/pep-0656/


I've used alpine images for python in production and not run into any of these issues. The key is to know how to use multistage builds to cache layers.
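
Something like this pattern, roughly (packages and paths illustrative):

    # build stage: compile wheels once; this layer is cached until requirements.txt changes
    FROM python:3.9-alpine AS build
    RUN apk add --no-cache build-base libffi-dev openssl-dev
    COPY requirements.txt .
    RUN pip wheel --wheel-dir /wheels -r requirements.txt

    # runtime stage: no compilers, just the prebuilt wheels
    FROM python:3.9-alpine
    COPY --from=build /wheels /wheels
    RUN pip install --no-cache-dir /wheels/*.whl
    COPY . /app
    CMD ["python", "/app/main.py"]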

Yeah, this is why I use debian slim as an image for most projects unless I'm prioritizing small size. It's small and popular enough that a lot of other images already use it, and chances are that you don't have to redownload it.

Does anyone use docker for full-fledged development on OSX? I am a Linux user and tried setting up a dev environment for my colleagues on OSX but file system I/O was extremely slow and completely unusable.

Yep, I use it. There are two tricks to mitigate this:

1. Using a :delegated or :cached flag when using a bind mount can speed it up a bit

2. For folders that need a lot of RW, but don’t need to be shared with the host (think node_modules or sbt cache), I bind a docker volume managed by docker-compose. This makes it extremely fast. Here's an example: https://gist.github.com/kamac/3fbb0548339655d37f3d786de19ae6...
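
In docker-compose terms both tricks boil down to roughly this (service name and paths are illustrative):

    version: "3.7"
    services:
      app:
        build: .
        volumes:
          - .:/app:delegated                  # source tree, relaxed consistency
          - node_modules:/app/node_modules    # heavy RW folder in a fast named volume
    volumes:
      node_modules: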


IIRC there are some mount options that might help if you search the docker docs, but for me I just create some local docker volumes to hold my code and mount those instead of mounting host folders. It feels a little weird and unnatural that your code is 'hidden' in docker's volume folder (under /var/lib/docker/volumes) in the VM instead of in a folder on your host machine. But it gets you into more of a mindset that this is just a temporary checkout and the real persistent home for the code is your source repository (github, etc.), so you don't let things linger without being checked into a branch somewhere.
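
Concretely, that's just something like (image name illustrative):

    docker volume create proj-src
    docker run -it -v proj-src:/workspace -w /workspace my-dev-image bash
    # clone the repo inside the container; the checkout lives in the VM, so I/O stays fast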

Lots of useful information has already been given; I’d also like to add docker-sync [0] for a local development environment.

[0] http://docker-sync.io/


We ran into this as well. The I/O would cripple the system. It seems like if you have a compiled language in which the sources are compiled into a single binary that runs on your Docker VM, things are not so bad, but in an interpreted language with the sources on the host’s disk and the interpreter running on the guest, the VM needs to reach across that host/guest VM boundary all the time. We also tried all the tricks to speed it up, but we ultimately gave up and just used native MacOS processes.

Yes I do. I've been trying to develop on OSX for a few days now, using Docker. But between the low amount of RAM on the laptop and the bad I/O performance, I decided to give another try to GitHub Codespaces and I'm very pleasantly surprised. It feels fast enough and I can switch computers without thinking about it.

My new company's restrictive software whitelists are making Codespaces a very attractive proposition but lead times for invites have been in the months for some people.

Yes. Check out what ddev is doing; there are some options for using NFS mounts that are acceptably fast.

Thanks for the tips guys. Will definitely try them out.

Go the next step and run a local kubernetes cluster with kind or k3s (it will take you 30 seconds to have a k8s cluster going). IMHO the kubectl CLI is a lot more logical than docker's CLI. You can create all your local storage volumes ahead of time, create a pod that attaches to it, and then just kubectl exec into the pod vs. writing a long fiddly docker command line string (or crafting a docker-compose.yml). It's easy to adjust the pod as necessary while it runs too, like adding a service to expose ports without rerunning the container.

But if you do like the idea of docker dev environments, check out a tool like batect: https://github.com/batect/batect It's somewhat like if docker-compose had make-like commands you could define. Your whole dev environment and workflow can be defined in a simple yaml config that anyone can use.


Won’t setting up a k8s cluster require writing resource definitions? I imagine you’d need to write a statefulset. How’s that better than writing a docker-compose? I’m not sure how vscode does it, but it allows you to publish ports in real time as well.

Nope you can make a simple pod definition and not worry about a deployment. For a local cluster you'll just have one node and everything is effectively a statefulset. IMHO it's easier to write k8s yaml, there's tons of tooling, clear schemas, etc. You could even script calling the API server directly.
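
A bare-bones version of that pod, just as a sketch (image, paths and names are illustrative):

    # dev-pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: dev
    spec:
      containers:
        - name: dev
          image: ubuntu:20.04
          command: ["sleep", "infinity"]
          volumeMounts:
            - name: src
              mountPath: /workspace
      volumes:
        - name: src
          hostPath:
            path: /data/src   # a path on your single local node

    # kubectl apply -f dev-pod.yaml
    # kubectl exec -it dev -- bash
    # kubectl port-forward pod/dev 8080:8080   # expose a port without recreating anything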

Check out k9s also, it’s much nicer to use than kubectl for edits and shells and things.

Update: The original post is from January 2019.

Ashley Broadley's github page at https://github.com/ls12styler sadly doesn't contain a repo with his rust dev work to date (I will ask him as it has some really good stuff in the article.)

----

Very nice. I'm doing similar at the moment. Maybe take a look at

https://www.reddit.com/r/rust/comments/mifrjj/what_extra_dev...

A list of useful cargo built-in and third-party subcommands.

As you note, common recommended app crates (source) should be gathered separately.

I have several other links and ideas, e.g. supporting different targets such as x86_64-unknown-linux-musl, but too long for this post!


Tutorial on declarative and reproducible developer environments with Nix: https://nix.dev/tutorials/declarative-and-reproducible-devel...

Cool stuff. A few months ago I looked into building a CITRIX alternative based on the idea that you would run application frontends in a local secure docker container while running the backend in the cloud. E.g. you could run VSCode locally while actually compiling in a kubernetes cluster. I eventually ruled out the idea for business reasons, but from a technical perspective it's doable and probably useful. At the time I thought the primary advantage would be reduced cloud costs and lower latency.

I'm currently testing fedora silverblue, which uses toolbox in a roughly similar way. https://github.com/containers/toolbox

One thing that bugs me is that I can't (or don't know how to) get my current state into a text file from which I can reproduce it.

It's also not fun for embedded development. Guess what, I need to access USB devices, serial, mass storage, hid - super annoying with this setup.


If you want an OS as an IDE... wouldn't it be easier to install emacs?

Until you reimplement every language as transpilers to elisp, that's not the same thing at all. In this respect, Emacs is actually in the same tool category as VSCode or Vim. As you'd set up Dev Containers in VSCode, you'd set up TRAMP in Emacs to ssh into a Docker environment, or (more likely for Emacsians I guess) access a Guix environment or Nix shell.
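
For what it's worth, with the docker-tramp package a file inside a running container is just another remote path (container and path names are illustrative):

    ;; file inside a running container
    (find-file "/docker:my-dev-container:/workspace/src/main.rs")
    ;; or over ssh into a container that runs sshd on a forwarded port
    (find-file "/ssh:dev@localhost#2222:/workspace/src/main.rs")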

I find it's easier to set up environments via Nix. It's no longer just a bunch of shell scripts.

What's never been clear to me is how one pins a specific version in Nix. For example, say I want Python 3.9.2 exactly?

You need to find the Nixpkgs GitHub commit with that specific version – in some cases it won't necessarily be in the binary cache, so you have to compile it yourself (otherwise they'd have an insane amount more binaries to host). You can use Cachix to get your own binary cache (or alternatively just straight up host your own), and check out Niv if you want to make it easier to manage pinned Nixpkgs.
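
The usual shape of a pin, as a sketch (the commit placeholder has to be a nixpkgs revision where python39 actually resolves to 3.9.2):

    # shell.nix
    let
      pinned = import (fetchTarball
        "https://github.com/NixOS/nixpkgs/archive/<commit-with-python-3.9.2>.tar.gz") {};
    in
    pinned.mkShell {
      buildInputs = [ pinned.python39 ];
    }

Niv automates exactly this kind of bookkeeping (the URL, the hash, updating the pin).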

Niv looks interesting, I’ll have to check it out.

Trawling through git commits, hosting my own binary cache, these are all awful stories. Package versions are the normal pattern, and this lack of support for them in Nix has turned me off the two times I’ve explored using it. I kinda get why they aren’t supported, but it is a huge dissonance from the rest of the software world.


Having the whole IDE as one Docker container is a very good idea. We do this as well at Yazz:

https://yazz.com/visifile/download.html


Nice, similar thing to what I do, a few more things:

1. If you want X11 (haven't figured out audio yet)

"-e DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix:ro"

2. Firefox

> --shm-size=2g

and start with: firefox --no-remote

3. Entering container

Just map a command to enter the container, with the name as the first parameter and an optional image type as the second.

That way you get a new fresh environment whenever you want

The command would just start the container if it's not started, or exec otherwise. I go the extra length of having docker start an ssh server inside the container, and I just ssh into it.


For audio, maybe you can use pavucontrol on your host to make pulseaudio accept network connections, and then use PULSE_SERVER=your_host_ip command_which_plays_audio in the docker container? (I haven't tried it).

Or better (because there's no nasty reachability from browsers or the net): just use a unix domain socket and bind-mount that and the secret cookie into the container (I use that with bubblewrap for AppImages).
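
Untested with Docker on my side, but it'd look roughly like this (UID, paths and image name are illustrative):

    docker run -it \
      -e PULSE_SERVER=unix:/run/pulse/native \
      -v /run/user/1000/pulse/native:/run/pulse/native \
      -v ~/.config/pulse/cookie:/root/.config/pulse/cookie:ro \
      my-dev-image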

I actually do do this - https://github.com/mikadosoftware/workstation

I like the idea of using k8s as suggested upthread. I just have not had much time to push changes / work on it recently. One thing worth mentioning is that I have moved to podman - it seems a lot slower to start up, but it runs in user space, which seems sensible.


That is a nice write up. We use a very similar set up to containerise our embedded tool chains.

Additionally, we use a wrapper script to symlink the current containerised project to the same location in the host system. This ensures that outputs in the containerised environment point to valid paths on the host:

E.g.: docker mount: /home/me/dev/proj1234 -> /workspace

symlink in host: ln -sfn /home/me/dev/proj1234 /workspace


I run VSCode and Eclipse Theia in Docker containers and access them via browser. Depending on what I do I start either with a Java, Node, or Python container.

When I run docker on my laptop, the fans turn on full speed and stay there (recent macbook pro). I kill docker any time I don't expect to use it in the next hour. If I'm not on AC power, it halves my battery life. I wouldn't even consider keeping docker running constantly. Is this not the normal experience?

Same here with Docker Desktop for Windows.

Unless you have a Linux-based operating system, Docker behaves very poorly.

NB: The Hyper-V backend behaves a bit less poorly than the WSL backend.

I've found that Docker Desktop uses a lot of disk I/O whenever you use volumes, pull an image, or do anything else that touches the hard drive.


Can you run ansible or terraform inside a docker container?

There are two parts of the dev environment - the programmer preferences and the project libraries and other infrastructure. What I would like is to have a way to compose those two and ideally something that would work the same way inside a docker container as in a full VM.


Grab the ansible-runner image from dockerhub and it's a great slimmed down image to run ansible: https://ansible-runner.readthedocs.io/en/stable/container.ht...

To provision stuff _inside_ your docker container from ansible I've found packer is the easiest way to do it: https://www.packer.io/docs/provisioners/ansible-local There was apparently a tool called ansible-bender that did something similar but was abandoned. Packer makes it easy to define a container that's provisioned with a local ansible playbook.
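
The Packer template for that ends up looking roughly like this (image, playbook and tag are illustrative; note that ansible-local needs ansible present inside the container, which is exactly where the size problem below comes from):

    {
      "builders": [
        { "type": "docker", "image": "ubuntu:20.04", "commit": true }
      ],
      "provisioners": [
        { "type": "shell", "inline": ["apt-get update", "apt-get install -y ansible"] },
        { "type": "ansible-local", "playbook_file": "./playbook.yml" }
      ],
      "post-processors": [
        { "type": "docker-tag", "repository": "myorg/dev-image", "tag": "latest" }
      ]
    }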

Ultimately though I think using ansible with containers is a code smell. If you provision in a container with ansible you have to pull in an entire Python install, and that blows up your container size fast. You can do multi-stage builds and carefully extract the stuff you need but it's a real pain. IMHO minimal shell scripts that run in very tightly defined and controlled environments of the dockerfile (i.e. they don't have to be super flexible or support any and every system, just this container) are the way to go.


I have a co-worker who had the idea of stuffing Ansible into a container. This would allow anyone to easily run any Ansible playbooks without having to deal with dependencies and versions. It’s absolutely terrible to use. You end up having wrapper scripts to make it even remotely usable.

Mounting things in the right locations is a nightmare, and even minor changes become a hassle. For Ansible, just learn to use virtualenvs.

Terraform may be a little better.


We do the same thing but I wouldn’t call it a nightmare /hassle. It’s exactly one helper script to start up the container with the right volumes and a few aliases to make commands (ansible / ansible-playbook / etc) work seamlessly. Some good tips here: https://jonathan.bergknoff.com/journal/run-more-stuff-in-doc...
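
The aliases are nothing fancy, something along these lines (image name and mounts are illustrative):

    alias ansible-playbook='docker run --rm -it -v "$(pwd)":/work -w /work -v "$HOME/.ssh":/root/.ssh:ro my-team/ansible:latest ansible-playbook'

    # then, from the project directory:
    ansible-playbook site.yml -i inventories/dev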

Yes to terraform. We use it at work to set up some DynamoDB tables for testing locally. We mount the .terraform folder.

Nix and Guix are both answers to that combined need.

If you want to do this on Windows, I'd recommend giving https://scoop.sh a try.

Especially when doing IT support, having docker on the machines is rather unlikely. This sounds like a good use case for AppImages.

Using schroot for years here.

Can do the same, but with easier access to the host and thus to hardware devices.

Moving it around is easy, since my config is just dotfiles kept around.


99% of the time a directory with your projects is all that you need.

If you really need a "container", debootstrap + systemd-nspawn does the job and provides much better sandboxing with 10x less complexity.
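
Roughly (suite, paths and bind mounts are illustrative):

    sudo debootstrap stable ./devroot http://deb.debian.org/debian
    sudo systemd-nspawn -D ./devroot --bind="$HOME/src":/workspace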

You don't need Docker or Nix.


I only get a new laptop once every several years. Doesn't really seem worth it to me personally. I also sort of like starting fresh in a way. Granted I have my dot-files on github to make that part easier. But I don't mind running the install command for things as they come up.

I'm curious if there are other benefits to this approach though besides just saving time when setting up a new machine. The article mentioned "you end up with (in my opinion) a much more flexible working environment." Any ideas what they might mean?


Reproducible dev environment. Easier to reproduce some bugs for fixing. More certainty that it doesn't just work on your laptop because you have an undocumented dependency installed. Easier to test the setup process on a clean machine and vary things about the machine setup. To test what happens if you have Python available system-wide vs. if you don't. More precise development history since you have docker-compose.yml under Git, making "time travel" easier.

There's all kinds of little benefits that don't seem that important until you have use for them. Of course Guix and Nix go closer to being actually reproducible, but Docker is better than nothing.


To add maybe one more point to this - it’s so much easier to run parts of the pipeline for devs who aren’t familiar with the environment. It can also serve as documentation for what’s required to develop.

I now go a bit further. I used to keep my dotfiles around as well, but last time I decided to go completely fresh, and I learned about powerlevel10k (more performant than powerlevel9k) and SDKMAN (an installer for multiple and different SDK flavours), and had a nice evening to boot. If I hadn’t started over I would just have used the old config and not enjoyed the new benefits.

I think I share your line of thinking.

What are the benefits? Are there downsides to operating in the Docker container for everything?


Anybody have experience with VS Code inside a container? The feature https://code.visualstudio.com/docs/remote/containers looks nice.

Why not just use Vagrant?

I use vagrant + ansible to configure my development environment. In the Vagrantfile I specify also the mount of the workspace containing my project. I then edit the code using vscode installed on the host (or vim from inside the box).
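
For reference, the relevant bits of such a Vagrantfile look roughly like this (box, paths and playbook names are illustrative):

    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/focal64"
      config.vm.synced_folder "~/dev/myproject", "/home/vagrant/myproject"
      config.vm.provision "ansible" do |ansible|
        ansible.playbook = "provisioning/dev.yml"
      end
    end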


