Show HN: Lapdev, a new open-source remote dev environment management software (github.com/lapce)
230 points by lyang2821 8 months ago | 65 comments



This looks pretty good. Being able to use devcontainers on local server hardware without monthly fees (and/or hetzner servers) sounds great.

Up until now we’d been making do with docker-compose and JetBrains’s remote SSH dev; this should be significantly better.


I spent more than a year living with the numerous downsides of high-level CDE (containerized development environment) tools. I must admit that I am very skeptical about all the options available right now, and I have rolled back to using a plain, old-school, Docker Compose-based CDE.

Both .devcontainer and .devfile managed to create more effort than they took away. Some key points:

- long-lived containers

- the IDE abstracting away the fact that you run stuff inside containers, and abstracting away the container lifecycle

- cryptic error states and error messages, especially when setting up new projects from scratch; this often boiled down to bad plugins (even Microsoft's own VSCode plugins wet the bed often)

- only superficial support for Podman

- bad support for ARM-based hardware (and arch-translation issues, like a tool requesting ARM containers while the host running Docker is x64)

At this point I advise teams to try out a plain compose.yml as a CDE and skip the "enterprise" stuff.
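For illustration, a minimal sketch of what such a compose-based CDE can look like (the image name, mount paths, and service name are placeholder assumptions, not from the thread):

```shell
# Write a minimal compose.yml with a long-running "dev" service you exec into.
cat > compose.yml <<'EOF'
services:
  dev:
    image: ubuntu:22.04
    command: sleep infinity
    volumes:
      - .:/workspace
    working_dir: /workspace
EOF
# Then, to use it:
#   docker compose up -d
#   docker compose exec dev bash
```

The `sleep infinity` command keeps the container alive so you can attach shells and editors to it at will.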


This seems to be podman focused from what I see.


No, not really. A big part of the problem is half baked software on the client side.

For devcontainers even the reference implementation CLI is significantly incomplete.



You still cannot stop, remove, or update a Dev container from the CLI [0], and there are numerous issues with the rest of the implemented features.

[0]: https://github.com/devcontainers/cli?tab=readme-ov-file#cont...


> JetBrains’s remote SSH dev;

Does that require a static IP on the remote?

I have skimmed the JetBrains FAQ [1] and it says "no relay servers are involved".

[1] https://www.jetbrains.com/help/idea/faq-about-remote-develop...


No, it's able to use your local ssh_config. For example, I use this to connect to a host running on aws via ssm. The vm isn't reachable at all directly.

You can use something like the following in your ~/.ssh/config:

    Host devhost
        ProxyCommand aws --profile DevProfile ssm start-session --target i-0123456789abcdef0 --document-name AWS-StartSSHSession --parameters "portNumber=22"
You then tell intellij to connect to "devhost". This also works under recent versions of Windows (those which ship with openssh).


IPv6 is perfect for this use case; every development VM is directly reachable.


If this is what I think it is, Visual Studio Code has an extension that does this as well.


I am interested in remote dev environments, but I'm not super excited about managing yet more software in the cloud.

There were some headaches around the exact specifics since it wasn't designed for this, but I liked the idea of using SkyPilot to launch dev machines in the cloud: it has plugins for all the cloud APIs, so you don't need to manage a k8s cluster to launch a dev machine. Admittedly it worked better for launching a Jupyter server than a "full" dev machine, but a full dev machine seemed to be just a few SSH/VS Code configurations away.


There are products that do this out of the box fairly well. Codespaces is an example with a single node, but there are others, all of which are slightly different.


Sorry, it’s not meant to be a dig at you, but I wish we could stop talking about the “cloud” and software running “in the cloud”.

There is no cloud… it’s just software running on someone else’s computers that you don’t control. And that someone else is usually a megacorp like Microsoft or Amazon.


Not the person you are responding to, but I'm aware that the cloud is just other people's hardware. That's usually what I intend to mean when I say it. My own hardware I have to power, maintain, reboot, dust, cool, etc. When I talk about "the cloud" I usually mean an environment I can maintain fully through software, with none of the messy hardware failures and temperature management I might need to think about at home.


A lot of remote dev environments have limitations when it comes to certain types of development. For example, ios and android app development can be tricky. Or game development where you need to have GPUs and build artifacts may be slow to download to your machine.

Are there any guidances for how to fix this?


Founder of Coder (https://github.com/coder/coder) here. We chose Terraform as our provisioning layer so that users can provision full-blown VMs as their development environments.

We have many teams using GPUs with Coder for ML workloads, but remote GUI/game development, where interactivity is essential, remains elusive.


You can access GPUs within containers using CDI (Container Device Interface): https://docs.nvidia.com/datacenter/cloud-native/container-to... No additional tools (e.g., nvidia-ctk) are needed. Docker has recently added support for CDI in version 25.0.
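As a sketch of the mechanism (the daemon.json path and the CDI device name are assumptions that vary by setup; here the file is written to the current directory just to show its shape):

```shell
# Fragment of Docker's daemon.json enabling the CDI feature (Docker >= 25).
cat > daemon.json <<'EOF'
{
  "features": { "cdi": true }
}
EOF
# With a CDI spec present on the host (e.g. /etc/cdi/nvidia.yaml), a GPU is
# then requested by its CDI device name instead of --gpus:
#   docker run --rm --device nvidia.com/gpu=0 ubuntu nvidia-smi
```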


I'm very interested to learn more about this class of tool. I had seen [0] that Coder includes alpha support for .devcontainer, but I'm not aware of other OSS options.

0. https://coder.com/docs/v2/latest/templates/devcontainers#dev...


Having deployed a few of these over the last month or so, I feel like the devcontainer spec is very annoying. The alternative is what Coder does -- write some arbitrary terraform to bring up or down a workspace. I think this is better because I tend to need other things to go in a workspace (like an IAM role to access dev databases, associated Kubernetes resources, etc). With terraform I can configure whatever infrastructure I want to go along with workspaces.
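To sketch what that looks like (a hypothetical fragment, not a complete Coder template; the commented resources are assumptions):

```shell
# Hypothetical fragment of a Coder template: the workspace is plain Terraform,
# so adjacent infrastructure can be declared next to the agent.
cat > main.tf <<'EOF'
resource "coder_agent" "main" {
  os   = "linux"
  arch = "amd64"
}

# Anything else Terraform can manage can ride along with the workspace, e.g.:
# resource "aws_iam_role" "dev_db_access" { ... }
# resource "kubernetes_persistent_volume_claim" "home" { ... }
EOF
```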

The main downside I can see is that users have to write their workspaces for a particular deployment target. This would be a problem for e.g. open-source projects trying to check in a workspace definition file of some kind. We standardize on Kubernetes across clouds and bare metal so it's not an issue for us, but it makes sense that it would be an issue for other use cases.


It's always interesting to see new approaches but I don't see this replacing Vagrant for my projects any time soon:

- Vagrant already supports VM or Container environments, and has a well defined system for building and distributing "base" boxes;

- Vagrant uses a Ruby file to define the environment, so it's much more powerful than a Yet Another Migraine Looming file.


Devcontainers bring notions such as configuring plugins for your IDE, pulling "features" from other repos/registries, managing environment variables passed from host to devcontainer, and finally coding either locally, in a remote environment with more resources, or simply in the context of the rest of your application (very handy for complex network or security setups).


IDEA/IntelliJ based IDEs already support having their native config committed to a project repo and support a `.editorconfig` file; I'm not sure I need a third way to do so.

Like I said, a `Vagrantfile` is Ruby, so besides having all the power of Vagrant and its plugins, you can also just do straight-up Ruby stuff, or even shell out to do other stuff.
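For illustration, a minimal Vagrantfile along those lines (the box name and provisioning step are assumptions):

```shell
# Write a minimal Vagrantfile; being Ruby, it can branch, loop, or shell out.
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"
  # Plain Ruby is available here, e.g. conditioning on the environment:
  config.vm.provision "shell", inline: "apt-get update" if ENV["PROVISION"]
end
EOF
# Then: vagrant up && vagrant ssh
```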


This isn’t open source; one of the directories is licensed under a proprietary, subscriptionware nonfree license.

The remainder is AGPL, which many people (including myself) consider nonfree as well.


Just two weeks ago we open-sourced Daytona (daytonaio) under an Apache license. We had long and deep discussions but finally decided that was the only thing that made sense.


Unfortunately you've also started spamming people about it, including those who previously unsubscribed from your marketing comms.


We’ve ramped up our outreach because experience shows it’s critical for building the contributor base and fast-tracking project evolution; our flurry of repo activity is a testament to this strategy. On the email front, a technical snag during our provider’s migration led to the mix-up where some subscribers were left enrolled. We’re on it. Meanwhile, we believe Daytona as a product will make up for the noise. Check our Friday release v0.7.0 for the proof in the pudding.


AGPL is Free Software according to the FSF, so I'm not sure who at all would contest that it isn't.


https://news.ycombinator.com/item?id=30495737

https://news.ycombinator.com/item?id=30496091

The FSF are anticapitalist zealots that think the ability to run a SaaS business using free software is a "loophole" in the GPL that needs to be closed. The AGPL is an unenforceable mess that's trying to be an EULA (but can't be).


How is freedom anticapitalist?


Restricting the freedom to keep local files local (that is, privacy) when operating a services business is the opposite of freedom.

It makes sense that you need to distribute sources when distributing binaries. It does not make sense to force people to disclose private internal business operations when operating a service. In fact, software licenses can’t do that - only EULAs can.

Services are not software.


Another implementation in that space is https://devpod.sh


https://devenv.sh/ and nix in general are great for setting up dev environments.


I don't understand. It's installed on a remote server, okay.

But does it provide remote environments or local environments?

And what's an environment in this context? A Docker Compose file and a .env? Code or vim settings? A VM à la Vagrant?


Hi, Lapdev dev here. Let me try to answer your question.

It's installed on a remote server so it provides remote environments. If you use VSCode remote, then you can "open" it through VSCode remote ssh.

The environment that Lapdev provides is essentially a container (other formats are on the roadmap) with things pre-installed as defined in the Devcontainer (https://containers.dev/) format.


This is totally new to me, so let me ask an extremely basic question.

The way I'm hearing what you're saying is: Lapdev sets up a remote environment that I access with my terminal via SSH, and that I edit in using something like VSCode running on my local machine, via something like VSCode's Remote-SSH extension.

So by using Lapdev I can easily replicate, on remote servers and cloud services, the remote environment I normally access through those things? Is that right?


Yes that's correct.

To start with, I would suggest you try VSCode Remote with your own Linux box if you've got one, just to get a feel for "remote" development. You might like it or you might not.


Ah, I see. Thanks for the explanation!

I haven't yet made the jump to remote/cloud development, so I don't have a clear mental picture of how the pieces fit.


Remote environment with a thin client that can interface with it. Client is local, but it all ends up running remotely.


Can Lapdev spin up a code-server instance to access VSCode from the browser without having VSCode locally installed?


We've been focused on developing remote-first solutions for years (Codeanywhere). However, with Daytona we ultimately concluded that the right way to go is for local to always be the primary environment, with remote serving to enhance specific needs like scaling and offloading. Therefore, with Daytona you can spin up your dev env on different targets and providers.


Nice! Small design nitpick/tip: center the text on your buttons to make them feel more like buttons. Left-aligning them makes them look like labels to some. Small tweak, but it can result in better conversion.


So, I know very little about this devcontainer spec.

Can I just ask, what value does this spec provide that a simple docker image containing the necessary tools does not already provide?

Why do we need another layer on top? What am I missing?


Well, developers seem to love writing "configuration" rather than "code" these days. But basically a container + the necessary tools IS a devcontainer. It's just a way of automating the "putting in the necessary tools" part especially if you need things that might need to be added to a base container, or services that need to be configured differently based on the external environment that you don't want to bake in for some reason.

If you've ever had to cut and paste a 50 line docker run command snippet but you forgot that one volume mount or port or ENV var that someone added a dependency on last week then you pretty quickly realize just doing complex docker things by hand is a pain. Another example, if you have a script that you want to run to fetch the latest authentication token from a vault after the container launches because you don't want to store it inside the container. Sure, you could write a bash script to run all these steps inside the container after you launch it but it's nice to have a config file to share with another dev and just say: use this.
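A minimal devcontainer.json along those lines might look like this (the image is the standard Microsoft base image; the port, env var, and `fetch-token.sh` script are hypothetical, standing in for the mounts/ports/vault example above):

```shell
# Illustrative .devcontainer/devcontainer.json capturing the ports, env vars,
# and post-launch steps you'd otherwise retype in a long `docker run` command.
mkdir -p .devcontainer
cat > .devcontainer/devcontainer.json <<'EOF'
{
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "forwardPorts": [8080],
  "containerEnv": { "APP_ENV": "dev" },
  "postCreateCommand": "./scripts/fetch-token.sh"
}
EOF
```

Sharing this one file is the "use this" handoff described above: the editor reads it and handles the build, mounts, ports, and post-create steps itself.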

And the secondary benefit is having a config file for the editor (like VSCode) so that plugins can manage all of that stuff better. Generally a dev container runs the VSCode Server, and they know how to talk to each other, which can make remote development easier. For example, now I can launch the same dev environment locally or on the 56-core Xeon 1TB-RAM server at the office and it's exactly the same as far as the editor is concerned.

It looks like this project is an alternative to the VSCode Server. My team generally uses docker-compose for this since not everyone uses VSCode.


For the first bit, all I can think of is a compose file. Also, podman can run k8s configs locally; I personally hope all of this eventually washes into the same thing. It feels like we already have the tools to make this a "solved" problem, is what I'm trying to say. I just include an additional .env that the compose file pulls in so it's not committed to git.

For the second point, OK, this makes a little more sense. I've heard of Codespaces and OpenShift Dev Spaces, but I guess I still question the value of additional complexity on top of the container (a simple Dockerfile, in my mind) that your VSCode instance's terminal is running in.

Thanks for the info.


It makes it easy to point the tool at a Git repo, have it automagically create a containerized environment for that repo with all its dependencies, and open Visual Studio Code on the codebase inside that remote containerized environment.

Devcontainer was created by Microsoft to support Visual Studio Code's remote development features, so it works best in Visual Studio Code. Inasmuch as other IDEs support it, that's up to the IDE vendor.


So this is a config standard for the infrastructure underneath something like remote vscode / devcontainers?


Pretty much, yeah. It contains all the info necessary to tell Docker how to build/deploy the container, and how to configure the editor to work in it. The goal is turnkey setup of the software, its environment, and the user's IDE so that developers don't have to waste days doing that by hand.


One angle is to simplify the setup of what you described. You can do this manually with Docker already, but the DevContainers config means your editor will do it for you.

Another angle is rent-seeking and locking you into a proprietary, expensive ecosystem. Big Tech has successfully convinced most companies to overpay by orders of magnitude for compute and bandwidth, but so far local development machines were excluded. This aims to tackle that shortcoming and make sure you enjoy all the "benefits" of the cloud even during development.


JSON, everything has to either be YAML or JSON.

If I didn't invent the thing, then we shouldn't use the thing. Dockerfiles fall into this class too, they are just a shitty homegrown DSL.


> If I didn't invent the thing, then we shouldn't use the thing.

What about Linux? Or... the internet?


Looks like Kasm Workspaces [1] for developers, I'll give it a try.

[1] https://kasmweb.com/


Seems great, coming from having to install code-server (hosted VSCode on remote machines) alongside SSH via local VSCode. A better-managed experience for both would be pretty neat!


My main current pain point with devcontainers is running a GUI app remotely; whatever I do, the GUI opens only on the server. I'm wondering if this solution can export the GUI remotely?


Can I connect to the environment over https? I'm looking for a good solution to use nvim from the browser on my iPad on the go.


I haven't yet tried Lapdev, but I have read somewhere that it includes a web IDE, just like our Daytona, which comes out of the box with the open VS Code web IDE.


Nice, but with a Proxmox setup I can clone an existing VM/container (and there are already dozens of templates), point VSCode at it, and I’m done.

What does this add, really? It's not automation (I can automate a couple of clicks in Proxmox), and it's not resource management (Proxmox handles storage, etc.). Is it developer identity? Because that's the only thing for which I'd need a (relatively simple) script to deploy SSH keys to an environment.


A few things Lapdev can probably do better:

- The Devcontainer spec is probably more developer-centric than Proxmox templates.

- I would guess that with your Proxmox setup in a team, you'd need to assign a hostname/IP per developer, whereas with Lapdev you have a single point of access.

- Lapdev can proxy HTTP traffic to the workspace containers with authentication, which makes it easy to preview things securely.


Thanks, that is the kind of useful answer I was looking for. But I can share VM hosts and IPs among developers, that's trivial. The auth piece, not so much.


Do you work for Proxmox? Because it looks like a very corporate, over-engineered solution.

Lapdev looks like a much simpler tool.


It looks like a much simpler, incomplete tool.

But you could have gotten your answer by just checking my profile. But no. I just use Proxmox to manage all of my infra: https://taoofmac.com/space/blog/2023/12/17/2000

Don’t assume people have business motives for stating technical facts - and don’t downvote because they ask pointed questions. There are good reasons for pointed questions.



It isn't easy to open-source your baby. It wasn't easy for us when we went Apache, so I can relate. It is hard to build a truly open-source business model.


Yeah I wonder why they didn't go with Apache or MIT


Requirements: Postgres

This immediately killed it for me. I need to install a Postgres server just to try out a tool? And if I recommend it to my team, do we need to run and maintain a Postgres instance?

OP: you are creating too much work for me. And don’t ask me to use your hosted version; the easiest solution is to use VS Code devcontainers.


Running a Postgres container from docker compose (or whatever method you want) is up there with the easiest one-and-done two-minute development setup tasks I can think of.



