What's particularly impressive about Tramp is that other Emacs packages tend to work well with it. For instance, you can use Magit over Tramp, or, better put: Magit just works in Tramp buffers. Same with language server stuff. It's kind of wild when you think about what's happening under the hood.
And it's still transferring files. It's not remotely editing.
I had to figure out how to do rectangular copy/paste in vscode, and it took just as long as it did to figure it out in emacs.
"Most people don't know how to use vim and emacs" is both incredibly controversial around these parts and totally true.
I would say that the dev client/server setup you're describing and what TRAMP provides are different things overall as well. TRAMP really just provides a way to get a file from a remote host, edit it locally, and write it back to the remote system on save. I wouldn't consider it a valid setup for remote dev, especially now with how prevalent things like LSPs are, and I don't know of a major mode designed around a remote LSP; I'd just do X forwarding or some other screen share at that point. I would agree that overall it's a gap for emacs that VS Code does better.
Trivia: the original Emacs (written in TECO for PDP-10s running ITS) also had transparent access to remote filesystems using the same syntax (host:path).
It was free, though: remote files were accessed over the net via a FUSE-like userspace process.
In the mid-1970s.
https://emacs-lsp.github.io/lsp-mode/page/remote/ suggests that LSP under TRAMP is basically there, though I haven't had occasion to try it.
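If I'm reading that page right, the setup amounts to registering a client with :remote? t; a sketch along these lines (the "pyls" binary name is just the page's example, substitute whatever language server is actually installed on the remote):

    ;; register a copy of the Python LSP client that starts the
    ;; server on the TRAMP remote instead of locally
    (lsp-register-client
     (make-lsp-client :new-connection (lsp-tramp-connection "pyls")
                      :major-modes '(python-mode)
                      :remote? t
                      :server-id 'pyls-remote))

The server process then runs on the remote host, with lsp-mode talking to it over the TRAMP connection.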
any remote command
I haven't used VS Code yet, simply because I haven't had the time to relearn another editor. In which particular ways do you feel the VS Code remote plugin is superior to the alternatives? And is there anything lacking in your VS Code experience as of today?
To me, emacs is a great editor with variable quality IDE-like capabilities, highly dependent on workflow.
VScode is sort of the opposite. It's, at best, an OK editor with a strong suite of IDE capabilities that are mostly consistent.
In emacs I have to either add pretty complicated scripts to my .emacs just to get things to play together (if it's even possible at all), or stay in the terminal and run everything on the remote (and put up with the lag, and re-mount/upload my configuration whenever a new instance starts).
For the longest time I used emacs on the remote and pycharm/jetbrains locally (and was a vscode skeptic); that changed once I saw what the remote dev plugin was capable of (jetbrains doesn't have an equivalent). I still use emacs in the terminal on remotes for quick text editing, but for project work vscode works better, specifically because it's easier to resume on disconnect (one-click restore of all state) and easier to configure. I use tmux in the vscode terminal to resume remote shell sessions.
More importantly, it's a lot easier to onboard others to vscode because the IDE as a whole is more discoverable, more user-friendly, and follows platform conventions more closely compared to emacs or vi.
The one big feature that I miss in vscode is tab key behavior/intelligent indentation. Emacs does this way better - tab just does what I mean, instead of inserting a useless literal tab or spaces.
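For anyone who wants to see what's behind that: in most Emacs programming modes TAB runs indent-for-tab-command, and one variable controls the "do what I mean" part. A minimal sketch (the 'complete value is my own preference, not something the comment above specifies):

    ;; TAB re-indents the current line per the major mode's rules;
    ;; with 'complete it additionally falls back to completion once
    ;; the line is already correctly indented
    (setq tab-always-indent 'complete)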
Even Python: if I M-x run-python while visiting a remote file, it runs python on that remote machine.
That said, this was a few years ago now. Things may have improved in 26.1 when threads were introduced, and async got even easier.
Tramp is great for editing some remote files here and there, but to match vscode you'll have to put in a lot of effort to make everything feel equally fast and make all your packages work. Even then it won't feel as seamless as vscode, because vscode "cheats" by installing a remote component. I don't find the "cheating" to be a valid complaint, though, since you're already installing your whole dev environment on the remote server anyway.
Having said that (I'm not a vscode user), what I always do is run Emacs on the remote server inside tmux. For me that's superior to the vscode remote plugin: my dev environment is local to my editor.
I concede that you need a solid ssh connection, but for the most part it has been great for a long time.
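For anyone curious, the whole workflow is roughly one command (hostname and session name are placeholders):

    # attach to the "dev" tmux session if it exists, create it otherwise
    ssh -t myserver tmux new-session -A -s dev
    # then, inside the tmux session on the remote:
    emacs -nw

On disconnect, the tmux session (and Emacs, with all its buffers) keeps running; re-running the ssh command picks up exactly where you left off.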
Microsoft has done a much better job promoting VSCode than GNU has done promoting Emacs over the past few years. More mindshare among influential developers/evangelists has led to massive increases in adoption, which leads to better extensions, which in turn fuels more adoption.
It probably doesn't hurt that VSCode uses the MIT license.
ᐅ time ssh myserver exit
Executed in 1.66 secs
ᐅ time ssh mastodon exit
Executed in 55.89 millis
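Tangentially, if per-connection latency like that first host's is the pain point, OpenSSH connection multiplexing usually helps a lot; a sketch of a ~/.ssh/config stanza (host name and timeout are just examples):

    Host myserver
        # reuse one TCP/ssh connection for subsequent sessions
        ControlMaster auto
        ControlPath ~/.ssh/cm-%r@%h:%p
        # keep the master connection alive 10 minutes after last use
        ControlPersist 10m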
But the ssh connection method transfers files inline, using base64 or uuencoding, so you don't need a new connection each time a file is read or written.
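Concretely, selecting the inline method is one setting; a sketch (the path is a made-up example):

    ;; prefer the inline "ssh" method: file contents travel over the
    ;; already-open shell connection, base64/uu-encoded, instead of scp
    (setq tramp-default-method "ssh")
    ;; then open remote files as usual:
    ;;   C-x C-f /ssh:user@myserver:/var/log/app.log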
That's interesting. I know some places go to great lengths to keep developers from accessing production without some sort of break-glass procedure through a jump host. I'm curious if they all know about this sort of loophole.
1. You don't have to expose a jump host at all, which is one less exposed asset to manage and worry about.
2. Your security team should already be collecting Cloudtrail logs, so they get auditing of SSM/SSH "for free".
3. You can control SSM access via your SSO provider, which means you can trivially enforce a bunch of policies all in one place vs having to configure SSHD.
4. You can control SSM access via IAM.
5. You can limit session duration easily.
6. No more SSH agent hijacking, at least I don't think so.
I also wouldn't call this a loophole; you have to be explicitly granted permission to use SSM.
Perhaps not the best wording on my part. I was aware of SSM, but not aware of the SSH tunneling features. I'm wondering if that's common. Is the SSH tunneling controlled separately, or on by default if SSM is on?
SSM Session Manager is one of the (if not the) preferred ways to manage SSH access to instances in AWS. It's kinda hairy to set up, but it removes the need for bastion hosts/jump boxes for most use cases. In my experience it is quite common.
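To the question above: the client side is just a ProxyCommand in ~/.ssh/config, along the lines of what the AWS docs show (the exact flags may have drifted since I last set this up):

    # route ssh to EC2 instance IDs through an SSM session
    Host i-* mi-*
        ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"

And as I understand it the SSH tunneling is controllable separately: AWS-StartSSHSession is its own session document, so an IAM policy can allow plain sessions while denying ssm:StartSession on that document (or vice versa).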
If there's some tricky bug in production, then one can create some sort of debugging service that runs on another port and deploy it to investigate the bug, or use management and monitoring tools. Copying files up to production is something that should be only done by an automated deployment script.
If you are under time pressure to fix an escalation from a high-profile customer and you don't have such a service yet, do you make the customer wait for you to write one, or do you just use command-line access? If you already have such a service but it doesn't contain the diagnostics needed for this particular problem, do you make the customer wait for you to enhance it, or do you just use command-line access? Or you make your debug service totally generic – allow it to run arbitrary code supplied by the user – in which case it can do anything the command line can; but then how is it actually any more secure than more standard means of command-line access? Plus, it adds friction, which may slow down resolution.
> or use management and monitoring tools.
Often these work fine for some problems, and then you get a problem which they don't cover adequately, and you need to go beyond them.
Seems to be at odds with
> then one can create some sort of debugging service that runs on another port and deploy it to investigate the bug
In many cases, that's just SSH. In most cases, I'm not copying files around; I want to connect to the real environment where the firewall rules, API keys, permission systems, overlay networks, etc. are in place. If there's a stuck process (let's say, lock contention) it's much easier to just SSH in, run gdb, and check the stack to see what it's doing. Some languages like Java have pretty rich tooling out of the box for remotely connecting to processes. For others, like Python and Ruby, you just use gdb.
Either way, there's no copying of data necessary; you just need access to the running process. For a large system with hundreds of identical servers, I don't want to deploy a debug service everywhere; I just want to connect to the one with an issue and check it.
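For the stuck-process case the whole investigation really is just a few commands (the pid is hypothetical):

    # attach to the wedged process without killing it
    gdb -p 12345
    # inside gdb: dump every thread's stack to see who holds what
    (gdb) thread apply all bt
    (gdb) detach
    (gdb) quit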
Snapshotting works sometimes, but I used stuck processes as an example since that's usually where all this remote/log/etc stuff falls apart. And, as-it-so-happens, things like lock contention tend to be really hard to recreate in synthetic or simulated environments that don't have real, authentic load.
Keep in mind that doesn't mean "go crazy with `root` in production". You can combine that strategy with scripting and tooling to drain/isolate/quarantine servers where the stuck process is still running but they don't have live traffic being routed to them.
I see this "ZOMG NO ONE TOUCH PROD" mentality a lot in highly regulated environments, but it's usually more sustainable to isolate the in-scope system's functionality as narrowly as possible, to avoid bringing unnecessarily large amounts of the system into scope (e.g. put the billing functionality in a microservice to limit PCI scope).
But what about when things don't work like they should and ought to?
Want to debug network connectivity issues? See which process is hogging CPU? Investigate installation/deploy problems? Reinvent the wheel, or use what's already there.
Tramp does not need scp to transfer files; it can just as easily multiplex them over the shell connection using base64 or uuencoding.
SSM is definitely not the most secure way. SSM is super complex and super-integrated into the rest of AWS, and also isn't cross-cloud to GCP, Azure, DO, etc, so now everyone needs an account just to log into a Linux server.
Worse, IAM roles are powerful but easy to misconfigure. And that's before getting into how hard they are to apply with any granularity, because of the policy length limitations, so you're likely giving everyone access to log into every instance without even knowing it.
I was mentioning that particular misfeature because it was a personal annoyance of mine. Oh well, I suppose everything is about customer lock-in these days.