Ask HN: How do you work efficiently on remote servers?
30 points by filsmick on March 27, 2015 | 81 comments
It seems there are a thousand solutions for remote editing, but none is perfect. Cloud9, git, rsync, rmate, vim or emacs on the server (but you don't have your familiar setup unless you sync it), FTP, scp...

What do you, professional developers, use for your day-to-day work on remote servers? My thought was that vim / emacs / rmate + Sublime for configuration files and git for well-defined projects were the norm, but I'm curious about what people use for their work.




Well, if by "work" you mean programming, you either work locally with a full IDE and deploy to a server, or, if you have to work remotely, use zsh + tmux + vim. Put some time into learning all three, and superpower them with the plugins you need. Ideally you do both: learn how to use zsh, tmux and vim efficiently, maintain your dotfiles, and experiment until you have a curated list of plugins. Then if you need to work remotely you will feel comfortable, and when possible you can work locally in an IDE. If you do that, you will install Vim plugins for your editor of choice anyway (Vrapper for Eclipse, or Vim plugins for IntelliJ IDEA, Atom, Sublime, etc). There are also plugins to remote-edit files in editors like Atom or Sublime.


TRAMP mode in emacs lets you edit remote files, do dired, etc. through ssh, so it mitigates the dotfile-syncing problem.

edit: also sshfs, though that isn't always a smooth experience
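For reference, a minimal sshfs session looks something like this (a sketch; user, host and paths are placeholders). The TRAMP equivalent is just opening a path like /ssh:user@devbox:/home/user/project/file.txt from your local emacs.

  # Mount a remote directory over SSH, edit with local tools, unmount.
  mkdir -p ~/mnt/devbox
  sshfs user@devbox:/home/user/project ~/mnt/devbox
  # ...edit the files as if they were local...
  fusermount -u ~/mnt/devbox    # unmount on Linux (plain umount on OS X)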


+1 for tramp mode. Nice to have all local .emacs configurations/modes available. Not that I endorse editing on production servers ...


I came to mention TRAMP. A couple of weeks ago I started running the server parts of my dev environment on a little laptop that was lying around, using TRAMP for all source code editing. It's pretty transparent, and not having to run any VMs on my poor little aging Mac is awesome.


I use ssh + tmux + emacs. Security policy states no local copies of development files are allowed at my workplace without special authorization, which I never bothered asking for.

Migrated my emacs configuration to the remote server as a git repository, along with any programs that aren't available (e.g. git, which I use for unofficial code; official code is managed via perforce). Use GNU stow to move those configurations and binaries to the right places. If I need to move non-confidential information around I collect it in an emacs buffer and then use M-x compose-mail to send it out. The home directory is an NFS mount so I can also mount my home directory and exchange files that way. Etc, etc.
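For anyone who hasn't used stow, the workflow is roughly this (a sketch; the package names are hypothetical):

  # One stow "package" per tool, e.g. ~/dotfiles/emacs/.emacs.d/init.el
  cd ~/dotfiles
  stow emacs    # symlinks ~/.emacs.d -> ~/dotfiles/emacs/.emacs.d
  stow bin      # same idea for a ~/bin full of private binaries
  stow -D emacs # -D removes the symlinks again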

I also set up a lightweight Arch virtualbox VM that I SSH out of, because I really don't like the available Windows terminal applications, and every so often I need a local Unix box for some random reason.

A lot of my coworkers use Sublime and remote file editing, but I work faster when using Emacs (comint-mode is a godsend). We mainly work in Perl, Javascript, and HTML so none of the bigger IDEs are really appropriate.

Edit: Since people seem more interested in the Sublime workflow, I asked for a few more details. Coworker replied: "Sublime SFTP [...] you might want to mention that the only advantage to SFTP over remote-mounting the server is the save time; with SFTP, you save locally and push in the background, meaning frequent saves don't constantly stall workflows. Otherwise, there is no real advantage over just mounting directly." It is worth noting that he is working on a Mac, which has some severe bugs [1] when it comes to NFS mounts, and Samba has some latency issues in this instance.

[1] https://www.google.com/search?q=nfs%20slow%20mac


  [N]o local copies of development files are allowed
  ... without special authorization
I'm curious what kind of employer would hire programmers and then tell them they can't develop locally as a matter of _security policy_, rather than "it won't fit on your machine". That seems like it would be very constraining.


That's not so uncommon in the corporate world, especially with people using laptops more and more as their daily driver and the potential for those laptops to be lost/stolen.


That's my usual workflow too (ssh + tmux + emacs), although occasionally emacs tramp-mode makes an appearance. http://www.emacswiki.org/emacs/TrampMode


Thanks for the answer. I'd love to learn what solution your Sublime coworkers use, as most people here use emacs / vim and I'm most proficient with Sublime.


Probably they're using the "Sublime SFTP" plugin by wbond... I usually use nano on the remote server, or Sublime editing remotely over SSH with this plugin. But to be honest it's just for small tasks. 95% of the time we work on local servers with Vagrant/Docker (shared folder --> git push), since our deployments aren't that big.


I tried it, but I can't get used to navigating the filesystem with the command palette. It just looks weird IMO.


Yep, Sublime SFTP.


I use a patchy Python script (Python isn't my first language) to accomplish it. It basically just rsyncs the file once it's saved (using `on_post_save`). It's a bit tedious because I need to manually add the folders it's going to watch. Also, it's not exactly remote code editing, since there's a local copy of the file (which is downloaded/uploaded). I believe most Sublime users do something similar to this, but I hope I'm wrong and I'd love to hear about an alternate approach!
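For what it's worth, a rough shell equivalent of the same idea (a sketch, not the plugin above: Linux-only, needs inotify-tools, and every path and host below is a placeholder):

  #!/usr/bin/env bash
  # Mirror each saved file to the server; assumes the directory tree
  # already exists on the remote side.
  LOCAL=~/projects/myapp
  REMOTE=user@devbox:/var/www/myapp
  inotifywait -m -r -e close_write --format '%w%f' "$LOCAL" |
  while read -r file; do
    rsync -az "$file" "$REMOTE/${file#$LOCAL/}"
  done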


On projects too small to have a workflow for automated deployment (let's be real: not everything is worth the overhead of setting up all of those tools), I've found the duo of Sublime and ExpanDrive[1] works well. The latter mounts a server that is accessible via SSH as a local drive (it's essentially a nice GUI for something like sshfs).

Otherwise, some combination of tmux and vim/nano/emacs.

1. http://www.expandrive.com/expandrive


I agree, being able to use your favorite local IDE to load and edit remote files can be very convenient.

On OSX, I use Panic Transmit's "Mount as Disk" feature. I've also heard good things about a product called WebDrive for this use case.


> not everything is worth the overhead of setting up all of those tools

I have a hard time disagreeing with this, since quite a bit of my professional life involves writing such configuration files.

That said, even in the cases you describe, I at least try and set up a shell script or makefile to keep things repeatable and documented.


Thank you so much for showing me ExpanDrive. sshfs was slow and buggy for me, but this looks really nice.


For me, it's a combination of tmux and vim. Deployments are done using Puppet/Ansible, so there's no need to manually run any git commands. Also, I hardly ever change anything manually on remote servers, and if I do, I make sure to put it in Puppet/Ansible immediately afterwards.
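For the curious, a typical Ansible run looks something like this (the playbook and inventory names are made up):

  # Apply the site playbook to the "web" group of the production inventory.
  ansible-playbook -i inventory/production site.yml --limit web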


I use rmate + Sublime Text for a pretty seamless remote editing experience when I'm just dealing with a few files. It's really nice when you're editing config files or when you want to skim through log files. I followed a guide I found on Stack Overflow[1].

[1] https://stackoverflow.com/questions/15958056/how-to-use-subl...


I also suggest keeping your .vimrc / .bashrc / .screenrc / whatever other config files you like in some kind of git repo so you can sync your config files to the remote machine nicely. The other nice thing about this is that if you have a remote that you don't use very often, you can either git clone the repo and rake / make to install, or even wget the file and move it into place manually.
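A minimal sketch of both paths, assuming a hypothetical repo with a Makefile ("you/dotfiles" and the file names are placeholders):

  # Full setup on a machine you use regularly:
  git clone https://github.com/you/dotfiles.git ~/dotfiles
  cd ~/dotfiles && make install
  # One-off grab of a single file on a box you rarely touch:
  wget -O ~/.vimrc https://raw.githubusercontent.com/you/dotfiles/master/.vimrc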


Depends on the servers and the project I'm working on.

- Sometimes I mount the remote server's file system locally over SSH (sshfs) so I can use my local development environment. I generally do this when I have more complex projects (eg a large quantity of files) and there are prerequisites defined on the server (ie it's easier to develop on the server than it is to mimic the setup on my local machine)

- Sometimes I use tmux and vanilla vi / vim (I don't generally bother with plugins / config for editors so I don't miss them when jumping onto new systems). I generally only do this if it's small changes though. The kind of situation where it would be quicker to SSH in and make a few edits than it would be to follow a stricter development and deployment model.

- Sometimes I develop locally and then push the changes remotely (usually using rsync, scp, git). This is usually for new projects and where there aren't prerequisites specific to the remote host.

My preferred development model would be the first (as it's ridiculously lazy), then the third (as it better follows sane development and deployment models). But sometimes a job simply requires you to make quick edits via vi; and thankfully I feel as at home in vi as I do with most GUI text editors.

Just to add, there's only two config files I ever need to worry about:

.tmux.conf, which I only put on a small handful of development machines where I'd like detachable sessions from multiple clients (eg home/work/etc). But generally I will run tmux locally.

env_servers, which is an epic environment script (like .bashrc) that I can source as and when needed; it has a whole stack of aliases, functions and such to optimize my workflow. I've also set up an alias on my local machine to automatically scp my env_servers script to any server I SSH into, thus keeping env_servers available and up-to-date.
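That alias could be as simple as a shell function like this (a guess at the implementation, not the author's actual script):

  # Copy env_servers over before every login; assumes the first
  # argument to ssh is the bare host name, with no extra options.
  ssh() {
    scp -q ~/env_servers "$1": 2>/dev/null
    command ssh "$@"
  }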


I only work on a remote 'server', by which I mean my Ubuntu desktop. I hate Linux GUIs, so I use a Mac laptop, ssh into the Linux box, and run emacs in Terminal.app. For the occasional times that I need to step through Java code with a debugger, I Chrome Remote Desktop in and fire up the IDE, but I don't actually use it to code.


Not to detract from the excellent answers, but I suggest you step up a level and think about how to minimize (and ideally eliminate) the amount of remote editing you need to do.

In particular, the Twelve-Factor App (http://12factor.net/) (by the founder of Heroku) had a big impact on my thinking about how to remotely deploy and administer. Especially "1. One codebase tracked in revision control, many deploys" and "X. Keep development, staging, and production as similar as possible". Note that these approaches are applicable to any hosting provider and technology.

Now, there are times (especially when you're debugging a breakdown in dev/prod parity) where it is enormously helpful to ssh into a remote machine. But I would endeavor to minimize those occurrences.


Vim, mostly. The customization is remarkably easy: just copy my .vimrc and .vim out to the server. I have a folder containing all of my dotfiles which I sync between servers, and a simple bash script which creates softlinks to them in my home directory.
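That script can be tiny; a minimal sketch, assuming the repo stores the files without their leading dots:

  #!/usr/bin/env bash
  # Link every file in ~/dotfiles into $HOME as a dotfile.
  for f in ~/dotfiles/*; do
    ln -sfn "$f" "$HOME/.$(basename "$f")"
  done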

However, I don't do remote development as much anymore since I picked up the habit of using Vagrant locally. Now I can edit the files locally (though still with Vim), and debug them on a local VM. Throw in Salt and Ansible for orchestration, and once I get my local VM working, I know my remote end will work as well.


How do you share the files with your Vagrant VM? When we tried that at work, we noticed that trying to have a git repo on a share with Vagrant (rather than having it created within the VM's filesystem) was VERY slow.


I hear you - VirtualBox is getting worse and worse in terms of being able to interact with the host in anything resembling a timely manner. I wish the competition in the free VM space were better.

To get around this shortcoming, the sharing is mostly one way: from the host to the VM, via the default shared folder. I've not had problems with VBFS when managing files in this manner.

There have also been times where I have implemented watch scripts against files on the host, which trigger an rsync to the guest's normal file system. From there, the file system behavior is more like what you can expect from a typical virtual machine.

To support workflows similar to what I do against non-local VMs, I've worked up some wrapper scripts which allow me to interact with the Vagrant VM via the usual ssh, scp, and rsync. When running "vagrant up", I also write out the VM's SSH config, and an ssh alias concatenates it with my base .ssh/config file.
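That last trick looks roughly like this (the "devvm" alias and the config file names are placeholders):

  # Capture the VM's SSH settings so plain ssh/scp/rsync work against it.
  vagrant up
  vagrant ssh-config --host devvm > ~/.ssh/config.vagrant
  cat ~/.ssh/config.base ~/.ssh/config.vagrant > ~/.ssh/config
  ssh devvm
  rsync -az ./src/ devvm:/vagrant/src/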


Correct me if I'm wrong, as I'm not experienced with these tools: when you're done, you upload the image to a server?


No - Salt and Ansible both act as deploy automation tools. That is, you run the scripts on the server and they will install packages, move files, change permissions, etc. Most deploy steps are available using these tools.

Vagrant itself acts as a user-friendly wrapper around the creation and management of local virtual machines.

There are ways to use Vagrant to create and push machine images up to cloud providers, but we're not at a point with our deployments where machines are so easily interchangeable.

http://docs.saltstack.com/en/latest/contents.html

http://docs.ansible.com/intro.html


So, if I get it, Vagrant is used to reproduce the production environment locally. Right? Sorry for all the questions!


Pretty much, yup. And with a bit of hostfile magic and using intra-VM networks, you can emulate an entire environment on your local machine if necessary.


FTP? People still use FTP??? That's a protocol from another era, designed in a way that doesn't comply with today's best practices. The only FTP server I have runs on a Raspberry Pi file server at home, and that little gem doesn't even have a firewall running.

As for tools, well vim is my default editor so:

vim (really: lots of plugins, themes, extension support, a personal .rc, etc.) + tmux (tmuxinator for projects) + ssh (heavy user; I use sockets to keep the connection up, keys-only access, fine-tuned algorithms, etc.).
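The socket bit is a few lines of ~/.ssh/config; a minimal sketch (the values are just sane defaults):

  # Reuse one TCP/SSH connection for every ssh/scp/rsync to a host.
  # The socket directory must exist: mkdir -p ~/.ssh/sockets
  Host *
      ControlMaster auto
      ControlPath ~/.ssh/sockets/%r@%h:%p
      ControlPersist 10m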


>FTP? People still use FTP???

Of course; people never stopped. Anyone using WordPress, for instance, will almost certainly require developers to use FTP. For most shared hosts, FTP is the default (if not the only) available method for getting files on and off the server (and yes, people still use shared hosting as well). I would even venture to guess that "save in Notepad, FTP to the live server and F5 to see if it worked" is still the most common web development workflow in existence.


I meant SFTP - I never use FTP


I do all of my development on a machine in my home office, so I have the same files and tools whether I'm downstairs or across the world. So mostly it's ssh/vim/cscope. I build/test on other machines, so throw in rsync as well - most often directly, but occasionally into an sshfs-mounted directory (actually editing via sshfs is a bit painful sometimes). When things seem particularly flaky I'll use tmux to keep stuff going through disconnections, but usually I can do without.


I find emacs with TRAMP unrivalled. Set it up once, then just open & save remote files by name from your local emacs. Switch to sudo as required.

No need for dotfiles or tools on the remote.


I'd argue you shouldn't be working on remote servers at all. Ideally you should be working locally and deploy using Capistrano, Jenkins, Bamboo or git hooks etc.


I work exclusively on remote servers. My remote "servers" are a 500-node cluster, and I do research and development for a parallel and distributed language and runtime system. The remote systems are set up for development, managed by full-time staff, and continuously backed up.

Not everyone has the same workflow, partly because not everyone has the same work.


"Remote Server" Doesn't necessarily mean "Production Server"


True, but wouldn't you use Vagrant, Docker or something similar to develop on? I can't speak for the OP's situation, but it seems a strange setup, that's all.


A Vagrant or Docker VM is still remote as far as the local desktop is concerned.


Sure, and hope that you never have to ssh into a hung box to figure out what went haywire.

You're entirely missing the point; there are reasons to manually make contact with a server aside from deploying.


Sure, that's ideal, but what if you have a small laptop? It would be nice to make use of the power of your server to compile more quickly, and speed up your development cycle, right?


Have a buildbot/Jenkins setup which runs your test suite on a build slave automatically when you push to a git branch?
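The no-CI-server version of the same idea is a server-side git hook; a minimal sketch (all paths and the target branch are placeholders):

  #!/usr/bin/env bash
  # hooks/post-receive in the bare repo: check out what was pushed
  # and run the test suite.
  GIT_WORK_TREE=/srv/myapp git checkout -f master
  cd /srv/myapp && make test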

I would still run my editor locally (your pick. I use vim almost exclusively).


and do that for every project, even small throwaway experiments?


If it's a throwaway experiment, it shouldn't take too long to compile locally, I would think.


That's a remote workstation, not really a server. You should keep these two separate.


You learn how to use the terminal, and only the terminal.

I recently read a blog post by a guy who started coding on a cloud VM using an iOS SSH app + Bluetooth keyboard. He never stopped. The setup forces you to work only over SSH and eliminates distractions.


Oh, yes, that: http://yieldthought.com/post/12239282034/swapped-my-macbook-...

I read it a while ago. Of course, if you only use command-line editors, it's nice. But some people prefer Sublime or other graphical editors.


mosh + tmux + emacs

I can't believe how few people mentioned mosh. I work on remote boxes "across the pond" all the time and it makes a huge difference. Laptop going to standby, VPN dropping, slow connection, all handled perfectly.
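mosh composes nicely with tmux, too; one command gets a roaming, detachable session (host and session name are placeholders):

  # Reconnects transparently after sleep/VPN drops; -A attaches to the
  # "main" tmux session if it exists, otherwise creates it.
  mosh user@devbox -- tmux new-session -A -s main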


Whenever I installed/used mosh (Linux RPi/ARMv7 or FreeBSD x86) the CPU went crazy :-(


Since a few people mentioned Vagrant: can I use it to set up a production-like environment locally, with all the system dependencies and configuration files, and then push it to production? If so, is this the recommended approach?


I'm not a Vagrant expert. I think it's possible to just ship a Vagrantfile to the production server and turn it on, but I think the real value comes when it is used with a configuration management solution like Chef or Puppet. Use that to standardize between production and Vagrant, and then you can just deploy the application from one environment to the other without worrying about environment problems.


I work locally using a full-featured IDE and publish my work with simple shell scripts. When I need to edit files remotely, I just use vi. I know it well and it works perfectly even over slow network connections.


I basically spend all my time in my terminal anyways so it's pretty similar to how I work locally.

SSH into the box, check out my dotfiles from GitHub (or scp them), edit text files with emacs, or whatever else I have to do.


I find WinSCP to be a good way to work with remote servers. It can use whatever editor you may like (ST3 in my case).

Supports full directory synchronization, has a nice UI and can be installed on a flash drive.


If latency is a concern, use a remote file system (a FUSE-type thing). That way you can edit using local software and only write across the wire when you save.


Many editors will periodically check for remote changes (especially when switching away from and back to the editor) which can cause pretty annoying slowdowns.


Good point. I wonder if you can ask for notifications rather than polling for changes (eg inotify, fseventsd) remotely?
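For Linux targets, something like this would presumably work (a sketch; the host and path are placeholders, and it needs inotify-tools on the server):

  # Stream change events from the server and react to each one locally.
  ssh user@devbox inotifywait -m -r -e modify,create,delete /srv/myapp |
  while read -r dir events file; do
    echo "remote change: $events $dir$file"
  done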


I do as much development in a local IDE as possible. From there I just depend on git and ssh. Screen makes things a little more manageable as well.


by 'depend on ssh', do you mean you ssh into the server and use a command-line editor?


For when the shit really hits the fan... a terminal server connected to serial port and remote power control for power cycling are indispensable.


emacs + tramp + mosh + tmux


Tramp is outstandingly useful for remote editing tasks via ssh, telnet, ftp, whatever.

It's "smarter" than sshfs/fuse, because tramp understands which files are remote, and which are local.


Thanks everyone for all the answers! It's really nice of you :)


For Java: Samba share + IntelliJ. For non-Java: tmux + vim.


I use vim + tmux


vim + tmux


I see no reason to develop on a server. If that's what you're doing, that's a very bad sign in most cases.


I often do this - although it's not a "server" in the sense of a machine hosting production services. The code I'm writing relies on a high-bandwidth/low-latency connection, which I don't have from my normal place of work (home). It's much faster to run tests on an EC2 instance than from my laptop.


There is an enormous reason if you work on, or use, systems designed for massive parallelism.


I still don't see the necessity... Develop locally, test remotely.


It's not necessary, it's just extraordinarily convenient. I don't have to ever say, "Oh, wait, I'm on my laptop now, I can't run an application with 100 processes without causing my system to grind to a halt."

Even the sequential performance of the remote systems is much higher than my laptop. The system in front of me is a means to interface with the real system, and has been for a long time. Granted, I have worked on parallel and distributed languages and runtimes for most of my professional life.


Some examples that just come to my mind:

- You need software/dependencies/licenses you don't have on your local machine.

- You need to edit a config file (/etc/hosts or whatever) on your server.

- You are on a Windows computer (that's not yours) and you need to make a quick change on your Ubuntu development server.

- You need to edit some files when deploying (databases are no longer on localhost, /home/myname/ is /home/servername/, etc.)

- You launched a large script in your server and need to (view or parse logs | edit the code and re-launch it | check why it didn't work)

- etc.


software deps - this is possibly a fair point. I've always managed to solve this by bringing the deps local, though, as a general sanity rule.

config - why not source control it?

quick change - why would you do a quick change differently to a non-quick change?

edit while deploying - why wouldn't you script / source control this for repeatability / maintainability?

log parsing - it sounds like you've got an edit-verify loop going on. Why wouldn't you want to formalize this so that you can see which changes align with which results?


Why do you think that is? Could you please elaborate?


Sure. A couple of scenarios where I've seen people working on a server, and why that's bad:

1) The developers in this company that work with ColdFusion develop their software on a dev server because they would otherwise require expensive licenses for each dev machine. This means that devs often find themselves editing the same file. It also means that one dev can break the software for all other devs. Last but not least, it makes version control a ton harder.

2) Another scenario is where you fix a bug on the production server to roll out a fix as soon as possible. Sometimes this is acceptable, but there's a big risk because you no longer have the ability to test the code first. This also makes version control harder, because the next time you deploy your code, the changes will get overwritten.

3) Working with a remote IDE or just having your own development server: doesn't really make any sense, most devices (laptops, PCs, tablets) can run some kind of IDE/editor locally and allow you to run the code locally. Using remote software only adds a huge dependency on having a working internet/network connection.


Actually, I have a ColdFusion app that I look after (small amounts of maintenance). These days it's easiest for me to simply make changes on the production machine and then commit/push when done. It's not a big deal if you're the sole developer and the changes are simple enough.

Regarding the license issue, I believe Adobe has a developer edition that's free to install but limited to a single IP accessing it. That may have changed now - I've used an open-source alternative called Railo for years that is better in almost every way.

My setup day to day is a local Vagrant machine I ssh into, doing everything from there (vim). Some of my work requires far more powerful machines, so I also have machines running at AWS that I use in exactly the same way, with exactly the same config.

I just got sick to death of trying to install / configure stuff on OS X where nothing works quite as it should.


This all seems related to web development. A friend of mine codes for genetic research and the data sets are far too big, and the processing too intensive, to run on any laptop. He works directly on the server because that is the only place his code can run.


Well, that's true if you're working on the production code directly, and that's not always the case when working on a remote server.

But I understand your point of view. :)


I develop & test algorithms on a server. Saves on laptop battery life when I can't plug in.


ssh, tmux, vim, git, more ssh, python, bash.

Depends on the project. If it's a Java EE app, or Android, then you're screwed and might as well work on a local version, because you need IntelliJ.




