Ask HN: How do you install developer tools on a fresh system?
126 points by fratlas on June 20, 2016 | 124 comments
I used to have a bash script I would run to install everything (libs/tools) you're average web developer would need (node, sublime, google chrome, , python libs etc etc). Unfortunately I lost it after some time. Does anyone use anything similar?



I don't do fresh installs very often, but when I do, it's generally because I want it to be a genuine fresh install. As such, I don't install a tool until I need it. I find that tools I once thought I absolutely needed are not tools or libraries that I use anymore.

I have three exceptions to this: Vim (my editor of choice), dotfiles (which I store in a git repository and put in place using stow, installed via a simple bash script), and Vagrant, so I can do development testing against a VM.
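The bootstrap is tiny; roughly this, with the repo URL and package names as placeholders:

  #!/usr/bin/env bash
  # Clone the dotfiles repo, then let GNU stow symlink each package into $HOME.
  set -euo pipefail
  git clone https://example.com/me/dotfiles.git ~/dotfiles
  cd ~/dotfiles
  # stow defaults to the parent directory as its target, so ~/dotfiles/vim/.vimrc
  # becomes ~/.vimrc, and so on for each package directory named here.
  stow vim bash git tmux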

As Docker matures, I may use it in place of Vagrant, but it's not ready to fill the same role quite yet.


> I find that tools I once thought I absolutely needed are not tools or libraries that I use anymore.

Yes! A new machine is a great time to run a garbage collection cycle on my tools.


> I don't do fresh installs very often, but when I do, it's generally because I want it to be a genuine fresh install. As such, I don't install a tool until I need it. I find that tools I once thought I absolutely needed are not tools or libraries that I use anymore.

Same. The core tools and libraries I use change more often than my development machine, so switching to a new machine is a fun time to reassess what I need to install so I'm not carrying any baggage. I think the time spent automating the installation would be a net loss given how rarely I'd need it.

The setup required for building the projects I work on is automated using things like Vagrant, so outside of that I only really need a few tools anyway.


I hadn't come across stow before. I might integrate that into my dotfiles.

Right now I use a bash script I wrote to deploy my dotfiles into ~/, via symlink, including renaming existing files (after prompting the user).
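Stripped down, it does something like this (file list and prompt wording are illustrative, not the exact script):

  #!/usr/bin/env bash
  # For each dotfile in the repo: prompt before renaming an existing file,
  # then symlink the repo copy into $HOME.
  for f in ~/dotfiles/.bashrc ~/dotfiles/.vimrc ~/dotfiles/.gitconfig; do
    target=~/"$(basename "$f")"
    if [ -e "$target" ] && [ ! -L "$target" ]; then
      read -rp "Rename existing $target to $target.bak? [y/N] " answer
      if [ "$answer" = "y" ]; then mv "$target" "$target.bak"; else continue; fi
    fi
    ln -sfn "$f" "$target"
  done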

I get what you mean about using that opportunity to do some spring cleaning. I find I need to do that to my Vim installation periodically too. I'll add a new plugin that looks like it will be useful, or add something for a new language or template system, then not use it enough to justify it.


Have you tried the Docker for Mac/Windows beta yet? I haven't needed Vagrant in the slightest since it was released.


If they can get xhyve based VMs stable when co-existing with VirtualBox, then yeah, that will go a long way towards replacing Vagrant for me.

That said, I'm not a huge fan of Docker's feature churn; having to re-install, reconfigure, and re-learn tooling that is central to my workflow every other month gets old pretty quickly.


That's currently blocker #1 for us when it comes to the new Docker for Mac/Win. VirtualBox works as a nice abstraction layer on top of 3 OSes (Mac/Win/Linux), which simplifies things with Vagrant. Unfortunately it seems VirtualBox has been in maintenance mode as of late, which might have contributed to Docker's decision to go with Hyper-V/xhyve.


Oh god I used "you're" instead of "your" and can't edit it.


+1 to Bug Finding, -1 to QA Skills


The horror!


I have become what I hate :(


You only hate not being able to edit your comment on HN. So do I.


> (node, sublime, google chrome, , python libs etc etc).

Lazy comma usage. Come on!


Could be a Perl programmer, in which case it doesn't matter at all.


I recently reinstalled my NixOS laptop. I just installed the distribution, added my SSH keys, cloned a repository, made a handful of symlinks, and then told NixOS to set everything up.

It's actually a collaborative repository, so that both of us in our company can improve the computer configuration, install new tools or language runtimes, etc etc.

The shared configuration has stuff like: our user accounts and public SSH keys; local mailer setup; firewall; conf for X/bash/emacs/ratpoison/tmux; list of installed packages (including Chromium, mplayer, nethack, etc); fonts and keymaps; various services (nssmdns, atd, redis, ipfs, tor, docker, ssh agent, etc); some cron jobs; a few custom package definitions; and some other stuff.

In Emacs, I use "use-package" for all external package requirements so that when I start the editor it installs all missing packages from ELPA/MELPA.

Aside from dealing with the binary WiFi blob this Dell computer demands, reinstalling was a pleasure.


You can use NixOS as a development machine? How did you install a window manager or desktop environment?


Sure! It's a full distribution. You install window managers the same way as any other package. The NixOS manual is quite comprehensive.


How do you keep track of symlinks?


Manually so far.


I have my dotfiles for that, split into different categories (brew, npm, pip) together with all the config files I need. brew and brew cask (with brew-bundle [0] for Brewfile support) take care of getting all libraries and applications onto the system.

For the development itself I'm either shipping my entire config (.vimrc for example) or using systems like spacemacs, sublimious or proton that only need a single file to re-install the entire state of the editor.

The install script itself [1] then symlinks everything into place and executes stuff like `pip install -r ~/.dotfiles/pip/packages.txt`.

It takes a bit of effort to keep everything up to date, but I'm never worried about losing my "machine state". If I go to a new machine, all I have to do is clone my dotfiles and execute install.sh, and I have everything I need.

On servers I use saltstack [2], a tool like Puppet, Ansible and friends, to ensure my machines are in the exact state I want them to be. I usually use the agentless salt-ssh mode and push my states to the machines over SSH.
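With a roster file listing the boxes, pushing everything is one command:

  # Apply the whole configured state tree over plain SSH -- no minion daemon needed.
  salt-ssh '*' state.highstate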

[0]: https://github.com/Homebrew/homebrew-bundle

[1]: https://github.com/dvcrn/dotfiles/blob/master/install.sh

[2]: https://saltstack.com


I have no strong opinion here, but I am curious to hear yours. What were the discriminators in choosing saltstack over the alternatives?

My head spins with these tools, and every time I pick one I seem to eventually run into a roadblock that is a no-go. The most recent effort was Ansible, and the no-go was its strict dependency on Python 2.7.


I came from Puppet, and Salt felt a good amount "lighter". I'm mostly a Python developer, and Salt states written in pure YAML with Jinja2 templates (or directly in raw Python for more complex stuff) feel like home.

The YAML makes a state very easy to understand even if I come back to it after months. It's just a list of things that should happen, with a few template tags sprinkled in.
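A tiny state, just to give a flavor (the package is an arbitrary example):

  # ensure nginx is installed and its service is running
  nginx:
    pkg.installed: []
    service.running:
      - require:
        - pkg: nginx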

Not saying Salt is better than Puppet or friends; it's purely based on preference. I can't say anything about Chef or Ansible since I never tried them. Salt has a crazy active community, and while there are things I don't like about it (like the "name" of a state), it's still doing its job just fine, so I stuck with it.


Ha, I moved from Chef to Ansible for much the same reasons. At the end of the day, it doesn't matter.

(however, from my experience salt and ansible stay readable because they're yaml and not arbitrary code/DSL, whereas chef "recipes" are ruby and usually devolve into complex programs if you're not careful)


If you consistently use the same Linux distribution, consider building metapackages for that distribution.

I created a set of Debian packages that depend on suites of packages I need. I download and install "josh-apt-source", which installs the source in /etc/apt/sources.list.d and the key in /etc/apt/trusted.gpg.d/ , then "apt update" and "apt install josh-core josh-dev josh-gui ...". That same source package also builds configuration packages like "josh-config-sudoers".
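If you don't want to maintain a full source package, the equivs tool can build the same kind of metapackage from a short control file (the names here are examples):

  # dev-meta.control -- build with: equivs-build dev-meta.control
  Section: metapackages
  Package: josh-dev
  Version: 1.0
  Depends: git, build-essential, vim, curl
  Description: metapackage pulling in my development tools
   Installs nothing itself; exists only for its Depends line.

Then "dpkg -i" the resulting .deb and "apt-get -f install" to pull in the dependencies.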


>I created a set of Debian packages that depend on suites of packages I need

You created a Debian installation of yourself. Simply run "apt install josh-core josh-dev...", and Josh will be ready to start developing on the system whenever he is connected.


I wish... I use Linux, macOS and Windows on a daily basis. The best I can manage is to have most of my regular scripts in Dropbox, added to my path symlinked under ~/bin/OS, which actually works pretty well. I even managed to get a bash- and CMD-based ssh-agent-running script working, which was interesting (I started with the Windows/cmd one, and made a bash one to match the logic/intent).

I also set up ConEmu to autorun a script when opening a cmd, as well as configuring my bash profile. At work (more OS X these days), I usually email myself the latest version, and have a ~/.auth file that's mode 700 that I can source to init my proxy settings (including ssh+corkscrew)... It allows me to only have to edit credentials in one location.


Can I use that with external PPAs?

Or do I have to install the PPAs before I can use the metapackage (which kind of defeats the idea of using a metapackage instead of a script)?


While you could have third-party repos installed via apt packages, the problem is that you can't easily trigger an apt update after they install and before the things that depend on them install.


The question asks only about *nix systems, I assume, but it's worth mentioning that there is a great tool for Windows too, in case someone needs it: https://chocolatey.org/


Automating from-scratch setup with chocolatey: http://www.boxstarter.org/


Looks cool but I would be uncomfortable using this over HTTP, not HTTPS. Don't really want to risk a MITM when installing software on your machine (and giving it UAC _and_ your password).


I'm using a shell script together with the Nix package manager for that. The shell script just ensures that all packages are there (e.g. doing `nix-env -i fpp wget iterm2 jekyll ghc ruby nodejs composer php`). I can pin the version of all packages by configuring `NIX_PATH` to point to a specific `nixpkgs` (the package repository) commit, so that all people have exactly the same versions of everything.
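Concretely, the pinning looks something like this (the paths and the commit are placeholders):

  # Pin <nixpkgs> to a known commit so every machine resolves identical versions.
  git clone https://github.com/NixOS/nixpkgs.git ~/nixpkgs
  git -C ~/nixpkgs checkout <known-good-commit>
  export NIX_PATH=nixpkgs=$HOME/nixpkgs
  nix-env -f '<nixpkgs>' -i fpp wget jekyll ghc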

Package customizations like a .vimrc are also handled by nix (I recently blogged about how I do this: https://www.mpscholten.de/nixos/2016/05/26/sharing-configura...).

The shell scripts together with the package customizations (e.g. my custom vimrc) are managed by git.


I also use Nix, with sets of packages managed by git. I use four "levels":

- System-wide packages, systemd services, users, cronjobs, etc. are managed by /etc/nixos/configuration.nix

- My user profile has one package installed, which depends on all of the tools I want to be generally available (emacs, firefox, etc.). By using a single meta-package like this, I can manage it using git and it makes updates/rollbacks easier

- Each project I work on maintains its own lists of dependencies (either directly in a .nix file, or converted automatically from other formats like .cabal), which are brought into scope by nix-shell

- I also have a bunch of scripts which use nix-shell shebangs, so their dependencies are fetched when invoked, and available for garbage collection once they're finished
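The last point looks like this in practice (tools chosen arbitrarily):

  #! /usr/bin/env nix-shell
  #! nix-shell -i bash -p curl jq
  # curl and jq are realized into the Nix store on first run and can be
  # garbage-collected once nothing references them anymore.
  curl -s https://api.github.com/repos/NixOS/nix | jq .full_name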


I've recently learned you can put all the packages in one custom Nix expression in ~/.nixpkgs/config.nix, like below, then load (and reload/update/change) it with one `nix-env -i all`, or faster with: `nix-env -iA nixos.all`:

    # ~/.nixpkgs/config.nix
    {
      packageOverrides = defaultPkgs: with defaultPkgs; {
        # To install below "pseudo-package", run:
        #  $ nix-env -i all
        # or:
        #  $ nix-env -iA nixos.all
        all = with pkgs; buildEnv {
          name = "all";
          paths = [
            fpp wget iterm2 jekyll
            ghc ruby nodejs composer php
          ];
        };
      };
    }
and then keep only this one file in git. (Though still working on how to possibly also keep .inputrc, .bashrc, .profile, etc. in it.)


I've found ansible to be okay at setting up my environment. I'm able to configure everything from my zsh themes, terminal font size, window manager shortcuts, thunderbird logins, and so forth. The playbook takes about 30 minutes to run and after that I have almost everything ready.

Unfortunately I don't have a public GH repo I can point at as I don't want to expose everything I use to the internet. However the principle is the same as provisioning servers with ansible.

The only thing I do differently is use GPG keys to decrypt and untar things like Thunderbird profiles rather than using Ansible Vault. I restore GPG keys + SSH keys from offline, encrypted USB backups.
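The restore itself is a one-liner (paths illustrative):

  # Decrypt the backed-up archive and unpack the profile into the home directory.
  gpg --decrypt thunderbird-profile.tar.gz.gpg | tar xz -C ~/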


Have you considered using GitLab? I really like GitHub, but can't justify the prices right now (just starting out in my career). While GitLab isn't as popular, or maybe quite as polished (though it's getting there fast), it does have free private repos. I've used them to store private data like this.

Check it out if you haven't already.


I use both GL and GH regularly (I have a private repo at GH as well). I'm moving towards GL slowly, but my experience is that it is much slower than GH atm.


Use a configuration management tool (I picked https://saltstack.com/ , mostly because of the docs & community support), but there's lots to choose from: Chef, Puppet, Ansible, and so on.

There's a learning curve, and plenty of 'where did my afternoon go?' rabbit holes you can lose yourself in. But the upside is that you can have consistent, repeatable, and rapid builds, with modularity as a bonus.

Don't be afraid, with any of these kinds of tools, to brute-force complex components if you're in a hurry - i.e. ignore the pure/idiomatic way, and use the tool's primitives to dump a shell script on a remote box and then run it.


I've found with Ansible at least, I was initially tempted to make large complicated roles for things like "application server" or "development desktop" but what ended up working much better was very granular roles such as "nginx server" and "emacs" (often just a single task such as "yum: name=nginx state=installed") that can be combined in playbooks. This makes it easier to avoid duplicating tasks in different roles, or having a lot of complex conditional cases in your roles.
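So the playbook ends up doing the composing, something like this (role names are examples):

  # desktop.yml -- combine small single-purpose roles into a machine profile
  - hosts: workstation
    become: yes
    roles:
      - emacs
      - nginx
      - dotfiles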


This is the "roles and profiles" pattern in Puppet; it generally makes sense.

Also, +1 for using CM to set your workstation back up. I really need to get around to writing some Puppet manifests to do that for mine.


This is also the best way to do it in Chef.


That's a very good reason to only install apps with a real package installer.

On OSX, this is a no-brainer with brew[1] and brew cask[2]

# On my old mac

  $ brew list
  $ brew cask list
=> then I save the relevant parts for future reference
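# On my new mac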

  brew install npm
  brew install zsh
  brew cask install sublime-text
  brew cask install google-chrome
  brew cask install intellij-idea

[1] http://brew.sh/ [2] https://caskroom.github.io/


>brew list

Protip: "brew list" will list all installed packages, including dependencies, which you might not want. What you probably want is "brew leaves", which lists all installed packages that are not dependencies of another installed package.

This makes a difference in cases where a dependency is no longer needed in the latest version.

On a related note, why do the majority of package managers make the common and simple task of "list all manually installed packages" so incredibly hard?

For a fun brain twister, try to list all the manually installed packages on your system by just reading the man pages and no internet. Ubuntu is nightmare mode for this challenge.


For Ubuntu, look no further: http://askubuntu.com/a/492343/145754
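The gist of it, for apt-based systems (note it still includes whatever the installer itself marked as manual):

  apt-mark showmanual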


On Arch, it looks like this will do:

  pacman -Qe


> real package installer

> brew

That right there is irony. You intended to give good advice about using reliable methods for installing software, then immediately recommended the Devil's ass crack of package management tools.


I just install stuff when I run into something I need but don't have. Keeps my system slim.


Setting up a new system is where NixOS really shines. Once you have one system working it is trivial to duplicate it on new metal.

1. Install NixOS

2. Copy configuration.nix*

3. Copy dotfiles

4. # nixos-rebuild switch

5. Enjoy your old setup on new hardware--no secret sauce needed!

*A hardware-configuration.nix should have been generated by the installer. By default this is sourced by configuration.nix, in which case configuration.nix shouldn't need editing.


I've been interested in Nix and Guix for some time; would you be able to comment on the two?


Probably the definitive document making the case for Nix (and Guix by extension) is Eelco Dolstra's PhD thesis [0]. The introduction is a good read by itself.

I have never used Guix/GuixSD but have heard good things about it. Behind the scenes it uses the Nix package manager and offers a subset of Nix packages licensed as free software. Whereas Nix/NixOS uses the Nix expression language, Guix/GuixSD uses a Guile Scheme front-end. I've heard the Guix CLI is quite nice and a bit more polished than Nix's but Nix is currently in the process of overhauling the CLI. See [1] for a more detailed comparison.

Both of these projects have very active development, but don't have the volume of listings that you would find in e.g. the AUR or the Debian repos. That said, I've been pleased and actually surprised at just how many packages exist. If you can't find what you want, contributing new packages isn't too hard (often just a case of finding something similar in the repos and changing the relevant values). You can check the current Nix [2] and Guix [3] packages to see if enough of your needs are covered before giving one of them a try.

[0] http://grosskurth.ca/bib/2006/dolstra-thesis.pdf

[1] http://sandervanderburg.blogspot.com/2012/11/on-nix-and-gnu-...

[2] https://nixos.org/nixos/packages.html

[3] https://www.gnu.org/software/guix/packages/


Thanks!


Manually. I wipe rarely, and change tools often for various reasons (even ignoring version upgrades), making building and maintaining an installation script not worth it.

For a while I did maintain a Windows batch script that installed things off of a share at work. I was dealing with pre-release Windows 8, and wiped frequently for upgrades. Even that probably wasn't worth it, but I didn't have a second machine at the time, and wanted to run it overnight instead of blocking my ability to work.


I have a repository[0] that holds all my configuration and installs some language-specific tools. Otherwise I just manually install any packages I need. I may consider automating this at some point but I don't use that many tools so it hasn't been particularly onerous.

[0] https://github.com/michaelmior/dotfiles


ditto. In addition to dotfiles, my repo has a `system_setup.sh` which installs everything that can be installed on the command line, sets up symlinks, and so on. Every time I add a new tool to my arsenal (brew install x, usually) I also add it to that file. This repo means I can be up and running on any new system in under 30 mins; most of that time is for some manual downloads and git checkouts I have to do too.
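The shape of that file, roughly (package names are examples):

  #!/usr/bin/env bash
  # system_setup.sh -- every command-line-installable tool, plus symlinks.
  set -e
  brew install git node tmux wget
  brew cask install iterm2
  # symlink configs from this repo into place
  ln -sfn "$PWD/vimrc" ~/.vimrc
  ln -sfn "$PWD/gitconfig" ~/.gitconfig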


ditto


I use a shell script for a new Debian[0] installation and also have other scripts for Kubuntu[1], openSUSE[2] and other software installations. I also store my dotfiles[3] and other useful scripts that I can customize for each development environment. Hope that helps!

[0] https://github.com/svaksha/yaksha/blob/master/yksh/apt-debia...

[1] https://github.com/svaksha/yaksha#2-folders

[2] https://github.com/svaksha/yaksha/tree/master/yksh

[3] https://github.com/svaksha/yaksha/tree/master/home


Just wanted to compliment you on the documentation in those scripts. I'm that documentation stickler guy at work, so seeing it in the wild (especially for personal files) makes my day.


Thanks :)


Depends on the system... For OS X: first VS Code, second Homebrew. I use Homebrew to install the version-switchers for my language of choice (usually node/nvm).

From there, I'll set up a ~/bin directory with various utility scripts. I may source some of them in my profile script.

----

Windows: Git for Windows, Git Extensions, ConEmu, Visual Studio (Pro or Community, depending on environment), VS Code. I should look into chocolatey, but admit I haven't. NVM for Windows.

----

Linux/Ubuntu: generally apt, and PPAs as needed.

----

FYI: I keep my ~/bin symlinked under Dropbox, as I tend to use the same scripts in multiple places. I separate ~/bin/win, ~/bin/osx and ~/bin/bash, and have them in the path in the appropriate order, linux/bash being the default. I'll usually use bash in Windows these days too, and set my OS X pref to bash. It's the most consistent option for me, even with Windows /c/...



Windows 10 comes with a package manager, so you can just execute the following statements on a fresh Windows install (PowerShell):

http://pastebin.com/HmiqDDbi
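The statements are of this shape (the package name is just an example; the paste has the full list):

  # PackageManagement (OneGet) cmdlets, built into Windows 10's PowerShell
  Find-Package -Name git
  Install-Package -Name git -Force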


Interesting. This looks like NuGet on steroids.


Boxstarter (http://boxstarter.org/) automates setting up Windows machines even more. It utilises Chocolatey to install third-party software, and can also install Windows updates, take care of reboots, etc.

Once you've set up a box script, you can run it on a freshly installed Windows, go to lunch, and when you return everything's set up.


Yep, this is what I do for Windows systems, and it works well for me. I have a *.bat file. The first line installs Chocolatey, and the subsequent lines install the packages that I want. For example... http://pastebin.com/cpbdbfAN
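Roughly like this; the first line is Chocolatey's documented cmd.exe bootstrap, and the package names are examples:

  @powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((New-Object Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"
  choco install -y git nodejs googlechrome conemu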


I use Ansible with Brew and Brew Cask. I've found using Brew for everything makes it easier to upgrade all applications for security reasons and it also gives a high level view of my system. Here's the relevant config file of the things I install:

https://github.com/arianitu/setup-my-environment/blob/master...

The ansible script also links to my dotfiles, which can be found at:

https://github.com/arianitu/dotfiles


I use a similar setup and find it quite dependable. I notice your bootstrap script does not automate the Xcode installation. Here's how you can automate that, in case you're curious: https://github.com/timsutton/osx-vm-templates/blob/ce8df8a74...


This is great: just enough moving parts to get it done, and it doesn't try to be a meta-framework solving more than its intended use. Really exemplary, thanks for sharing.


I find I end up with a lot of cruft and my tools of choice change over time, so I don't worry about it. A decent package manager makes this approach tolerable.

Homebrew and Homebrew Cask on OS X handle at least 90% of what I want to install.


As a rails developer, I've used and recommended https://github.com/thoughtbot/laptop :)


We've adapted this at 18F. It's not a fork but it is based on and inspired by Thoughtbot's original project. Strong recommend. https://github.com/18f/laptop


Highly recommended by me too. You can add a .laptop.local file for extending their script to install/update other dependencies.


Not directly related, but this seems like the right thread to ask: I've been trying to move from Linux to OS X recently, and the one thing I can't stand is the .DS_Store and other files which OS X just throws all over the place, in every directory, whether on a local, network, or external drive.

Is there a way to stop it? (installable on a fresh system? I've been experimenting with reformatting, so that's not a problem)


This might have what you're looking for, anyway it's a good starting point for a lot of commands to customize an OS X instance. https://github.com/mathiasbynens/dotfiles

Edit: AFAIK .DS_Store is only created when you use Finder. In the past couple of years I've only used the Finder to drag'n'drop stuff between ~/Desktop and ~/Downloads, and the terminal for the rest, so .DS_Store files might not be a problem in practice.

Edit2: Google helps answer your original question http://stackoverflow.com/questions/18015978/how-to-stop-crea...


Thanks. I was aware of asepsis, not aware of DeathToDSStore, but neither works on El Capitan unless you turn off SIP.

I find it so surprising that there's no way to disable this.


I've not seen any way to stop it, but I used to use a tool that would clean them all up the first time and move them every time after that. There was something about moving them (and not deleting them) that stopped them regenerating, but I may be wrong.


A few days old but I wanted to contribute anyways:

I do a really fresh/clean install. I install tools as I need them, and find myself often leaving tools behind and trying things out. I get set up on new machines pretty often: whether I'm installing a new Linux distro on my Chromebook, finally dual-booting Linux on my PC, or just setting up a dev VM.

Anyways, I recently abandoned tmux because I prefer i3 as my window manager, and it works much better for me because now I can tile not only terminal windows but also other utilities.

I've also moved on from CMDer straight to ConEmu on my Windows machine. Every reinstall is basically taking inventory of my tools.

Obviously, I keep a repo of dotfiles and other settings files.


I use the Nix package manager. I use it on both Linux and OSX.

Setting up tools is quick and easy:

1. Install Nix

2. Copy my config to ~/.nixpkgs/config.nix

3. Run "nix-env -i all"


I've been using https://github.com/superlumic/superlumic for setting up my Mac machines because it supports a few more constructs that are common for Macs, such as plist file modifications. Since it's Ansible-based, you use YAML files to configure everything, and it works well with Homebrew at least. I spend enough time working on configuration management professionally and don't want to keep sinking time into my workstation's configuration, so more complicated DSLs/systems like Puppet, Chef, and Salt are out for me despite working with those professionally.

In the future I'd like to try Nix for managing OS X, but it seems rather immature at this point for people that primarily want stuff to Just Work.


I usually develop in VMs where the machines are very well defined, so it's easier to build a build machine. In the end, though, this usually means that the absolute package requirements are held by the project repo, so what needs to be installed is based on that.

So if I need to start a project or rebuild a system to do a project, I can generally use the project itself as guidance on what to install. After two or three builds I have usually cleared all the dependency hurdles. The only thing that really breaks down is not having your dotfiles available, but for me those are backed up/in SCM.

If you are familiar with building projects from scratch, it becomes a lot easier to understand dependencies and grow the system into what it needs to be, and with VMs it is that much easier to start with a blank slate that you can blow away without even troubling secondary workflows.


I did do ansible https://github.com/CraigJPerry/home-network but I've decided it's overkill for my dev machines.

I still like ansible as a layer on top of AMI images when I spin projects up in the cloud though. I don't want to have to go install iotop when I want to use it, I want to just know that my tools are present and ready to go. But I consider this a different type of machine from my dev hosts.

As I see it, the best solution on a dev machine is to make it easy to install software I might need in future. That means having a package manager available and access to my preferred configuration. On linux I just need my dotfiles repo available on the box, on windows I also need chocolatey installed.


I keep a series of basic setup scripts for Ubuntu on my github[0]. I do wget and pipe them to bash, which I'm not tremendously proud of...

[0]: https://github.com/cjjeakle/devbox-setup


I use bash+rpm.

RPMs are insanely easy to create, and the work helps with deployment/version tracking on the end machine. For example, I have a .spec that downloads a given version of CodeIgniter, unzips it, tweaks some permissions, and then rolls it into an RPM and tosses it into a network-accessible repo. The RPM has a pretty complete list of dependencies, so they automatically get pulled in.
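A stripped-down sketch of that kind of .spec (the version, paths, and dependencies are placeholders):

  Name:      codeigniter
  Version:   3.0.6
  Release:   1%{?dist}
  Summary:   CodeIgniter, repackaged for the local repo
  License:   MIT
  Source0:   CodeIgniter-%{version}.zip
  Requires:  php, httpd

  %description
  Local repackaging of the CodeIgniter framework with tweaked permissions.

  %prep
  %setup -q -n CodeIgniter-%{version}

  %install
  mkdir -p %{buildroot}/var/www/codeigniter
  cp -a . %{buildroot}/var/www/codeigniter/
  chmod -R o-w %{buildroot}/var/www/codeigniter

  %files
  /var/www/codeigniter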

So my devel script looks something like:

  cat <<"EOF" > /etc/yum.repos.d/local-repo.repo
  [local-repo]
  name=local-repo
  baseurl=http://xxxx
  enabled=1
  gpgcheck=...
  EOF

  PACKAGES="codeigniter otherstuff"

  for PKG in $PACKAGES; do
   sudo yum install -y $PKG;
  done


Not a bash script, but some time ago I documented a complete environment setup for Windows / Mac OS X / Linux:

https://github.com/Corsaair/redtamarin/wiki/DeveloperEnviron...

The Windows part has a batch script set up to automate the install of cygwin, apt-cyg, wpkg, etc.

It is very specific to the Redtamarin project, compiling C++, compiling Java, compiling AS3 to bytecode, etc.

There are also other docs for hardware setup, SSH to/from Windows, running "remote" builds from the LAN, etc.

But the setup should work for just about anything; comments to improve it are welcome :)


I use https://github.com/denisphillips/boxible . It's like GitHub's Boxen project, but it uses Ansible instead of Puppet.



At work, we support umpteen OSes in our build (a lot of C++ stuff, so it's picky about where it builds and runs). For the Linux side of things (where most of the dev work is actually done), I'm in the process of building Docker images for our range of build machines, so that a developer can put up a releng-equivalent build environment in a couple minutes, pulling images from a locally-hosted registry.

Our current system is built on a combination of VM templates and documentation for the devs to set up their own build machines manually. Anything that I can't containerize will be stuck there, too.


I'm on Arch Linux, and I keep a list of the packages installed from pacman and from the AUR, plus some config files, in https://github.com/Keats/dotfiles (https://github.com/Keats/dotfiles/blob/master/zshrc#L39-L40 for the aliases).

I haven't reinstalled in a long time, so the install script might be broken, though.


I think you might like this (https://github.com/snwh/ubuntu-post-install/) project a lot.

Well, personally, I made my own project somewhat similar to the one I linked to, but mine wasn't that savvy. Plus mine also installed stuff from npm (linters mostly), RubyGems (Jekyll blog), and pip, and it grabbed some sources, compiled them, and symlinked them into the proper places using GNU Stow.

PS: He's the developer of Paper GTK Theme.


I use Arch Linux and follow the basic guidelines for installation and modules. The rest is synchronized with my Dropbox; I created a bash script to remove the configuration files and symlink them to their respective folders in my sync folder. Thus I can maintain the same configuration on my other computers, too.


Debian 8 has been very brittle, which gave me some experience creating fresh systems at home...

I have my configurations in a version-controlled Puppet repository. This has helped a lot, and I'd recommend anybody forget /etc versioning and use a proper configuration management system, even when it's not strictly a requirement. I'm about to ditch my /etc backups now.

A certain source of pain is software that must be kept up to date. I have Firefox and GHC in this category. Both hurt, but it's not worth it to repackage them.


I maintain Ansible playbooks for my own systems should the worst happen.


Just to make sure I understand: your playbook isn't for a VM but for a host OS?


Yep. You can use 'connection: local' for example.


I use Ansible to install basically all of my desktop. This, coupled with my dotfiles (which include a bash script for GNOME configuration), makes my workstation ready in 15 minutes or so.

This is my Ansible repo for reference: https://gitlab.com/edgard/ansible-ubuntu

And dotfiles: https://gitlab.com/edgard/dotfiles


I'm one of those guys with an install.sh file that I got from my mentor. I just run it on every new install. Last time being when I installed Ubuntu 16.04.


Mine is called "apt-all.sh" and installs gcc, clang, qemu and some libraries:

https://github.com/vmorgulys/sandbox/blob/master/stackcity/t...

It's a big list of "apt-get -y install".


I started to use Vagrant together with https://puphpet.com/ to create a dev machine. I have a MacBook Air and won't pollute it anymore with a dev environment. With a virtual machine, everything stays separated in a nice and tidy way.

Installing dev tools on OS X is a matter of minutes then, because everything else comes with the Vagrant box.

Edit: I'm a web dev guy.


I use Chef. I've got a cookbook here that uses policy files to install Homebrew and Homebrew casks and set up some stuff. It also tests it in Test Kitchen, converging an OS X system in VirtualBox! https://github.com/willejs/chef-workstation


(OS X) I keep a bash script in a gist on GitHub that I can copy down and run on a new system. It installs Homebrew, rvm, nvm, and then a bunch of homebrew and homebrew-cask packages, which is pretty much all of my development environment. Every so often I'll `brew list` and paste in the latest dependency list to keep it up to date.
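The script is basically this (install URLs as documented in mid-2016; the package names are examples):

  #!/usr/bin/env bash
  # Bootstrap Homebrew, rvm, and nvm, then the saved package list.
  /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
  curl -sSL https://get.rvm.io | bash -s stable
  brew install nvm git wget
  brew cask install google-chrome sublime-text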


Question: why are you doing a fresh install? If you install everything through your package manager, you don't need a fresh install to clean things up. My operating system has lived through several different machine upgrades. I just do a `cp -a` to copy the files from one hard drive to another, set up the bootloader and go.
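(Sketch of that migration, with the mount points and device as placeholders:)

  # old and new root filesystems mounted at /mnt/old and /mnt/new
  cp -a /mnt/old/. /mnt/new/
  # re-install the bootloader on the new drive, e.g. with GRUB:
  for d in dev proc sys; do mount --bind /$d /mnt/new/$d; done
  chroot /mnt/new grub-install /dev/sdX
  chroot /mnt/new update-grub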


Theoretically that may be the case, but it's far from reality.


It's not the same hard drive, but the files in /var/log/installer/ show I installed Ubuntu 10.10 on 24 October 2010. It's been upgraded since, and copied to a new drive at least once.

/etc/popularity-contest.conf has the same timestamp, so I'm curious whether Ubuntu (or Debian) keep any statistics on the lifetime of a system.


The cases where it diverges from reality generally give me more reason to use `cp -a` to install the new system, rather than less.


It's about an even split between installers from an MSDN account and a Ninite bundled installer for the rest.


The Ninite bundled installer has made setting up new computers for family members so mindlessly easy. So glad it exists.


I use a ShutIt script:

https://github.com/ianmiell/shutit-home-server/blob/master/S...

it's platform independent and automates the install of everything I need.




I created a personal list, but half of the install is done manually: https://framagit.org/briced/conffiles/raw/master/INSTALL.md


    sudo apt-get install emacs


OP asked how to install developer tools, not how to install an OS ;)


I only have two emacs installs, home and work. But I keep them in sync, kinda, using:

https://github.com/jwiegley/use-package

So my .emacs/init.el looks like

  (use-package helm-projectile
    :ensure t
    :config (helm-projectile-on)
  )


Then I discover that since the last time I installed emacs on a fresh computer, somebody with an insane opinion about indentation has "worked on" python-mode. I struggle with it for a couple of days, then try to find an emacs package for my latest Ubuntu old enough that python-mode isn't broken.

This has happened twice.

This upgrade cycle, magit (my main reason for using emacs in the first place) broke all my muscle memory and added a ton of features I don't want or use.

I wish I could just freeze emacs development...without getting left behind when I want to install a new package... oh well...


  emerge vim


I have a bash script and a PowerShell script stored in a web-accessible place. I pull it down and it bootstraps Chef to run chef-solo on the machine from a repository stored in a trusted location.

Dotfiles are stored in Dropbox, too, which is handy for keeping zsh and Sublime synced.


I'm very interested in a way to containerize these tools (running them within a VM) so that nothing needs to be installed, and they could move easily from machine to machine without polluting the filesystem.


Honestly, I think this is overkill. For a dev machine, I install all my tools locally, so that "yum update" or equivalent keeps everything up to date easily.

Use VMs or containers for local simulation of your deployment targets.


I've had a painless time setting up my mac with https://github.com/msanders/cider


I use Thoughtbot's Laptop. https://github.com/thoughtbot/laptop


I tried Boxen from GitHub. Like others have mentioned, tools like these have a big learning curve, and Boxen was pretty high-maintenance after setup, in my experience.



Ansible is great; we use it at work extensively. But I am sticking with Vagrant for OSS projects, as Ansible does not work with a Windows host.


1. Install Vagrant.

2. Install git.

3. Check out the complete, working development environment and run its Vagrantfile.


choco install as the need arises



