Hacker News

@lifeisstillgood - in the spirit of mutual embarrassment...



I found it useful to separate OS configuration from installing non-OS (third-party) software... it's basically a cue/cost that makes me prefer OS-level installations over "manual" installs.



...lots of hardcoded and ugly stuff in there, but I like the idea of it a lot.

My experience has been mixed, but good overall:

1) docker mounts the local host's ~/.ssh info, plus a "~/host" dir and a "~/Git" dir (promoting parts of the host FS to top-level directories) ... this is primarily about me coding / working with Git, so it's the right choice for me. It's nice that the auth story mostly follows me around as I move from computer to computer (primary auth lives on the local host; this image expects my local auth to be properly set up).
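A minimal sketch of what that launch command might look like (the image name "devbox", the in-container user "dev", and the exact mount points are assumptions, not the author's actual script):

```shell
# Hypothetical launcher for the dev container:
# - reuse the host's ~/.ssh (read-only) so auth follows you between machines
# - promote the host home dir to ~/host and the Git checkouts to ~/Git
docker run -it --rm \
  -v "$HOME/.ssh:/home/dev/.ssh:ro" \
  -v "$HOME:/home/dev/host" \
  -v "$HOME/Git:/home/dev/Git" \
  devbox bash
```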

2) docker FS isolation is interesting... you can "apt install $foo" in different windows/instances, and system-level changes are effectively isolated and disposable. It allows near-instantaneous installs of certain packages (ie: "apt install ffmpeg"), and the package will be gone after that particular session (unless I decide it's something I use often enough to be added to the docker recipe). Contrast this with the cruft on a 5-year-old home linux box which has thousands of packages installed from running random tutorials off the net.
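For instance, a throwaway install might look like this (again assuming a hypothetical image named "devbox"):

```shell
# Start a disposable session; --rm discards the container filesystem on exit.
docker run -it --rm devbox bash

# Inside the container: ffmpeg exists for this session only.
# It vanishes when the container exits; the image itself is untouched.
apt-get update && apt-get install -y ffmpeg
```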

3) startup time is quick and roughly equivalent to booting an old x86 PC. No matter how much I screw up the (inside-the-docker) operating system, it's just a quick "reboot" to get back to the last known-good state.

4) VNC/X11 is "meh" and more of a parlor trick. However, there are interesting use cases for making an "appliance docker image" ... Firefox works OK, and is maybe a good idea for carrying around a "paranoid browser", but it definitely feels a bit uncomfortable.
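On a Linux host, the X11 variant of that appliance idea can be sketched by sharing the host's X socket (the image name "firefox-box" is hypothetical; VNC would avoid the socket-sharing but adds the lag described above):

```shell
# Hypothetical "paranoid browser" appliance: forward the host X display
# into a disposable container. Linux-host only; sharing the X socket
# weakens the isolation somewhat, which may matter for the paranoid use case.
docker run -it --rm \
  -e DISPLAY="$DISPLAY" \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  firefox-box firefox
```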

5) I know the non-OS install stuff (ie: heroku, rustc, etc) could likely be done in the initial Dockerfile step, but keeping it separate forces me to make sure the system stays in a consistent state. When something isn't pulled directly from Debian, there likely needs to be a more compelling story for making sure the software stays installable and up to date (ie: vim plugins, etc). I guess it feels a bit like a local version of homebrew recipes?

6) As a hack, I cobbled together a "build/bake" concept which locks down a particular set of the non-OS stuff as well. When "baked", the image doesn't really auto-update (it's tough to automate version control against certain random sources/installs), but I have some scripts which try to keep the base OS up to date, encouraging you to go through the "build-bake" cycle as time goes on, keeping the box "evergreen".

An example victory is when on a windows box, I can get "real" vim, bash, etc. and operate on windows files from within linux (something that WSL can't officially do).

On the mac side, I can get "sort -R/--random-sort" when processing random data, as well as the ability to quickly pull down ffmpeg or imagemagick... again in a "disposable" environment, without seriously jeopardizing my OSX install or building up cruft.

Your 2) is spot on - I think it's the major reason we build up this cruft. By making an explicit choice to add something to my Dockerfile, I have decided it is worth the 2 minutes' mental effort (for me, a surprisingly high bar!)

not sure I get 5/6 - will dive in later (just decided this project needs a roadmap)

5/6 is the distinction between:

    apt-get update && apt-get upgrade

versus:

    curl https://rust-lang.com | grep "new version" && curl "https://rust-lang.com/installer.sh" | sh

I have very high confidence that debian/apt is "immutably available and consistently managed", but many of the non-free pieces of software (corporate-source, or "not yet standardized, maintained, and included in debian") can require special attention, especially regarding updates.

build-env.sh - the operating system: apt-only, "just a linux box". It also downloads and installs the heroku tooling b/c I often need it, but it's not in the main debian package pool.
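A sketch of what that split might look like as Dockerfile layers (this recipe is a guess at the shape, not the author's actual script; the Heroku install one-liner is the vendor's documented installer):

```dockerfile
FROM debian:stable

# OS layer: everything here comes from Debian's package pool,
# so a routine "apt-get update && apt-get upgrade" keeps it current.
RUN apt-get update && apt-get install -y \
    vim git curl openssh-client

# Non-OS layer: third-party tooling with its own install/update story,
# which is exactly why it deserves special attention over time.
RUN curl https://cli-assets.heroku.com/install.sh | sh
```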

bake-env.sh - "geeze, I hate waiting for heroku, rust, and random vim plugins to download... let me freeze this moment in time" ... apt-get update still works and will keep the OS up to date, but heroku, rust, vim plugins, etc. won't "auto-install / auto-update" b/c they don't have properly managed versions or an install/update cycle like the rest of the OS.
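One way such a bake flag could work is a Dockerfile build-arg gating the slow third-party installs (a sketch under that assumption; the `BAKE` arg name is invented, and the rustup one-liner is the project's documented installer):

```dockerfile
# Hypothetical BAKE flag: when true, the slow third-party installs are
# frozen into the image at build time; when false, they'd be left for a
# startup script to install into the disposable container instead.
ARG BAKE=false
RUN if [ "$BAKE" = "true" ]; then \
      curl https://cli-assets.heroku.com/install.sh | sh && \
      curl --proto '=https' -sSf https://sh.rustup.rs | sh -s -- -y ; \
    fi
```

Baking would then be a matter of `docker build --build-arg BAKE=true -t devbox:baked .` (image tag hypothetical).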

the bake-env.sh seems to just trigger a flag in the Dockerfile

So you have a set of dotfiles at your URL, and when bake-env runs, those become part of the docker build - and presumably the dotfiles do the rust installs etc., so your docker image comes up with the rust ecosystem already installed (not a rust user, so unclear), and then you can happily ignore it until you need to upgrade some rust package?
