A clipboard manager is an absolute must-have.
Being able to pull up the last 40 or so clipboard items almost instinctively has increased my productivity and the speed at which I can do anything on a computer.
I use the one that is part of the Alfred power pack, but there are plenty of others out there. https://www.alfredapp.com/help/features/clipboard/
For Windows, there's Ditto (http://ditto-cp.sourceforge.net/). It's insanely useful for slinging text around, like when you're copy-pasting a ticket number into several different places in-between other bouts of work that need the clipboard. Also useful for saving small URL/code/SQL snippets and retrieving them with a quick incremental search.
Not knowledgeable about this, but you raise an interesting point.
The Alfred webpage says: "By default, Alfred ignores popular password applications like the macOS Keychain Access and 1Password, so that you don't inadvertently copy a password to your clipboard."
What I mean is, I'm constantly copying passwords to my clipboard, both to generate them and to input them. This is an app which remembers clipboard history. That means all my passwords will be stored in this history, which makes it incompatible for my use case.
The documentation is less than clear about whether you can tell it to ignore arbitrary apps:
By default, Alfred ignores popular password applications like the macOS Keychain Access and 1Password, so that you don't inadvertently copy a password to your clipboard.
I don't want to prevent apps from copying to my clipboard. I want to prevent Alfred from storing the history when I copy from a specific app (my password manager).
Whichever one I have tried, I have always felt like a slave.
The ones I have enough experience with to thoroughly hate so far are:
- bash
- powershell
- windows cmd/batch
- ansible
- terraform
- azure clis, azure powershell api
- aws python api, cli
- Jenkins
- MS Team Foundation Services
- Docker (one of the nastiest ones)
- Rancher
I always managed to solve the task with the tools at hand (one of these), but they all have so many weak points to constantly work around that I can't say any single one made me feel like a boss when I had to use it for a non-trivial task.
Many tools are indeed not the 'feel like a boss' type, but the 'thoroughly hate it' kind. Having tried plenty of tools for config management, for example: GPO, Ansible and CFEngine basically make you want to jump out of a window, as do various Puppet versions. The same goes for SCM/VCS systems that can't even reach Subversion's level, which by itself is nowadays a reason not to take a job.
- Workspaces! These help with focus and isolate the "blast radius" of getting randomized
- Spectacle: window manager to assign windows to the left/right side of the screen, across monitors, or to the left, middle or right third
- CopyClip: I don't use this clipboard manager all the time, but when I do, it SAVES MY BUTT
- Sublime: speaking of saving my butt...Sublime has never once lost a single file. I love Vim but Sublime has flawless reliability
- Cmd+0: Global hotkey for "new browser window". If you do something 20+ times a day, make it super stupid fast. (I know it conflicts with "reset zoom", but I can just manually adjust zoom back to 100%.)
- Shell scripts: If I'm typing the same 3+ commands over and over again, I just script it. Scripting isn't scary, and more developers need to be doing it. (I seem to be the only one on my team, which is why I say that.)
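For example, a hypothetical repetitive sequence wrapped in one command (the steps are made up, purely to illustrate):

    #!/bin/bash
    # publish-docs.sh -- hypothetical example of scripting a sequence you'd otherwise retype all day
    set -euo pipefail
    npm run build                             # build the site
    aws s3 sync dist/ s3://my-docs-bucket/    # invented bucket name
    echo "docs published"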
For me, it's Keyboard Maestro[1], which is probably the one app I couldn't live without.
With it, I can do things like highlight any git sha (or any sha range) in the terminal, press a keyboard command and it will launch my diff tool with the diffs for just that commit.
I also use it to do inline transpiling of different code. I have keyboard commands for transpiling my highlighted Sass or ES6 code and replacing it with the result.
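Under the hood, the terminal step of the sha macro boils down to something like this (a sketch only; grabbing the highlighted text from iTerm is the part Keyboard Maestro handles, and the argument handling here is an assumption):

    #!/bin/bash
    # open the configured diff tool for a single commit
    sha="$1"
    git difftool --dir-diff "${sha}^" "${sha}"   # compare the commit against its parent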
It's so powerful, there's very little it can't do.
I haven't gotten around to making it more generic (right now, it's really only supporting iTerm, but it shouldn't be too hard to do the equivalent for Terminal as well).
I'm using Kaleidoscope for my git diffs, but you can edit the first action to being whatever command you want.
Hopefully that works for you, but let me know if you have any questions :)
Jenkins is fantastic because you can plug it into just about everything, and trigger your job that does $whatever either on a timer, manually or with an HTTP GET request. When I started at this company, Jenkins was nothing more than a web server used to launch an all-encompassing build script on SVN commit. Since then, I've redesigned all builds using plugins such that Jenkins now manages the whole process (previous admins had reinvented so. f*cking. much. stuff Jenkins does natively) and we're using Jenkins to automate just about everything we can plug it into.
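The remote trigger is literally just a URL hit, assuming the job has "Trigger builds remotely" enabled (the host, job name and token below are invented):

    # kick off a Jenkins job from anywhere that can make an HTTP request
    curl "https://jenkins.example.com/job/nightly-build/build?token=SECRET"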
This, and PowerShell, which, although its syntax is mind-bending and in some cases outright appalling, is a decent attempt at a scripting language with exceptional power over Windows.
-A folder full of batch files, each 2,000 lines long, that actually coordinate the builds
-A folder full of .exe's like 'findAndReplace.exe', 'EmailBuildBreakingAuthorsFromSVN.exe', replicating native Jenkins functionality, written in-house
-SVN.exe must be installed on the build node; if the build fails, the working copy is usually locked and must be manually Cleaned Up
-All build nodes run Windows XP Pro (including the Jenkins master)
This is what I had to deal with when I took over the cluster last year!!!
For the record, the whole cluster now runs on Win2012 with regular security updates, and most builds use the .Net plugin to build the VS project using msbuild. Those that can't, use Powershell scripts.
My view is that the build which runs on the CI server should be close to identical to the build which a dev can run on a workstation. This way you don't have to check in anything to debug build errors.
The CI build may complement the normal build, in ways such as emailing, but shouldn't modify it.
I will agree with you there; the builds run through our CI servers can't be run on a local workstation. I'm still not sure why, but the build script simply will not work if run locally. To run from the command line, I have to invoke msbuild and nodejs manually. This isn't a big problem since most devs build through Visual Studio anyway, but I do fully understand your argument.
Currently we have a docker-based test runner which works differently on the CI server. It should be identical, but of course the bugs you trigger are usually the difficult ones caused by unexpected timing problems, file permissions or subtle component interaction. If you get stuck trying to fix an error only triggered on the CI server, it's a huge time waster and commit log polluter. Especially if the security policy is such that you aren't easily allowed shell access on the build slave.
If devs have full access, it's not so much of a problem, actually. But it's still very annoying when you are dependent on your VCS and CI server to build your project properly.
Got any tips you could share regarding the Jenkins setup? I'm going to need to rework our current Jenkins build system—which sounds pretty similar to your "all-encompassing build script"—as we're about to migrate from Perforce to SVN. I know what we've got set up is pretty bad, but I don't know much about best practices for build organization.
The best advice I can give is to use the Jenkins plugins wherever possible. Jenkins can do almost everything either natively or via its extensive library of plugins. In the same vein, always let Jenkins handle source control itself: our old build scripts used to shell out to svn.exe and run the checkouts that way, into a static folder. This meant that we couldn't run builds in parallel, and if an SVN operation was interrupted, the working copy would lock and would need to be manually cleaned up.
For complex builds, write a script that is kept in the root of the project, then after checkout, simply do ./build.sh. We used to keep all our build scripts in a separate repository which would have to be checked out onto the build node in preparation; invariably, the scripts never got updated. This way, the product and the steps required to build it are kept together, and are very unlikely to come out of sync.
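A sketch of the shape of such a script (the steps are placeholders, not our actual build):

    #!/bin/sh
    # build.sh -- kept in the repo root so Jenkins runs exactly what a developer would
    set -e
    make clean    # substitute msbuild, npm, etc. as the project requires
    make all
    make test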
Use labels as much as possible; I have our cluster set up so that builds that require dependencies are bound to label '<product>' and I add/remove the labels from the build nodes as the dependencies are added. This also means that if one node breaks and is unable to build <product>, I just remove that label and Jenkins will stop scheduling builds on it. It becomes much easier to manage than restricting builds to hostnames.
Most importantly, ensure you can always build from a fresh checkout. I cannot tell you how annoying it gets when a build fails because some tiny thing is out of alignment on a fresh build node.
I also recommend imaging your build nodes so they're quick and easy to deploy, and BACK UP YOUR JENKINS CONFIG! :) When I came back from my Christmas break, it was to discover that the Jenkins server I had worked oh so hard on had been inadvertently trashed by one of the guys on my team; the disk had run out of space, so he'd remoted onto the server, gone into the Jenkins folder and deleted everything older than a month.
He hadn't thought to restrict this to the build artefacts folders. Basically lobotomised the server, and that was when we discovered we'd been backing up the build NODES, but not the MASTER itself! I was able to salvage some of the config and rebuilt the server on the latest version of Jenkins. As a result, the Jenkins folder on the master is now an SVN working copy with all the config files under version control, and a scheduled task runs daily to commit the changes.
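Conceptually, the scheduled job just does this (the paths are illustrative; on the Windows master it's a scheduled task rather than cron):

    # commit any changed or newly created Jenkins config files
    cd /var/lib/jenkins
    svn add --force --quiet .    # pick up config files not yet under version control
    svn commit -m "automated nightly Jenkins config backup"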
It has been a very painful road to get this thing stable! It would be considerably easier if people weren't trying to nuke all my hard work while my back was turned...!
You also get the benefit of the jobs being built in a shared environment. If developer A wants to build their job, $THING-A, and developer B wants to build their job, $THING-B, there's no need to dig through code for environment variables or other machine-specific differences that might pop up when building with something like, for example, Terraform.
The build server was still the shared environment, but all it did was call a batch script that did everything else, up to and including emailing the build breaker (see above comment). There were also copy-pasted jobs for each branch, rather than a single master build-branch job that took parameters.
Emacs - even a novice elisp programmer has more power to automate their Emacs environment than I've seen in any other tool. If you've ever wondered how a small community managed to maintain near feature-parity with popular IDEs for 3 decades, this is it.
I'd disagree on "feature parity" for some use cases. For example, I used Emacs as my Java IDE for some time, but it lost that race in a hurry as IDEs caught up to JDEE. And it wasn't even close within a year or two after that.
I'm not even sure if Emacs will hold up to tools like IntelliJ for Ruby/Rails/JavaScript development at this point, although in that space it's a lot closer.
Overall, however, I agree: Emacs is freakin' awesome. Heck, I booted into it as my shell for a few years. I keep meaning to get back into it but I rarely have the time, and I need to learn more Vim as well. Time is the enemy.
Me too. I'm now much more proficient in emacs-lisp than in Python, my second-best language. I don't write bash scripts often enough to remember all the gotchas.
It's a little embarrassing how much I do through emacs: manipulating the file system, url fetching, RPC... stuff you really shouldn't be doing with your text editor. But hey, the tool you know...
An Automator script to gather AWS session credentials from a web-based provider and paste them into '~/.aws/credentials' for local testing and administration.
It's saved me an insane amount of time lately.
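The end result is just a standard AWS credentials file; roughly this, where the three values come from the web-based provider (the shell variables here are placeholders, the real thing is an Automator workflow):

    # write the temporary session credentials where the AWS CLI and SDKs expect them
    cat > ~/.aws/credentials <<EOF
    [default]
    aws_access_key_id     = ${AWS_ACCESS_KEY_ID}
    aws_secret_access_key = ${AWS_SECRET_ACCESS_KEY}
    aws_session_token     = ${AWS_SESSION_TOKEN}
    EOF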
That said, my second biggest time saver has been saving language reference and library docs to my local drive:
Near-instant rendering, available anytime my computer is powered on, and they still link to the external docs when I have to follow some esoteric link.
I use a combination of F# scripts and PowerShell. Anytime I need to do data manipulation, code generation, or anything hard core development related I use F#. I could use PowerShell for this, but I'm more comfortable and feel I have more control with fsx scripts. For all the server related tasks or automated build tasks I use PowerShell. We're a heavy MS shop, and I get that it may not work for everyone, but we leverage so many MS technologies that it just makes sense to go with the flow.
I'm obviously biased (CTO and Co-founder), but I love automation, that's why I'm part of the bitrise team in the first place ;)
The CLI is similar to something like rake, but the config lives in a single file which can be moved anywhere, and you can get a list of available "tasks" (workflows in the bitrise terminology) by running `bitrise workflows` or `bitrise run` without any parameter.
There's also an open source editor (UI) available for it (https://discuss.bitrise.io/t/offline-workflow-editor-workflo...), which is now part of the "base plugins" that get installed by the Bitrise CLI. It's also really lightweight, as the CLI is a single binary distribution (written in Go).
My favorite tool this year has been SaltStack. It manages the entire lifecycle of our VMs now (provisioning and state management) and it was very easy to create a Slack bot with it that can update our production/dev environments.
Salt is a little confusing at first, since it's essentially just a framework for systems management with Python + ZeroMQ, but you can do a lot with it.
I love Salt. Worked with it in a past life, and now use Puppet, because that predated me.
The pillar model and the patterns around it are great. Hiera is a terrible substitute, despite following a superficially similar structure.
I also really liked the idea of reactors, but that came to be right around the time I left that gig, so I didn't get to play much with it other than writing some toys.
In general, Salt just feels well architected. Puppet, in comparison, feels like a ramshackle collection of useful things that just don't mesh well. The language design makes me think of Perl without a competent language designer.
I will say that Salt has a higher upfront learning curve. I highly recommend giving yourself enough time to throw out your first attempt at designing anything more than a simple system setup.
I too love Salt, but its roots as a remote command execution tool really hinder it in a lot of cases. This is especially true when you try to do something off the well-trodden path, such as attempting to verify that commands issued by the Salt master were actually initiated by an authorized party.
In the long run, I think I prefer Ansible, if only because it limits how much control any one server has over your infrastructure. Ansible breaks down when you have a lot of servers (thousands), but IMO at that point you really should be working at a slightly higher level of abstraction with VM images and automated scaling groups.
I've tried reading the SaltStack docs many times, and I'm still very confused. So much so that I've stuck with Ansible, despite genuinely thinking SaltStack might be better.
Is there any non-confusing SaltStack documentation available?
The easiest way to learn Salt is by starting with execution modules (https://docs.saltstack.com/en/latest/ref/modules/all/index.h...), which are just Python modules that abstract the underlying commands and can be blasted out to machines. For example, salt 'web*' pkg.install httpd installs Apache on all servers whose names match web*. You can send shell commands directly using cmd.run: salt 'web*' cmd.run 'yum -y install httpd'
On top of execution modules are state modules (https://docs.saltstack.com/en/latest/ref/states/all/). You define a state in YAML and tell a host to execute the state with the state.apply execution module. States consist of an id (a unique string identifier), the state module to use, and arguments, like so:
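(A minimal example; the id, module and package name here are arbitrary, just to show the shape.)

    install_apache:        # id
      pkg.installed:       # state module
        - name: httpd      # argument: the package to install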
The other major components are runners (https://docs.saltstack.com/en/latest/ref/runners/), which are scripts written against the salt API that perform actions on the salt bus, and engines (https://docs.saltstack.com/en/latest/topics/engines/index.ht...), which are child processes spawned by the master that listen for and react to events - like the Slack bot I use. One of the included engines is called reactor, which makes it easy to script events in YAML.
There are more moving parts I'm not covering here, but I hope this helps you get started.
While Salt did great things for our deployment abilities (we didn't really have much prior to it) … it's not a tool I can recommend. Its design is comparable to PHP or MySQL, and I frankly believe someone should write the equivalent of "A Fractal of Bad Design" or "Do Not Pass This Way Again" for it.
1. Salt simply broadcasts commands out to the nodes connected to it. Because it has no concept of which nodes should be connected, it cannot tell whether a node is missing, or whether that missing node matters. Its test.ping command is just another command, too, so it suffers from the same problem, rendering it useless as a ping.
1. Salt's CLI will return success on failure.
1. It is not able to parallelize within a node.
It helps to know what a "state" looks like in Salt from this point on (this is YAML):
    a_name:                  # the name of the state; the name and mapping value here comprise the state
      pkg.installed:         # a state function; this installs a package
        - arg1: value        # args to the function. Yes, it's a list of single-element maps.
        - arg2: value
      user.present:          # another state function
        - …
These are effectively steps that Salt will execute, typically in the order you define them, with some exceptions: (a) you can specify dependencies, and (b) you can specify that things run "first" or "last".
Also, note that this YAML file is templated with Jinja2.
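For instance, a dependency between two states looks roughly like this (the names are invented):

    start_httpd:
      service.running:
        - name: httpd
        - require:
          - pkg: install_apache    # refers to another state by its module and id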
1. You can't create and depend on a "PHONY" target; this means if you want to depend on the result of several states, you're SoL. You can just inline the set of states that you need, everywhere you need them, which is bad, or you can use a temporary file on disk, which is a hack.
1. Attempting to write code (which, really, is what SLS is) in YAML is crazy. (You can actually write these in at least 4 different languages! All Salt needs is a JSON-like structure.)
1. It is not able to pass information between "states" (a "state" in salt is a task in the grand list of stuff it needs to execute).
1. The error reporting leaves much to be desired. If you make a mistake in a YAML file, you can easily receive back errors with wrong line numbers, wrong file names, quoting the wrong part of the file.
1. At least when we used it, there was no way to properly escape inputs to the YAML file. (This might have been fixed; I honestly can't remember. At some point, it was "fixed", but not really, because the fix didn't work for any practical purpose.)
1. The minions would just "crash", and by crash, I mean, just stop responding rather than crashing outright.
1. Bug reports were generally met with "you're not using the latest version, please upgrade". Generally, I'm understanding of this attitude, as it takes some work to determine whether a bug experienced in an old version is still present in a new one. But Salt's architecture made upgrading nigh-impossible: minions/servers of different versions weren't compatible, and DNS couldn't be used to switch minions over in some of our hare-brained ideas to upgrade gradually, because the minions would cache DNS results indefinitely.
1. Minions would reconnect automatically; for whatever reason, reconnecting took a considerable toll on the server. If you hit the point where there were too many minions talking to the server, the fleet as a whole would go into a death spiral: too many minions attempting to connect causes an already connected one to miss a keepalive ping, and reconnect. Now you have yet another minion attempting to reconnect, and the problem is worse. More and more minions start doing this, and you're hosed. We had to restart the entire fleet of minions (by hand, b/c our manner of running a distributed command on the fleet was salt) multiple times.
1. Salt in some places, such as templating a file on a minion, needs to reference a file on the server. A relative path would have sufficed wonderfully, but instead Salt uses "salt://" URLs: (a) they're not really URLs, and (b) they MUST be absolute. This means that if you move a subtree of files (such as when refactoring), you're forced to adjust these URLs. Thankfully, they're easily greppable.
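For context, a templated file state that references the master looks something like this (the paths are invented):

    deploy_app_config:
      file.managed:
        - name: /etc/myapp/config.ini             # destination path on the minion
        - source: salt://myapp/config.ini.jinja   # "salt://" path, absolute from the fileserver root
        - template: jinja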
For anyone using Mac OS X, I highly recommend setting up a keyboard shortcut for the text-to-speech tool. See [1] for instructions and screenshots.
The computer voice is not too annoying, and it can read news articles and blog posts to you instead of straining your eyes. I often do the dishes or exercise while the computer reads Hacker News to me.
The text-to-speech tool is also very useful to proofread any text you might be writing—I catch a lot of typos this way.
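If you prefer the terminal, roughly the same thing can be done with the built-in say command, e.g. reading whatever is on the clipboard:

    # speak the current clipboard contents at about 200 words per minute
    pbpaste | say -r 200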
An answer for the less technical crowd: Zapier. It allows me to tie all of my favorite tools together. Those that do not integrate natively can be accessed via API if you want to put the time into it.
Heck, this is great for technical people too. There are the public integrations, but I use the dev platform for private things all the time. It takes some extra coding, but once the interface is built, everything else comes for free. That plus code steps and I'm set.
Terraform! Watching infrastructure being created and configured in seconds on AWS (or even openstack) is amazing. Preferably paired with something like chef or ansible.
All of the automation tools out there always felt clunky to me. Ansible has been working fairly OK so far for my usage, but I can't imagine using it if I was managing hundreds of machines.
I considered using saltstack, but despite spending a few hours reading their doc, I'm still very confused by all the lingo.
I recently started reading about and considering NixOS, but its still-relative newness (it's 14 years old!) makes me worry about missing packages and updates.
I felt the same way about SaltStack. I've worked at shops that use Puppet (with and without Hiera) and CFEngine. In my personal projects I've used a lot of Ansible.
Out of all the CM systems I've tried, Ansible feels the most sane. It still has some things that annoy me, but overall I've found it the easiest to deal with.
My current shop puts everything in Docker containers and runs them on DC/OS scheduled via Marathon. I don't touch the CM parts anymore, but I think our DC/OS team provisions the actual nodes via Puppet.
Ansible is basically my favourite automation tool, though I use it less to "save time" and more for peace of mind. I can define how I want my servers set up and ensure that every time I spin up a new one or make a deploy, all the proper steps are followed automatically.
Though technically, this can save time in tracking down errors or weird state-based bugs, I guess? So it even still qualifies under that.
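A minimal playbook sketch, just to show the flavour (the host group, package and modules are generic assumptions, not anything specific to my setup):

    # site.yml
    - hosts: webservers
      become: true
      tasks:
        - name: ensure nginx is installed
          apt:
            name: nginx
            state: present
        - name: ensure nginx is running and enabled
          service:
            name: nginx
            state: started
            enabled: true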
This might be a bit on the noob side but I always feel like a boss when I get a complicated project set up to be built and deployed with one double-click or keystroke. Even though it's just knowing how to use MSBUILD/NAnt/Ant/make and robocopy/cp, it's still satisfying.
I get a lot of lift out of ndjson-cli (though jq is a lot more popular) for command-line JSON manipulation. I've found it incredibly powerful for scripting and piping with web/RESTful services.
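A typical jq pipeline of that sort (the URL and field names are made up):

    # pull one field out of a JSON API response and count the unique values
    curl -s https://api.example.com/items | jq -r '.items[].name' | sort | uniq -c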