
The Tools We Use To Stay Afloat - mxstbr
https://colloq.io/blog/the-tools-we-use-to-stay-afloat
======
awinter-py
I wish this were an article about some hot new documentation tool instead of
another misguided psalm to slack & kanban.

Making docs sexy would be a big quality of life boost for programmers and the
people who love them (i.e. project managers and users of software).

~~~
wpietri
I used to feel that way, but now generally don't. Most documentation is
effectively duplication of something already expressed in the code. That means
that whenever you add docs, you are multiplying the effort needed to change
things in the future. You also create opportunities for the various expressive
duplications to diverge, causing untold confusion.

Now documentation is always a last resort for me. That's a shame, as I enjoy
writing. Instead I try to put knowledge in existing places, like unit tests,
acceptance tests, method names, object names, code structure, file layout, in-
app text, and all of the other little things I'm already doing along the way.
Only if I can't put something there will I write docs, and then reluctantly
and with a feeling I probably missed a chance to minimize duplication.

The only documentation I still write without reservation is stuff that
obviously doesn't have to keep up with the project as it changes. E.g.,
personal journals and project blogs are great, in that everybody understands
those are out of date right after they're written.

~~~
enraged_camel
I don't think the things you listed, even when combined, are sufficient
substitutes for well-written documentation.

The fact of the matter is that, for humans, plain English is always much
easier to understand than computer code. The more complex the code, the more
true that is going to be. You can use all the friendly function names and
object names and file layouts you want. At the end of the day, it is not going
to be enough for someone who is not familiar with that part of the system to
understand it quickly _and_ to the appropriate level of detail. This is
especially true if you have junior-level or intermediate-level people on your
team. It is also true if you ever end up coming back to some piece of code you
yourself wrote a year ago.

Furthermore, documentation can provide additional value by including examples
of code usage in different settings that you may not have covered in your test
cases due to not being immediately relevant. It can contain links to other
websites and have embedded images of system diagrams or data flow. It can also
describe third-party systems your code has to interact with and various bugs
and unconventional behavior that may exist in those systems that your code has
to account for.

Yes, documentation involves more upfront effort. IMO the correct way to deal
with that is to include that in project estimates, and to not mark tasks and
features to be "done" until their documentation is written. Same with
revisions and bug fixes. I understand the feeling that it duplicates logic
already expressed in the code. But look at it this way: code is for computers.
Documentation is for humans.

~~~
wpietri
I should be clear that I like good documentation, and will happily write it
when necessary. It's just that over the years, I've a) found a lot of ways to
make it unnecessary, b) grown to despise useless or out of date docs, which is
the bulk of what I see, and c) grown very tired of writing documentation that
people end up not reading.

> [...] plain English is always much easier to understand than computer code
> [...] This is especially true if you have junior-level or intermediate-level
> people on your team.

This is why I'm a big fan of collective code ownership, pair programming, and
frequent pair rotation. That has a variety of benefits. It also has
substantial advantages over documentation. A big piece of which is that the
need for documentation is always speculative, whereas answering a question
always works from a confirmed need.

> It is also true if you ever end up coming back to some piece of code you
> yourself wrote a year ago.

This is what I use tests and journals for. Well-written tests are essentially
machine-verifiable documentation. They tell me what the intent of the system
is. If I want to know the history of the intent, then I turn to my journals.

> But look at it this way: code is for computers. Documentation is for humans.

I couldn't disagree more. As Martin Fowler says: “Any fool can write code that
a computer can understand. Good programmers write code that humans can
understand.”

Code and tests are primarily means of communication between developers. They
have the constraint that they should result in a running system. But we stuff
most of the work of that into compilers, optimizers, and run times. Our tools
are -- and should be -- optimized for human productivity. And since
programming is mostly a team sport, the biggest value of our tools is in how
they enable that teamwork.

~~~
ethbro
The biggest value in documentation to me has always been efficiency in
understanding: that usually means documenting at the architecture /
"components as black boxes" level.

I could trace through to see how someone glued something together without
having a hint what I'm looking for, or I could get the architecture gist and
then generally know where the thing I should be looking at is.

To me, this delivers two benefits. (1) Reduced redundancy / potential for
divergence (as you mentioned above), as you're not re-documenting code but
rather documenting at a higher level. (2) Reduced churn, as architecture
changes more slowly than code, and generally an arch change will have some
organizational process before being made (where docs can be updated).

Disclaimer: This is coming from a place of diving into a lot of client systems
with really poor or utterly lacking documentation.

~~~
wpietri
Sure, but no documentation here will be as efficient as an expert having a
conversation with you and answering your questions. I'd much rather have a few
whiteboard conversations and maybe some pair programming time, and that is
cheaper and easier for everybody than speculative production of documentation.

If the choices are only "shitty code with no docs" and "shitty code with good
docs", I'll definitely take the latter. And if I'm rolling a team off a
project, I'm happy to take some time to write some high-level docs for the
people who will next pick it up. That's a one-time expense with documentation
written for a defined audience.

But when I'm picking up a system and the original team is gone, I'd much
rather have good code and good tests over good docs.

And honestly, I'm not sure I've ever inherited a system with good docs. I have
seen things that managers probably thought were good docs. And I've seen
things that the developers intended to be good docs, and might have even been
good once upon a time. But by the time I get to them, it's either the "we were
told to write docs" garbage that is useless, or stuff that's so far out of
date that it's just as often misleading as helpful.

------
kingbirdy
The branch deploy sounds pretty interesting; I wish there were more detail on
it than just a footnote

~~~
e1g
We do the same - it was surprisingly simple to implement:

    
      - CircleCI listens on all Git branches and injects $CIRCLE_BRANCH into ENV
      - A build step populates a vhost template to point "$CIRCLE_BRANCH.qa.bofh.com" to "/var/www/$CIRCLE_BRANCH/current"
      - Another build step creates "manifest.json" with $CIRCLE_BRANCH and $CIRCLE_BUILD_NUM
      - The vhost config and the manifest are added to the release tarball/Docker image
      - The tarball/image gets shipped to the staging server
      - (we ship to S3/Docker, then pull, because of security concerns - but you can ship inside the build process)
      - In staging, a script reads manifest.json and moves content to /var/www/$CIRCLE_BRANCH/$CIRCLE_BUILD_NUM
      - The provided nginx vhost is copied into a standard dir with the other vhosts
      - The "/var/www/$CIRCLE_BRANCH/current" symlink is forced to point to the fresh release
      - nginx -s reload && profit
    

You can use a wildcard SSL to cover subdomains or add a step to trigger
LetsEncrypt certbot as required.
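A minimal sketch of the staging-side rollout those steps describe. The manifest keys (`branch`, `build`), the `/var/www` layout, and the vhost filename are my assumptions here, not necessarily the exact script:

```shell
#!/bin/sh
# Hypothetical staging-side rollout script; manifest keys, paths, and
# the vhost filename are assumptions based on the step list above.
set -eu

# Pull a string value out of a flat manifest.json (no jq dependency)
manifest_value() {  # manifest_value <key> <file>
  sed -n "s/.*\"$1\": *\"\([^\"]*\)\".*/\1/p" "$2"
}

deploy() {  # deploy <unpacked-release-dir>
  branch=$(manifest_value branch "$1/manifest.json")
  build=$(manifest_value build "$1/manifest.json")

  target="/var/www/$branch/$build"
  mkdir -p "$target"
  cp -R "$1/." "$target"

  # Install the vhost next to the others, flip "current", reload nginx
  cp "$target/vhost.conf" "/etc/nginx/conf.d/$branch.conf"
  ln -sfn "$target" "/var/www/$branch/current"
  nginx -s reload
}
```

The `ln -sfn` makes the symlink flip atomic enough that a reload picks up the new release without serving a half-copied tree.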

This method works equally well with Docker images - it just requires an
interim step to launch the container. In staging, we use a naming convention
to launch every app on predictable ports: the first two digits are
app-specific, the last three are build-specific. For example,
"25$CIRCLE_BUILD_NUM" (where $CIRCLE_BUILD_NUM is truncated to the last three
digits of the build number).
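That convention can be sketched as a one-liner (the "25" prefix is just the example above):

```shell
# Port convention sketch: a fixed two-digit app prefix followed by the
# last three digits of the CI build number, zero-padded.
port_for() {  # port_for <app_prefix> <build_num>
  printf '%s%03d' "$1" "$(( $2 % 1000 ))"
}
```

So build 1421 of app "25" lands on port 25421, and each app's containers stay in a predictable range.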

~~~
vmasto
How do you handle the database? Do you reuse the one on staging or create a
new instance with prepopulated data? If the former how do you deal with
migrations and schema edits?

~~~
e1g
Yes, the db has been the most challenging aspect for us. We have 3 situations
-

1\. "Common baseline". With a relatively stable product, most branches (as in
~51%) do not impact the schema. For testing / QA purposes, these share one
central QA db and pollute each out. Turns out, a lot of the times this is
quite ok because the PR is about how the data is displayed, or improved
logging, or UX change, or a security layer or anything else other than core
domain knowledge - they don't care for the data that much.

2\. "I'm special". Some branches do modify data (whether the format or the
structure). To handle these, the manifest.json file has an option to request a
separate database. If present, the rollout script will do "pg_dump + copy" of
the shared staging DB, and duplicate it into "qa_$BRANCH", then update the
config file (or .env for Docker) with the appropriate connection value.
Additionally, it will all *sql files in a dir specified in manifest.json
against the clone DB. This is done on every release, which does get annoying
by resetting the qa data (we could add another manifest switch here I). On the
upside, it forces you to codify all data migration rules from the start.
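A sketch of that clone step, assuming Postgres client tools on the staging box; the shared DB name and the migrations dir argument are hypothetical:

```shell
# Hypothetical per-branch DB clone: duplicate the shared staging DB into
# qa_$BRANCH, then replay the *.sql migrations the branch shipped.
branch_db_name() {  # branch_db_name <branch>
  printf 'qa_%s' "$1"
}

clone_qa_db() {  # clone_qa_db <branch> <migrations_dir>
  db=$(branch_db_name "$1")
  dropdb --if-exists "$db"
  createdb "$db"
  pg_dump qa_shared | psql -q "$db"
  # Replay the branch's migrations against the fresh clone
  for f in "$2"/*.sql; do
    if [ -e "$f" ]; then psql -q "$db" -f "$f"; fi
  done
}
```

Rebuilding the clone from scratch on every release is what makes the migrations repeatable, at the cost of the QA-data reset mentioned above.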

3\. "I am very special". Some changes transform data in a way that requires
business processing and cannot be done with easy SQL. Sorry, out of luck - we
don't automate special cases yet. The developer has to pull the QA database to
localhost, do his magic, and push it back. Not ideal, but hasn't caused any
problems yet. If ain't broke...

~~~
vmasto
Thanks for taking the time to answer!

FWIW this topic would make for a great technical post/how-to.

I also seem to recall that Automattic does this with their front end (calypso)
which handles wordpress.com

------
dbg31415
ZenHub is free for teams under 5 people. And it's great.

* ZenHub - Agile GitHub Project Management || [https://www.zenhub.com/](https://www.zenhub.com/)

Also I use a tool that keeps labels in sync across GitHub repos.

* github-label-sync || [https://www.npmjs.com/package/github-label-sync](https://www.npmjs.com/package/github-label-sync)

Harvest for time tracking.

* Simple Online Time Tracking Software - Harvest || [https://www.getharvest.com/](https://www.getharvest.com/)

Red Pen for annotations. (Doesn't integrate with anything and that pisses me
off -- how freakin' hard would it be to build web hooks so it could tie into
Slack?! But on the whole it's got an easy-to-use interface.)

* Red Pen || [https://redpen.io/](https://redpen.io/)

While I like Slack a lot, one client I have uses Discord... and it's not bad.

* Discord - Free Voice and Text Chat for Gamers || [https://discordapp.com/](https://discordapp.com/)

~~~
anselmh
Thanks for the list, it’s quite good! However, our point is to not use third
parties for things where we don’t need them.

The point of the article is to show that you _can_ use Github solely for
project management without a third-party service.

But I completely agree with you: If you don’t like the little extra-work we do
to achieve that with just github.com, it’s probably right to add another tool
like ZenHub or waffle.io on top of it.

Making your toolset work for your team is the most important thing. For us,
it’s enough — and maybe for a few others as well. That’s why we shared our
approach :)

~~~
dbg31415
Gonna rant a bit... GitHub boards cost time. No one should use them. They're
just an inferior option to ZenHub (or even Waffle or Asana or any number of
other "board" interfaces you can tack on to GitHub). I've wasted my team's
time on GitHub boards... everyone quickly demanded we go back to ZenHub.

Most projects have multiple repositories, right? But GitHub boards have it so
that each board is based on one repo. Stupid. You want to see a project view
of all your repos at once... front-end, back-end, deployment, whatever... not
4 or 5 separate views on it.

I feel like GitHub REALLY dropped the ball on not having a board forever, then
putting out such a "beta-feeling" board. They should have just bought ZenHub
-- still time... but literally any tool out there makes the boards work better
than default GitHub.

ZenHub is free for small teams, and I'd argue that anyone can afford $5 /
month / user. Budget $150 / month / user for any team for tools -- seems about
right.

------
hgdsraj
Why don't you guys use pivotal tracker? Way better ticket management,
integrates with github

~~~
bdcravens
We've used Pivotal Tracker before, and it always felt challenging to have more
than a basic description/discussion (maybe that's the point), but most tools
like Jira, Github Issues, Trello, etc, facilitate a larger view.

~~~
wpietri
It is definitely the point.

One of the basic notions in the school of thought that Tracker comes out of is
that team interaction is valuable and to be encouraged. So making a tool that
makes it easy for people to not talk undermines the broader goal.

One of the basic insights of the Agile movement (R.I.P.) about Waterfall is
that processes structured around documentation rather than collaboration have
a lot of subtle bad effects that gradually destroy characteristics you'd like
your teams to have. E.g., responsiveness to change, systemic effectiveness,
resilience to failure, low overhead, ability to ship frequently, ability to
deliver customer value.

That's why when I set things up I generally drive things off of index cards.
[1] Those are _obviously_ insufficient, which forces people to discuss and
collaborate. (That's in contrast to more voluminous documentation, which is
_subtly_ insufficient.)

[1] E.g.: [http://williampietri.com/writing/2015/the-big-
board/](http://williampietri.com/writing/2015/the-big-board/)

------
cpr
Matches more or less what we do as a 4-man team.

I'm curious why they used appear.in instead of Slack's audio/video
conferencing? The latter is too new?

~~~
anselmh
It’s a combination of a few things. First, when we started doing calls
together Slack calls weren’t available yet for multiple people.

In our experience, Slack calls often fail for a variety of reasons. Beyond
that, we don’t want to shift too much data to Slack, and since it’s pretty
unclear how their calling technology works while appear.in uses WebRTC, we’d
rather use a service we understand than one we know little about except that
its calls aren’t too reliable. That said, we haven’t gotten around to trying
Slack’s group video calling yet. We’re pragmatic: appear.in works in any
browser (Slack calls don’t), so why change a well-working system?

