
Using Bots to Solve Developer Drudgery - avitzurel
https://www.kensodev.com/posts/2019/01/18/using-bots-to-solve-developer-drudgery/
======
silversmith
This just reads like a prequel to a "why PR reviews are useless" blog post.

If a bot "approves" a PR in this fashion, it's not an approval, you've just
set up a "changes to this file are not worth reviewing" rule, and handed out a
green check-mark participation award. Just let your developers force-merge such
non-issue pull requests. The very essence of PR approvals is someone else
looking at your masterpiece, thinking about it and pointing out the things you
forgot. If you don't have time for that, be honest with yourself and let go of
the mandatory approval requirement.

As for the latter part, the instant PRs and merges for updates,
congratulations on re-implementing a monolith while calling it a microservice.

Sincerely, \- A very cranky developer currently very slowly steering a
carelessly reviewed codebase back on track.

~~~
avitzurel
It's not that PRs are useless at all. This library has our most important code,
since it's used across all apps.

However, the JSON definitions are just not that important, that's a fact. We
can ping engineers and have them lose focus just to get an approval, or we can
decide to automatically approve if the change is under directory "X".

I would argue that we did not implement a monolith at all, we just have a
faster cadence for updating dependencies than most.

With the way this works, you can't forget anything, and this is why this
automation is so powerful: if you change the SDK, you know that everything
using this SDK is updated with the latest code. It will either fail tests
immediately, or you can just move on with your life and focus on more important
things.
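
To give a rough idea of the shape of that rule (this is a sketch with PyGithub,
not our actual bot; the directory name, token, and repo are placeholders):

    from github import Github  # PyGithub

    AUTO_APPROVE_DIR = "definitions/"  # placeholder for directory "X"

    def should_auto_approve(pr):
        # approve only when every changed file lives under the whitelisted directory
        files = list(pr.get_files())
        return bool(files) and all(f.filename.startswith(AUTO_APPROVE_DIR) for f in files)

    def handle_pr(token, repo_full_name, pr_number):
        pr = Github(token).get_repo(repo_full_name).get_pull(pr_number)
        if should_auto_approve(pr):
            pr.create_review(body="auto-approved: generated definitions only", event="APPROVE")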

~~~
wickedOne
which means your tests have full coverage of code and scenarios?

------
Jtsummers
I like the intent here, but I think some things should be clarified.

If the Swagger files are being autogenerated, then why would you ever review
them? You should _test_ them, but reviewing isn't appropriate. You should
review the things that generate them instead. When they change, that's a build
artifact (I don't review my generated libraries and binaries, I only test them
before delivering them).

On the commitbot, I hope you're not doing that in deployment without running
it through test first. Can it also generate issues if a particular service
fails its tests after the update?

~~~
jbmsf
re: why review; I commented about this above, but:

1\. We prefer to be able to say that _everything_ is reviewed, but that in
this particular case the review works automatically. Requiring a PR review is
important to our security model (and therefore, our business). Skipping
review, especially in a library that contains _both_ these definitions and the
code that leverages them, would be too cavalier for us.

2\. We want to give developers control over _when_ these changes are merged.
Many API changes are coordinated with other changes (e.g. in integrating
services). We _could_ require backwards compatibility, but we find this too
onerous in this layer of our stack.

re: test first; we have a very thorough process for qualifying releases:

\- As much as we like continuous deployment, our (enterprise) customers aren't
particularly interested in continuous changes to their experience, and the
$VALUE of a customer to us is far too high to take that kind of risk.

\- Practically speaking, we adopt the git-flow model and have separate
deployment environments for integration (`develop` branch), QA (`release/*`
branches), staging (`tags/*`), and then promote changes out to production,
subject to suitable approvals/controls. I'm fully aware that this approach may
sound cumbersome, but it's not in our line of business.
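
To make the branch-to-environment mapping concrete (a sketch only; the
patterns and environment names are illustrative, not our real pipeline
config):

    import re

    # Illustrative branch-pattern -> deployment-environment mapping for the
    # git-flow setup described above; the names are placeholders.
    BRANCH_ENVIRONMENTS = [
        (r"^develop$",      "integration"),
        (r"^release/.+$",   "qa"),
        (r"^refs/tags/.+$", "staging"),  # tags get promoted towards production
    ]

    def environment_for(ref):
        for pattern, env in BRANCH_ENVIRONMENTS:
            if re.match(pattern, ref):
                return env
        return None  # feature branches deploy nowhere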

------
tabtab
Automating repetition of code is a sign that your stack is poorly factored for
your needs. Automating code repetition up front won't save you from
maintenance repetition, and maintenance is about 2/3 of average product cost.

The most productive stacks I have worked with trimmed out stuff that wasn't
needed for our particular organization. Every organization has common patterns
they are used to and like. But, off-the-shelf stacks have to cater to a wide
variety of organizations and organization preferences. Get a _tunable_ stack
and trim trim trim, and refactor duplication. Learn from each generation of
product releases and make your APIs better for the next.

Make special behavior as-needed plug-ins, not part of the main stack. Don't be
a feature hoarder in the main stack, but keep a library of handy add-ons that
are known to fit your stack when needed.

As an example of flexible factoring, you shouldn't normally have to specify
the title and max length of an input field more than once (D.R.Y. Principle).
If you have to keep doing it twice, find out why and factor it out. However,
keep in mind that sometimes you do need to override the default title for
specific needs such that you shouldn't remove the _potential_ for task-
specific customization in your effort to refactor. Use shared defaults for a
field, but don't force shared defaults.
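
A tiny sketch of what I mean, with made-up field names (the shared defaults
live in one place, but a caller can still override them):

    # Hypothetical shared field registry: title and max length defined once (D.R.Y.)
    FIELD_DEFAULTS = {
        "customer_name": {"title": "Customer Name", "max_length": 80},
        "po_number":     {"title": "PO Number",     "max_length": 20},
    }

    def input_field(name, **overrides):
        # start from the shared default, but allow task-specific customization
        spec = dict(FIELD_DEFAULTS.get(name, {}))
        spec.update(overrides)
        return spec

    input_field("customer_name")                        # uses the shared title/length
    input_field("customer_name", title="Bill-To Name")  # override only where needed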

You should spend most of your coding time plugging parameters into well-tuned
APIs, not copying and pasting repetitious code. Most of your actual coding
should be for behavior that is unique to a project. Think twice, type once,
instead of the other way around.

------
wickedOne
i am all for having your ci handle the quality assurance which can be
automated (running unit tests, static code analysis, etc.) to reduce the amount
of work any reviewer has to do on a pull request.

nevertheless, auto approving and merging pull requests and also auto bumping
dependencies with only a bot as reviewer does sound like a very bad idea in
general. how does one make sure the bump required for service A doesn't break
service B, which also relies on that dependency? admittedly, testing someone
else's code is not the most exciting part of being a developer, but it's a
necessary evil, isn't it?

to be honest, this workflow sounds like hotfix hell to me when you'd combine
it with continuous deployment...

------
outworlder
Taken in isolation, the poster's example seems silly. It's just a PR, right?
We'll add some process to ensure people look at it in a timely fashion yada
yada yada.

Ok, but what about _all these small tasks over there_? They may also be small,
but you have to think about them and work on them. Every second you are working
on them you are not adding any value. Your brain gets tired, which makes
everything feel more difficult to do and increases friction on all tasks.

One thing that can't be overlooked: deploying changes to bots should also be
seamless and automated as far as humanly possible. Otherwise they too will
fall behind.

~~~
avitzurel
Bots are automatically deployed via GitHub too. They are just Lambda functions
behind an API Gateway.

The library that interfaces with the bots works the same way: once it's updated,
ALL of the builds will automatically use the latest version.
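
Roughly the shape of such a bot endpoint (a sketch, not our exact code; the
pull_request handling is a placeholder, and header casing can differ by API
Gateway payload version):

    import json

    # Sketch of a webhook-receiving Lambda behind API Gateway (proxy integration).
    def lambda_handler(event, context):
        payload = json.loads(event.get("body") or "{}")
        gh_event = event.get("headers", {}).get("X-GitHub-Event")

        if gh_event == "pull_request":
            pr = payload.get("pull_request", {})
            print("PR #%s %s" % (pr.get("number"), payload.get("action")))
            # hand off to whatever bot logic applies (approve, comment, merge, ...)

        return {"statusCode": 200, "body": "ok"}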

~~~
outworlder
Did you roll your own? I was looking for some usable bot frameworks or
libraries for lambda, but I couldn't find anything. I don't really have the
time budget to roll my own stuff from scratch (although contributing back would
be feasible).

~~~
lozenge
The GitHub API has a good Python wrapper, so it is usually not too difficult to
roll your own. In this case you can probably get the file list without a Git
repo checked out at all. The key is to make every event from GitHub feed into a
"reconsider_pr" function that refetches all info on the PR, rather than
keeping state and trying to minimise your API requests.
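
Something like this, sketched with PyGithub (the token, repo name, and
`apply_bot_rules` are hypothetical placeholders):

    from github import Github  # PyGithub

    gh = Github("YOUR_TOKEN")        # placeholder token
    repo = gh.get_repo("org/repo")   # placeholder repository

    def apply_bot_rules(pr, files):
        # placeholder for whatever rules you run (approve, label, comment, ...)
        print(pr.number, files)

    def reconsider_pr(number):
        # stateless: refetch everything about the PR from the API on each event
        pr = repo.get_pull(number)
        files = [f.filename for f in pr.get_files()]  # no local checkout needed
        apply_bot_rules(pr, files)

    def on_github_event(event_name, payload):
        # funnel every PR-related webhook into the same reconsideration
        number = (payload.get("pull_request") or {}).get("number")
        if number is not None:
            reconsider_pr(number)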

However, it looks like you could build this by combining bulldozer & another
bot from Palantir, if you really want to go no-code.

