
Management and false certainty - awinter-py
https://abe-winter.github.io/dress/for/the/job/you/want/2018/06/24/certainty.html
======
mlthoughts2018
Wow, the author of this seems to have an incredibly inflated sense of the
value of his or her opinion, and seems to have very hostile preconceived
notions of teammates’ errors and shallowness of their reasons for defending
their choices.

Not to mention that this part, at least, seems completely wrong,

> “But an even slower way to uncover problems than argument is
> implementation.”

Nope. Argument and architectural documents invite all manner of bikeshedding,
often wasting time when you could just implement things and adjust the parts
that end up causing problems.

This is especially critical for taking a YAGNI approach. Architectural design
is a slow waste of time that leads you down a path of building abstractions
and extensibility hooks you don't actually need, for reasons that only become
clear after getting a minimal prototype going, looping in business-team
feedback, and seeing what fails or works.

Basically, most arguments (in the way the author means it) are a really
irritating form of premature optimization, and just implementing a first bad
version is way more efficient.

~~~
motohagiography
This reasoning works until you need security, privacy, or any cryptography in
the solution at all.

Then it is farcically irresponsible.

There is a technical play I have seen over and over again, which is just this:
create a terrible prototype, add stakeholders with promises of functionality -
use this momentum to bulldoze appeals to quality and common sense.

It is a way certain kinds of engineers create a crisis that has them in the
middle of it, and ensures they maintain control and indispensability in the
ensuing chaos.

If only there were a word or trope for it.

~~~
mlthoughts2018
> “This reasoning works until you need security, privacy, or any cryptography
> in the solution at all.”

This has not been true for me and my team generally, where privacy and
security are up-front, primary concerns given the customer data that powers
several machine learning features and data ingestion services we maintain.
Treating these aspects of the problem as just yet another thing to throw into
the prototype and iterate on has worked extremely well. Meanwhile, other
teams in my company have been caught out by nasty over-commitment to certain
security patterns that were then rendered impossible when devops changed the
container-based deployments. Had they not over-committed to a fixed,
ahead-of-time design, they might not have lost months reworking it.

> “create a terrible prototype, add stakeholders with promises of
> functionality - use this momentum to bulldoze appeals to quality and common
> sense.”

Well, I think you falsely invoke quality and common sense at the end in a way
that makes it seem like you’ve got an axe to grind against approaches that
don’t prioritize long-term design, and that last sentence doesn’t fit with
the rest.

Apart from that, I actually think the approach you describe is great and works
really well to create robust and safe software, because it actually allows the
team to generate buy-in from other business teams and product teams, and
creates chances for the team to solve things in a high-quality and best
practices focused way.

If instead you slowly start out with architectural debates, product or
business people assume the solution can’t be done on their unrealistic
timeline and just force the team to pivot to the next unrealistic project, and
you just flop around endlessly managing the tension between product people who
want crappy prototypes that can be diff’d against changing requirements, and
engineers who want stable requirements to pin down best-practices-compliant
architectural documentation.

In the end, overpromising on a threadbare prototype and working iteratively on
it is way better, and is not at all some disingenuous attempt to create a
fiefdom or a personal walled garden sort of project. It’s a technique for
actually delivering the _business_ result.

~~~
motohagiography
I often encounter engineering managers who make appeals to _the business_, as
a way to sidestep their accountability to a direct internal customer. Mainly
in orgs transitioning from waterfall to iterative, so I hope you will forgive
any excessive skepticism.

I've found that either engineering can iterate to find fit, or product can
iterate to find fit. If both are iterating to find fit, without either vision
or direction, the wheels will and do come off. Architecture provides that
direction for engineering if Product is still figuring out market fit.

If Product has fit (even customer traction), engineering can iterate its way
forward. The alternative is "game of aeron chairs," where everyone is
competing to hitch their wagon to the most prestigious customer they can to
drive their internal pet initiatives.

When you deal with security, and in particular cryptography without a clear
architecture, you are taking on risk of being spike-stripped because you
aren't playing at an enterprise level.

Architecture isn't compliance, it's design, so it's not about best-practices
and exogenous check boxes - it's about building something with an eye to
maintainability, market fit, and stakeholder acceptance.

I do think the Lean approach with validated learning over iterations is very
useful. I also think Agile is applicable in a lot of development. What
iterative development has not figured out in general is how to integrate
security for complex solutions, and these days security and privacy are the
dealbreakers.

So here I will admit one bias, because instead of merely grinding an axe I
figured I would bet the farm on a solution to this precise problem of how to
do security in iterative development environments, where the value of
architects and security analysts has been marginalized by a fundamental change
in development culture.

Hustling on an MVP isn't bad at all, but with new freedom comes some anti-
patterns that it would be to the benefit of all to articulate and recognize.

~~~
iovrthoughtthis
> I often encounter engineering managers who make appeals to the business, as
> a way to sidestep their accountability to a direct internal customer.

I can’t help but feel like this is not a fault of being “business focussed”
but instead of misunderstanding what or who the business is. Business
focussed (imo) is management speak for a focus on solving the right problem
for the right customer within the constraints of that customer.

In the case you describe the customer is internal. There seems to be an
implicit disconnect there, though. Perhaps this is an example of the lack of
information usually communicated through a sales process with a customer?

Similar to the issues that arise when you ask a friend for a favour and they
do a bad job / ghost you / bail: it’s hard to release them from the
obligation they signed up for. There is little to no explicit skin in the
game (money?). In a similar way, a development team building for an internal
customer has an unclear, poorly defined relationship with a convoluted
accountability process.

Perhaps it can also be modelled in a game-theoretic way, e.g.:

Let’s have 2 sets of players:

A. (c) Customer <-> (p) service provider

B. (c) Internal customer <-> (p) internal service provider

Each set of players plays a set of potentially unbounded games which
represent projects. The moves available for each game are:

1. Continue working together

2. Stop working together

Each move has an associated reward and cost, which has a simple numeric value
but represents the aggregate of things both monetary and socially valuable /
costly. This means they are not of the same type, though, so a single score
for a move can’t be produced just by subtracting the cost from the reward.
(This probably needs modelling better.)

Moves are represented in the form: A.c.1 which means the customer in set A
chooses to work together.

A.c.1 and A.p.1 has a cost of 2 and a reward of 3. I’m assuming that working
on the project only costs the time and money of the participants and rewards
them with a product, money and a happy relationship (socially valuable).

A.c.1 and A.p.2 has a cost of 1 and a reward of 2. I’m assuming it costs a
little socially to the relationship, especially if previous games have been
played (the project is running), and the reward is the time and money saved.

A.c.2 and A.p.1 is the same as above.

A.c.2 and A.p.2 has a cost of 0 and a reward of 2. I’m assuming that both
players agreeing not to work together is amicable and so costs nothing
socially or monetarily.

B.c.1 and B.p.1 has a cost of 1 and a reward of 3. I’m assuming that the cost
is just time spent on the project and that the reward is a better
relationship between the players, and job security for both players.

B.c.1 and B.p.2 has a cost of 3 and a reward of 1. I’m assuming the cost is
to the relationship and the job security of B.p, and the reward is the time
saved.

B.c.2 and B.p.1 is the same as above, except there is extra cost to B.c.

B.c.2 and B.p.2 has a cost of 2 and a reward of 1. I’m assuming that the cost
of deciding not to work together internally is a risk to both players’ job
security and that the reward is time saved.

The goal is to maximise reward while minimising cost.
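The payoff table above can be sketched in a few lines of code. This is a
minimal sketch using the comment's own numbers; the names (`PAYOFFS`,
`best_outcome`) are hypothetical, and "net = reward - cost" is the crude
simplification the comment itself flags as inadequate:

```python
# Payoff model from the comment. Key: (set, customer_move, provider_move),
# value: (cost, reward). Moves: 1 = continue working together, 2 = stop.
# Costs and rewards aggregate monetary and social value, so treating them
# as one numeric unit (net = reward - cost) is a stated simplification.
PAYOFFS = {
    ("A", 1, 1): (2, 3),
    ("A", 1, 2): (1, 2),
    ("A", 2, 1): (1, 2),  # "the same as above"
    ("A", 2, 2): (0, 2),
    ("B", 1, 1): (1, 3),
    ("B", 1, 2): (3, 1),
    ("B", 2, 1): (3, 1),  # plus an unquantified extra cost to B.c
    ("B", 2, 2): (2, 1),
}

def best_outcome(player_set):
    """Return the (customer_move, provider_move) pair with the highest
    net reward for one set of players, under net = reward - cost."""
    candidates = {
        (c, p): reward - cost
        for (s, c, p), (cost, reward) in PAYOFFS.items()
        if s == player_set
    }
    return max(candidates, key=candidates.get)

print(best_outcome("A"))  # (2, 2): external pair's best net is to part ways
print(best_outcome("B"))  # (1, 1): internal pair's best net is to continue
```

Even this crude version shows an asymmetry under the assumed numbers: the
external pair can walk away cheaply, while the internal pair's best net comes
from continuing to work together, which loosely echoes the point about
internal relationships having no clean exit.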

This is a pretty poor model, I think, but it goes some way toward expressing
my thoughts on the issues with internal projects, and potentially why
appealing to a third “arbiter of vision”, e.g. the business, to justify
things in the project may be tempting.

I composed this on my phone on my hour-and-a-half journey to work. I wish it
were easier to compose meaningful responses to comments online. It likely
puts other people off even replying.

------
tyingq
Funny. This is why orgs like Bain and E&Y make tons of money. They give you
the answer, and if things don't work out, you have a scapegoat.

------
baxtr
I read the piece, but in all honesty, I have a hard time understanding what
the point is here. Express certainty if you’re leading?

~~~
humanrebar
It's an argument for vigorous, not-personal, dialectic in design and planning
meetings. It should be OK to make your best case. And it should be OK to get
skeptical questions about your case. Or to hear an equivalent case for an
alternative.

It's an aggressive and competitive take on what a healthy collaborative
environment should look like. I've seen other approaches, but, to be honest,
they sort of exchange the aggressive for the passive-aggressive.

A key line in the post:

> ...an even slower way to uncover problems than argument is implementation.

The thesis is that it is expensive to avoid argument, or to do it in an
incomplete or lackluster way.

------
anotheryou
Do you argue more with those above or below you?

~~~
triztian
What about peers?

~~~
anotheryou
Yes, peers ideally. I actually edited my post to clarify that the ideal is to
be a team member of both groups, but then discarded it to keep the comment
short.

My question remains whether there are more conflicts with those who have the
ultimate say in high level things or with those who handle or do the
implementation.

------
dalbasal
Another way to put it might be creating a functional, collective "truth." We
are really good at X. Y is the future. We are going to do ABC. 1...2...3... Go
Team! It doesn't need to be true, but it does need to create a unified
mission, an esprit de corps.

92.6% of all leadership advice is about creating this sort of common truth, so
that everyone can cooperate: Team building. Mission statements. Voices of
authority. Core values. Crisis of leadership. Everyone sweeps the shed.
Extreme ownership...

There is a lot of worry these days, largely from the ideological centre, that
we can't achieve a (public-political) consensus on "basic truths" anymore.
There is no ground to stand on in any kind of divided discussion. What
they're referring to is an absence of "basic truths."

Anyway, truths are "expensive." We have 3 scalable but highly imperfect ways
of creating truth in modern society: democracy, justice & science.

Democracy is supposed to settle debates with the subjective will of the
people. But, it's divisive as well as irrational and convinces half the
population that the other half is evil.

Science is genuinely good at determining truth, where the truth is
scientifically determinable. But, science has also produced pseudo-science.
Basically bad science that can answer questions which good science cannot:
Freudianism/psychology, Marxism, liberal economics/metaphysics and all the
"social sciences" claiming to have discovered the truth about women's issues,
colonialism or whatnot.

Law... Well... This is the most ancient and least believable source of truth.
Common law and tiered court systems are almost modelled on cognitive
dissonance, making sure new truths do not contradict old ones.

