
A parable about problem solving in software development - Zarkonnen
http://www.drmaciver.com/2013/03/what-the-fuck-are-we-doing/
======
mattmanser
I honestly couldn't make head nor tail of this.

It's certainly not a parable as there's no lesson drawn.

Without any code specifics this comes across as a bunch of smart people who
thought they knew how to program but didn't constantly reinventing the wheel
even though everything described was a solved problem.

To me anyway.

~~~
andyjpb
How about "Obviously this is how we should have done it in the first place.
It’s not just obvious in retrospect, it should have been obvious in the
beginning. We were just too focused on fixing this one problem with our
current system rather than calling the system itself into question to see it."

~~~
DRMacIver
What, you expect us to read the article to derive the lessons contained
therein? What lunacy is this?!

~~~
mattmanser
I read the article. 3 times in the end.

I now realise quite a lot of it was meant to be sarcastic from your other
comments. Which clarifies a lot of your points. I'm a Brit and employ that
humour myself a lot. But it _really_ doesn't work online or in texts or in
emails. _Really doesn't work_.

I'm sure your story works well in the pub because you're there to embellish,
explain and make obvious which bits are jokes and which aren't. Online, it's a
real mess, it's actually two stories and it's very confusing.

Story 1 - We started with a HTTP API and the end of the story is we ended up
connecting directly to the DB. No explanation is given for why the initial
decision was made; it seems to make no sense. This initial decision and final
revelation are actually totally moot to story 2.

Story 2 - To try and 'fix' performance problems with the opening of story 1,
you went down a rabbit hole of fixing the problem in front of you. This led to
hilarious craziness. Understandable and I think this was actually supposed to
be the main thrust of the story, but your opening was about a completely
different story.

You also veer off those two stories a _lot_, constantly digressing into minor
plots without any good reason. So not only did I read 'so, there's this
tortoise and there's a hare, detail, detail, detail, and so it turned out, the
emperor wasn't wearing any clothes!', but I'm also wondering where all the
minor plot points are taking me, which is pretty much nowhere. They were
clarifications that don't clarify at all, they just distract.

~~~
opminion
_but your opening was about a completely different story_

The second paragraph starts with: "It’s a parable about what happens when
you’re always solving the problem right in front of you rather than
questioning whether this is a problem you actually need to be solving."

------
drewcrawford
This was a very strange story to see, because my life is almost a mirror image
of the author's. Except my harebrained ORM-powered RPC message-queue rest-was-
too-slow experiment is working so far. Consider this a counterstory.

The author lists two problems with the old system: code re-use and speed. But
he doesn't really say how the solution is going to improve on the status quo,
other than just by throwing programmers at the problem and hoping they improve
on REST. Now maybe they had clear solutions and maybe they didn't--but the
post doesn't say.

I sympathize with the author's problems. But I also started with specific
clarity about the problems and strong solutions about how in practice to
improve. For example, the speed problem for my use case was very specifically
that latency was too high. Latency was too high specifically because of short-
lived connections, the SSL handshake, and TCP overhead. So I designed a system
very specifically around long-lived connections, a non-SSL based crypto layer,
and UDP. It's a full order of magnitude faster; it actually changes the way
you use network calls because they are so cheap now. As time has gone on I
have extended the speed improvements, but it was way better than the status
quo on day one.

Similarly for code reuse, I had a very narrow set of problems surrounding
reusing across client/server only very specific parts of very specific
classes. As a result, rare is the change that requires moving client and
server in lockstep. I don't think I did it in the month of February. And the
productivity gain was great from day one.

I am the first to admit that I've had some bugs in the complicated pipeline
that require Deep Magic to comprehend. I have one or two open right now. But,
if you have a 10x runtime improvement and a 3x development improvement,
spending a day or two learning the ins and outs of Deep Magic is not the worst
thing you can do.

I think the moral to this story, if there is one, is don't fly by the seat of
your pants. Don't write a system that is faster, write one that is exactly
92.56% faster by the application of two specific techniques. Don't write an
RPC system that eliminates the barriers for sharing arbitrary code, write an
RPC system that eliminates two particular barriers for classes that meet three
known requirements that most people in the organization are annoyed about. If
you don't know where you're going, you will never find out if you get there.

~~~
DRMacIver
Did you miss the part where I explicitly said in multiple places that there
was an obvious simple solution and that this post was an example of what not
to do?

------
Patient0
He lost me at:

"At some point it occurred to one of us that the reason our HTTP API was slow
was of course that HTTP was slow. "

Why "of course" is HTTP slow? There's nothing intrinsic to HTTP that I am
aware of that would make it "slower" than any other protocol going over a TCP
socket...

~~~
DRMacIver
For our cousins across the pond, this is an instance of a type of humour
called "Sarcasm", in which you state something directly in order to imply that
it is obviously wrong.

~~~
NateDad
Yeah, FWIW, I missed the sarcasm, too. Such is the trouble with a text only
format.

~~~
DRMacIver
Did you miss the sarcasm even in retrospect? Because I don't really see how it
can be possible to read this article and not understand that I'm pointing out
that all of these things we did are terrible ideas and you shouldn't do them.

~~~
Patient0
Yes the overall tone of the article was clear that it was "this is what you
shouldn't do" - but I read it as implying that somehow this rube-goldberg-
esque contraption was arrived at by a series of logical and reasonable steps,
one of which would have been "re-write the REST layer because HTTP is slow".
So it was kinda implying that this was actually a reasonable step to take in
isolation...

My problem with the statement "HTTP is slow": All of the reasons why an HTTP
connection might be giving poor performance (serialization costs, SSL
encryption costs, TCP latency) are not intrinsic to HTTP and would be a
problem for any protocol that goes over the network. So it's almost never
accurate to say "HTTP is slow".

Anyway thanks for sharing the write up. It's easy to sneer and say "what a
bunch of clueless idiots!" but when I think back on some of the stuff I did
when I was first starting out as a programmer and thought I was shit-hot:

* My first ever experiment with Yacc ended up in a "scheduler" with a custom dependency language for expressing the dependencies, used to schedule our whole overnight batch processing

* After learning about smart pointers in C++, I knew that they were the way to go! I happily wrote: std::vector<std::auto_ptr<T>> - it was only a year after I had resigned that my (ex) boss, who was still on friendly terms, called me up to query why the latest version of the C++ compiler wouldn't compile this code... "Did I write this? What the fuck was I thinking!?"

* I think my first Java class was a very complicated thing that, at the end of the day, allowed one to "more easily" write code to read a CSV file...

Well, it's all a learning experience! </shudder>

~~~
DRMacIver
I'm not sure I would call the steps "reasonable" per se. They're much more
obviously bad ideas in retrospect than they were at the time, but a lot of
them should have been pretty obviously bad ideas at the time as well.

On the HTTP is slow thing: I think I was pretty clear in the write up that we
had failed to performance test and it wasn't at all clear that our replacement
was faster. I think these are sufficient disclaimers that I was pretty
obviously labelling it "HEY DON'T DO IT THIS WAY" :-)

~~~
triplesec
this is where code gets to be a bit like dating and general human interaction.
"it seemed like a good idea at the time".

------
NateDad
Wow, when did this guy start working at my company?

This is like 75% of what happened at my company, except we actually made a
whole (custom, of course) framework for dynamically loading code on the server
and client.... we also never did the last step of having the clients talk to
the DB (the server federates access based on a complex authorization model
that can't reasonably be done directly against the database). We are currently
rewriting the client-server communication stack (I'm still not sure why).

And next month we're starting a full rewrite from scratch to put it in the
cloud. Whee!

~~~
DRMacIver
May I extol the virtues of quitting and finding a better job to you? :-)

------
seguer
I was headed down a similar path myself, but luckily I was writing a
completely new library that would abstract a lot of key-value stores. Before
we used it in production, I performance tested it - HTTP is really slow. The
end result was a client library that talked directly to the stores using
adapters for the abstraction.

~~~
Patient0
"Before we used it in production, I performance tested it - HTTP is really
slow"

Slow compared to what? direct memory access?

If you're going to say "HTTP is really slow", I contend you probably meant
"communication over a network is slow"...

~~~
jeremyjh
Well, he didn't express this very well, but I don't think he is quite that
naive. Bytes move across a network medium at the speed of light regardless of
whether the electrons and pulses encode HTTP headers or not. And the latency
of network-layer switches and routers is similarly unencumbered by the
presence of slashes in resource names. However, it is _typical_ that HTTP-
based client/server applications have more layers of serialization and a
heavier payload than you'd find with a typical binary database protocol. Plus
of course your HTTP server probably turns right around and talks to a database,
adding a whole extra layer of serialization and network data transport.

You are correct that this is not the case per se, and I think there is an
implication here that a lot of people really do not think very much about the
massive overhead of serialization in general - instead they use a heuristic
like "HTTP is slow". And while that heuristic does not capture an accurate
model of the problem, it does often point in a direction that will prove
fruitful as they grasp about blindly in possible solution spaces.

------
triplesec
Quote of the week: "What follows next is the point at which it all starts to
go completely tea party."

(para) "At the moment what we have here is a slightly baroque architecture,
but it’s fundamentally not that much worse than many you’d encounter in the
wild. It’s not good, but looking at it from the outside you can sortof see
where we’re coming from. What follows next is the point at which it all starts
to go completely tea party."

------
markokocic
I hate to read that "one caffeine fueled weekend later ..." in a story like
this.

Is a weekend the only time one is allowed to commit fundamental code changes
to the project? Is a "lone cowboy" coding all alone without any consultation
and coordination with the rest of the team really a good role model in any
development environment?

This is just a small nitpick. The rest of the story, however, is good and
educational.

------
contingencies
This post is a logical mess. The author might do better writing another one in
a few years or learning to use diagrams to convey architecture (but for god's
sake don't bother with UML!).

Architecturally, the changes discussed seem to be (1) From a standard protocol
to some customization for premature optimization purposes (unmeasured
results), and back to a standard protocol. (2) From service abstraction
through higher-layer communication to service-dependency through direct
database connectivity. (3) From code duplication between client/server to the
use of a centralized model. (4) From multiple codebases to a single codebase.

My take: (1) Is premature optimization, needless complexity and was ultimately
a waste of time. (2) A change so completely fundamentally different with a
view towards system architecture (security, resource allocation,
proxy/WAN/potential-client-range, etc.) that it is hard to analyze; ultimately
it suggests a complete misunderstanding or rescoping of the original problem.
(3) Generating code or behaviour from models is very often a great idea. While
up-front complexity may appear slightly larger, it means you can retarget your
code to other languages/platforms much more easily, without the inefficiencies and
syntax-learning overheads of partial-but-similar strategies through runtime
abstraction libraries. (These are occasionally useful for _very-specific_
requirements, but that's a tangent.) (4) See 2.

~~~
rossjudson
Your insult is a logical error. You might do better writing another one after
googling David Maciver.

Always remember! Google before insulting!

My take: You failed to note the confessional tone of the piece. You also
failed to note that even very smart people (see Google point above) can
sometimes get sucked into wise-by-step, terrible-in-aggregate decisions.

~~~
contingencies
No insult intended. However, congratulations on exhibiting the lowest form of wit.
I was just trying to extract some meaning from the piece that was practical.
Sorry to ruffle anyone's feathers... I'll get back to duckduckgo-ing
'totalitarian panopticons of childlike presentation'.

