
On Abstraction [video] - tosh
https://www.youtube.com/watch?v=x9pxbnFC4aQ
======
agentultra
In the bridge analogy he implies that software engineering cannot be more like
civil engineering due to changing requirements. He stretches the common
analogy of building bridges by suggesting it's normal to add a car-wash to the
middle of the bridge before you've finished developing it. This is the norm in
software development, so there's no way we can engineer a software solution
with the same rigour as other, proper, engineering disciplines!

I _do_ believe software engineering can be more like civil engineering and
that there are organizations around the world which will license _capital-P_
engineers in software. Just because requirements change does not mean we
cannot write mathematically precise and robust models and definitions of
systems. It also doesn't mean we cannot, a priori, build software that is as
robust to operating conditions as a bridge.

What we're doing as an industry when we run software in production without
clearly known limits is simply taking risk. We could know what the tolerable
limits of a particular server are, or precisely define the probability that a
given change will reach quorum within a particular bound of time. However, for
a good number of projects, we're comfortable accepting that our customers
will discover those omissions for us and no one will get hurt.

However, if you're writing software to control robotics that help people walk,
or flight-control software to safely land multi-million-dollar jets, you're
probably going to take more precautions to design and test your software and
create detailed, precise specifications.

 _Update_ : To take the bridge analogy in a different direction: it's more
common in the software industry to build one of two bridges: the one most
people use, which could fall down at any time, _or_ the one that costs
significantly more but comes with a small team of developers waiting to help
you back up if it does.

 _update update_ : The point is you can write sound models of your
abstractions and _prove_ properties about them with sufficient tooling. Much
the same way that a blueprint is an abstraction of the bridge that is built,
we have ways of writing blueprints of software systems.
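
As a toy illustration of that claim (my own sketch, not from the talk), here's
a "blueprint" in Lean 4: a counter abstraction clamped to a cap, with a
machine-checked proof that the property holds for every possible state.

```lean
-- A toy abstraction: a counter that is clamped to a maximum value.
def cap : Nat := 10

def bump (n : Nat) : Nat := min (n + 1) cap

-- A proved property of the abstraction: no matter the input state,
-- `bump` never exceeds the cap. The proof is checked by the compiler.
theorem bump_le_cap (n : Nat) : bump n ≤ cap := by
  unfold bump
  exact Nat.min_le_right _ _
```

Trivial, of course, but the same workflow scales up: state the invariant,
and the tooling refuses to accept the design until the proof goes through.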

~~~
wvenable
I think software engineering cannot be more like civil engineering because
software is the result of the decision making process. In civil engineering,
the blueprint is the result of the decision making process and the bulk of
time and cost is building from that blueprint.

You cannot compare the process of building a bridge to building software; you
could compare the process of constructing the blueprint with building
software. And I think the scale of complexity of most software far exceeds the
complexity of most blueprints.

~~~
miceeatnicerice
No software is creatively fathomed out of nothing. Opinionated frameworks
abound, pre-ordaining the little manoeuvres needed to wire them up properly,
and solving familiar problems with established patterns. If these have all
been decided beforehand, even if only by convention, then what are they but
blueprints?

What difference there is works in software's favour: there's a frothy top
layer of most projects where developers can indulge their whims/hone their
abilities and approaches. But it's not the whole.

~~~
Jtsummers
They aren't blueprints, they're foundations and components if we're going to
stick to this analogy.

Blueprints are specifications and design documents. Most software lacks these.

~~~
wvenable
Specifications and design documents are merely estimations. If you've made
every single decision possible, you've written the software.

If you have blueprints for a bridge and get two different companies to build
it, you'll get the same bridge both times. The differences, if any, would be
minor.

If you give specifications and design documents to two software teams you
could get radically different products that look nothing alike and it's
entirely possible that neither one of them will satisfy the client's needs.

~~~
agentultra
Give the two teams a formal model or proof of the system and they will either
succeed or fail against it.

Give them a specification in prose and they will have a little too much wiggle
room. Such specifications are useful to a degree but I look at them like
sketches on a napkin.

If you use a more formal method of mathematics as your specification then you
can be more precise about the invariants that matter and model your system
more faithfully. And with a good proof assistant or model checker the computer
can even help you catch flaws in your design that you would never have been
able to think of on your own.

It's true that the source code is a proof of something. But it often helps to
know whether you've built the right thing, and that it does what you think it
does.
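
To make "the computer can help you catch flaws" concrete, here's a
hand-rolled miniature of what a model checker does (my sketch, not anything
from the talk): exhaustively explore every reachable state of a model and
report the first one that violates an invariant. The two-client lock model
below is deliberately flawed, so the checker finds a counterexample.

```python
from collections import deque

def successors(state):
    # state = (client0_has_lock, client1_has_lock)
    for i in (0, 1):
        if not state[i]:
            nxt = list(state)
            nxt[i] = True  # the flaw: acquire without checking the other client
            yield tuple(nxt)

def check(initial, invariant):
    """Breadth-first search over all reachable states of the model."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return state  # counterexample: a reachable state violating the invariant
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None  # invariant holds in every reachable state

def mutual_exclusion(s):
    return not (s[0] and s[1])

print(check((False, False), mutual_exclusion))  # → (True, True)
```

Real tools like TLC (TLA+'s checker) do essentially this, plus liveness
checks, symmetry reduction, and traces showing *how* the bad state is reached.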

~~~
wvenable
Given a sufficiently complete model or proof of the system, you don't need the
teams
at all -- they can be automated away.

Getting your model right is as hard as, if not harder than, getting the
software right in the first place. The problem hasn't changed; you've just
added more layers (and more cost) in the hopes that doing it twice,
differently, eliminates most of the problems.

~~~
agentultra
For some trivial small problems writing a model in TLA+ or Lean is definitely
more costly than just writing the software even if you get a few things wrong.
It'd be like commissioning a blueprint for your shed in the backyard. For
those sorts of tasks it is sufficient to just write a few tests and call it a
day.

However, for more complex services that manage several clusters of resources
amongst tens of thousands of tenants or more, there are invariably going to be
errors. The kind of error you see might require a particular 53-step sequence
of events to change your state before it's hit... but if you're servicing
> 1M requests a minute, that ends up being frequent enough to be bothersome.
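
To put rough numbers on "frequent enough to be bothersome" (the per-request
failure probability here is an assumption I'm inventing for illustration, not
a figure from the comment):

```python
# Back-of-the-envelope: how often does a "rare" bug bite at scale?
# Assumption: the bad 53-step interleaving is hit on one request in a billion.
requests_per_minute = 1_000_000
p_bad_interleaving = 1e-9  # assumed probability per request

requests_per_day = requests_per_minute * 60 * 24   # 1,440,000,000
expected_hits_per_day = requests_per_day * p_bad_interleaving
print(f"{requests_per_day:,} requests/day -> "
      f"~{expected_hits_per_day:.1f} expected failures/day")
```

Even a one-in-a-billion state, at that volume, becomes a daily pager alert.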

Even more so if you're working on a memory controller in a new hardware
platform that is expected to ship in a few million units. It'd be nice to know
that you have strong evidence that your system is correct.

And developing models is hard, but so is thinking. Nobody said it was easy. But
one shouldn't say that we can't engineer robust software systems. It's just
patently false.

------
tosh
Zach Tellman's book
([http://elementsofclojure.com/](http://elementsofclojure.com/)) is also
mentioned in the talk and worth reading. I'd argue it is even worth reading if
you don't use Clojure.

There is a free example chapter on naming.

------
huntie
I thought this was a good talk, but it felt like an overview of chapter 2 of
SICP. Not a bad thing, but if this interests you, give SICP a read as well.

~~~
prospero
In the first ten minutes of the talk, the SICP definition (which is taken from
the Hoare paper) is examined and discarded as too limited.

------
contingencies
[https://en.wikipedia.org/wiki/Naming_and_Necessity](https://en.wikipedia.org/wiki/Naming_and_Necessity)
seems to be an overview of the "fruits of 100 years of analytical philosophy"
mentioned as inspirational for the naming chapter.

------
Upvoter33
This is pretty fascinating, and, truth be told, I am very much not into
Clojure or anything related (quite the opposite, really). I wish there were
more thinking and reasoning about the basics of abstraction, naming, etc.

~~~
leblancfg
There wasn't a single line of Clojure in that hour-long talk, I don't get your
point. Do consume the content before you comment on it.

------
bluetwo
I thought the part where he talked about over-engineering being a symptom of a
lack of understanding of the assumptions was an interesting way to look at the
problem.

