
Programmer as wizard, programmer as engineer (2018) - matfil
https://www.tedinski.com/2018/03/20/wizarding-vs-engineering.html
======
markmiro
It would be interesting if programmers took "sketching" to be a valuable and
necessary part of programming. It's common practice for painters to make a
pencil draft first. It's common in industrial design to produce prototypes.

However, when it comes to code, we treat it similarly to writing. We may have a
first draft, but the final version is often just a cleaned-up draft. I could be
wrong; I never wrote professionally.

It would be interesting if we had languages that would be great for
prototyping but designed to be unusable in production. However, I'm having a
hard time imagining properties that don't already exist in languages like
Python and JS. You'd want weak typing, of course, and you'd be OK with poor
security. Maybe we'd even add some nice features that would make the language
run slowly, since running in prod would be a non-goal.

~~~
biztos
There is a saying, "real writing is rewriting."

Ideally the final draft is something that has been very aggressively
refactored, multiple times, with input from a Refactoring Engineer.

(Did I just invent a new job category? I don't think there's currently any
equivalent of an Editor in the Software Engineering world; code review is a
chaotic approximation.)

Unfortunately there is usually pressure, and maybe also desire, to just make
it work, maybe with tests, and move on to the next thing, at least in
companies.

~~~
LoSboccacc
"Build one to throw away"

\- the mythical man month

~~~
stronglikedan
"Can't we just spruce up the working prototype and deploy it to production?"

\-- Management

~~~
AstralStorm
If it only were an actual question...

~~~
LoSboccacc
I've learned over time never to make a working UX in my prototypes. Just raw
data showing the innards working, and that's it. A functional or pretty UX just
gets shipped to prod no matter what.

------
zimablue
I'm sceptical of his argument that "we've gone from dynamic being trendy back
to typed (Java), because 'people' had to maintain dynamic codebases".

An equivalent, but probably equally not-the-real-explanation, argument would be
that we've gone from an era of opportunity into an era of oligopoly as the
internet titans have emerged, and so the "coolest kids" whom everyone
cargo-cults have gone from being fast-growing startups to members of the big-5
elite. It's much the same as his argument, just with detail on who his "people"
are and why, but it changes the implications.

I think it's more, as he touches on, that the two are converging. I think
eventually some sort of pluggable typing will win (the "proofs" on your system
won't be a single compilation pass; there might be different typing for
different parts of the system, and specific proofs run between compile time and
runtime as the two blur), which will look more like gradual typing.

~~~
jondubois
According to the definitions provided, it seems that engineering is the wrong
approach for the vast majority of systems (especially popular web-based
platforms). We should be leaning more towards wizarding.

Most systems change all the time. So long as a company needs executives to
make decisions and steer in different directions depending on the economic
conditions, companies also need the flexibility to change their code.

Even the Linux Kernel which is now decades old is still being changed all the
time. If you use tools which assume that every line of code you write is not
going to change, then unless you're programming an aircraft or a medical
hardware device you're probably using the wrong tools.

You should not assume that just because some low level module is deeply nested
within the code, that it means that it should not be changed or thrown out.

That's why I prefer dynamically typed languages for web systems; they start
with the right assumption about the ever-evolving nature of the project.

If JavaScript had been designed to be statically typed from the beginning, web
browsers would not have attained the usefulness or popularity that they have
today.

~~~
dtech
I disagree with your assertion that dynamic languages are easier to change. In
my experience, programs written in a typed language are much easier to change,
because IDEs can tell you exactly how something interconnects (and thus where
breaks might happen), and the compiler can give you some confidence that you
didn't miss anything, or throw an error if you did miss something.

You can partially provide the same benefits in dynamic languages with tests,
but at that point you're paying the same cost as with a type system.
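
A toy TypeScript sketch of that effect (all the names here are invented for
illustration): change a field in the interface and the compiler flags every
usage site before anything runs.

```typescript
// Hypothetical example: a typed record and a function that reads it.
interface Customer {
  id: number;
  email: string;
}

function contactLine(c: Customer): string {
  // If `email` were renamed to `emailAddress` in the interface above,
  // the compiler would reject this line (and every other usage site)
  // before the program ever runs -- no test suite required.
  return `${c.id}: ${c.email}`;
}

const alice: Customer = { id: 1, email: "alice@example.com" };
console.log(contactLine(alice)); // "1: alice@example.com"
```

The equivalent safety net in a dynamic language would be a test exercising
every call site, which is the cost being compared above.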

~~~
jondubois
Statically typed languages encourage bad programming practices precisely
because they make it easier to track type references across many files.

Ideally, types should not traverse too many files; there should be a clear
class hierarchy, with each level providing more abstraction. It's also better
to encourage passing simple types like strings, numbers, or clones of objects
instead of live instances, because if an instance of a class is referenced in
many parts of the code, it's difficult to track which part of the code was
responsible for changes made to it; it's also harder to maintain your train of
thought when traversing many files to debug a simple operation related to a
specific instance type.
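
A hypothetical TypeScript sketch of the clone-versus-live-instance point
(names invented for illustration):

```typescript
// A mutable record shared between two parts of the code.
const account = { owner: "alice", balance: 100 };

// Passing the live instance: the callee's mutation leaks back to the caller,
// and it's unclear later which part of the code changed the balance.
function chargeInPlace(acct: { balance: number }, amount: number): void {
  acct.balance -= amount;
}

// Passing a clone (here via object spread): the caller's copy is untouched,
// so responsibility for each change stays local and easy to trace.
function chargeCopy(
  acct: { owner: string; balance: number },
  amount: number
) {
  return { ...acct, balance: acct.balance - amount };
}

const after = chargeCopy(account, 30);
console.log(account.balance, after.balance); // 100 70

const shared = { owner: "carol", balance: 100 };
chargeInPlace(shared, 30);
console.log(shared.balance); // 70 -- the mutation leaked into the caller's object
```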

On a related note, I've noticed at multiple companies that when developers are
forced to use specific IDEs which make it easy to find stuff, the directory
structure of the projects tends to suffer (since existing developers don't rely
on the directory structure to find things, they stop caring about it). This
makes it harder for newcomers to make sense of the code and makes the project
totally dependent on the IDE.

~~~
lalaithion
I feel like "each level should provide more abstraction" is at odds with
"encourage passing simple types like strings, numbers".

Also, I'd go further than "pass clones of objects instead of active instances"
all the way to "only use mutable objects when performance demands it, and keep
the scope of the mutability small".
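
A minimal TypeScript sketch of that idea, with invented names: `readonly`
fields make mutation a compile error, and updates return fresh values instead.

```typescript
// A hypothetical immutable point: `readonly` makes mutation a compile error,
// so the scope of mutability is zero unless you explicitly opt out.
interface Point {
  readonly x: number;
  readonly y: number;
}

// Instead of mutating, return a fresh object; callers can't be surprised
// by changes made somewhere else in the code.
function translate(p: Point, dx: number, dy: number): Point {
  return { x: p.x + dx, y: p.y + dy };
}

const origin: Point = { x: 0, y: 0 };
const moved = translate(origin, 3, 4);
// origin.x = 5; // would not compile: `x` is readonly
console.log(origin.x, moved.x); // 0 3
```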

------
rgoulter
"Wizard, Engineer" reminds me of Yegge's "Software Liberal/Conservative"
approach to risk.
[https://plus.google.com/110981030061712822816/posts/KaSKeg4v...](https://plus.google.com/110981030061712822816/posts/KaSKeg4vQtz)

Albeit, rather than "wizards like implicit/magic, engineers prefer
explicit/boilerplate/maintainability", the difference Yegge suggests was
management of risk.

~~~
matfil
Thanks, that's a pretty interesting take.

(Also a reminder that we're probably going to lose some interesting stuff when
Google+ goes kaboom...)

------
bitwize
Lisp is the ultimate "wizarding" programming language.

But when I miraculously was called upon to _maintain_ an enterprise code base
in Common Lisp, it was an absolute _joy_. Because whenever they encountered a
roadblock in maintenance, the Lisp wizards who had come before me just
wizarded up a solution. One of the things that stuck out was that it had its
own custom test framework, that was head and shoulders above XUnit, Mocha, or
any other commonly-used test framework. Adding a new test was virtually a
one-liner; the test would generate test data, send it to the server, and check
the server's response against an XML template provided by the test case.

------
dmichulke
A really good alternative to Python and C: Clojure + Java

Mostly because

\- Clojure is very _very_ terse

\- Java has the battle-tested libs

\- they run on the same (J)VM - so no FFI required in your code

~~~
zimablue
I think the problem with this is, if I'm going to say "completely rewrite this
[function/class/module] into another language", then there's a good chance you
want that language to be about as fast as possible. I guess because a lot of
problems fall straight from "speed absolutely doesn't matter" to "this is the
bottleneck of the whole thing".

I think that is a reason why there's a lot of python/c++ in hedge fund land.
I've written some clojure but don't know the c interop story for it.

~~~
dmichulke
> completely rewrite this into another language

You just opened Pandora's box, for a million reasons, most of which are
unknown to both of us :) But for the sake of argument I will continue under
this assumption.

> language to be about as fast as possible

If you're interested in Mathy stuff like Machine Learning, FFT, ... then
maybe. But even for those you usually have JNI bindings, so it's easy to use
most of those mathy C libs if necessary.

But I guess that 95% of all software isn't about speed but about something
else (correctness, maintainability, safety against threats, portability, ...)
because costs today are usually dictated by manpower costs or those arising
from safety/security incidents and much less often by hardware costs compared
to say a decade ago.

~~~
mywittyname
My limited experience taught me that doing anything mathy in Java is painful
due to types (or rather, the lack of math-specific built-in types). The
input/output is usually going to be in primitive types, while the library
likely uses some home-brewed custom typing (or Commons, if you're lucky) for
complex numbers, matrices, etc. So you have to do the type-conversion
song-and-dance.

The Python libraries never seem to care. Just give them a list of a list of
number-ish values and off they go.

Java might have some great ML/Math libraries, but the fact that Python
dominates data science suggests that my experience is a real-world pain-point.

------
kelnos
> _What do we do?_

For me, it's "simple": I never ever ever _ever_ write one to throw away (and I
do mean that in a literal, absolutist sense, which is rare for me). This does
mean that I often can't use wizard tools (so it can be less fun to build). I
never use dynamic languages, and build on the JVM (Scala or Java), because I
know it will scale, and there are battle-tested libraries that do nearly
everything under the sun floating out there. (If your org has a different
"blessed" platform for production services, then use that.) It isn't _quite_
as quick to build the MVP as if I was just hacking something together. But I
can do it fast _enough_ , and still end up with a maintainable, evolve-able
code base.

It's not a perfect process. My first version usually has only a few tests that
verify behaviors that I had trouble modeling clearly in code and didn't feel
confident about. Sometimes I miss error cases here and there that someone else
has to find and deal with later. Also note that, because I use strongly-typed
languages, I can push a decent amount of correctness verification onto the
type system, so the compiler catches a ton of errors that I'd need a giant
test suite to catch using many dynamic languages. The tests that I do write
focus on logical correctness, not code correctness.

But at the end of the day, I deliver products on time that I feel far more
comfortable trusting in a production environment than build-one-to-throw-away
prototypes. Stuff that I'm fine holding a pager for if I need to. I have
several "prototypes" that are still running in production several years after
my first release, maintained by other people after I've moved on. And by and
large they still contain a lot of the original code, and the design remains
close to (and/or continues to be heavily influenced by) the original design.

On the flip side, I've had to deal with code that's been thrown together with
the expectation that it could be thrown away later (of course it never can
be), and it's incredibly difficult to bring it up to a robustness level that
would be deemed acceptable for a generally-available product. These code bases
constantly set off pagers for dubious reasons and write unactionable crap to
logging systems... and it doesn't have to be that way!

~~~
pweissbrod
I paint (mostly acrylic) and code. To me, the notion of building a practice
run is very liberating; it lets me think about components, how they should be
named and organized, and how they interact, without the pressure of "getting
it right the first time". The time-box for throwaway work should be small:
small enough that you feel a firm, confident grip on the plan at hand. If you
have something fully end-to-end operational, then more than likely the
throwaway has been overworked.

I encourage you to consider trying out more throw-away work before writing the
real thing. You find yourself with not only a more lucid vision of what goes
into the "real thing" but a little more muscle memory in getting started down
the right path.

~~~
AstralStorm
Unfortunately you cannot throw away a building with a fresco, and many useful
applications are of this sort of size.

(Even something as "simple" as email.)

The best you can do is paint the fresco on paper, digitize, design the
building outline and iterate 3D building designs. And you have not considered
materials and structural design at this point and do not even have a scale
physical model.

Prototypes work for small, enclosed apps with limited functionality. Not even
video games most of the time and these are relatively small and specific. Word
processor? Good luck. DAW? Oh my. CAD/CAM tool? You've got to be kidding me.
IDE? Nope. A compiler? You'll rewrite it a few times. Desktop environment? A
good recipe to lose users. Even CRUD...

You cannot iterate a painting into a building with a fresco.

The thing is that an MVP today is definitely huge, as much as startups would
like everyone else to believe otherwise.

------
jgoodhcg
Cute way to put it. Like many, I'm in the midst of a painful transition from
quick-and-dirty MVP to well-engineered end product.

Everything Rich Hickey has been giving talks on, and Clojure itself, seems to
be the best _solution_ I've seen.

------
luord
There are quite a few false assumptions here.

> People have certainly managed to create test suites that make it harder to
> maintain the code.

Sure, but pretending that this is the norm is just not true. And arguing
against an extreme case can be done against _anything_ .

> I get the impression Google has been able to migrate a lot of C++ and Python
> to Go using this approach.

Not only is this heavily suspect (I'd love to read a citation for this) but if
there's a language that represents the "engineering" side (?) pretty clearly,
it is C++.

> Gradual type systems has started to garner a lot more interest.

No they haven't. Those have existed for over _five decades_ . Another problem
with these articles is how they seem oddly ignorant of the history of computer
science. This is specially odd coming from a PhD (assuming that I looked up
the right person in google).

And, last but not least, some of the best engineered pieces of code are
precisely shells, dynamic languages and frameworks.

It's things like this that make these articles seem like they were written by
Java developers annoyed that Java is no longer the trendiest toy.

~~~
marcosdumay
> No they haven't. Those have existed for over five decades.

There is nothing in "started to garner more interest" that implies something
is new. It's even the other way around.

------
quantombone
Using PyTorch (and the broader space of machine learning algorithms under the
“deep learning” category) really makes me feel like a wizard. But the downside
to being Python-dependent is that putting PyTorch stuff into products is not
easy. I hope PyTorch 1.0 will change that.

Note: This article was not written with Machine Learning in mind, and I will
have to re-read the article to better articulate my thoughts on “Machine
Learning Wizardry” and juxtapose my own ideas with those of the article
author.

Kudos to author: The article’s main metaphor is excellent because it got my
creative juices flowing (i.e., brain working at 110% for a few brief moments).

~~~
mlboss
You can always use ONNX to convert PyTorch trained model to any other format.
[https://onnx.ai/](https://onnx.ai/)

------
revskill
Universal, component-based applications are the answer to the "boundary" in
the article. You get both the wizard and the engineering solution, which is
simple, easy to delete, and FUN.

~~~
bboreham
Brad Cox described this in “Object-oriented Programming”, a book I first read
in 1989. Building applications simply by plugging together off-the-shelf
components remains a beautiful vision, and it is almost entirely unrealised 30
years on.

~~~
marcodave
> “Object-oriented Programming”, [...] Building applications simply by
> plugging together off-the-shelf components

I've read just here on HN that microservices architecture is none other than
an implementation of the original concept of OOP.

~~~
bboreham
Possibly. But my understanding of “the concept of OOP” involves a separate
identity for each object.

Suppose the thing you are dealing with is “Customers”, then with OOP each
Customer is a separate object, while with MSOP you must communicate to the
service which Customer you are talking about. The thing you talk to and the
thing you are talking about have different identities.

------
fullstackchris
I wholeheartedly agree that software development as a whole, especially in the
past few years, has been more in the spirit of 'wizardry'.

But with the speed at which the sheer amount of new software 'stuff' comes out
each year, there simply hasn't been enough time to develop rigorous engineering
specs or best practices for all of those tools.

Perhaps we're starting to see an initial version of a universally accepted
model, at least for the frontend, with the mentioned Typescript + Flux (I'll
just say Redux).

Many assume that Redux is only a state container, and at face value it is, but
semantically it is (when implemented correctly) a very logical boilerplate that
puts everything related to state in its proper place, so any developer can look
at the code base and immediately get a general idea of where state is set, what
events exist, and how state is used throughout an application.

I'd like to see things like these very strict patterns emerge for other tools,
like Node. For example, I could imagine a snippet of THE universally accepted
express boilerplate for a login, given a specific backend... (can an email +
password login really get _that_ customized?)

...argh, on second thought I suppose it can, but even then such a boilerplate
could have sections with a freebie spot for customizations

Eh, it's late and I wonder if such universal pattern ideas are a pipedream...
I suppose only time will tell...

------
casper345
"Python + C": I am taking this approach working/learning with the Raspberry
Pi. Although it can do both Python and C, I know that in electronics, working
with low-level C will be a lot more beneficial for my learning and for the code
itself. When I migrate to more advanced topics like scripting and ML, I will
use both Python and C to implement what is needed.

------
jbergens
It sounds like optionally typed languages like Typescript should be really
good if you want to start in "wizarding" mode and then change the code more
into an "engineering" solution.
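
A toy TypeScript sketch of that migration path, with made-up names: the same
function first in "wizarding" form with `any`, then with the shape pinned down.

```typescript
// Wizarding phase: `any` keeps the compiler out of the way while prototyping.
function totalLoose(items: any): any {
  return items.reduce((sum: number, it: any) => sum + it.price, 0);
}

// Engineering phase: identical logic, but the shape is now explicit.
// From here on the compiler rejects callers that pass the wrong thing.
interface LineItem {
  price: number;
}

function totalStrict(items: LineItem[]): number {
  return items.reduce((sum, it) => sum + it.price, 0);
}

const cart = [{ price: 10 }, { price: 5 }];
console.log(totalLoose(cart), totalStrict(cart)); // 15 15
```

The point being that the code can be tightened one annotation at a time,
without a rewrite.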

------
Adamantcheese
A wizard then should have a spellbook, one filled with all sorts of spells
written out for immediate use. Maintainable hacks, if you will. Those are
probably just scripts, though, I suppose?

~~~
bryanrasmussen
The problem with spellbooks is one seen in media on the subject of wizardry,
that the wizard spends a lot of time memorizing spells. This is why the most
useful spells get turned into artifacts that the wizard does not have to
always memorize to know how to cast correctly.

What is needed is an artifact-like spellbook, such that a wizard faced with
a situation could describe it to the spellbook and get back the correct spell
or combination of spells to solve the situation. Attempts have been made to
create such an artifact, but unfortunately the resulting spellbooks still take
a long time to find the correct spell and when reading the spells you often
find that there are missing ingredients or a complicated set of gestures that
must be performed to make the spell work, and you have to read all about these
gestures in turn to figure out which ones you really need.

~~~
thisiszilff
Have you heard of Hoogle (Haskell's type-signature search)? It feels almost
exactly like what you've described.

~~~
bryanrasmussen
no, thanks for telling me!

