Education of a Programmer (medium.com/terrycrowley)
307 points by jasim on April 2, 2017 | 61 comments



One thing I'd add to these principles: I always emphasize when I teach programming that there's no magic. There's always a reason behind everything, and regular folks like you can understand it. There are people (even in the tech community) who like to say something is "magic" or "beyond one's comprehension", but don't take their words seriously. Mysticism is just intellectual laziness. Don't indulge in it.


I'm pretty sure this is not going to help "ordinary" people. Unless they already think like you and me.

When I was a kid, I wanted to know how all the black boxes worked -- one of my greatest days was when my father explained the internal combustion engine well enough that cars no longer seemed like magic.

Later I noticed that many people -- especially when talking about electronics -- would ask "How does it work?", when they actually meant "How do you use it?" I thought this was just a quirk of the Australian dialect, but as I gained more experience with people, I realised it was really an aspect of human thought.

To most people, knowing "how it works" is no more than being familiar enough with the magic spells that you can use them with confidence.


Isn't "the magic spells that you can use", exactly how we (programmers) use interfaces.

I trust that the interface, be it the C++ standard, or boost::whatever, is well enough tested that I don't need to know "how it works".

It would be nice to know how everything works but there is only finite time in the working day.


Is it truly possible to have a solid understanding of the interface and its use, though, without having a reasonable insight into, at least, the likely implementation of something?


I'm sure this isn't what you meant, but how about something like movement? I've met doctors who know anatomy like the (ahem) back of their hand, but they couldn't perform coordinated physical tasks to save their life.


Yes, that's the whole point of abstraction, to reduce cognitive load by ignoring irrelevant implementation details.


This is why some people don't like languages with big runtimes/standard libraries like C++.


I don't see how that relates?

You can understand the source of your libraries just fine, but still be grateful that they are usually hidden behind interfaces that let the compiler help you spot silly mistakes.

(OK, less so in C++, more so in e.g. Haskell.)


> To most people, knowing "how it works" is no more than being familiar enough with the magic spells that you can use them with confidence.

It's necessary to see beyond chosen words. I might ask how a computer program works and it could mean "how do you use it," or "how does it function." The other party has to infer my meaning.

I don't think you can take a phrase that is common in your life and apply it to everyone else. Words / phrases have different meanings in different contexts for different people.


It's a matter of time and importance.

Sure, I'd love to live forever so I could take the time to fully understand everything, but unfortunately I have to optimize my time, and learning the ins and outs of an engine just isn't that important in my day to day life.


A little defensive, no?

While I tried to convey my own excitement and my dislikes, I was careful in my comment not to say that everyone must like the same things.

What I will say, though, is that there is a difference between knowing how a thing works and knowing how to use it. Very often you only need the latter. But thinking that the latter equals the former is just an error.


My post wasn't defensive, it was pointing out that treating things like "magic boxes" is a purely rational decision, not an inherent character flaw in people.

But then you find a way to try and turn even that into a flaw of some sort, and continue with your narrative that it's somehow a problem that people don't take the time to fully understand everything they use.


This week I've started reading The Design of Everyday Things after HN recommendations, and the author makes a distinction between objects themselves, and their 'conceptual models', which may differ for the designer and the user. A single user may even have multiple, conflicting conceptual models of a device and its use. These are sourced from what he calls the 'system image', or overall set of information available about the device including past experience, appearance, sales literature, manuals, websites, advertising, etc. Needless to say, people can operate successfully on an incorrect mental model as long as it serves its purpose.

PS. Australian here. Yes, people do speak like that, but I agree with your conclusion.


A model that you need to use a device can be much simpler than a model that you need to fix the device if it is broken, which in turn may be simpler than the model required to build the device from scratch, etc.

I saw "how does it work" used as "how to use it" on American TV https://www.youtube.com/watch?v=MEFXhopARyw


> To most people, knowing "how it works" is no more than being familiar enough with the magic spells that you can use them with confidence.

I like the (literal) treatment of this topic in Harry Potter and the Methods of Rationality.


>Harry Potter and the Methods of Rationality

Never heard of it before. For anyone else interested: http://www.hpmor.com/


Yes, and Hermione is in no way stupid. Personally I find her more intelligent than Harry (even in HPMoR). But that's probably because I find her to be more rational than Eliezer Yudkowsky.


Which version of Hermione are you referring to? Eliezer did write one of them after all…


Eliezer's one.


One of my greatest days also, but my take was that the explanation was good all along and I was finally able to understand.


Sorry, I didn't mean to dis my dad there. He explained it as soon as he thought I would be able to understand it.

What I meant by "well enough" is that the explanation that he gave -- or even the one that I now have in my head -- is nowhere near the truth. Engines are complicated, and I will never truly understand them. But at least they can be demystified.


Ah, but don't forget Clarke's three laws:

1) When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

2) The only way of discovering the limits of the possible is to venture a little way past them into the impossible.

3) Any sufficiently advanced technology is indistinguishable from magic.


#1 is a complicated way of saying that almost everything that a distinguished elderly scientist has an opinion about is possible. Did Clarke really believe that truly impossible things will cause elderly experts to be unsure? Or did he really just mean "almost everything is possible"?


Saying that something is "magical" is often merely short-hand for "this behavior would take too much time or effort to figure out right now, let's not get lost in the weeds and just continue along with our objectives."

Admittedly, some technical folks are apt to take words literally and don't respond well to the use of the word "magic/magical" even when it's meant in a figurative sense.


While I agree that the person that is using the word "magic" might often do so for the reason that you say, I think it is better to state it as "an implementation detail" rather than using the word "magic". Not because you and I don't understand what is meant but because by using that word we give power to the idea that something is beyond comprehension when that is not what we meant.


People who believe magic exists don't believe magic is beyond comprehension.


I agree to an extent, but when explaining things to beginners, there genuinely are elements that are 100% impossible to understand until earlier, building-block concepts are firmly under one's belt.

And knowing that there are hard limits on understanding is important. It is almost reassuring: yes, you don't (and can't) understand x right now, but you will be able to when you enhance your mental models! Learning is powerful! Contrast that to "you can understand anything just as you are" which is a dangerous attitude, IMO, because it teaches that students don't need to change. But what is learning, if not that?


There's no magic, but sometimes there's no real reason behind why something is the way it is. Sometimes a developer just picks a path because they had to solve a problem, and they didn't think deeply about it. I understand that your explanation is focused more on how everything can be understood, but I think it's worth making this slight clarification for some cases.


You make a good point. Let's replace the word 'magic' with 'abstraction' as 'magic' is quite overloaded. Yes, there is a place and time for abstraction in computer programming but as developers gain experience they should start piercing the layers of abstraction to understand the whole picture.


When people say something is "magic" they don't mean it is literally inexplicable.


I'll usually call some technology thing "magic" when I mean to say that something is useful to me, but that its usefulness to me doesn't require me to understand it at any deep level (or even shallow level sometimes). Wifi is kind of like that: very useful to me, I use it every day... have almost no understanding of the ins/outs... it's PFM (pure fucking magic) to me. And I don't have much interest in learning more about it than I already know... I've got other fish to fry. Same thing about my car's motor: I only know the most theoretical background of internal combustion engines... have me diagnose an engine problem? Ha! Everything in front of the firewall and under the hood may as well have been conjured by Merlin himself.

Life is short. I spend it miserly on those things I have to or want to care about and wifi and car engines aren't those things (now).


There is no excuse for not understanding the so-called 'magic', even at a very superficial level. There are tons of blogs, videos, how-to articles, books, etc. Just search on YouTube and I am sure you will understand the basics of almost anything in just a few minutes.


The excuse is that there are literally billions of things that one could strive to understand at least superficially, but the typical human life has only 4500 weeks.


entirely depends on the definition of "understand".

superficial understanding you'll forget in a couple weeks? sure. but nothing more, unless you're some sort of savant.


I hear the words "magic" and "magical" used in the Python community to mean that a particular module or framework does not expose its inner workings and does a lot of work behind the scenes that is not transparent to the programmer.


I feel as though Django is described as 'magic' quite a lot, and frameworks with apparently little magic (like Flask) aren't accused of it so much. The 'magic' trend has carried over to Common Lisp with frameworks like Clack supporting similar 'magic' to Flask.
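
To make that kind of "magic" concrete: Django's declarative models work via a metaclass that quietly collects your class attributes at class-creation time. Here's a minimal, self-contained sketch of the mechanism; this is a toy illustration of the technique, not Django's actual code:

    class Field:
        def __init__(self, column_type):
            self.column_type = column_type

    class ModelMeta(type):
        def __new__(mcls, name, bases, namespace):
            # Collect declared Field attributes into a schema, invisibly.
            namespace["_schema"] = {k: v for k, v in namespace.items()
                                    if isinstance(v, Field)}
            return super().__new__(mcls, name, bases, namespace)

    class Model(metaclass=ModelMeta):
        @classmethod
        def create_table_sql(cls):
            cols = ", ".join(f"{n} {f.column_type}"
                             for n, f in cls._schema.items())
            return f"CREATE TABLE {cls.__name__.lower()} ({cols})"

    # The class body reads as pure declaration; the work happened behind it.
    class Post(Model):
        title = Field("TEXT")
        views = Field("INTEGER")

    print(Post.create_table_sql())  # CREATE TABLE post (title TEXT, views INTEGER)

The user-facing class body looks like pure data; the schema handling happens invisibly, which is exactly the "behind the scenes" quality that gets labelled magic.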


Right, yeah. I mean, depending on your philosophy that might be good or bad, but I don't think it's literally expressing a mindset that "nobody could understand how this works, just don't touch it."


Indeed, I often use the term "magic" to describe something that happens for very arbitrary reasons or through an extremely complex and counter intuitive set of steps. I.e., code that uses gotos, hard-to-understand reflection, or code/language that has a large amount of esoteric assumptions and rules in the background.


I call implicitly executing code magic. It is not that I could not comprehend it, but I should not have to; it should be obvious. I use the word to describe bad interfaces that give me this feeling of having little control.

Big fan of The Zen of Python.
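
A toy sketch of the kind of thing I mean (my own made-up example, including the names): a plain-looking attribute read that silently runs logic.

    class Config:
        def __init__(self, path):
            self._path = path   # hypothetical config file path
            self._cache = None

        @property
        def settings(self):
            # A plain attribute read that quietly does work the first time.
            if self._cache is None:
                print(f"(silently loading {self._path}...)")
                self._cache = {"debug": True}  # stand-in for real parsing
            return self._cache

    cfg = Config("app.toml")
    print(cfg.settings["debug"])  # looks like a field read, but it ran code

"Explicit is better than implicit" would suggest a visible cfg.load() call instead.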


people calling something "magic" in the context of computers and actual mysticism are two totally unrelated things. people don't literally think something is under the influence of a supernatural force.

and you may claim there is a reason behind everything, but good luck finding the reason. no one will ever find the reason for every single bug, issue, problem, etc. they come across. so just because there might exist a reason, the fact that it can't be practically found makes that stance a bit mystical itself.


Adding to this advice. Sometimes it is easy to blame the compiler, O.S., etc. In my experience it has been my own fault 98% of the time. Especially when using very popular software tools.


The "end to end argument" in that paper reflects the issue of how dumb the network should be. This goes back and forth. With telephone switches and Plan 55-A message switching, all the intelligence was in the network. With the ARPANET, the network did reliable delivery, but the work was distributed among the dedicated network machines. IP was pure dumb datagram, which was controversial at the time. It still is; the last few years have seen the proliferation of non-transparent middle boxes, from the Great Firewall of China to Comcast to Cloudflare to Google.


One day we'll have 2 available outgoing ports: 80 and 443 —the web. IPv6 won't be needed because nobody will have servers, and can all be served behind a NAT. Email will go through webmail interfaces, and there will be a couple dozen providers in the whole world. Anything else will instantly be rejected as spam —only spammers use small providers anyway.

We will see conspicuously interesting ads, thanks to ever more accurate, automated analysis of our communications and browsing patterns. Sometimes, such analysis will be performed to stop criminals (mainly terrorists, paedophiles, and intellectual thieves).

How about getting our shit together and starting to fight this nightmare?


The best summary quotation I have in https://github.com/globalcitizen/taoup is ...

Upgrade cost of network complexity: The Internet has smart edges ... and a simple core. Adding a new Internet service is just a matter of distributing an application ... Compare this to voice, where one has to upgrade the entire core. - RFC3439 (2002)


I love how he says “revel in the asynchrony” when discussing distributed systems.

In a more general sense, taking the bull by the horns and dealing with the underlying, sometimes unwanted, realities of the system rather than burying one’s head in the sand and pretending the problems do not exist (only to be forced to deal with them later and apply band-aid solutions that are not robust).

As he goes on to say: "Rather than trying to hide it, you accept it and design for it."


There was a PhD thesis that appeared on HN a few weeks ago - Algebraic Subtyping (which I still have open in another tab because I've been making my way through it at a dreadfully slow pace) - and its core claim is that traditional type systems start from a simple interface, which results in a needlessly complex model, whereas starting from a simple model enables the development of a slightly more complicated, yet still elegant and simple interface.

The similarities between that claim and this one are striking.


[flagged]


Don't link to jwz from HN unless you want other people to see an NSFW picture of a hairy nutsack.


Adding to this: jwz filters based on the referer, so if one copies the link and opens it in a new tab one should be safe, although I have only tested this once in my preferred browser.


This article really resonates with my experience.

The section on "performance" is so spot on. It is remarkable how rare it is that programmers think in these terms. There is no absolute truth to any particular performance optimization; optimizations only exist within the context of a complex topology of resource constraints that vary with time and application. It seems like a lot of work but this type of analysis becomes routine and automatic with experience. You really do have to evaluate your optimizations from first principles each time.

Props also for "Einsteinian" (I use the terms relativistic/Newtonian). I frequently say "all software systems are distributed systems". Programmers often resist the idea but it actually makes the design of systems much easier once you fully embrace the implications.


"If someone tries to explain a system by describing the code structure and does not understand the rate and volume of data flow, they do not understand the system."

This is something that becomes evident when you stop using OOP languages, and then you cannot unsee it anymore.


Could you elaborate a bit? I'm super curious but I have little experience with non-OOP languages.


You could go and learn Clojure, or Haskell if you're particularly masochistic, but even C will do the trick. Be warned that it will be frustrating at first; it definitely takes some brain rewiring.


Functional languages describe the data instead of what you do to the data. This is a game-changer for one's perception.

Go read Learn You a Haskell.
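
A tiny illustration of the contrast (my own toy example, in Python for accessibility; Haskell makes the point more strongly): the imperative version scripts the steps, the functional-style version describes the result.

    orders = [("apples", 3, 0.50), ("pears", 2, 0.75), ("plums", 10, 0.20)]

    # Imperative: spell out what you do to the data, step by step.
    totals = []
    for name, qty, price in orders:
        if qty * price > 1.0:
            totals.append((name, qty * price))

    # Declarative/functional style: state what the result *is*.
    totals2 = [(name, qty * price)
               for name, qty, price in orders
               if qty * price > 1.0]

    assert totals == totals2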


Some great quotes in here, though quite a few are effective restatements.

As usual, added to https://github.com/globalcitizen/taoup

I particularly liked the light speed analysis technique, the bold statement that a clean start is a fundamental technique in managing complexity, and the expression of effective technical management as the design of transparent, self-coordinating feedback loops.
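
The light speed technique, as I read it: before tuning anything, compute from first principles the fastest the operation could possibly go, then compare reality against that floor. A back-of-envelope sketch in Python, with all numbers made up for illustration:

    # "Light speed" lower bound: how fast could this possibly go?
    # (All numbers below are illustrative assumptions, not measurements.)
    payload = 100 * 2**20        # 100 MiB that must be touched no matter what
    mem_bw = 20 * 2**30          # assume roughly 20 GiB/s sustained bandwidth
    floor_s = payload / mem_bw   # about 4.9 ms: the best case physics allows

    measured_s = 0.250           # hypothetical measured time for the operation
    print(f"floor {floor_s * 1e3:.1f} ms, measured {measured_s * 1e3:.0f} ms, "
          f"{measured_s / floor_s:.0f}x off light speed")

A big ratio says there is headroom worth chasing; a ratio near 1x says only a different design can help.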


honestly, i am an ardent believer of the ancient simian proverb (as popularized by silvanus p. thompson, in calculus made easy): 'what one fool can do, so can another' (given sufficient motivation, that is).

edit-001: feynman's infamous algorithm contains more detail: http://wiki.c2.com/?FeynmanAlgorithm


I also like this quote:

"What one programmer can do in one month, two programmers can do in two months." - Fred Brooks


> I had been designing asynchronous distributed systems for decades but was struck by this quote from Pat Helland, a SQL architect, at an internal Microsoft talk. “We live in an Einsteinian universe — there is no such thing as simultaneity."

Had the author really not heard this idea from Lamport already?


Surely, but that's beside the point. After decades of experience working in its cognitive neighborhood, the author finally got to know that idea well enough that Helland's point resonated with him. It's no simple matter to go from being aware of a concept to weaving the concept and all its implications fluently into one's thoughts. I'm familiar with all the main ideas presented in this piece, but I still found a lot to think about and took a lot of notes while reading it.


To be fair though, the practical upshot of Lamport's work on distributed systems is to let people try to produce systems which let other people pretend they live in a non-Einsteinian universe.

In that model, Very Clever People build a consistent distributed database which mere mortal application developers can treat as if it were a single instance. I think Crowley wants application developers to just accept the nature of distributed systems.


he uses the phrase "anti-fragile", which as far as i know was coined by nassim taleb in his book "antifragile". he's kind of an economist or chaos theorist, idk. good book regardless


Attempting to read or even skim an article of this length on mobile makes me wish my browser had a minimap scroll bar. I wonder if that could be feasibly implemented as a bookmarklet.



