> Most structural engineering companies are filled with conservative, boring engineers that prefer to look up pre-designed segments and don't make full use of the steel design handbook or building codes.
I dare say the same is true of software engineering. I, nominally a backend engineer, know (and apply) more about HTTP than most front-end devs and architects I've met, simply because I sat down one day and read the HTTP spec. (It's not a difficult read!)
A few years back I was building a web server from scratch for my own quirky needs (using, of all things, C and Scheme). It required understanding the HTTP protocol, and I agree the RFCs are not all that hard to read; it's worth learning the details in order to apply them.
However, what I eventually found out is that many implementations don't faithfully follow the HTTP "rules". For example, the extra care I took to parse HTTP headers strictly just caused headaches: headers received from many origins were "malformed" despite the specs saying what a header "MUST" contain or which characters are not allowed.
I know servers are supposed to be "tolerant" of non-compliant clients (and vice-versa), and realistically there's little choice but to go along with "loose" compliance. I've often wondered to what extent that reality contributes to the less-than-optimal security that has so often been an issue.
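To make the leniency concrete, here's roughly the shape my header parsing ended up taking, sketched in C (hypothetical code, not my actual server). RFC 7230 forbids whitespace between the field-name and the colon, but rejecting such headers outright breaks real traffic, so you trim and move on:

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Lenient header parsing, sketched.  The spec says the field-name is a
 * token immediately followed by ":", with no whitespace before the
 * colon -- but real traffic contains "Name : value" and worse. */
static int parse_header(const char *line,
                        char *name, size_t ncap,
                        char *value, size_t vcap)
{
    const char *colon = strchr(line, ':');
    if (!colon)
        return -1;                        /* no colon at all: give up */

    /* copy the field name, trimming the whitespace the spec forbids */
    size_t n = (size_t)(colon - line);
    while (n > 0 && isspace((unsigned char)line[n - 1]))
        n--;                              /* tolerate "Name : value" */
    if (n == 0 || n >= ncap)
        return -1;
    memcpy(name, line, n);
    name[n] = '\0';

    /* skip optional whitespace after ':', then strip trailing CRLF */
    const char *v = colon + 1;
    while (*v == ' ' || *v == '\t')
        v++;
    size_t m = strlen(v);
    while (m > 0 && strchr(" \t\r\n", v[m - 1]))
        m--;
    if (m >= vcap)
        return -1;
    memcpy(value, v, m);
    value[m] = '\0';
    return 0;
}

int main(void)
{
    char name[64], value[256];
    /* a "malformed" header that a strictly conforming parser rejects */
    if (parse_header("Content-Type : text/html\r\n",
                     name, sizeof name, value, sizeof value) == 0)
        printf("[%s] = [%s]\n", name, value);  /* [Content-Type] = [text/html] */
    return 0;
}
```

And this is exactly where the security worry bites: when two parsers in a chain disagree about what a malformed message means, you get classes of bugs like request smuggling.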
How many of us actually read the documentation and the source for the systems we use? All the options and flags for jq, wget, socat, ssh, rsync, etc. I am trying to spend about 5 minutes a day just reading man pages, especially about things I THINK I know but actually don't.
In my personal experience, the best engineers I've encountered (and learned from) have understood every system, subsystem, and interaction, all the way down to the most fundamental foundational level. That understanding lets them make the best decisions, because they're equipped with the best information. It doesn't come from sitting down and studying CPU architecture when you're building a web application; it comes from diving as deep as any given task requires.

Say you're dealing with a web app performance bug and you have to crack open the Chrome source code, trace the problem down to something compute-intensive, learn whatever C++ code is involved, understand how it utilizes the CPU, and learn about the specific architecture that exhibits the problem. You've picked up significant depth and breadth along the way, and at the end of it you know and understand exactly why your web app performs the way it does, how to work around it in your app, how to fix it in Chrome (or why you shouldn't), and how the CPU architecture affects the Chrome source code. Now you can apply Chrome, CPU architecture, and C++ to anything built upon any one of them (independently or otherwise). That's not to say you know everything about each of them, but you've learned things that will help you again in the future.
The most important skill here is being able to diagnose a problem and then fearlessly, relentlessly apply engineering discipline to whatever problem or task is at hand: not by chasing observed symptoms ("hey, I turned that knob and everything was OK! I don't know why, but I can close this JIRA ticket and move on with my life. I'm a 10X engineer!") but by understanding precisely what's happening. I made the mistake of spending the first decade of my software engineering career learning from trial/error and observation, and while those skills are useful in some cases, the best engineers are extremely disciplined about understanding the full depth of a problem before writing a line of code.
In a nutshell, I guess what I'm advocating for is: do not blindly study man pages. Without a practical application for the knowledge, it seeps out of your brain and you quickly forget it. The exception (case in point, GP's example) is when what you're studying does have a practical application or is relevant to what you spend your time doing. This has always been my problem with academic curricula (sure, some people can learn well this way, and there's definitely a minimum foundation that must simply be committed to memory). Even in basic subjects like maths, the work is rote; we maybe get a passing grade, but often without the understanding (or the depth of understanding) that is really the most important aspect of learning the subject.
I have optimized websites based on a basic understanding of how CPUs work. For example, it's much faster to run all 200 checks on one object and then load the next object than to run one check at a time across every object, reloading each object 200 times (see the sketch below). That ended up being a 30x or so speedup and seemed like magic to half the room.
It's not about knowing the minute details so much as understanding what's going on well enough to model it in your head.
PS: This assumes you are operating on lots of data. At small scale, a test can go the other way -- and in my case it did.
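For the curious, the shape of that change is roughly this, sketched in C (made-up names and struct sizes; the real code wasn't C):

```c
#include <stddef.h>

#define NCHECKS 200     /* arbitrary, matching the example above */

typedef struct {
    double fields[32];  /* big enough to span several cache lines */
} object_t;

typedef int (*check_fn)(const object_t *);

/* Slow shape: one check at a time across all objects.  By the time the
 * next check revisits an object, it has long since been evicted, so
 * every object is reloaded from memory NCHECKS times. */
size_t count_failures_check_major(const object_t *objs, size_t n,
                                  const check_fn *checks)
{
    size_t failures = 0;
    for (size_t c = 0; c < NCHECKS; c++)
        for (size_t i = 0; i < n; i++)
            if (!checks[c](&objs[i]))
                failures++;
    return failures;
}

/* Fast shape: run all checks on an object while it's hot in cache,
 * then move to the next.  Each object is loaded roughly once. */
size_t count_failures_object_major(const object_t *objs, size_t n,
                                   const check_fn *checks)
{
    size_t failures = 0;
    for (size_t i = 0; i < n; i++)
        for (size_t c = 0; c < NCHECKS; c++)
            if (!checks[c](&objs[i]))
                failures++;
    return failures;
}
```

Which is also why the PS holds: when n is small enough that all the objects fit in cache, the two loop orders converge.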
Good thinking, but beware of what a CPU "is". I just came back from the intel.com boards and... holy jesus, the amount of detail that even memory-locality-level thinking ignores. To fully leverage a processor you need to understand the OS's caching conventions, the interaction with the L1 and L2 caches, and how those caches are wired to the actual cores. Otherwise you're already losing 30% of the raw bandwidth.
I came away with a strong bias toward laziness in optimization: profile based on what the business needs and ignore everything else, or you will never escape the rabbit hole.
Most people wouldn't know an algorithm or the concepts of algorithmic complexity if they bit them on the ass. Even devs. You don't need a PhD in computer science; just read some stuff and think a bit.
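To illustrate what that bit of reading buys you, a sketch in C of the classic textbook case (any example would do): finding duplicates by comparing every pair versus sorting first. Same answer, wildly different behavior as n grows.

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* O(n^2): compare every pair.  Fine at n = 100, hopeless at n = 10^6. */
bool has_duplicate_quadratic(const int *a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        for (size_t j = i + 1; j < n; j++)
            if (a[i] == a[j])
                return true;
    return false;
}

static int cmp_int(const void *x, const void *y)
{
    int a = *(const int *)x, b = *(const int *)y;
    return (a > b) - (a < b);
}

/* O(n log n): sort a copy, then any duplicates are adjacent. */
bool has_duplicate_sorted(const int *a, size_t n)
{
    int *copy = malloc(n * sizeof *copy);
    if (!copy)
        return false;   /* sketch only; real code should report the error */
    memcpy(copy, a, n * sizeof *copy);
    qsort(copy, n, sizeof *copy, cmp_int);
    bool dup = false;
    for (size_t i = 1; i < n && !dup; i++)
        dup = (copy[i] == copy[i - 1]);
    free(copy);
    return dup;
}
```

At a million items, the first version does on the order of 5 * 10^11 comparisons; the second does a sort plus a single pass.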
I completely and totally agree with you. I would only add that the best engineers also understand when to take a complex set of interactions and create a black box abstraction from them. They also understand when the abstractions are likely to leak and what the consequences are.
I am not putting that much weight into it; 5 minutes a day is not a lot, but it's enough to get familiar with the capabilities of the tools. A couple weeks ago, I had no idea that `jq` had a compiler in it. Many of us, myself included, use our tools in very shallow ways.
I think everyone has time for it, but it requires nerves of steel. You are thinking "I could just fix this the easy way", you are feeling social pressure to quickly get to the next thing. It's easy to decide "I can't take the time to really figure this out."
But if you can ignore the pressure and stick to your guns, you end up saving time in the long run, sometimes making orders of magnitude more work possible. Most managers should appreciate that.
But it's difficult to have the nerve to do it, and it can be difficult to explain in the short term. Like most opportunities there's a cost to pay up front.
Would that we as a profession developed an encyclopedia of ways of pushing back on "Is it done yet? How much longer?" completion pressure. There's certainly a profusion of lore about PFYs and lusers; why not about structural business frustrations?
Well...yeah...that's kind of the whole basis of 'requirements elicitation': understanding what your client is trying to accomplish on such a level that they don't need to give you a list of tasks; you create the tasks that will accomplish what they need the system to do.
depends on what you consider your job to be, right?
for all I know, historically there must have been a lot of masons asking the same question when stacking bricks: "people have time to do that while building a wall? to carefully put mortar between the bricks??"
but nobody remembers those masons because all their walls have fallen apart by now.
("... wait seriously, even the inner walls?? but the boss never checks those anyway")
_This_ is the right path. Dig as deep as you need. Don't be afraid to get your hands dirty. So many people just randomly fear the 'magic' of the lower levels.
Do you have any tips for remembering the minute details in the manuals? Do you make flashcards, or do you re-read them repeatedly spaced out over time?
I find the volume of information overwhelming, but I think I have a practice now that works well for me. Say I want to do something in vim, but it feels clunky. Part of me says "there may be a better way to do this", and I go looking for one. I usually limit such a search to ten minutes or so, and stretch that if I'm getting closer.
It's not a hard science, but I think the two important elements are 1. Being willing to deep dive and 2. Monitoring how much time I spend to allow for reasonable stops. I come back to unsolved issues when they come up repeatedly. That tells me those are more important.
My personal process has a lot of parallels with "lazy" or "short-circuit" evaluation and "greedy" algorithms.
First, the fact that certain information is out there is a lot easier to remember than the actual details of that information. Bits like "zsh has this crazy advanced globbing syntax that obsoletes many uses of `find`", or "ssh can do proxy/tunneling things and remote-desktop things with the right options, and sometimes it needs to create a login session and sometimes it doesn't", or "ffmpeg has these crazy complex video filters that let you do real cool tricks (so maybe the same goes for audio filters, though I haven't actually read about that yet)".
Some of this is man pages, some of this is blog posts or Stack Overflow answers. I keep my bookmarks well-organized using tags (in Firefox; Chrome doesn't seem to have tagged bookmarks for some reason, last time I checked). Whenever I find something that seems like it may be useful some day, I bookmark it, tag it properly, and sometimes add a few keywords to the title that I'm likely to search for when I need the info.
Then, given the knowledge that some information is out there, I allow myself to look it up whenever.
I've never been very good at rote memorization, at least not on purpose. I often lack the motivation to muster up the will and focus required. So I don't force myself, but somehow I still remember stuff anyway.
There are so many tiny things across such a wide field of interests, I don't even really want to memorize it all :) So I cut it down to knowing that the information exists (and sometimes, that classes of information exist).
Then maybe some day I'm working with some particular features of ssh or git, and I notice myself looking up the same commands or switches a few times over again. So apparently I'm not memorizing these. Then, I make a note. That's not a very organized system, it can be a post-it, a markdown/textfile, an alias, a shellscript, a code comment, a github gist. I used to try and keep one textfile with "useful commands and switches and tricks and and and", but I found myself never looking at it, so I stopped doing that. Instead I try to put the note somewhere I'm likely to come across when I need it in context.
The way Sublime Text just remembers the content of new untitled text files, and then lets you organize these groups of files into projects and quick-switch between them using ctrl-alt-P, is just perfect (or shall I say, "sublime"?). It allows a random short note to evolve organically from temporary scratch into a more permanent reference note.
I also download some reference manuals, so I have access offline, which is often significantly faster to quickly open, check and close. For instance there's a link to the GLSL 4 spec in my start menu, which instantly opens in katarakt by just pressing "alt-F1, down, right, enter" -- a leftover from a project where I was reading that thing all the time. After a while I added a shorter webpage-converted-to-markdown reference to the sublime project file, and now I use it less.
I guess the shorter summary is: yes, I do have tips, but they're what works for me. The more generally applicable advice is: yes, there are tips and tricks, and they're whatever works, by any means necessary. But most importantly: yes, there are tips and tricks, and some of them will work for you too! :)
RTFM is a weird boundary. I've 'wasted' so many hours dabbling in tutorials made by other people instead of diving into the real information: specs, source. It's a mental click; maybe it seems overwhelming, maybe it seems too broad, and we're too impatient to read a chapter to get an answer. After a while, 1) you get more patient, 2) you know the other sources won't help... and all of a sudden, specs look like fun reads.
PS: I was just on www.ecmascript.org/es4/spec/, a historical artefact but full of surprises.
I'd say it's the exact same problem. The structural engineers are treating these pre-designed components as black boxes, not bothering to understand why they were designed the way they were or what they do internally. A huge portion of software engineers see the components they work with (such as HTTP) as black boxes, too. That means that when an engineer is considering a component, they can't effectively assess how it will hold up in the particular situation they're dealing with.
You need to know at least nominally how the sub-components interact, or you can't predict how something will perform when you use it. Even the strongest abstractions leak a little bit.