ln -s a b
ln -s ../../a_fine_file .
The real question I have left: why is this so intuitive for cp, but completely counterintuitive for ln?
I wonder if it is because we think of how we would follow the link.
That doesn't make sense to me. The way I think about cp is "copy this, and put it over here". If you think about symlinks that way, you mix it up.
When you say "source to destination, where destination is the soft link", that makes it more confusing for me, because if you consider the symlink that is being created as the "destination" (which at least is the intuitive way to think about it for me, but I suppose it's individual), you actually end up with:
# cp [source] [destination]
# ln -s [destination] [source]
Where "source" is "the new thing that should be created".
Since I seem incapable of getting out of this way of thinking about sources and destinations, my rule of thumb is that when creating a symlink, you always decide where it should point first. Not intuitive perhaps, but this time I've made the mistake so many times it kinda sticks.
ln /some/file /other/file
This is made a bit more confusing by the differences in man pages. For GNU ln, it's shown as ln TARGET LINK_NAME, but for OpenBSD, ln source [target]. But the usage is pretty much the same?
# cp [existing-thing] [new-thing]
# mv [existing-thing] [new-thing]
# ln -s [existing-thing] [new-thing]
Remember it like this: It's the same syntax as "cp". To copy a file you use "cp source dest". It's the same for ln: "ln -s source dest"
The association with cp makes a lot more sense then. This is how I learned it.
softlink dest <- src
file cp src -> dest
std::memcpy dest <- src
java arraycopy src -> dest
golang io.Copy dest <- src
I'm only 80% confident I didn't make an error in the above 5 examples...
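For what it's worth, Python's standard library keeps the same existing-thing-first order for every one of these operations, which makes for a quick sanity check (file names here are arbitrary):

    import os, shutil

    open("a", "w").close()   # the existing thing
    shutil.copy("a", "b")    # cp: existing file first, new name second
    os.rename("b", "c")      # mv: same order
    os.link("a", "d")        # ln: real file first, hard link name second
    os.symlink("a", "e")     # ln -s: real file first, link name second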
AT&T assembly: mov src, dest
Whenever I'm debugging at the assembly level I have to just write down the appropriate one on a piece of paper in front of me.
Mnemonic that helps me: IN SYMLINKS, REAL FIRST!
ln -s REAL_FILE link_name
1. Same as mv: `mv REAL_FILE link_name`
2. The second argument is optional. Therefore, the first argument must be the real file, and the second the link name. It does not make sense to omit the real file.
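(For GNU ln at least, that one-argument form looks like `ln -s /etc/hosts`, which creates ./hosts in the current directory pointing at /etc/hosts.)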
"Source" and "target" are confusing names because the target of the ln command is the file it creates, which is the symlink that points to the source file, which is the "target" of the symlink in an intuitive sense. The mnemonic works because it matches mv and in that case the file that exists must obviously come first. In ln without -s the file also must exist, which makes perfect sense. So it's easy enough to remember if you understand what the ln semantics are and don't get hung up by the source and target jargon.
You're probably going to notice the time you spend reading man pages, when you just! want! to! get! something! done! And that's frustrating. What you're less likely to notice is the next five times you reach for the wrong tool, or have to go hunting around for some bubble-gum-and-twine way to do something, because you didn't previously read the man page that tells you exactly what you needed to know to do it easily and correctly.
Hell, I often find that while the author has managed to cram everything and the kitchen sink into it, there's so much missing from it regarding how a tool works, caveats, whatever. All these things that are crucial to actually understanding the tools you're using beyond what some flag or other does on a surface level.
I've read many man pages, many many times, and if I get nothing out of it, I find it is invariably because I am in a hurry, and I'm probably about to fuck something up. The thing to do is usually to slow down and do it right. Only rarely is the right thing to ask someone else for the solution, or see what some other poorly-informed person on the internet thought, although in an emergency asking for help is almost always the right call.
A huge amount of what man pages don't tell you is general Unix philosophy. Man pages don't tell you the lore around the thing, they describe the implementation. It's up to you to infer the consequences and how it can be used. Man pages don't hide the underlying bones of the system either, and generally assume you're comfortable writing a C program to test a syscall if you're not sure about something. So sure, reading man pages isn't always easy, but neither is getting regular exercise or eating your vegetables.
By the way: I might be a Linux elitist, but nobody's ever called me that. I use OS X in my day job. I've been programming in some capacity for about 30 years. Your mileage may vary. If you take my advice you may hate me for it, but it will almost certainly make you a better developer in the long run. And I don't mind if you think I'm an asshole. I know I'm right, and you probably know I'm right too.
Finally, if we weren't having this interaction in public, I'd be kinder, gentler, and might not say anything at all. But if one other person who hates reading difficult technical material is pushed to overcome that limitation by reading this thread, it's worth it, even if it means making you mad.
And just in case you're still reading, in addition to reading man pages, and taking notes, here are the other things you should be doing:
Preferring textbooks over blog posts and youtube videos to learn a new field.
Reading original research by pioneers in the field, like Turing, Shannon, etc, rather than their main findings rehashed by lesser minds.
Reading source code rather than jumping from documentation to Stack Overflow when something doesn't work.
Reading Knuth on algorithms, Stevens on Unix networking, etc. In other words, read the classics that everyone says you should read but most people don't. Work through the exercises. It's the closest thing to an actual superpower.
Look around for people better than you at all this stuff to help you improve.
It's one of those things that, after almost 20 years, I should remember. But it just does not go in.
It becomes interesting when junior programmers are watching how I do something and I end up googl'ing things that they know.
I use the excuse that it fees up more space for other more interesting stuff, a bit like some execs just ware T-shirt and jeans to reduce the number of things to distract them in the morning so that can concentrate on the important things.
I see you take a similar approach to spelling
ln, cp, mv, most everything will always follow this pattern.
$ tldr ln
Creates links to files and directories.
- Create a symbolic link to a file or directory:
ln -s path/to/file_or_directory path/to/symlink
- Overwrite an existing symbolic link to point to a different file:
ln -sf path/to/new_file path/to/symlink
- Create a hard link to a file:
ln path/to/file path/to/hardlink
mv <current...> <destination>
cp <current...> <destination>
scp <current...> <destination>
mount <current> <destination>
All I can think of that breaks the rule is `rm`, `unlink`, and `umount`. But they're hardly gotchas, and they're not reversing arguments; they just don't have a 'destination'.
You can also do that with cp and mv if you need to, using `-t`. `cp -t .dotfiles/ .nanorc .bash*`. Generally more useful when you want to move a bunch of files around.
Zip does indeed break the 'rule', but this is not an excuse; cp and mv manage just fine with:
in in in in [...] out
Right now my main battle is deciding whether to port docs like this into asciidoc(tor) or keep them in org, both being exported to html5 eventually.
Usage: ln [OPTIONS] TARGET... LINK|DIR
Create a link LINK or DIR/TARGET to the specified TARGET(s)
I'm honestly surprised about the help confusion though:
Usage: ln [OPTION]... [-T] TARGET LINK_NAME (1st form)
For GNU ln at least I don't find it confusing at all particularly considering the only other option is LINK_NAME. I guess YMMV though and it seems kinda pointless to argue whether it is or is not confusing. Perhaps a poll could quantify it.
ln -s /the/real/thing
ln -s physical virtual
I never forgot it and can see it visually in my mind. You need something physical before you can virtualise it.
$ mv existing_path new_path
$ cp existing_path new_path
$ ln existing_path new_path
ln -s src dst
Real is always better than fake, real comes first.
Also, an obligatory xkcd: https://xkcd.com/1168/
This is not to criticize - we all have places our mental models break down unexpectedly. I'm just interested in how that's happening.
This has to exist... but that doesn’t.
Read it somewhere long time ago, never had to look for it again :)
It comes in the form of "go ahead and install this thing, and throw this config value into it, and it should just work". And my reaction is "I want to read up on that thing first. I want to know what stuff it writes to my computer and where and what paradigms it uses. I want to think about how it will best integrate with the tools and workflows we have already. And then once I've done that, I'll probably be able to move forward with it comfortably. Then I'll want to document it so it becomes part of the regular setup others do when they onboard onto the project."
Just this past week when I said a version of this, I got a reaction like there is something wrong with me.
Here is an actual single search progression for me (in reverse order because copy/pasta :shrug:):
react context optimize rerender "props.children"
react context optimize rerender
react usereducer dispatch async
react usereducer dispatch api
optimize usecontext react
usereducer rerender usecontext
when to use usereducer
react hooks usecontext and usereducer
This is a fairly simple search, too - no use of negative search terms and minimal use of phrase matching. I didn't see any of those enhancements used in the OP which seems odd.
edit - formatting
Example: site:news.ycombinator.com -inurl:item google operators
In the above example, I would probably filter out results having to do with other react hooks so adding `-useState` would help accomplish that. If I am googling specific syntax or an error log, then wrapping it in quotes will do a phrase match and filter out results that don't contain the phrase.
I don't use the `site:` search pattern because just adding `github` does a good enough job.
Another pattern I use is `filetype:pdf`
:thinking: and lastly, I nearly always use the tools to filter by results within the past year. That does an okay job of filtering out older documentation, tutorials, articles, etc.
Edit: also `site:stackoverflow.com ruby "Hash#dig"` if you wanted an exact match, etc..
Verbatim phrase match: `"some exact phrase"`
Verbatim word negation: `-word`
Verbatim phrase negation: `-"some exact phrase"`
The author states: "What I’m trying to show with all this is that you can do something 100 times but still not remember how to do it off the top of your head."
My experience differs from this; if I were to rewrite it, I would say something like:
"You can do something 100 times, and as long as you can look it up somewhere, it is okay to not memorize how to do it."
You have to evaluate the impact on your flow of stopping to look something up, and you have to decide what you consider your 'base' skill set to be when evaluating whether you should memorize something.
Before Google, the canonical case here was arithmetic. Who needs to memorize multiplication tables if you have a calculator handy to do simple arithmetic? It turns out that if you cannot do basic arithmetic in your head, you are always going to be at a disadvantage with respect to someone who can.
I have found a reasonable compromise, when I Google something like this, I write down the solution in an Evernote notebook that I keep for such things. So if the same question comes up I can always find the answer and don't have to have either the web page or Google around to get to the answer.
 And as a "magic trick" you can hand a cashier what appears to them to be an odd amount of money, only to have them discover when they enter it into their register the change is a minimal number of coins/bills.
We used to memorise information; now we memorise meta-information - a mental map of concepts and trigger keywords. We have learned to quickly grok new concepts, and we still have to understand how things work in order to do anything.
I definitely felt proud of myself for passing, especially when the other 6 senior linux admins all failed because they were too arrogant to study or prepare.
Or Feynman on the need for mathematical fluency to do physics:
> What we have to do is to learn to differentiate like we know how much is 3 and 5, or how much is 5 times 7, because that kind of work is involved so often that it’s good not to be confounded by it. When you write something down, you should be able to immediately differentiate it without even thinking about it, and without making any mistakes. You’ll find you need to do this operation all the time—not only in physics, but in all the sciences. Therefore differentiation is like the arithmetic you had to learn before you could learn algebra.
> Incidentally, the same goes for algebra: there’s a lot of algebra. We are assuming that you can do algebra in your sleep, upside down, without making a mistake. We know it isn’t true, so you should also practice algebra: write yourself a lot of expressions, practice them, and don’t make any errors.
EDIT: also want to add that using documentation and external resources is totally valid, as the grandparent comment states. There's just a balance to be had between relying on Google versus what you can draw from your mind quickly. Also think it's worthwhile to note that there is interesting work to be done in making documentation systems better and more integrated into our runtimes, see for example: https://www.geoffreylitt.com/margin-notes/
I've been exploring building a tool which is sort of a "universal search bar" for programming questions: finding answers via google, searching local repos, finding code snippets you've saved, etc. Another way to think of it is like an extension of your memory.
The idea is to make the information super fast to retrieve so you don't break flow while you're programming.
For anyone who's curious, I'm collecting interest here: https://forms.gle/8q9nPUc22t6WDFsG7
I also looked into freelance jobs lately, and it's pretty crazy there too, especially with JS's frantic mutations. You'd mostly need a handful of frameworks and a whole load of specialized microsolutions, but still: last year everyone was using Ionic, now React Native is everywhere. Web APIs are also evolving nonstop. Chill out for some months, and you're lagging behind the others.
And that's not even beginning to mention compiling other people's C programs under MacOS. Pretty sure that ‘The voodoo of satisfying the compiler’ would be a book of respectable thickness without ever getting into programming proper.
I know gradle, and pretty well at that. I had to learn the basics of maven to develop a Jenkins plugin (at work). Developing jenkins plugins for a long time required using maven. Now there's the option for using gradle, but my recollection was that it was a sub-par experience.
I recommend this for MDN, too, e.g. I can type,
mdn map type
I need to add one for crates.io & the Rust STL. Note, of course, that this sends your searches to Google by virtue of using the "I'm Feeling Lucky" functionality.
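For anyone wanting to replicate this, a keyword bookmark along these lines is one way to wire it up (%s is the browser's placeholder for whatever you type after the keyword, and btnI is the query parameter Google has historically used for I'm Feeling Lucky):

    https://www.google.com/search?btnI=1&q=mdn+%s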
It's a very barebones MVP at the moment, but you can expect more features very soon. Open to feedback!
Perhaps this is a reference to the interviewing process. Tests are a different thing, though. I’ve been on practical technical screens where you can google, because the test was about building a larger system, and they don’t care if you don’t remember this or that API syntax. But at the same time I get if you’re trying to assess someone’s aptitude for devising novel algorithms, sometimes a closed book test makes sense.
I think there are many devs (myself included earlier in my career) that push back on the algorithm tests because, well, it’s really hard. How often do you have to derive novel algos? A lot of our work is kind of UI frontends to DBs or gluing things, and most of the sort of hard, scalable algorithm problems have already been implemented by someone else in a library. So why go through 6 months of really hard study, just to get that kind of job?
Well, I think there should be “maker” roles where you can bypass the hard CS stuff and just crank code, if you have an impressive portfolio of work. But having a deep understanding of data structures (not just arrays and maps) and algorithms really gives you a mastery of your craft, especially around performance and scalability.
Nobody should be screened out because they didn’t remember a particular bit of syntax though. (But it should be noted most algo tests don’t require anything but the language basics.)
I've met lots of younger devs who have varying degrees of impostor syndrome, so anecdotes like this probably help people like that feel less bad about not knowing everything.
And from what I've seen, those who don't google consistently also fail to understand the answers they find when they do google things. It is, in fact, an essential skill.
I’ve heard some government contractors can’t google because they work on non-internet connected computers. They probably keep a lot of books.
And just making the decision to type things into Google is far from the ability to search effectively. Increasingly search engines will converge on a bad answer for a given question because it's the most popular. You just cannot assume that the correct solution is going to be prominent, because once another one has critical mass, it cannot be dethroned. The result that sounds just barely plausible enough to fool the average person making the search (not the average programmer) wins, and often it's terribly wrong.
I don't think he has books, but he has an enormous amount of documentation (something like the whole MSDN library, a bunch of internal documentation and reference info, and other documentation) installed on his computer.
I definitely had a lot of books, but actually the vast majority of my information came from man pages, RFCs and specs. I actually learned C++ by reading the spec -- I've never read any C++ book in my life, even though I was a professional C++ programmer for at least 10 years. For learning the STL, I did it by reading the source code!
Probably the biggest thing I miss in the age of Google is that offline documentation is getting hard to find. One of the reasons I chose Rust for my latest side project is that it's extremely easy to install offline information for virtually everything.
I like doing web searches and Stack Overflow is really stupendously awesome. Especially when I'm working in a new language or framework, I can find idiomatic solutions to problems without having to read through thousands of lines of source code. However, I do think that newer programmers today are missing out when they don't learn how to search primary sources to get their answers.
If you have a question about how the language works, it really does pay off to look at the spec. In doing so, you learn about things in a larger context. I'm always grateful for SO answers that have links to primary sources like specs in their answers, but I've noticed that my younger colleagues almost never click on those links. When they have the answer they want, they are back to their own code. Often they miss important nuances because they are too focused on getting the answer.
Similarly, I have found that there is often a huge reluctance to read source code. A colleague once joked that he used dependencies so that he didn't have to read the source code. It's funny because it's true ;-) However, if you at least try to answer your questions first by looking at the source code for a dependency, then it will tell you a lot about that dependency (mostly: OMG! We need to ditch this ASAP! ;-) )
I used to have a collection of hundreds of RFCs on my computer. Really, anything that is about the internet has an RFC (or did back in the day, anyway). These days, even I'm lazy and don't bother to maintain offline collections of this information. However, it is worrying that I often run into developers who build very complex internet applications and don't even know what an RFC is, let alone use them to learn how they should build their systems. They will have very ingrained opinions on how to build things, but they have no idea how the HTTP protocol works, for example. Discussions on the topic of how to design something usually result in snippets of blog posts obtained through Google searches, rather than pointing to the relevant RFC and saying, "It works like X so we need to do Y". Negotiation of how to proceed often involves considerable heated discussion about which notable internet pundit we should trust the most.
Yeah, this is an "old programmer rant" ;-) However, if I could wave a magic wand and get younger programmers to all add something to their arsenal of tricks it would be to read primary sources early and often.
I remember that. I think it was a combination of there not being good reference websites in some cases, not good question sites in others, and search engines generally being both less good and having less of those sites to index.
I specifically remember finding sites that were goldmines for certain topics, and hoarding them in my bookmark list of useful resources.
That being said, Google is also a good search engine, so it’s difficult to compete with them regardless of personalized results.
What "Google getting better" actually means is that it is more likely to return what most people want as the first page hits. That is a tradeoff, because it obscures everything else whenever what most people want is wrong.
As a programmer, you're only going to be doing searches for things that are non-obvious, so there is a vastly higher chance of Google giving you the wrong answer than for the average search.
There is a fundamental contradiction between a search engine that tries to show you only what it thinks you want, and the fact that you don't know exactly what you want when you search, until you see some results.
I think there's something like a conservation law, that the more you make it easy to find some things on the Internet, the harder you make it to find others. Kind of like the proof that you can't compress all strings.
As far as I can tell, just about every search engine uses results from Google directly or indirectly by using a source like Bing that in turn gets some of their results from Google. If you won't call something a search engine unless it doesn't use anyone else's search results, you might have a hard time naming more than one. You can't escape Google.
Really enjoy taking advantage of it and reassuring the younger kids (only two or three years younger than me) that they're more than capable of this shit. And it helps me be more confident when I'm at work - at school they see me as the guy with all the answers (ha, I wish - but it's a good reminder that we're all our own worst critics).
For example, if your solution involved generating all permutations of a sequence, and you didn't remember how to do that efficiently, I would let you make up a black box function that just does it. We might come back to this later in the interview, but I'm generally uninterested in whether you memorized an algorithm for it or remembered how to call the standard library function that does it.
That said, if you make up the black box function, I might dig into why you designed the function signature the way you did, and I'd do that to see whether you can talk about code design from a "other people will read and use and debug your code" perspective.
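For instance, the made-up black box might be nothing more than a stub like this (names hypothetical; in Python you'd normally just reach for itertools.permutations):

    from itertools import permutations

    def all_orderings(seq):
        """Interview black box: yields every ordering of seq."""
        return permutations(seq)  # on the whiteboard, "assume this exists"

The signature conversation then writes itself: why accept any sequence and return a lazy iterator rather than materializing a list, given that n! orderings get big fast?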
But sometimes, the interview question is generating all the permutations :(
In any case, there's a lot of strategies to show that you understand a concept without relying on exact memorization. For example, a common answer I give if I can't remember something might be something like: "The language has a sort function that can accept a list, I don't remember the exact API or the underlying implementation but let's assume it's n log(n) as that's a common runtime complexity for sorting--I'm going to define that as 'sort(listArg)'." I don't remember the exact function, so I'll just define and explain how it might work myself. If you're expected to be able to compile the code you're writing, simply ask the interviewer and explain the API you're looking for, I've had great success with that as well and prefer when an interviewee asks me over staring at the screen or board.
If it's a problem where you're expected to produce some algorithm, if you can't come up with any solution, explain to your interviewer what you're stuck on. They may provide a hint that will get you rolling. I had this happen during an interview at Microsoft (spoiler, I got the job) where I forgot how to determine the length of the hypotenuse of a triangle haha. The interviewer wrote the formula on the board and we moved on. Interviewer later told me I seemed nervous and he chalked it up to a brain fart--good call, I was really nervous! The point of that problem wasn't to see if I knew the Pythagorean theorem--it was just a small piece of the puzzle I was stumped on. Similarly, start from a simple, naive solution, and iteratively optimize rather than trying to recall a perfectly optimal solution. If time is running low, explain your intended optimization or ones you think might be meaningful, making note of any uncertainties in that explanation.
I've worked at Microsoft, Amazon, some other notables and interviewed at many others. I am speaking from that experience. Others may have better ideas. Of course, not all places approach this in a reasonable fashion and some are just missing the point entirely with their interview and they do expect you to somehow memorize everything.
The biggest problem with whiteboards is that they reward exactly what they say they try to weed out, rote memorization. It's no secret you could spend weeks practicing on leetcode for an interview but those skills simply don't translate to real-world programming. Algorithm design is maybe 2% of software engineering. The vast majority is knowing what pieces of technology are out there in order to avoid reinventing the wheel all the time. Whiteboards don't test that skill. It tests a marathon runner on their 100 meter dash time.
This 100% happened to me (apart from being laughed at) only it was even more of a meltdown. Also 10 years experience and a problem I could do in my sleep. It was my first interview after a long time and it triggered performance anxiety and nervousness.
The solution is you need to practice live interviews, not just leetcode solo.
You could argue it's stupid and colleges should only look at personal essays and GPAs and extracurriculars. But then there's problems with that too (lack of standardization for one).
You could argue that high schools shouldn't teach trig or calc because they're largely irrelevant to most tasks in most fields and "what does that say about the system". But actually they do come into play once in a while and lay a conceptual foundation, so they're still sort of useful to learn. And thus they form a good common foundation to test aptitude inside an hour or so, or at least no one's thought of a better one.
The truth is, software engineering really is quite hard, but whiteboard interviews just happen to be a rather poor way of evaluating candidates.
A work sample test, structured interview, or IQ test (alone or in combination) would be a much better filtering process.
I'm not sure what any of these mean in concrete terms. Is an IQ test like those "why are manhole covers round" Microsoft questions of yore? Yuck. Is a work sample a homework test? That takes more time for both parties. It's not a good substitute for a 1 hour first round screen. And what's a "structured" interview??
Now I'm not saying the situation isn't complicated. In the mid-20th century, IQ testing was a great driver of social mobility. However, as was pointed out in _The Bell Curve_, it's becoming the opposite. From what I understand, the heritability of IQ and assortative mating (i.e. people marrying other people of similar IQ/educational attainment) are the main drivers of this.
I don't think the formation of an entrenched class system is necessarily a good thing, but I think in the local context of our industry there's still a lot of good that could be done by improving our hiring process. I mean, would you rather hire people who only sound smart, or who are smart?
If you have more questions after reading that, then ask away.
I think there's a serious disdain for interviewing in general. The whole process sucks from end to end. Often it's because the interviewers just aren't good at it. But when it comes down to whiteboard interviewing, I think it's about your value system.
More specifically, I think every interviewer has an opinion on the practical <--> theoretical spectrum, and the problem is when the candidate disagrees with the interviewer's opinion on where on the spectrum the interview should be.
The upside of a practical interview is that you are testing whether they can do the work. The downside of a practical interview is that it indexes heavily on experience and lightly on the ability to cross disciplines. Theoretical interviews are the reverse.
So the important question is: what does your team care about? Google will care much more about the ability for their engineers to move around than your startup will. In general large companies will care much more about portability. They have deeper pockets and are therefore more willing to train you on the job, so generally speaking if they think you can ramp up (even for senior hires), they're happy. As much as startups will say they see things this way, it's not really true. Startups in general care that their senior hires come with substantial domain expertise.
It's a whiteboard, I'm looking out for far more important things than whether the method is named `size` or `length`.
But I do think that Computer Science Jeopardy on a whiteboard is a bad idea.
A shitty interviewer will follow a rigid script and "ding" you on trivial errors.
I see it at least a couple times a week on sub-reddits related to software development. Junior devs who are convinced more senior developers are super-geniuses who know everything from memory.
I try to dissuade this kind of thinking when I see it: I like answering people's questions, and I take pride in knowing a lot of stuff. But there's way more stuff that I don't know than that I know. And I might need these folks' help with something I'm unfamiliar with, so I don't want them thinking they're "below" me and there's no way they could know something I don't. I'm sure they do!
Consider this: many of the most novel algorithms were published without ever being run in actual software or as part of a larger system.
Most screens also include a systems design test. Maybe you could argue all we ever should do is system design tests. But they don’t always involve coding.
Yes the whole interview process is based on the idea of doing stuff for the company that you wouldn't actually do.
Indeed, I interview my candidates' ability to find resources, to know what considerations are needed, how to deal with IDEs, collaborative tools, the git protocol. I will fire someone on the spot for implementing an academic algorithm if there was a library, potentially a peer-reviewed one, they should have used instead.
The process for successful interviewing at places that use whiteboard testing for interviewing involves digging in and putting in a lot of hard work studying previously unknown concepts-- work that most people don't consider very fun.
To me, this sounds like exactly the kind of thing that a person will need to do to be successful in their first few months in a big company. Learning about their particular systems, authentication and authorization methods, security audit procedures, deployment methods, how they do dependency injection, logging, diagnostics, troubleshooting...
I find this very ironic: Hiring for suitability to handle edge cases is also a brute force approach that doesn't scale.
A good algorithm optimizes for expected inputs and farms out cache misses and unexpected behavior to some other system.
Overfitting for the possibility that it might be necessary in a design choice is not the way to gatekeep candidates.
The problem is that people naturally blame themselves for not knowing everything rather than accepting that we have built monuments of trash code that nobody should be expected to understand and that therefore we should stop doing that and stop using that.
Ideally, I'd be able to have tasks that are highly specified in advance and with priorities that are stable enough that I do not have to have multiple social interruptions per day in order to be on the correct task.
Basically, I wish I could achieve productive software development from an isolated cabin without an internet connection to provide distractions, with lots of subtle natural ambiance and my own thoughts instead of interruptive social interactions, and synchronize my work product with society on the same cadence with which I would travel to town to get groceries.
Obviously, this is not the situation.
This is how you can get into Csikszentmihalyi's "flow", if you add a little cognitive work and struggle to it, so it's not just typing in the solution.
> Ideally, I'd be able to have tasks that are highly specified in advance and with priorities that are stable enough that I do not have to have multiple social interruptions per day in order to be on the correct task.
Some tasks are like that, some business opportunities are amenable to that approach, some teams work that way, and many more aren't and don't. But yes, not spinning your wheels is a prerequisite to getting maximally engaged in your work and getting good results. Sometimes that requires a lot of interruptions of the actual work, because the work is rarely well specified in advance. If it was, it probably would not be interesting because it would already have been done.
> Obviously, this is not the situation.
It could be the situation, and there is work that needs to be done in that way. In fact, this work tends to have high value, including business value, precisely because our dominant working style prevents almost all of us from achieving it. We can get closer by mastering our tools and by choosing tools that afford mastery in their use.
I would call these "user" roles. And perhaps there is a space for a "professional software user" career as we continue building software so unnecessarily complex that only a professional can use it.
Can you give an example?
I.e. you have a search field and want to show the possible matches as you type. With each new char you’re calling this search and getting back a list of words. The dictionary is in memory in whatever data structure you choose. You only have to return matches if the search is 3 or more chars long.
Now, how does your implementation scale when the dictionary has a million words?
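For the curious, here's a minimal sketch of the kind of answer this question is fishing for, using a trie (names and details are my own illustration, not a reference solution):

    class Node:
        def __init__(self):
            self.children = {}    # char -> Node
            self.is_word = False

    def build_trie(words):
        root = Node()
        for w in words:
            n = root
            for ch in w:
                n = n.children.setdefault(ch, Node())
            n.is_word = True
        return root

    def matches(root, prefix, limit=10):
        if len(prefix) < 3:       # per the problem statement
            return []
        n = root
        for ch in prefix:         # walk down: O(len(prefix))
            if ch not in n.children:
                return []
            n = n.children[ch]
        out, stack = [], [(n, prefix)]
        while stack and len(out) < limit:
            n, word = stack.pop()
            if n.is_word:
                out.append(word)
            stack.extend((child, word + ch) for ch, child in n.children.items())
        return out

Each keystroke then costs a walk proportional to the prefix length plus the handful of results you actually display, not a scan of the million-word dictionary.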
But my question is: why would you want to roll your own here? Wouldn’t the appropriate move be to look for solutions for your specific use case that’ve already been made? I.e. if you’re talking web, I see no reason why a standard database index wouldn’t work. MySQL’s index, a b-tree, is perfectly serviceable for the question, no? I’m having trouble imagining a situation where an out-of-the-box solution wouldn’t work...
I don’t think I have a deep understanding of efficient data structures. I couldn’t re-implement a red black tree for example without looking it up. I wonder: is it enough to have a cursory understanding of things like runtime complexity, space complexity; understand the different basic data structures and how they have trade offs that can help or hinder different operations, and how that connects to real world work. Especially how different actual solutions use different data structures internally. What I don’t understand is the value from going from cursory understanding of data structures to deep understanding.
Of course that doesn't mean you shouldn't know how advanced data structures work and be able to work with them, it just means you shouldn't reach for say, a fibonacci heap before you just use an array.
We don't disagree. Understanding data structures is about reaching for the right tool, and very often the hammer of an array/map is the right tool. But the problem I gave isn't that exotic. It's a case where "reaching for an array" is a brute force solution that won't scale.
So how do you separate candidates who understand those limitations from those that don't? By asking about those cases. They're not that common, but that doesn't mean they never come up or you won't get a nasty bug if you don't understand these foundations.
If they immediately jump to using a fancy data structure like a suffix array, but they don't ask any questions about how critical performance and scaling are, then it shows they know a lot of stuff about CS, but they may be lacking in more practical experience.
If they tell you they would just use an array or a map (which would be extremely inefficient with large amounts of data), then you ask a follow-up question about scaling, and see how they respond. If they can't answer that question, then they lack practical experience and fundamental knowledge of advanced data structures.
Do you agree with that approach?
I don't know if I agree that using a trie at the outset reflects poorly on a candidate. They might see where you are going with the question, even though "you didn't mention scale." Good candidates are going to think about scale at least a little bit. That's not necessarily "lacking practical experience." It's not like basic tries or linked lists are super complex "fancy" data structures.
I think the key is for both parties to communicate thought process. If you're concerned about overengineering then prompt them to explain why they didn't choose an array.
I have done what you could call deriving novel algorithms, and in a commercial context no less. What is or is not an algorithm is not very well defined. There was a famous paper which defined an algorithm as "logic + control", which could really describe any code that we write.
I think what the experience taught me (where a CS education might help) was how to think algorithmically. The point of algorithmic thinking and algorithmic code is to be able to prove (formally or informally) that certain undesirable states are impossible. Algorithms focused around performance have proofs that unnecessary computations are not performed, cryptographic algorithms have proofs that secrets are not revealed during the computation, concurrency algorithms prove that data races do not occur, and so on.
It's actually quite easy to apply this sort of thinking to highly problem-specific code. The more specific the code is, the stronger the guarantees you can prove.
What becomes more difficult is writing code that is both generic and able to make interesting guarantees about the state space of its execution. It's that kind of code (generic with strong guarantees) that people typically mean by "algorithms".
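A toy illustration of that style of thinking (my own, not from the paper): binary search with its loop invariant written next to the control, so that "looking where the target cannot be" is a provably impossible state:

    def binary_search(a, x):
        """Return an index of x in sorted list a, or -1."""
        lo, hi = 0, len(a) - 1
        # Invariant: if x is in a at all, its index lies in [lo, hi].
        while lo <= hi:
            mid = (lo + hi) // 2
            if a[mid] < x:
                lo = mid + 1   # everything at or below mid is < x
            elif a[mid] > x:
                hi = mid - 1   # everything at or above mid is > x
            else:
                return mid
        return -1              # invariant plus empty range => x is absent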
Not really as bad as this: https://xkcd.com/664/
Screening and interview selection is driven by need. If one needs a subject matter expert in a specific area, it does make sense to test the person's familiarity with the subject without googling.
It also helps to test for the Dunning-Kruger effect.
But it is wrong to assume that googling is bad. Being able to search, assimilate, and effectively use information (by book or by Google) is also a very important skill in itself.
I feel a test to figure out whether the person has understood some common data structures and algorithms is good for seeing how they approach and solve problems, since programming is still inherently logical puzzle solving.
This has been the opposite of my experience. Earlier in my career there was a lot more "sit and think about how to assemble these basic building blocks." Now it's a lot more "man, it would be nice if complex object A could play well with complex object B, I wonder how to make that work and what the pitfalls are." I'm googling different things than I used to, sure, but pretty much any task I'm tackling at this point involves a fair amount of googling for docs, github issues, stack overflow, etc.
A technician at CERN maintaining the accelerators is probably pretty damn smart, they probably also understand some particle physics. They may not be the ones driving/designing the experiments and they probably aren't getting their names on those physics papers but that doesn't mean they didn't contribute any value...
Which isn't the difference between doctor and nurse, more like an MD and PhD in medicine, one focused on practice, the other research.
In school terms it would be more like a technician or vocational training around coding for particular types of projects.
Computer Scientists - Come up with the foundational algorithms etc.
Software Engineers - Translate and map those foundational algorithms into more readily available tooling, occasionally directly building a complex system using engineering principles.
Software Developers - Use the tooling created by software engineers to create user friendly solutions for the masses, aka developing apps or websites.
Of course, no manager reads The Mythical Man Month, and those that do don’t follow it.
I've literally never met a manager that practiced this. Is the book still relevant? Not disagreeing, just curious.
Here's quite a famous example: https://www.statista.com/statistics/272140/employees-of-twit... It shows a graph of the number of employees at Twitter over time. In Jan '08, there are 8. In Jan '09 there are 29. In Jan '10 there are 130. In Jan '11 there are 350. In Dec '13 there are 2712.
Although a lot of that staff are going to be sales and marketing (and probably the growth is justified), the development team is also growing exponentially during that time. The Mythical Man Month would say that probably they are spending a lot more money than they need to to get the growth they needed.
I used to work for a manufacturer of telephony equipment. On one of the products I worked on they had 5000 developers! (A single piece of code!!!). The average amount of code deployed was 1 line of code per day per developer. As much as we can argue that KLOC is a bad measure of productivity, if your average developer is only producing considerably less than 1 KLOC in an entire year, you know you have really, terrible, terrible problems. One of the questions you might want to ask is, if you want to write about 5000 lines of code a day, where is the sweet spot, actually? I think we can agree it's not at 5000 programmers. However, it's often really, really difficult to talk to non-technical management and get them to understand that more programmers does not usually equal more productivity.
I could regale you with literally hundreds more examples, but I think it's sufficient to say that, yes, the Mythical Man Month is still really relevant these days.
The book is worthwhile, not just for the central lesson, but for WHY (TL;DR: communication overhead between individuals increases as you add "nodes" - interestingly, you can make analogies to L2 cache expiration problems and others), and also for a nice view of the IBM of the past. And it was written in an era when there wasn't the need to make every book 300 pages long, so it's generally a lot more meat/page, though hardly perfect.
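The back-of-the-envelope version of that overhead: n people have n(n-1)/2 possible communication channels, so the Twitter headcounts above go from 8 people = 28 pairs, to 130 people = 8,385 pairs, to 2,712 people = 3,676,116 pairs. Output doesn't grow anywhere near that fast.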
Yeah it's a pretty universal idea.
> Is the book still relevant?
Yes, precisely because that idea is so universal. Its advice has been relevant, insightful, and mostly ignored ever since it was written.