
Ask HN: Should I prioritize learning algorithms and data structures? - scobar
I intend to eventually formally learn CS algorithms and data structures. So far, I haven't run into a situation where not knowing them has catastrophically hindered the viability of my code. I understand that I don't know what I don't know. [1] I often see comments along the lines of, "I learned algorithms and data structures while getting my CS degree, but rarely (or never) used them in practice. Without reviewing them, I wouldn't likely pass a technical interview that heavily relies on whiteboarding and CS theory."

Is this most often the case, or is it just the case that is most often verbally expressed? For each professional programmer who doesn't know or rarely uses CS theory in practice, are there one or more who use it regularly? I understand there will be specific positions that rely on CS theory more often, but my questions are meant to cover the general range of professional programmers.

I prioritize the list of topics I wish to learn or improve, as there are too many to learn them all within a given time period. I often push algorithms and data structures down the list, below topics I immediately need to know to complete parts of projects following best practices as well as I'm aware of them. The strongest case I can see for learning or memorizing algorithms and data structures before I encounter a situation that requires me to implement them in practice is to do well in a technical interview. I wonder if I'm really restricting my ability to code better because my thoughts on this are misguided. Should I at least skim through resources on CS theory so that I can better recognize situations where I should utilize it?

[1] http://en.wikipedia.org/wiki/Dunning–Kruger_effect
======
pmiller2
The more you know about algorithms and data structures, the better you can do
in technical interviews. That much is true. The same for things like recursion
and dynamic programming.

In actual fact, nobody uses recursion in real life, and dynamic programming is
of limited use outside of some very niche areas (unless you consider the
generic concept of caching to be a type of dynamic programming). And, rarely
do you need much in the way of data structures beyond lists, arrays, hash
tables, and things that can be constructed out of them.
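To make the "caching as dynamic programming" point concrete, here is a minimal sketch in Python (the function names are mine): the memoized recursive version is textbook top-down dynamic programming, and the iterative version needs nothing beyond two variables.

```python
from functools import lru_cache

# Top-down "dynamic programming" is just recursion plus a cache.
@lru_cache(maxsize=None)
def fib(n: int) -> int:
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# The equivalent bottom-up version: no recursion, no table, two variables.
def fib_iter(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```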

Most notably of the set of "things that can be constructed out of them," you
want to know something about trees and graphs.
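As a quick illustration of how far "constructed out of them" goes, here is a graph built from nothing but a dict of lists, plus a breadth-first traversal, in Python (the example graph is arbitrary):

```python
from collections import deque

# An adjacency-list graph is just a hash table of lists.
graph = {
    "a": ["b", "c"],
    "b": ["d"],
    "c": ["d"],
    "d": [],
}

def bfs_order(graph, start):
    """Breadth-first traversal using only a dict, a set, and a queue."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order
```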

Anything beyond that is something that's going to apply only in certain very
specialized situations. For example, if you're writing a crawler, knowing
about Bloom filters comes in really handy.
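A toy sketch of the Bloom filter idea in Python, for the "have I crawled this URL?" case (a real crawler would use a tuned library implementation; the sizes and hashing scheme here are arbitrary):

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: may report false positives, never false negatives."""

    def __init__(self, size_bits: int = 1024, num_hashes: int = 3):
        self.size = size_bits
        self.k = num_hashes
        self.bits = 0  # an int used as a bit array

    def _positions(self, item: str):
        # Derive k independent bit positions from one cryptographic hash.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item: str) -> bool:
        return all(self.bits & (1 << pos) for pos in self._positions(item))
```

The crawler-friendly property is the asymmetry: if `might_contain` says no, the URL has definitely not been seen, in constant memory regardless of how many URLs were added.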

Things that will actually come in handy are knowing various architectural
patterns. This doesn't mean design patterns _per se_ \-- this would be things
like service oriented architecture, for example. Most of these things you'll
naturally gain through experience, but it can't hurt to read a book or two
about them.

So, as far as actionable advice, I'd pursue at least the equivalent of a
formal data structures class to strengthen your knowledge and give you the
vocabulary to talk about things with other engineers. And, as I mentioned,
background reading about things like architectural patterns won't hurt,
either.

This is just my personal experience as a junior/mid level engineer. Your
mileage may very well vary, and I hope some more senior people have other
things to say.

------
PaulHoule
There are a few parts to it.

Computer science is a dynamic field, so stay away from 40-year-old textbooks
like Knuth's.

Most of the algorithms you will find in older books (binary search, sorting
over total orderings, random number generators) are already built into most
programming languages today. If you are programming in something like Java or
Ruby, your first instinct should be to use the lists, hash tables, and similar
structures that come with your language.
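For example, in Python both classic jobs are already in the standard library; the small helper function is mine:

```python
import bisect

# Sorting over a total ordering: built in.
data = sorted([9, 3, 41, 27, 15])  # [3, 9, 15, 27, 41]

# Binary search: built in via bisect, no need to hand-roll it.
def contains(sorted_list, value):
    """Membership test on a sorted list in O(log n)."""
    i = bisect.bisect_left(sorted_list, value)
    return i < len(sorted_list) and sorted_list[i] == value
```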

As for actual algorithms, I think the theory of parsers and finite state
machines is pretty useful, because with it you can tackle tasks that would
otherwise be difficult.
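A tiny hand-rolled finite state machine in Python, for the toy task of recognizing signed integers like "42" or "-17" (the state names are illustrative; a regex engine compiles down to the same kind of machine):

```python
def is_signed_int(text: str) -> bool:
    """FSM with three states: start -> sign -> digits."""
    state = "start"
    for ch in text:
        if state == "start":
            if ch in "+-":
                state = "sign"
            elif ch.isdigit():
                state = "digits"
            else:
                return False
        elif state == "sign":
            if ch.isdigit():
                state = "digits"
            else:
                return False
        elif state == "digits":
            if not ch.isdigit():
                return False
    # Accept only if we ended having seen at least one digit.
    return state == "digits"
```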

One aspect of the science of data structures and algorithms is understanding
how algorithms scale, and the classic searching and sorting algorithms are a
playground for that.

Another thing is to look at modern families of algorithms that do
counter-intuitive things; here is one book I really like:
[http://ebooks.cambridge.org/ebook.jsf?bid=CBO9780511574931](http://ebooks.cambridge.org/ebook.jsf?bid=CBO9780511574931)

A lot of the stuff in there sounds impossible at first, but it is very
possible.

I also have a soft spot for algorithms that involve partial orderings since
I've found my conventional intuition is not always right here, this includes
transitive closures, dependency resolution, and the binary space partitioning
used in computer graphics.
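The dependency-resolution case is a topological sort over a partial order, and Python has shipped it in the standard library since 3.9 (the package names below are made up):

```python
from graphlib import TopologicalSorter

# Each key depends on the packages in its set; the order among
# incomparable elements (web vs. db) is deliberately unspecified.
deps = {
    "app": {"web", "db"},
    "web": {"core"},
    "db": {"core"},
    "core": set(),
}

# static_order() yields every node after all of its dependencies.
install_order = list(TopologicalSorter(deps).static_order())
```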

------
brudgers
Knowing CS {data structures, famous algorithms, and rudimentary algorithmic
analysis} won't keep someone from doing a job that doesn't require it. It
won't make them worse at it either. On the other hand, not knowing some CS
will rule a person out for some jobs that expect or require it. And of course,
knowledge of this is implied by your post.

What I think is missed is that the person who doesn't know these things either
works only with and for people who don't value them, and/or in jobs where the
limit of what can be accomplished is bounded by brute force. In the long term,
brute force approaches often get automated.

The thing about CS is that the knowledge is much longer lasting than the
tools. Another comment mentioned avoiding Knuth -- queues, stacks, and lists
remain important as do searching and sorting. Knowing about them provides a
tool for making intelligent decisions. Knowing what NP-complete means and what
forms it takes allows intelligent decisions based on recognizing a problem has
that property. Even more practically, it improves the odds of working for and
with people who can recognize it and thereby avoid death marches to failure.

