If you never actually use them during your career, at least it will help you feel more like a programmer.
By whose standard?
Don't conflate what you enjoy about the craft with the ideals of others. If you don't like algorithms, don't write algorithms. Life is too short to begin believing you're a poor programmer because other people say you have to know algorithms to be a programmer. If you are writing programs with a programming language and having fun then the rest of them can jog off.
Unfortunately we tend to self-select towards being overly competitive. Trust me, I put too much emotional stock on my craft. I love programming. And I got caught up too much in what other people thought made a good programmer. The truth is that it's a big field and there's room for all different kinds of programmers.
Just don't think for a moment that you're not a programmer. Someone who claims, "you're not a programmer unless you can implement the Fast Fourier Transform in your sleep!" is a posturing nincompoop. The intellectual equivalent of a peacock. Or a jealous chihuahua. If you like what you're doing, keep doing it.
And if algorithms are something you're interested in there are plenty of books to get you started (The Little Schemer series being my favourite).
Well said. I've also struggled a bit since a lot of my talents are in design. I am one of those "designgeneers" that you sometimes hear about. The downside to being a hybrid coder/designer is that you never really have a deep understanding of stuff like memory and architecture.
So you eventually have to come to the realization that there isn't anything wrong with that, per se. I design great experiences and then I code them. That's where I fill the void.
Luckily, I'm not involved in your banking or any other mission critical applications. For Ruby on Rails/PHP/Python, my knowledge suffices and I can be proficient enough to deliver a great app, or web experience.
Would I ever use an algorithm? This is where I feel like it's important to be a strong programmer as well. Maybe I should be using an algorithm, instead of 40 lines of code that dance around the same conclusion? Maybe. I have no idea because I don't even know where to start.
Never say never. I consider myself a designer and developer, and I feel I have a pretty good grasp of that stuff. Professionally, I am usually typecast as a programmer, so I spend at least some of my time in algorithms.
I was into art/design when I was like five years old, and I became interested in programming by the time I was in high school. That gave me quite a lot of experience in both areas. I do feel the only limiting factor is time and determination.
Of course, if you have no interest in learning those deep theories, that is cool too. Delivering value, to you or others, is ultimately the only thing that matters.
If it genuinely interests you there's nothing stopping you.
Just don't feel like you have to because of what others think.
Algorithms are neat and very fun once you understand them... but I think critical thinking is far more useful. If you spend time just studying algorithms you might end up memorizing them by rote and learning nothing. However, if you develop critical thinking you will be more apt to develop algorithms on your own.
One of my favourite exercises is to ask, "Yes, but how?" While reading through an interesting program I will see a function that perhaps I understand conceptually by the way it is named (e.g. (search a list-of-things)). The trick is not to be satisfied with that: let your curiosity track down the function definition and figure out how it works. See if you can formulate in your mind why it works once you know the how of it. Then see if you can find some literature on the method used -- many algorithms have been discovered and documented by several generations of grad students by now.
Or you can take a more formal approach and start with the maths.
I don't know. If you are struggling with A* search while claiming to be a great programmer, it sounds contradictory. I am assuming the poster was familiar with graphs and graph searches before he stumbled onto A* search.
Recursive/Non-recursive dfs should be pretty much obvious to a comp sci student. Here is a sample non-recursive dfs:
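The linked gist isn't reproduced in this thread; here is a minimal sketch of a standard non-recursive DFS in Python (my reconstruction, not the gist's code), using an explicit stack in place of recursion:

```python
def dfs(graph, start):
    """Iterative depth-first search; returns nodes in visit order."""
    visited = []
    stack = [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.append(node)
        # reversed() so neighbors are explored in their listed order
        for neighbor in reversed(graph.get(node, [])):
            if neighbor not in visited:
                stack.append(neighbor)
    return visited

graph = {'a': ['b', 'c'], 'b': ['d'], 'c': [], 'd': []}
print(dfs(graph, 'a'))  # ['a', 'b', 'd', 'c']
```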
A* search is the same, except you change the $graph to include costs, and change the `queue` implementation to pop based on a heuristic function (cost from source to here + estimated cost to destination).
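To make that change concrete, here is an illustrative A* sketch (my own, not the poster's gist), where a heapq-based priority queue pops on cost-so-far plus a heuristic:

```python
import heapq

def a_star(graph, costs, start, goal, h):
    """graph: node -> neighbors; costs: (u, v) -> edge cost; h: heuristic to goal."""
    # frontier entries: (g + h(node), g, node, path-so-far)
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in seen:
            continue
        seen.add(node)
        for nxt in graph.get(node, []):
            if nxt not in seen:
                ng = g + costs[(node, nxt)]
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None, float('inf')

graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
costs = {('a', 'b'): 1, ('a', 'c'): 4, ('b', 'd'): 5, ('c', 'd'): 1}
# with a zero heuristic this degenerates to Dijkstra's algorithm
print(a_star(graph, costs, 'a', 'd', lambda n: 0))  # (['a', 'c', 'd'], 5)
```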
I am not saying a good computer programmer must know A* search - I am saying that if you are a programmer, your thought process should be molded in such a way that you start finding these things obvious.
I can't think of any programmers whom I personally know, or otherwise, who aren't well versed in algorithms. Familiarity with algorithms serves two purposes:
1. It gives your mind general elasticity and agility.
2. Many times, it gives you foundations on which you build.
I recently wrote this small Python lib which converts XML to a Python dict and vice versa: https://github.com/thoughtnirvana/xmldict. It came naturally to me because I was simply implementing a recursive-descent parser. Had I not known about parsers, I don't think the solution would have come to me so naturally.
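The core idea can be sketched roughly like this (a simplified illustration, not the actual xmldict code; it ignores attributes and repeated tags):

```python
import xml.etree.ElementTree as ET

def element_to_dict(element):
    """Recursively convert an ElementTree element to a nested dict."""
    children = list(element)
    if not children:              # leaf node: just return its text
        return element.text
    return {child.tag: element_to_dict(child) for child in children}

doc = ET.fromstring('<root><a>1</a><b><c>2</c></b></root>')
print(element_to_dict(doc))  # {'a': '1', 'b': {'c': '2'}}
```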
That is just one example. There are other examples involving dynamic programming, counting sort... which translate nicely to real-world use cases, and if you don't know the basic algorithms, you end up badly re-implementing a second-class substitute.
In defence of the poster, I think it depends on how much experience you have solving a certain class of problems, and perhaps more importantly, how you were taught.
I don't recall having studied the A* algorithm before, so I first looked it up on Wikipedia for a few minutes, and then came back to the comments. It's worth saying that 30 seconds of reading your comment and linked gist did more to aid my comprehension than a few minutes looking through the Wikipedia article.
This is the case because you described the algorithm in terms of building blocks I was already familiar with. It's a depth-first search with a weighted queue; as you say, pretty obvious! But it wasn't immediately obvious from the Wikipedia article, because Wikipedia articles are designed to impart information, not necessarily to teach.
If the poster was taught by a professor who was teaching more like the Wikipedia article (and I've known a lot of lecturers who do this!), I can understand the confusion.
I see where you are coming from. I was arguing more on the side of "familiarity with algorithms" is like "hitting the gym" irrespective of what game/sport you play. It either helps you directly, or gives you general skill.
I was mostly taking exception to the conflation of "great programmer" with "no algorithmic knowledge needed".
It's great that you figured out A* search in under 30 seconds, but that's because you had the foundation. As I said, I am not claiming A* search (or any other algorithm) is the baseline, but there is a baseline (I won't bother enumerating algorithms - there is no agreed-upon list), and a good programmer should strive to be above it. Simply putting your hands up and claiming "I can develop large systems fine, hence algorithmic knowledge is not really helpful" is counterproductive.
Again, if it's working fine for the poster, that's great. But the poster might be missing "unknown unknowns."
I agree, but I think there are also multiple foundations, and missing out on one or two doesn't necessarily result in a structural collapse into mediocrity.
It's quite easy to end up with a bad teacher and miss the foundation necessary to understand a class of algorithms. In an ideal world you'd backtrack until you understood that missing foundation, but it's easy to get the impression that a particular area of study is beyond your intellect.
I kinda think that's what's happened here. The poster probably has enough foundations to work with normally, but here and there are missing pillars of knowledge.
That said, search is a pretty huge and important class of algorithms! The fact that he had difficulty understanding A* suggests that a rather large pillar is missing from the foundations of his craft.
On a side note, Wikipedia is a horrible source to learn a new algorithm.
I can't say for A*, but many other algorithms (like Bayes' theorem, FFT) that I tried to look up made little sense to me on Wikipedia, while learning from some well-presented material (be it text, graphics or video) made them easy to comprehend.
Not as concise as the explanation given by irahul (which actually made me get the concept), but your article is very interesting and well written, and it is absolutely worth sharing to a couple friends of mine who would probably not get it with the wikipedia article.
Something I have found is that with learning algorithms a sideways approach is sometimes better if you find yourself struggling.
I initially had some trouble visualizing the process of algorithm development if I simply followed a book and did exercises (self-taught), so I would pick a complicated looking algorithm and just read about it, then think about it for a while, and try to find somewhere it had been used in code and how it solved some problem.
After a while I would go back to the book, it would sink in, and I was able to better understand the mathematical model and intuition. Several such algorithms later, I am able to pick up new algorithms relatively quickly.
> if you are a programmer, your thought process should be molded in a way you start finding these things obvious
> had I not known about parsers, the solution would have come naturally to me
Your use of "obvious" and "come naturally" fit together, and match the template of someone who is naturally more inclined to understand this stuff than the OP might be, or (for that matter) I might be.
But rather than abandon algos as irrelevant, the OP should put more effort in.
I disagree that anyone is "naturally inclined" to be better at algorithms, or coding, or abstract thinking. Rather, I'd approach it as if we each start as a blank slate that picks up techniques and patterns of thinking as, throughout our lives, we're confronted with problems that require a new way of thinking. I'd reckon that someone you see as "naturally inclined" has simply encountered problems before that lend themselves to abstraction in a similar way as the new problems they face (algorithms, architecture, speaking Japanese).
If you believe my paradigm, then you start to realize that not only is your mind extremely flexible, but that by focusing on a varied set of similar problems, you can start developing a framework to pick up more skills faster, and thus develop that "natural inclination" in the field you practice.
Interesting, I had never seen this formulation where DFS and BFS can be expressed by a single algorithm that differs only in the behavior of the "queue." It's still a bit misleading to call it a "queue" since it might be a stack, but I see the intention.
This formulation of DFS (which tracks a stack of nodes to visit) is less efficient than one that tracks a stack of iterators, because the size of your stack is O(vertices) instead of O(graph depth). On the other hand, it's simpler in some ways, so I can see why it would be preferable in some situations.
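The difference can be sketched concretely: a DFS that keeps a stack of neighbor iterators only ever holds one iterator per level of depth (an illustrative Python sketch of my own):

```python
def dfs_iterative(graph, start):
    """DFS tracking a stack of neighbor iterators: stack depth is O(graph depth)."""
    order, seen = [start], {start}
    stack = [iter(graph.get(start, []))]
    while stack:
        node = next(stack[-1], None)
        if node is None:          # current level exhausted: backtrack
            stack.pop()
        elif node not in seen:
            seen.add(node)
            order.append(node)
            stack.append(iter(graph.get(node, [])))
    return order

graph = {'a': ['b', 'c'], 'b': ['d'], 'c': [], 'd': []}
print(dfs_iterative(graph, 'a'))  # ['a', 'b', 'd', 'c']
```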
Many graph search algorithms can be expressed as a generic graph search algorithm where you have a "white set" of unvisited nodes, a "black set" of visited nodes, and a "grey set" of nodes that you've encountered but have yet to visit. The structure of the grey set determines the algorithm:
Queue = BFS
Stack = DFS
Priority queue by distance from start = Dijkstra's algorithm
Priority queue by distance + heuristic = A*
Bounded priority queue by heuristic = Beam search
Priority queue by connecting edge weights = Prim's algorithm
Pointer reversal = Mark & sweep garbage collector
Check on whether it points into from-space = Copying garbage collector.
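A rough Python rendering of the first two rows of that table, where the only moving part is how the grey set pops (my sketch, not from any of the cited implementations):

```python
def graph_search(graph, start, pop):
    """Generic search: `pop` decides which grey node to expand next."""
    grey = [start]      # encountered but not yet visited
    black = []          # visited, in visit order
    while grey:
        node = pop(grey)
        if node in black:
            continue
        black.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in black:
                grey.append(neighbor)
    return black

graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(graph_search(graph, 'a', lambda g: g.pop(0)))  # queue -> BFS: ['a', 'b', 'c', 'd']
print(graph_search(graph, 'a', lambda g: g.pop()))   # stack -> DFS: ['a', 'c', 'd', 'b']
```

Swapping in a heap-based pop keyed on distance (plus an optional heuristic) gives the Dijkstra and A* rows of the same table.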
Oh. I should have guessed "practical ai programming" was actually PAIP (Paradigms of Artificial Intelligence Programming). I haven't read PAIP, but I have read AIMA, and the topic under discussion occurs in AIMA; so I assumed the OP meant AIMA.
I agree, and relevantly to this post it's a deep book on programming where algorithms and algorithm design do appear but aren't really central. They're an essential part of the toolbox but not the focus.
Yeah, that's a nice xmldict you wrote there, except it doesn't do anything good at all. Try to parse this with it: <html>hello</lol>world</html>. So much for your A* search and algorithmic knowledge.
I am no good with algorithms but would know to stop at such an idea as to make my own xmldict parser in that way. I would instead learn about parsing and the algorithms for successful parsing, instead of a half-assed approach.
> Try to parse this with it, <html>hello</lol>world</html>.
Parsing invalid XML with an XML parser throws an error, like it should.
I am using cElementTree for parsing and this is what will happen with your input.
In : import xml.etree.cElementTree as ElementTree
In : ElementTree.XML('<html>hello</lol>world</html>')
ParseError Traceback (most recent call last)
/home/rahul/musings/python/<ipython-input-2-54e782b0af58> in <module>()
----> 1 ElementTree.XML('<html>hello</lol>world</html>')
/home/rahul/musings/python/<string> in XML(text)
ParseError: mismatched tag: line 1, column 13
> make my own xmldict parser in that way
What is "that way" you are talking about? There isn't an xmldict implementation for Python (at least I can't google one), I needed one, so I wrote a recursive descent parser. A recursive descent parser is a popular choice provided your grammar is LL(k) - Python, Perl et al. run on hand-coded recursive-descent parsers. Also, recursive-descent parsers are the easiest to hand-roll.
PS - There is a way to disagree. Yours isn't the right way.
Architecture, in many ways, has trumped algorithms.
Back when I was a 10 year old kid, if you wanted to sort something on a microcomputer you had to write your own sort in BASIC or assembler, or if you were really lucky, FORTH.
Today if you want to sort something you can use UNIX sort or the sort built into languages like Java/PHP/C#/Ruby or you can use ORDER BY in SQL.
The increasing trend is that complex software systems encapsulate algorithms. Declarative languages like SQL are the ultimate -- a good relational database has a collection of internal memory, external memory and possibly parallel algorithms for which it will find the best choice for your particular query and data.
I'm skeptical of the "baby steps" approach that people have advocated, where you do some busywork to figure out how to re-implement the addition operator and such. That's how computer science works, but CS education has a terrible batting average at producing effective developers. I can't stand hanging out on proggit these days because so many people are stuck on sorting algorithms and obsolete content from Knuth's TAOCP.
Personally I look at the algorithms literature the way a Somali pirate looks at a shipping lane. I look specifically for advanced algorithms that solve my problem and that I can't find good implementations of. It can take some time to wrap my head around, say, binary space partitioning or suffix trees, but at least I'm motivated.
I'm always thinking about the choice I have between implementing something new and reuse.
>CS education has a terrible batting average at producing effective developers.
That's likely because "producing effective developers" is not a focus of CS education. It's to make you understand fundamentals of computation, such as complexity and algorithms, as well as give a deeper understanding of computers both on a technical as well as a theoretical level.
These are all skills that can aid in becoming a great developer, but they are only supplementary. I'd say that a better course of education to take if one plans to become a developer is some form of software engineering, but not computer science.
As a dev and CS major I can tell you that schools have gotten pretty good about teaching more than just the difference between merge sort and quicksort. My school offers classes like Databases, OS, and even a web dev class this semester. Schools do a lot to help kids become good developers alongside good programmers.
The problem is that the kids aren't trying to become good developers. How many college students in a class have a GitHub (or some online code repo) that is publicly viewable? How many have a Stack Overflow account and are at least somewhat active on it? What percentage of kids in a class will code a program outside of class just because they want to build something cool?
All these encapsulated algorithms are really useful, until you need something slightly different and hit all kinds of leaky abstractions. Declarative languages are especially bad at this, since you have to sacrifice a chicken and wait for the right phase of the moon to get the automatic optimizer to produce the code that you actually intend to run.
I think reuse wins in the long term because there's a competitive market -- a thousand flowers bloom and the better things float to the top.
SQL, for instance, has benefitted from implementations that vary from Microsoft Access all the way to Oracle. Because of fierce competition, vendors (even open source projects) have had to raise their game.
The same is true for Java frameworks -- if you looked at frameworks like Struts back in 2002 one could come to the conclusion that frameworks were part of the problem, not the solution. Today you can find frameworks that fit your tastes, whatever they are.
You are selling yourself a little short. And unfortunately some of the commenting public here can't see through your self deprecation, and just add insults thinking they're smarter. They're not.
Algorithms, the kind you're talking about, are HARD. All of the algorithms you gave as examples were originally research papers that took months to years to develop. All of the algorithms you cited are algorithms that pretty much EVERYONE gets wrong the first time, no matter how good they are or think they are.
The big question is do you want to write algorithms, instead of, or in addition to, the other kinds of programming? If coding sorts, searches and crazy data structures interests you, then DO IT! It takes practice and patience, and it probably won't end up better than large scale efforts you find in libraries, but it is FUN!
Cryptography in particular is fun because you will definitely get it wrong. Everybody gets it wrong, everyone here claiming they are good algorithmists will definitely royally screw up a crypto implementation, and very, very few people can come up with anything remotely decent on their own. The handful of people that can do it as their life's work.
Life lesson here. Anyone who claims to be great, almost certainly isn't. This is a case in point.
At the risk of offending everyone in this thread, this is written by a mediocre to reasonable programmer with no clue about his limitations, with little idea of what good programming is. He does a lot of things that he has heard about. He is used to being around bad programmers. He's used to finding problems in PHP code all day long.
Having a bad comparison set makes him think that he's better than he is.
But you cannot actually be a good programmer without having a variety of skills. Wonderful, you got your system working, do you understand it? For instance if your system has a performance problem, how do you begin to address that? How do you find problems? How do you track it? How do you fix it? How do you monitor it in the future? This stuff is important, and these are things you really can't do without having a good understanding of algorithms.
Of course understanding algorithms is not enough to be great. But it is necessary.
If you want an actual great programmer, look at Jeff Dean from Google. See http://research.google.com/people/jeff/ for a list of what he has done. He doesn't just write code. He doesn't just put systems together. He doesn't just mentor people. He doesn't just understand algorithms. He doesn't just test stuff. He doesn't just come up with infrastructure.
He does all that. And more. Excellently. (And is reportedly very modest about his abilities.)
Seriously. If "complex algorithms like sorting" is a problem for you, you are not a "great" programmer. If you can't implement a bubble sort in your sleep, you're missing something very fundamental. And if you think that sorting belongs on the same level of algorithmic complexity as compression and cryptography, you're in even more trouble.
What if I forgot how a bubble sort worked because I took algorithms so long ago? Does being able to implement it after seeing pseudo code count as being able to do it in my sleep? Am I a bad programmer because I don't remember how exactly an n^2 sort works?
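For what it's worth, bubble sort is small enough to rebuild from the one idea behind it - repeatedly swap adjacent out-of-order pairs until a full pass makes no swaps (a quick sketch, not anyone's reference implementation):

```python
def bubble_sort(items):
    """Sort a list by repeatedly swapping adjacent out-of-order pairs."""
    items = list(items)              # work on a copy
    n = len(items)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):   # the last i items are already in place
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:              # a clean pass means we're done
            break
    return items

print(bubble_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```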
Don't worry about all these egotistical computer science bigots on HN telling you that you are inferior because you can't wax poetic about the work of the scientists in our field.
Understanding performance and result trade-offs of various sorting algorithms is sometimes useful. But usually not.
Understanding how the nagle algorithm works in TCP is sometimes useful. But usually not.
Understanding how the A* algorithm works is sometimes useful. But usually not.
Understanding how zlib works is sometimes useful. But usually not.
That is the beauty of our field. We can build larger and more complex systems because others have laid the foundation. Actively seek out and learn relevant existing algorithms and libraries when you need them.
All of this stuff may or may not be useful to know. However, it is largely irrelevant when it comes to getting shit done in most bits of software. Sometimes, ignorance is blissfully productive.
It is true that a deep understanding of all the above may never come in handy, but having even a vague knowledge of them is very useful. There are a lot of times I have seen people make things much harder for themselves just because they didn't know something had already been invented.
Having a broad knowledge of the field can be the difference between knowing what to search for and not.
Take a simple example of hash-tables vs. tries. Being able to write a trie or a hash-table without looking it up is perhaps of dubious use. Knowing the relative strengths and weaknesses of each without looking it up is in your words "sometimes useful, but usually not." But when you run into problems with a hash, if you at least know that a trie exists, you can say "oh, let's look it up and see if it solves my problem"
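As a sketch of what that lookup might turn up: a bare-bones trie fits in a few lines (illustrative Python of my own, using nested dicts, with '$' as an end-of-word marker):

```python
def trie_insert(trie, word):
    node = trie
    for ch in word:
        node = node.setdefault(ch, {})
    node['$'] = True                  # mark the end of a complete word

def trie_contains(trie, word):
    node = trie
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return '$' in node

def trie_has_prefix(trie, prefix):
    """Prefix queries are where tries beat hash tables."""
    node = trie
    for ch in prefix:
        if ch not in node:
            return False
        node = node[ch]
    return True

trie = {}
for w in ['car', 'cat']:
    trie_insert(trie, w)
print(trie_contains(trie, 'cat'))    # True
print(trie_contains(trie, 'ca'))     # False ('ca' is only a prefix)
print(trie_has_prefix(trie, 'ca'))   # True
```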
Really there's nothing new here. Scientists have deep knowledge and engineers have broad knowledge. Move along.
Be a great problem solver and don't worry about the rest. A lot of computer scientists know a ton about B-trees, A* search (I got a BS and MS in CS, and have been coding for 10 years, and dunno what the heck this is), quick-sort, heapsort, etc.. but have not produced anything substantial to show for it.
Then there's dudes that know only PHP and have built something that solved a crucial need.
Why would you rather be able to hack something together than to understand the underlying theory? In my opinion that value system is backwards. I think the most respectable skill is algorithms, and innumeracy should be as embarrassing as illiteracy.
Actually, if a person can solve a problem without any previous algorithm theory, it shows they can solve problems naturally. Learning algorithms, if they wanted to, wouldn't be a problem. Too many people claim they are so sharp with algos but can't code for shit.
This article is very timely for me as I have been thinking about this a lot the last few days. I graduated with a degree in computer science, but like the OP, I didn't study as hard as I could have (I would have a different mindset if I was in college now). It's only been about 8 years since college but I don't remember much at all about algorithms and Big O.
I have been programming in Rails for the last 5 years (on the side) and I'm surely not thinking about algorithms while I am programming. I usually am just hacking it together to get it to work, with help from Google, Stack Overflow, Ruby documentation, etc.
In college I feel like I had the mindset of just getting through the classes and graduating instead of actually learning and applying this information. I think it was just hard for me to see a connection between things I was learning (such as algorithms) and real world applications.
I'm definitely going to spend some time revisiting and relearning algorithms.
To me a "good programmer" isn't someone who can recite sorting algorithms or the like. A good programmer is someone who can properly identify the right tools for the job and adequately apply them to the given scenario.
A good programmer knows why one algorithm of a certain classification is right for the job. He/she may not have it memorized well enough to write it down, but they know why one would need to apply it and can then consult notes/Google for its implementation.
Patterns are a good example of this. If someone can conceptualize the problem they're facing clearly enough to know that they need a facade, that signals a good programmer. The guy down the hall who memorized how to build a facade pattern but can't identify when it's actually needed isn't necessarily a good programmer (but not necessarily bad either).
Algorithms are tools. You don't drive a screw with a hammer, and in the same way you shouldn't use just any algorithm for the task in front of you. To be a good programmer you need to know which kind of screwdriver is required.
I could rant on and on about metaphors and the like, but the point is pretty cut and dried. Who cares if you can't slap out algorithms on a whim (well, interviewers care, I guess)? What matters is that you know when you need to apply a depth-first vs. a breadth-first search.
It's a good read for those that think that you need to be a super mathematician and know every algorithm to be an awesome programmer. Programming is much more than algorithms and maths. I think it's terrible that some companies judge programmers by how many algorithms they know, it seems like a bad filter, because you will filter out people like DHH (and a ton of other great programmers).
Companies are free to hire people any way they want, but it has always bothered me when they use those now-popular algorithmic tests in the interview and then determine that there are no qualified programmers. It always leaves me wondering whether there would really be a developer shortage if hiring practices were different.
Just based on anecdotal evidence and personal experience: no, that's not it.
I work in a field where algorithms are not relevant 99% of the time, and neither I nor anyone I know tests for them when hiring. Few of us are Google; most of us just develop non-rocket-science applications. And we can barely find developers who can hack that.
Yep, I'm exactly like you. I also have a CS degree, but after many years of doing web dev I feel stupid. I have recently started buying books that teach algorithmics in a more accessible way.
I am currently reading "Python Algorithms: Mastering Basic Algorithms in the Python Language". I can't recommend it enough. Besides clearly explaining the basics, it helps you learn how to think about transforming naive solutions into efficient algorithms.
Besides that, I find it fun to practice on code tests provided by codility, interviewstreet etc.
This book is great. I would also suggest Bruno Preiss' site, which has his data structures and algorithms book targeting specific languages (one in Ruby, one in Java, one in Python, etc.), available for free. Link: http://www.brpreiss.com/
This is an often pondered question. The only right answer is the aphorism 'know your weaknesses, know your strengths, do.'
The longer answer is that programming covers a broad swath of complexity, and few if any people are masters across that whole area. The closest analogue I know of is cooking. There are literally billions of people who can cook, but they are also all different: some have excellent technique but can't pick spices, some have an intuition for compatible flavours, some have a knack for presentation; it goes on and on. But if you make great meals and everyone loves them and says how tasty and satisfying they are, should you stress about how you've never been able to master making soufflés? No. Recognize that when a soufflé is called for you'll need to do a bit of extra work around that part of the meal, maybe even call in someone for whom soufflés are "easy", and still deliver a great meal.
As many people have alluded to, I think the two are not the same thing. It seems to me that the difference between a programmer and an algorithmist is akin to the difference between a physicist and an engineer. One is focused on solving ideal problems, while the other has to know something about that, but ultimately the day-to-day work is much more about system building and finding a solution that works in the much fuzzier and dirtier real world.
There are many examples of this dichotomy. An architect has to know something about structural mechanics, but ultimately is more about finding a solution that fits within the constraints of structural mechanics that also does what it's supposed to in the real world.
Without a goal or focus it can be hard to find algorithms that have meaning. Writing algorithms without a goal leaves you without focus, and although you may have written some good functions, it'll be harder to know where they can be properly applied without some form of goal or project to work towards.
To me, an appropriate way to learn is to conceptualize a goal. If what you want to start with is a calculator, then make one using only the basic + and - operators, and maybe binary functions if you care to. Modulus, multiplication, power functions, division and others can all be built up from simple uses of + and -.
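For instance, multiplication and powers fall out of repeated addition (a toy sketch of the build-up described above, for non-negative integer exponents):

```python
def multiply(x, y):
    """Multiplication as repeated addition."""
    total = 0
    for _ in range(abs(y)):
        total += x
    return total if y >= 0 else -total

def power(x, n):
    """Non-negative integer powers as repeated multiplication."""
    result = 1
    for _ in range(n):
        result = multiply(result, x)
    return result

print(multiply(7, -3))  # -21
print(power(2, 5))      # 32
```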
For me, a good project was a mobile game that uses a home-built physics engine. Collision detection, "bouncing", gravity and the complexity of sprite drawing/manipulation were enough of a goal to apply the moderately complex trig I learned in school. I now have a portfolio of very diverse and extremely useful sprite functions that I built on my own and can reuse in other projects.
I wrote my own heuristic algorithm for Battleship that I'd argue is the best possible AI without allowing it memory or letting it cheat... it all started with looking at the problem space, determining how simple things I already knew (stats) applied to the problem, and figuring out how to apply them from there.
That reminds me of the classic and occasionally useful multiplication algorithm.
int total = 0;
do {
    if (x & 1) total += y; // if x's last bit is set, add y
    x = x >> 1;            // shift right: divide x by 2
    y = y << 1;            // shift left: multiply y by 2
} while (x > 0);
which I tend to use as an intro to algorithms because it's useful and short, but just complex enough to make you think.
I like that the Project Euler problems get hard fast; it's a great opportunity to fail over and over again, which (at least for me) forces you to think more deeply about the problems. When something is too easy you can fall into the trap of a "good enough" solution, which is often appropriate in the day job, but not necessarily so when you are trying to practice. I love the code golf option on 4clojure for that too, because for so many of the problems I just can't see how people get to the smallest solutions - not that I would want to write code like that for a shipping product.
Project Euler is fun and all that, but it's no real way to learn about all those techniques you need in the first place. A particularly hard problem might set you off learning about stuff in that direction, though. I'm still working on a way to generalize my dynamic programming knowledge enough to solve problem 256 at the moment. Problem 202 was fun, once you realized what the problem was really about.
As a statistician and Operations Research professional I find this somewhat common among programmers. I also find it baffling but understandable. To me the computer and software are just tools for creating algorithms and making better models for decision analysis. I believe a lot of programmers look at the computer as an appendage and go to it first to try to solve problems.
Learning to write good algorithms is like learning a language. It takes work and understanding in small steps. Get a good book. Find a good mentor. It will come to you and you will be better for it.
I share a similar background. In most typical programming, being good at gluing, and above all perseverance, will get the job done. I would probably even gladly trade some of my mostly useless algorithmic knowledge for some more practical experience; what good does it do you to understand the finer points of interior point methods? Even if you have a problem that benefits from their application, in practice you just use an off-the-shelf solver.
The ear player is the perfect analogy, but there's no reason why you can't go and learn music theory after the fact. In fact, it would help you to explain the music to others, and to see patterns. The power of giving a name to a thing is encapsulation and abstraction, so I would propose that an ear-player and an ear-programmer would be more likely to get stuck in the gutter of whatever they're doing and pigeonhole themselves.
Regarding getting stuck in a gutter, in music I find it's exactly the opposite. The musician who has a stockpile of pre-packaged phrases/idioms/licks tends to sound very predictable.
This is especially problematic in jazz, where a premium is placed on improvisation/creativity. There's a real sense in which the very idea of having a pre-packaged phrase is totally antithetical to improvisation.
Also, pre-packaged ideas in music tend to get ingrained at the muscular level. The last thing you want to happen is for your hands to start running the show.
I do realize however that the very best musicians (and programmers!) probably have a huge arsenal of ideas AND deploy them tastefully.
Yes, a person who is great at ear-playing could be made better by being able to analyze why a thing sounds good. The in-the-moment improvisation flow doesn't give you enough time to really think about what you're doing anyway, but before and after, you can strategize. They're two separate thought processes that complement each other, imo, as it is with programming. A lot of times I find I can just flow and write code, but there is much room for thoughtful refactoring after the fact. Experience, theory, named concepts, and systematic analysis give you tools to really improve.
To me, programming has always been about building stuff. This rule governs all decisions I make with respect to software. Programming language? Pick what's suited to the problem. Database? Pick what will be elegant to model your problem.
A programmer need not be a computer scientist. If it brings you joy in building so called CRUD apps, so be it. I know there are some who look down upon that and think that programming should be all about writing beautiful code in Haskell and Lisp.
However, you should know what you want. What others perceive of you is hardly a concern unless it correlates to your own goals.
I'm with the poster. After 8 years of professional experience I'm great at building large, stable systems that work, but I'm not even that interested in complex algorithms.
For me it is a question of cost/benefit analysis. What real value do you get out of being able to write super-mathy algorithms when the solution is often a Google search away? More importantly, programmers get paid to build software that works, not to recite algorithms. (That's a generalization; Google and Amazon probably care much more about in-depth knowledge of algorithms, but I consider them the exception.)
Take the search algorithm example. Don't optimize prematurely. I'll use the simplest linear search I can to get the job done, and if need be, look up a faster algorithm to speed up the search.
This is a real problem in the industry IMO. You do not need to be a good algorithmist to be one of the outstanding programmers in most industries. You just don't. The most important contributor to being a good programmer is excellent interpersonal communication - something that is unfortunately actively filtered out by many interview processes.
Take SOAP for example - that whole "bag of hurt" was created due to a lack of communication between developers (those who created HTTP and those who were developing web-based RPC, i.e. SOAP). The accidental complexity of SOAP versus the more effective REST (leveraging the existing HTTP platform) and the problems that come with the inner-platform effect have cost the industry millions (if not more). I'm sure SOAP had some super smart guys on the team, but it took them years to speak to someone on the HTTP effort earnestly enough to realise they were replicating features of the platform they were building upon.
Or take XHTML vs HTML5 vs HTML 5 (the space before the "5" is important to the W3C, apparently). Just google for discussions about the rift between the W3C and the WHATWG - to me it smacks of "geeks" (and I use that term pejoratively, knowing that I am a geek) not communicating properly.
Finally: on spec, on time and on budget software development is far, far from a solved problem. There are so many gnarly issues in most organisations - code maintainability, code testability, development process, effective test writing, refactoring, release mechanisms, team work, navigating politics... naming, even.
All this, before you worry about whether to use this library (and it usually will be a library) or that because of a slightly different algorithmic implementation.
We need the algorithmists - but we also need the guy who can break bad news to the project office, the guy who can construct beautifully simple object models and the guy who really knows what testable code looks like.
To be fair, SOAP was intentionally being transport-agnostic. It was a bad idea (when other transports were becoming irrelevant if not unusable due to draconian firewalls) but it wasn't an accident. My takeaway from the debacle is that a useful schema language needs to be read and comprehended by humans, not absurdly complex tooling that may or may not ever exist.
This seems to be a case of the Dunning-Kruger effect, where an unskilled worker has the cognitive bias to overestimate their skills. It is easy to write code. It is easy to write code that works. It is hard to write good code.
Couldn't agree more. There is no objective information to suggest he's a great programmer at all. He also seems to toss around the word "great" a lot; I've never heard anyone refer to UT Dallas as a "great university".
Assuming you are familiar with scientific reading, I'd recommend reading papers on algorithms and data structures on a regular basis. There are tons of them, and they normally discuss complexity and shortcomings as well as comparisons to other algorithms. Personally I find it much harder to understand algorithms other people wrote than to come up with my own.
I like to print such papers (normally 4-12 pages) and carry them with me (only one at a time), wherever I go, so I can read a few lines, whenever I have to wait somewhere.
As a programmer, I have been hugely influenced by mathematics. I don't say it in the sense that I use math-heavy concepts when programming, but the practice and experience I gained through mathematics have improved my programming style (a more rigorous approach) as well as my understanding of algorithms.
I've been programming since I was really young, and I skipped university, so most of my knowledge is self-taught. Up until recently, I would have no idea what you were talking about if you said "A*" or "Boyer-Moore".
But then I realized that those algorithms are just basic concepts behind fancy names.
I was recently reading an article from that C++ guy at Facebook, where he talked about the programming challenges they do during interviews. He mentioned that every candidate should at least be able to write a simple version of the 'strstr()' function.
I did some research. I learnt about Boyer-Moore, and I realized that it was actually pretty straightforward and simple. Then I thought about how it might be optimized for tiny alphabets, such as DNA sequences. I came up with some ideas while travelling, and when I got home, I found a research paper that used exactly the same approach.
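For reference, the simple version of strstr that such an interview question asks for can be sketched in a few lines; this is the brute-force baseline that Boyer-Moore then improves on by skipping ahead based on the pattern's characters (Python, illustrative names):

```python
def naive_strstr(haystack: str, needle: str) -> int:
    """Return the index of the first occurrence of needle, or -1.

    Brute force, O(n*m) worst case; Boyer-Moore beats this by
    precomputing skip tables from the needle.
    """
    if not needle:
        return 0
    n, m = len(haystack), len(needle)
    for i in range(n - m + 1):
        if haystack[i:i + m] == needle:
            return i
    return -1
```

For a tiny alphabet like DNA (A, C, G, T) the skip tables become very small, which is exactly the kind of specialization the research papers explore.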
So that's when I realized that the world of algorithms wasn't as daunting as I thought it was.
Optimization can sometimes make them a little hard to read, but even the most complex ones can be broken down into understandable pieces.
Developing and implementing algorithms is slow. You should think, tinker, use pen&paper and try things. You know, some people spend their whole lives to just tinker with small amount of code to come up with new stuff.
I think the reason most people don't feel competent in programming or algorithms comes from the fact that they don't know how slow it sometimes is to implement something. Maybe they have a preconceived idea that someone just sits at the computer and types out the program at full speed.
I love algorithms. I'm not particularly good at them, but I just love to think about the problem and try to beat it. You just have to take the time and sit on it long enough to crack it. Soon enough, you will have small successes and you start feeling more competent and more confident. You know that if something's difficult for you, it's difficult for everyone else, and that motivates you to try harder.
For me, the joy of engineering is problem solving. If you can solve a problem at hand with skills you have, don't stress over knowledge you don't have -- we are all necessarily ignorant in areas because there isn't enough time to become an expert in everything.
That said, some things _do_ need heavy algorithmic lifting; recognizing that your problem falls into this category saves time. Maybe you can reuse code that does X instead of rederiving and reimplementing it. Maybe the client's performance requirements are asymptotically impossible to achieve. Knowing this lets you confront the issue sooner and gives you a stronger claim than "I can't do this" if they push back.
Algorithms can be a force multiplier for your skill. A recent project required a key-value associative array. Unfortunately we don't yet know what the key strings will look like (long? short? self-similar?), the number of keys (5? 100? 8000000000?) or the expected use (lots of inserts and deletes? mainly lookups?), and the best approach depends on these variables.
Now, any single approach would be easy to implement, but maybe not so easy to adapt if (when...) requirements change. Instead, I'm using STL containers and algorithms as modular building blocks. Changing from a set to a trie? Changing from a vector to a list? A few lines of changes (and sometimes none).
Now, would you call this algorithmic knowledge or programming (C++) knowledge? Well, I'd say somewhere in between... in a sense I'm punting to a library that's been written and tested by smarter people than me, just like I might use any other framework. On the other hand, if I don't know what it means when the framework says "lists take O(n) for lookups", or "vectors offer constant time random access", then even when requirements are clear I might not be able to map them back to the best implementation.
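The same modular idea can be sketched outside C++; in Python, for instance, writing code against the generic mapping interface means the backing store can be swapped in one line. This is a hypothetical sketch, not code from the project described above:

```python
# Swap the backing store here: dict, collections.OrderedDict, or a
# custom trie class implementing __setitem__/__getitem__. The rest of
# the code never has to change.
Store = dict

def build_index(pairs):
    """Load key-value pairs into whatever Store currently is."""
    store = Store()
    for key, value in pairs:
        store[key] = value
    return store
```

The design choice is the same as with the STL: depend on an interface and its complexity guarantees, not on one concrete container.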
Algorithms are just abstracted solutions to common problems. A binary search doesn't (mostly) care if you are dealing with integers or widgets or FoobarConfigurationManagers. It just says "follow these steps to find an element." Programming is full of common problems: iterating over a list; multiplying the result of two functions; adding a set of numbers. Simple, right? But combine these simple tasks in the right way and you have an algorithm to calculate convolutions.
Likewise, combining simple algorithms can solve a more complex problem.
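As a sketch of that abstraction, here is a binary search that doesn't care whether the elements are integers or widgets, only that the list is sorted under some key (Python, illustrative names):

```python
def binary_search(items, target, key=lambda x: x):
    """Return the index of the element whose key equals target in a
    sorted list, or -1 if absent. Works on ints, widgets, anything."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        k = key(items[mid])
        if k == target:
            return mid
        if k < target:
            lo = mid + 1  # target is in the upper half
        else:
            hi = mid - 1  # target is in the lower half
    return -1
```

The `key` parameter is what makes it generic: searching a list of widgets by name is the same three-line loop as searching a list of integers.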
I have a pet theory that there are what I would call "right-brain coders" and "left-brain coders". Left-brain coders are typically the geeks who love math, engineering, algorithms, etc. They can be extremely good at digging down into some particular piece of code, optimizing it, or writing some algorithm for it. They do not, however, necessarily excel at things like overall architecture or coming up with novel solutions.
The right-brainers are more the creative types who would typically be more interested in areas like art, design, or writing but have found their way into coding. They do not necessarily like math, algorithms, etc., but are great at identifying larger patterns, overall architecture, or thinking "outside the box".
No one is purely one or the other; people tend to fall on a spectrum somewhere between, although programming tends to attract more left-brainers than right-brainers.
I'll add n=1 to your theory. I'm definitely a right-brained programmer, vaguely synesthetic, and an OCD one at that. I don't write code; I just sneeze some code out and keep straightening up till it's all pretty.
"If you can build large scale systems, you can code complex algorithms."
Not sure I agree. Architecting/maintaining huge systems and inventing very smart algorithms are very different skills. Both can be learned to some extent, and a programmer should learn both to some level, but it is entirely possible for one person to be very strong in one and not so strong in the other.
Speaking of very different skills, I used to have a developer working for me who was an absolute genius at fixing bugs - he would dive into complex code he had never seen before and be able to pinpoint and fix problems that had eluded other people.
However, when confronted with a "clean sheet of paper" for the design of a new module he literally couldn't do it - even with a lot of coaching.
I used to be the bug fixer guy. There was nothing worse to me than a blank file. There's something simple and focused and comfortable about fixing bugs for a lot of people. It does take creative, critical thinking to fix bugs, but it's different from creating something from square one.
With a bunch of practice I got to the point where I'm more or less equally comfortable in either role. I learned to apply principles and defined a process that works for me. I watched other people create. I had two really good mentors that showed a lot of patience coaching me through those moments of profound uncertainty that come with creating new things.
tl;dr: different people think differently, and that doesn't mean they're stupid.
I'm exactly like the person you describe, arethuza.
I've been a sysadmin for 15 years, and am fantastic at what I do and am highly sought after by recruiters and so on and so forth. I started as a programmer, and I've never stopped programming, mostly for fun but my job regularly requires scripting, of course, and I'm great at it. I also really enjoy writing test automation and release automation, and I'm fantastic at those, too.
I can't design a program from scratch to save my life. I've never been able to. When I try, what comes out is absolute garbage, and usually extremely hostile to being actually used by actual people.
If I stick with a piece of non-trivial from-scratch code, I can usually get it to the point of being useful, but it takes 5+ iterations of near-total rewrites.
The basic issue, which is much like the original article, is that I just can't see the end result in my head, at all. If I have a system I need to change, seeing the end result of the system + my one change is easy. But when I try to imagine a complete system in my head (or on paper or whatever) it just ... fails. What I have in my head is much simpler than reality requires, and frequently wrong in major respects. I don't know why and I've never been able to fix it. At this point I've basically given up; when I need a high level design, I get someone else to write it, and then I fill it in.
If you give me a program where all the functions definitions and what they're supposed to do is written out, but nothing else, what I give back to you will be amazing. But if you tell me to write a program that does X, what I give back to you will be crap. This potentially has impact on my career advancement, but so far I've been able to do well by simply stating clearly that these are my limits, and I can be most effectively used in role X that doesn't cause trouble with these limits.
I'm sort of the same way and I think I've narrowed down the cause to be either choice fatigue and/or perfectionism. With a bug, the problem is "it's broke", the solution is "fix it". I don't have to think about languages, frameworks, APIs, code style/structure/efficiency/maintainability, etc. It's already in place and it's just not doing exactly what is expected, usually in some relatively minor way. With a new feature, or worse, project, the options are endless. From "Should I use a stack I'm comfortable with or use this opportunity to learn a new technology?" to "Should I completely rethink how a user enters information into a computer or just use a text box?".
My problem is convincing managers that good bug fixers don't always make good clean-sheet coders. Nearly everyone has things they're good at. The trick is figuring out how to utilize strengths while mitigating weaknesses.
Right, I'm one of those "puzzle hackers". I consider myself more of a problem solver than an academic who architects an application from scratch and knows exactly what data structures and design patterns to implement without doing some extra research.
My math is terrible. My math teachers were horrible, and my attention span is limited. Complex equations make my eyes spin.
I have a great visual and linguistic memory, which helps me overcome these limitations. I'm also good at envisioning somewhat complex patterns and logic problems. When I code I usually start cowboy and optimize later. I'm not at all a great abstract thinker, and therefore don't consider myself a great programmer. I'm also from a hobby PHP background of 12 years now. Maybe that says something :)
I'm mostly a software design person. Despite that, nothing is more pleasing and fun than to trace problems in a system with which I am unfamiliar. And I'm either extremely lucky or extremely good at it. When asked to help with a sticky problem I can usually steer co-workers to a solution fairly quickly. In a surprising number of instances the "problem" vanishes, never to return (unlike a Heisenbug).
My peers have somewhat ungratefully termed this the "Asshole Effect"!
Crazy to say I love fixing bugs! It does get tiring at some point to fix very similar bugs over and over again. I realized bug fixing is in essence binary search. Keep cutting the code size in half and write down the important values. Soon enough you'll find where the problem is.
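That "cut the code in half" intuition really is a binary search over revisions or code regions; as a hedged sketch (this is the idea behind tools like git bisect, written out generically, with hypothetical names):

```python
def first_bad(is_bad, lo, hi):
    """Binary search for the first index in [lo, hi) where is_bad is True.

    Assumes a clean good/bad boundary: everything before some point is
    good, everything at or after it is bad - e.g. commits in history,
    or a growing slice of a program you re-enable piece by piece.
    """
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(mid):
            hi = mid       # the bug is at mid or earlier
        else:
            lo = mid + 1   # the bug was introduced after mid
    return lo
```

With n suspect revisions this takes about log2(n) checks instead of n, which is exactly why halving the search space feels so effective in practice.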
I think bug fixing builds a skill most people simply don't have: reading code. Reading other people's code is incredibly difficult, and most people don't put the time and effort into it.
It does suck to get boxed into that kind of role, where you are simply reading code and changing a few characters per day. It is exciting to get a clean sheet of paper and create something new.
You're actually a little misled. Proving that splay trees don't suck and inventing amortized analysis is what made Danny Sleator famous. Coming up with the idea and coding it took him less than an hour (from his own mouth, which might be embellished). If you look at the full proof of its log(n) running time, you'll realize that splay trees are incredibly complex.
Sleator is also an amazing problem solver. He can outperform most of the people commenting on this thread but isn't necessarily the best architect.
I believe algo is about problem solving. If you don't understand it, you don't know when to use which tool. It's unnecessary if you're writing boilerplate and solving solved problems with frameworks, but when you get to the Google/Amazon level, you can't just write good code. You need to understand algo to create Dynamo and write Dremel and MapReduce.
I agree with everything you say; I just don't understand why you say that I am misled.
Of course he could code it in one hour. Probably he came up with it in a bright moment. But a researcher has to think for years to come up with something so successful, and most of their ideas suck.
"Sleator is also an amazing problem solver. He can outperform most of the people commenting on this thread but isn't necessarily the best architect."
Which is what I think exactly.
"I believe algo is about problem solving."
Not just problem solving, but a particular kind of problem solving. Algo is a kind of mathematical thinking. To be extremely good at algorithmization you have to be extremely good at that kind of mathematical thinking. Of course the people who are extremely good at it, like Sleator (or von Neumann, etc.), are extremely smart, so they are probably very good at other things too, but algorithmization is not general-purpose intelligence; it is a kind of "mathematical intelligence".
I disagree. Coding an algorithm is completely different from using it as a black box to design and implement a system. Algorithms are fraught with conditional logic, where well-implemented systems tend to quarantine conditionals to make edge cases as few as possible.
An algorist prides herself on ability to optimize the problem space, where a systems coder prides himself on the ability to keep systems bug-free and free of accidental complexity.
There are many coders who do both, but I don't think the set of algorists is identical to the set of systems coders.
I disagree to some extent. If a person doesn't at least know the difference between a hash table and a B+ tree, then he cannot build large-scale systems. The same goes for other algorithms. Everything is based on algorithms, and without understanding them, people will often build scalable applications that have really horrible performance, so they will consume lots of hardware to do simple things with the wrong algorithms. And that's horrible.
If that's true, it's new. I don't know the difference between a B+ tree and any other type of tree, and I built the world's then-largest email system in the '90s, eventually pushing 4,000 TPS through servers less powerful than an iPhone. Some problems just don't involve algorithms at a visible level.
We had one filesystem with one type of key-value indexing, and whatever that was, that's what we used. Both disk I/O and RAM were too constrained to worry about O(n); n was < 500, data structures couldn't even exceed 32k, and even linked lists were too slow - we used arrays. The only algorithm we (consciously) used was a hashing function for our cache keys and sharding, and we were far more concerned about its uniform distribution than its performance.
That class of problem is less prevalent today, but it's at least plausible that there's somebody architecting large-scale systems where the only algorithm is "do very few things very quickly".
Memorizing some basic properties of a data structure or algorithm and using them as black boxes to design large scale systems is different from actually implementing said algorithms/data structures, let alone devising them in the first place. That's what the OP's blog is about.
The thing is hash-tables and B+trees are data structures and not algorithms. I agree that if you don't know basic differences of algorithms you will run into performance issues but you will also run into problems if you don't know the difference between a data structure and an algorithm.
A data structure is "a particular way of storing and organizing data", so I think there's not such a big distinction between the two. What's more, when we think of data structures, we think of insertion/deletion/search complexity, and we also think about the space needed (which is itself a result of the organizing algorithm). So I don't think someone can run into trouble because of that difference.
I don't think your analogy works... there are different skill-sets. Maybe "I'm an excellent eater of hamburgers, but I'm horrible at forming patties and cooking."
Building large systems means that you're a builder -- you can take raw materials, assemble them in a coherent way, then troubleshoot/operate the thing you built. You write scripts, modify code to integrate with other components, and fix bugs. You don't need to know anything about sorting algorithms, although the thing you build sorts all sorts of data.
More knowledgeable programmers actually build the raw materials, or have the ability to dig deep into the code to fix fundamental issues. They may or may not have knowledge of the overall system. A typical enterprise handles deep code issues by opening a ticket with their vendor.
I'm not making a "dig" at the "builder" programmers -- they are as essential as the deep-understanding programmers. The guy at Oracle who is debugging the code in the database that does sorts probably has deep mathematical knowledge... but he knows nothing about integrating ERP system components.
> It's really just a matter of breaking down your problem into bite size pieces and confronting them one at a time.
My gut tells me I disagree. I think that's what differentiates the junior from the senior programmer. The junior will take a big problem, chop it into small chunks, make them work properly, and kill the project while trying to get all those small pieces to work together. The senior will look long enough at the big picture and will of course start small, but will always be aware of how all the pieces need to fit together. While my statement may sound obvious, and you may say "the junior should do the same", in reality your "confronting one piece at a time" is what made me disagree.
I do, however, believe you can take on a middle-size problem without knowing how to code complex algorithms, especially in the startup world. Look at Facebook. Zuck was never the smartest programmer, but I don't think he took on huge complex algorithms to get to the MVP stage; just enough to prove user numbers to investors and get money to hire programmers who could rewrite it from scratch to serve tens of millions of users flexibly.
I'd phrase it slightly differently: "If you can build large scale systems and your systems are still running 6 months after release without large modifications, then that means you can simplify complex problems into several major simple problems."
Some folks are asking how the details are connected to the large-scale, high-level thinking.
I explain it to my customers with a quote I enjoy:
"How do drops of water know themselves to be a river?"
The details, algorithms, data are all the drops of water. The river is the large scale system.
Having an understanding of how a river (or large scale system) works gives you insights how each smaller and smaller piece should work together to fit into the overall picture.
Drill down far enough, and you are at the individual processes.
This is what I call understanding the system from bits and bytes, through to nuts and bolts, all the way up to the top of the pyramid where you have the high level system strategy.
If you can get the big picture, you can begin to learn and understand how it's built.
Having a lot of tiny algorithm skills does not mean, though, that you can piece together how they can work together to make a big picture.
It's my humble opinion this is how software gets off track and spirals out of control. Too much domain knowledge about tiny things without as much focus on the big picture to balance it out leads to premature optimization, over engineering and mis-assumptions, trivializations and all the other things we've slapped our wrists for.
I have similar feelings of inadequacy/anxiety. To fight this, I occasionally work through some Project Euler problems, and while they're great, I'd like a resource that contextualizes those types of problems within a theory or mental model.
Can people recommend resources to help? Ideally these would be resources that focus on learning algorithms (or data structures), not teaching you how to program.
One person already recommended "Python Algorithms: Mastering Basic Algorithms in the Python Language", which looks good. "Data Structures and Algorithms Made Easy" also looks good but has mixed reviews.
Are there other books? (Ideally not dry textbooks.)
Free/inexpensive open courses, e.g., through MIT or Stanford?
"I am a great mountain climber, but a horrible quarterback."
Presumably, people wouldn't find this statement surprising - while mountain climbing will improve your fitness level, mountain climbing skills aren't directly transferable to football.
If you're not a good algorithmist, it's because you don't have enough experience with algorithms - which, based on the blog post, is true for the writer. I have the opposite issue - I'm decent at algorithms but not-yet-decent at programming; this is because I have a background in pure math, which lends itself to algorithms and algorithmic thinking, but comparatively little experience programming.
I feel the same way, although I feel that I have improved somewhat by writing an interpreter for my own programming language. The problem is we have no way to practice. I learn by writing code. If I didn't write a sorting algorithm myself, then I don't know it. But how often do you find a legitimate reason to write a sort from scratch on a real project?
I think it would be great to build an entire computer science curriculum out of nothing but programming practice problems covering everything from bit twiddling to monads.
The article makes a good point. I too sometimes feel the same way in my technical career, often asking myself: "Do I really know X as well as I think I do?" I don't view it as insecurity; more often it's just an observation that there are real limits to my knowledge, and of how I should go about improving certain skills.
It keeps me humble, thinking, and always learning, which to me are the greatest qualities a software engineer/developer should have.
A basic level of algorithmic knowledge is required to be a competent coder. Being amazing at it just helps you be a better coder, but it's only one of many facets.
However, don't ignore it. You should be able to figure out instantly that a hashtable should be used to find the intersection of two sets. Small problems like that crop up all the time, and if you can't handle them, I can't believe you can architect systems and write good code.
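The hashtable intersection in question is a one-liner's worth of idea: hash every element of one collection, then probe that set while scanning the other. A Python sketch:

```python
def intersect(a, b):
    """Intersection of two collections in O(len(a) + len(b)) expected
    time, versus O(len(a) * len(b)) for the naive nested-loop compare."""
    seen = set(a)                        # hash every element of a
    return [x for x in b if x in seen]   # constant-time membership probes
```

Recognizing that the nested-loop version can be replaced by this is exactly the kind of small algorithmic instinct the comment is describing.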
Any half decent programmer uses algorithms everyday without even knowing it. Algorithm is just another word that encompasses a massive subset of other words that in reality DO NOT matter. A word is of no use if you don't know the definition, but a definition is plenty useful all on its own.
I think that's where people get scared. Just look through many of the examples above and you'll be surprised. There's no mystery to them, and I'm sure many of you (conscious of it or not) have implemented many of those many times.
An algorithm is efficient and concise code. All the different names and acronyms flying around are just ways to apply efficient and concise code to a particular problem.
The seemingly tougher algorithms make so much more sense when they are applied in a situation of your own. If you start becoming conscious of your uses of code, it'll all get easier.
I'm sure you're doing just fine, and if you stop thinking of the word "algorithm" as some deep dark abyss of complex problems and knowledge, you'll discover it to be something that you may have done before, or it'll be something new to apply to your next project.
This is one good reason to learn functional programming. Functional programming is all about algorithms, and the algorithms are generally laid bare to be easily seen (rather than hidden in the noise of syntax). I have found that being taught functional programming gave me a firm footing in reasoning about algorithms.
For instance: I can write a sort in any language because I carry this little gem around in my head:
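The "little gem" itself isn't quoted here, but the canonical pocket-sized sort in functional-programming circles is the short recursive quicksort. A sketch in Python as my own illustration of that kind of carry-it-in-your-head algorithm:

```python
def qsort(xs):
    # Pick a pivot, partition the rest, and recurse on each side.
    if not xs:
        return []
    pivot, rest = xs[0], xs[1:]
    return (qsort([x for x in rest if x < pivot])
            + [pivot]
            + qsort([x for x in rest if x >= pivot]))
```

It is not an efficient production sort (it copies lists at every level), but it makes the divide-and-conquer structure impossible to miss, which is the commenter's point about functional style.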
I do not agree with the sentiment that it's only an innate ability. I was completely ignorant of most algorithms until I actually set out to increase my knowledge of them. Once you start each one becomes easier to grasp and you can then think in terms of solving a problem using some of the generic solutions you've come across. Like anything it just takes practice.
It takes time and practice. You won't get the practice at work. I dealt with a network flow problem a while back and a spell correction for URLs problem before that. In the last year I haven't done anything algorithmically challenging. Most of the time you're building systems that move data around.
A common mistake is that people learn about complex algorithms and data structures, remember their names but then can't solve questions that involve just basic structures like vectors, stacks or binary search trees.
I can't learn by reading a book, I have to solve problems to really grasp a concept so my advice is this:
Do topcoder.com/tc div 2 practice rooms, about 30 of them in a short time span. You'll see the solutions of other people and be able to learn from them. Also the level of div 2 problems is about the level of more difficult interview questions.
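The "basic structures" point above is worth taking seriously: interview questions about stacks or trees tend to be short. As a hypothetical example of the level involved (my own sketch, not from any of the sites named), here is the classic stack question of checking balanced brackets:

```python
def balanced(s):
    # Map each closing bracket to its opening partner.
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in s:
        if ch in '([{':
            stack.append(ch)          # remember the opener
        elif ch in pairs:
            # A closer must match the most recent opener.
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack                  # no unclosed openers left
```

If a question like this gives you trouble, that is a sign to practice the basics before memorizing the names of exotic data structures.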
Like "great filmmaker" or "great wine", what does "great programmer" even mean? It's so context dependent, and there are countless aspects to programming that you could use to compare. It's not strictly hierarchical either. "Effective" seems like a far more useful term, because for most of us, most programming becomes simply work product, not a work of art.
If your position demands a fluency in graph algorithms that you don't have, you're ineffective. If your position demands creating Microsoft Access forms but you spend all day screwing off because you're bored, you're ineffective. If your position demands working on designs in large teams but you're arrogant to the point of disruptive, you're ineffective.
Nobody cares if you or someone else considers you "great" if you're not effective in your current environment. Conversely, if you're effective, good for you and quit worrying.
Studying hard in university does not help. I scored 88/100 (a very high mark) in my algorithms course at university.
If you ask me if I know <X> algorithm I'll almost definitely say "No". If you ask me in an interview to implement <Y> I'll ask you to describe it to me. There are just too many different algorithms, and it's hard to remember all the names. I can almost always use Google to find the algorithm and implement it from pseudocode or something.
When I can't use google it means I'm in an interview...
Sometimes I jokingly tell myself "I'm a software engineer, not a computer scientist!".
I studied EE, not Comp. Sci, but I drifted into pure software. Now software is all I do, but I feel like I'm "missing" things from my knowledge.
Can anyone recommend me any good resources (books, online articles) that would help fill these 'algorithmist' gaps? I know of things like complexity theory, how functions scale with time using O(n) functions or whatever, but I don't know much about them. I mean like kind of an overview of computer science topics; something which is wide but not deep.
I imagine there are other EE refugees like me who feel lost when they read Google interview questions :)
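The Big-O idea mentioned above is less mysterious than it sounds: it describes how work grows with input size. A tiny illustration (my own, using Python's standard `bisect` module) contrasting an O(n) scan with an O(log n) search over sorted data:

```python
import bisect

def linear_search(xs, target):
    # O(n): in the worst case, look at every element.
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def binary_search(xs, target):
    # O(log n): halve the search range each step; xs must be sorted.
    i = bisect.bisect_left(xs, target)
    return i if i < len(xs) and xs[i] == target else -1
```

At a million elements, the linear version may do a million comparisons where the binary version does about twenty, which is the kind of scaling intuition those Google interview questions probe.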
Go to amazon, search for data structures and algorithms. Those four words should bring up a whole bunch of results. Choose one that's well reviewed that has familiar elements to you.
Also, wikipedia is a great place to start (of course).
Sorry I can't be more specific, but I don't know your educational background or experience. Fwiw I used http://www.amazon.com/PartyTime/dp/0262033844/ . The only thing is that I had a decent, recent math background, and if you haven't done "real" math in a long time, it might be one more barrier to learning what you really want to learn.
I, too, recognize this as a weakness in myself. Project Euler can easily make me feel like a loser, and the crowd that chants, "If you can't do pointer arithmetic, kill yourself," doesn't help, either.
I'm a great algorithmist and a mediocre programmer (by my own tough standards at least), so there.
Everyone has different skills. One of the real arts of building a team is figuring out what each member's top skills are, and divvying up the work appropriately. That's actually more difficult with a top-notch team because high performers are often good at everything -- but there are still areas where they absolutely shine, versus areas where they've learned a skill that didn't come as easily through practice and perseverance.
Implement a bunch of problems from online judges (Google: UVA Online Judge, SPOJ, Codeforces and Topcoder). If you can't solve a problem, there is often an explanation of the solution (at least on Codeforces and Topcoder), so read it, understand how the solution works, implement it, then try to find related problems. Doing this regularly, I estimate you will be a good algorithmist in about 6 months.
Hello, I feel exactly the opposite way. I have no problem with algorithms; I very much like short challenge exercises. But I have more difficulty when I have to make small changes to a big piece of software. I have encountered developers who are a lot faster than me. They produced cleaner code than mine, but they did not like recursive functions and hated tricky algorithms. I am jealous of their productivity.
I dunno, a lot of these things are brain teasers and not useful in our everyday programming lives as you say. Brain teasers can strengthen your brain and thus make you a better programmer, but rarely will you be able to apply the learnings from those brain teasers to common problems. I think I'll tease my brain just enough to be more productive, not just so that I can "feel" like a better programmer...
Maybe it's because I've come from a more mathematical background, but I'm pretty much exactly the opposite. I'll see the problem, come up with an algo in my head to solve it, and then the actual programming is just a mere implementation detail.
Obviously it helps if you know the final language well when you're coming up with the solution, but ultimately the algorithm is the only difficult bit.
I am also an undergrad CS major from UTD (whoosh!). It is VERY interesting that you would bring this up because now I am pursuing a masters in CS at a different university and grad algorithms is the class I struggled with the most! I ended up having to spend a lot more time on it in grad school and learned a lot more. I think UTD might have a slight deficiency in teaching algorithms IMO.
I am like this too. Something that really helped me overcome this shortcoming of mine was starting to solve problems from http://www.spoj.pl/ and http://uva.onlinejudge.org. After solving a few hundred problems, coming up with basic algorithms became a trivial task.
Mathematics. In my experience, ability to program complex algos comes from the complexity of your maths background. Find a way to do more maths and you'll get better at algos. (abstract/logic maths, not just simple number crunching).
Also, building "pet" projects helps, since you'll know exactly what you want as an end product of an algo.
Algorithms aren't really something you can just sit down and bang out. It takes a certain amount of time to do the abstract thinking, put the details on paper, then implement. Coming up with a famous algorithm like the ones we hear of every day won't happen overnight. They are hard to design; that's something you have to accept and move past.
This is why you must not drop out of college/university, and should possibly do research. You will learn to put robust foundations behind your code, ideas, and concepts. Bonus points: you will probably end up directly in a niche area, with a nice address book of contacts -- real people with real, complex problems. Great for a startup.
I sometimes feel like I have the opposite problem. I can come up with the algorithm which requires an understanding of why it works on the problem, but then I lose my train of thought while coding it (albeit this is happening in stressful interview write-on-the-whiteboard-while-I-judge-you situations).
Finally, somebody like me :). How the hell should I scale?
I write tests, but I end up doing too much testing as the code size grows, and that is very limiting.
Ohh, on the losing-the-train-of-thought thing: this is the very reason why I don't use IDEs, because they freeze at random times. One technique that worked for me is solving the problem first on a whiteboard in sufficient detail, or writing down as comments how I am going to solve the problem, and then slowly filling in the details.
Include me there :D but it's mainly because I don't like to code algorithms. I can if I need to, but I don't enjoy it, and to be good you have to enjoy it.
I enjoy creating interfaces and front-end work, so I am good at that.