Ask HN: What is the big idea with algorithms?
12 points by mlLK on May 18, 2009 | 25 comments
First, I'm a non-CS major, quite unlike the mathematical whiz kids who make up much of this board; with that said, I hope you'll get a better idea of why this book I just checked out from the library, Algorithms in a Nutshell [http://oreilly.com/catalog/9780596516246/], gets me so excited.

To me, algorithms are as interesting as the formulas that make up our Universe, brought to us by physicists. Granted, both are inspired by the same basis: to express algebraically a set of quantities that hold true as the solution to a problem. However, physicists and computer scientists work toward opposite ends: physicists attempt to generalize a set of observations from the physical world into a law, while computer scientists attempt to materialize a set of observations from the abstract world into a set of instructions, called an algorithm.

When I open this textbook's table of contents, it reads like a road map into the beginning of the mostly unknown Universe of computer programming. I want you to briefly summarize why algorithms are so important, where they came from [their genesis], how you use them on a daily basis, whether or not they're more important than design, and what the formulas of the future will do for humanity.

[EDIT] More bluntly, summarize [and categorize, if you'd like] the evolution of algorithm design starting with insertion sort and ending at your heart's content [or AI].



http://artemis.cs.yale.edu/classes/cs460/Spring98/contents.h... is a very good resource on the parent science of algorithmic information theory.

AIT and computational complexity theory arguably underlie all other sciences. They allow us to define randomness (http://www.cs.auckland.ac.nz/~chaitin/georgia.html), pattern, probability (http://www.scholarpedia.org/article/Algorithmic_probability), inference (http://www.wisegeek.com/what-is-solomonoff-induction.htm), ignorance, and perhaps one day consciousness.

They allow us to prove, with absolute certainty, whether we are talking to a deity (http://scottaaronson.com/blog/?p=56), and to prove that some questions will forever be beyond our grasp.

Even discounting Zuse's and Wheeler's "It from Bit" conception of information and computation as the basis of physics (http://en.wikipedia.org/wiki/Digital_physics), these disciplines promise to help us solve our hardest and most interesting problems by, first of all, telling us whether they are solvable at all, and if so, whether they are solvable before the heat death of the universe. Consider that Goedel and Cohen showed the Continuum Hypothesis, from Hilbert's list of open problems, to be independent of standard set theory, and Turing did the same for the Entscheidungsproblem, allowing us to abandon futile lines of inquiry and focus on other problems. Current developments in quantum complexity theory may similarly strike down vast swathes of sterile ideas, directing us toward questions that can actually be answered.


References and a sincere, open answer. You deserve a lot more karma than the sophomoric comment currently leading the thread.


No kidding, this answer is nearly as loaded as my question.


this sounds suspiciously like a final essay question for someone in college.

seriously, though, this is a pretty broad, abstract concept. an algorithm is, more or less, a set of instructions to solve a problem. they didn't have some sort of magical genesis, and everyone uses them on a daily basis.
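
To make that concrete, here's a toy sketch in Python (mine, not from the thread) of the insertion sort the OP mentions -- about the oldest "set of instructions to solve a problem" there is:

    def insertion_sort(items):
        """Sort a list in place: take each item in turn and slide it
        left past every larger item until it fits."""
        for i in range(1, len(items)):
            current = items[i]
            j = i - 1
            while j >= 0 and items[j] > current:
                items[j + 1] = items[j]  # shift the larger item right
                j -= 1
            items[j + 1] = current
        return items

    print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]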


School's out bucko, I wouldn't stoop that low. :D

This is more or less a metaphysical inquiry into where computer science is headed. Algorithms + data beget design patterns; design patterns + objects beget...


If you want a good feel for this stuff, try downloading and going through this course, "NAND to Tetris": (Link is not the course, but a talk about it.)

http://video.google.com/videoplay?docid=7654043762021156507

The book is only $26, last I looked. The course software is all Open Source.

You'll see how digital logic is used to construct components like logic units and memory, which is then used to construct a computer, for which you create a computer language, which you then use to write an operating system, and finally, you program games on it.

This will then give you the wherewithal to really understand Gödel, Escher, Bach:

http://www.amazon.com/Godel-Escher-Bach-Eternal-Golden/dp/04...

If you want, you can read the book first, but then go through the course and read it again. The 1st time you read it, much of it will be lost on you, but the 2nd time, you'll have many Ah-HA! moments.

One key is Automata Theory. Understand that, and you can understand what you are trying to ask about.
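
For a taste of what automata theory studies, here's a toy sketch (my example, not from the course) of the simplest machine in the field: a two-state finite automaton that accepts binary strings containing an even number of 1s.

    def accepts_even_ones(s):
        """Simulate a two-state finite automaton: "even" is the
        accepting state, and each '1' flips the state."""
        state = "even"
        for ch in s:
            if ch == "1":
                state = "odd" if state == "even" else "even"
        return state == "even"

    print(accepts_even_ones("1011"))  # False: three 1s
    print(accepts_even_ones("1001"))  # True: two 1s

Everything from regular expressions up to the CPU you build in the course is, at bottom, this kind of state machine scaled up.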


The timing seems right. Shouldn't have had that last beer...


It's a little hard to tell what you're really asking, so forgive me if this isn't what you're looking for. Worrying about algorithms in the "mathematical whiz kids" sense is extremely important when you have to worry about speed and resources. Take databases, for instance. Their search and compression algorithms are extremely optimized (by mathematical whiz kids), and they yield phenomenal results. If everyone had always been satisfied with a simple, straightforward solution, Google would be devastatingly slow and require enormously more space than it already does.

I've had to look into the efficiency of my algorithms when doing batch processing on massive XML files - I managed to cut run-time by 80% by paying more attention to my algorithm's efficiency.

On the other hand, if I'm just sorting 5 entries alphabetically for some table, it's better to implement a quick-and-dirty algorithm that takes an extra 2 milliseconds to compute than to spend half my day worrying about its efficiency.

So generally speaking - people designing low-level systems are very worried about their algorithms, with good reason: If their code is slow or wasteful, their platform can't be used. If you're designing a web app, you'd probably waste more time than it's worth if you delve into that. It's just a question of priorities.
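
To put a toy number on that (an illustrative sketch, not my actual XML code): the same "find the duplicates" job done the quadratic way and the linear way.

    import random, time

    data = [random.randint(0, 10**6) for _ in range(5000)]

    def duplicates_quadratic(items):
        """O(n^2): compare every pair of items."""
        dups = set()
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    dups.add(items[i])
        return dups

    def duplicates_linear(items):
        """O(n): remember what we've already seen in a set."""
        seen, dups = set(), set()
        for x in items:
            (dups if x in seen else seen).add(x)
        return dups

    for f in (duplicates_quadratic, duplicates_linear):
        start = time.time()
        f(data)
        print(f.__name__, "took", round(time.time() - start, 4), "seconds")

The quadratic version takes on the order of seconds, the linear one milliseconds -- the same flavor of win as the 80% cut above.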


Very simply: some problems are impossible to solve in a reasonable amount of time, or at all. These problems may look simple, but their complexity can explode.

For instance, chess is relatively easy to program, whereas Go is quite a bit more difficult.

Here's a very good quote from Wikipedia that might make it clearer:

The thesis is widely considered to be a good rule of thumb for real-life problems. Typical input lengths that users and programmers are interested in are between 100 and 1,000,000, approximately. Consider an input length of n = 100 and a polynomial algorithm whose running time is n^2. This is a typical running time for a polynomial algorithm. (See the "Objections" section for a discussion of atypical running times.) The number of steps that it will require, for n = 100, is 100^2 = 10,000. A typical CPU will be able to do approximately 10^9 operations per second (this is extremely simplified). So this algorithm will finish on the order of (10,000 ÷ 10^9) = 0.00001 seconds. A running time of 0.00001 seconds is reasonable, and that's why this is called a practical algorithm. The same algorithm with an input length of 1,000,000 will take on the order of 17 minutes, which is also a reasonable time for most (non-real-time) applications.

Meanwhile, an algorithm that runs in exponential time might have a running time of 2^n. The number of operations that it will require, for n = 100, is 2^100. It will take (2^100 ÷ 10^9) ≈ 1.3×10^21 seconds, which is (1.3×10^21 ÷ 31,556,926) ≈ 4.1×10^13 years. The largest problem this algorithm could solve in a day would have n = 46, which seems very small.

http://en.wikipedia.org/wiki/Cobham%27s_thesis
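
The quote's arithmetic is easy to check; a quick sketch using its simplified 10^9 operations-per-second CPU:

    OPS_PER_SECOND = 1e9          # the quote's (very) simplified CPU
    SECONDS_PER_YEAR = 31556926

    def poly_time(n):             # polynomial algorithm: n^2 steps
        return n**2 / OPS_PER_SECOND

    def exp_time(n):              # exponential algorithm: 2^n steps
        return 2**n / OPS_PER_SECOND

    print(poly_time(100))                       # 1e-05 seconds
    print(poly_time(10**6) / 60)                # ~16.7 minutes
    print(exp_time(100) / SECONDS_PER_YEAR)     # ~4.0e13 years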


Understanding algorithms and how to analyze them will make you a better programmer. Should you memorize every algorithm and clever trick that others have come up with? I haven't. However, knowing that there are different approaches with various space/time trade-offs gives me options to consider when coding something up. It also helps make me aware of the pitfalls of doing things a certain way.
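
One classic space/time trade-off, as a toy sketch: spend memory on a cache so you never recompute the same subproblem.

    def fib_slow(n):
        """No extra space, but exponential time: recomputes everything."""
        return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

    def fib_fast(n, memo={}):
        """Linear time, at the cost of linear space spent on the memo."""
        if n < 2:
            return n
        if n not in memo:
            memo[n] = fib_fast(n - 1, memo) + fib_fast(n - 2, memo)
        return memo[n]

    print(fib_fast(100))  # instant; fib_slow(100) would run for ages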

Could I whip up a tree-balancing routine on a whiteboard on command? No! But I'm aware that such algorithms exist and know the situations I'd need to apply them in.

I think this is what you should shoot for. As for Algorithms in a Nutshell, if it is the book I'm thinking of, then bravo. I like how the authors chose to present the algorithms using code and not the awful pseudo-code found in most algorithms texts.


I think the space/time trade-off is most interesting, given how pertinent it is in physics, and it certainly suggests a place for rethinking how we approach mathematics and science as a whole.


To understand why algorithms are important, and how they are used, it might help for you to give up your mental model, described in your original question, of the link between physics and algorithms. They are not both attempts to represent some set of quantities. Rather, think in terms of physics as stating the equations (describing how the values relate to one another), and algorithms as methods for solving the equations (calculating the values).

So for example, physics describes the answer to "where will this spacecraft be five years from now?" while to compute a numerical answer you need to apply some particular algorithm (like a differential-equation solver).
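
A minimal sketch of that division of labor, using a hypothetical falling body instead of a spacecraft: physics supplies the law, and the algorithm (Euler's method, the simplest differential-equation solver) grinds out the numbers.

    # Physics states the law: dv/dt = -g, dy/dt = v.
    # The algorithm steps the equations forward in small increments.
    g, dt = 9.81, 0.01
    y, v, t = 100.0, 0.0, 0.0   # dropped from 100 m, at rest
    while y > 0:
        v -= g * dt             # update velocity from the law
        y += v * dt             # update position from the velocity
        t += dt
    print("hits the ground after ~%.2f s" % t)  # analytic answer: ~4.52 s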

Most of the algorithms in "Nutshell" are what you might call core-level methods of figuring things out. For example, the book describes different sorting algorithms. From your question, you sound more interested in what kinds of problems demand sorting (for example). For that, you might try a bigger-picture book, for example Steven Skiena's "Algorithm Design Manual." It goes into less detail about actual implementation, and more detail about what the algorithms do for you, and when you might apply them.


When I first learned to program, coming up with the algorithm was everything. I thought about how to move the data, compare it, and finally compute and represent the solution. As I've grown more experienced, the algorithms come naturally.

I now think in terms of rules, context and complexity. I mostly think about distributed algorithms, and the key for me is always a small set of rules that all nodes must follow. Rules are generally reactions to events, involving communication and/or state changes. Much of the time the rules follow from some constraint, such as complexity requirements or limited information. There are often different possibilities, so you consider the trade-offs based on your context. I approach computational problems in the same way: I figure out how data elements need to interact and then come up with some reactive rules. The computation is the chain reaction of those rules.

The algorithm follows from the rules. Essentially it is a formalization of the ideas into an iterative instruction framework. The algorithm can then be implemented on a computer by any competent engineer and may be analyzed mathematically (e.g. for formal proofs).
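
As a toy sketch of "a small set of rules that all nodes must follow" (my illustration, not the poster's code): classic message flooding, where the entire distributed algorithm is one reactive rule in receive().

    class Node:
        """Flooding: on first sight of a message, remember it and
        forward it to every neighbor. Duplicates are ignored."""
        def __init__(self, name):
            self.name, self.neighbors, self.seen = name, [], set()

        def receive(self, msg):
            if msg in self.seen:      # rule: ignore what you've seen
                return
            self.seen.add(msg)        # rule: remember the message
            for n in self.neighbors:  # rule: forward to all neighbors
                n.receive(msg)

    a, b, c = Node("a"), Node("b"), Node("c")
    a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
    a.receive("hello")
    print(c.seen)  # {'hello'}: the message flooded down the chain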


For me, the study of algorithms is CS. Hacking is just a means of implementation. Also, I think it's a huge misconception that algorithms are all about math -- they're about problem solving in general.

As Dijkstra said, "Computer Science is no more about computers than astronomy is about telescopes."


It's part of the distinction between a master and a journeyman in the development trade. If you don't have a basic understanding of algorithms and the analysis of algorithms, you will never be a master.


Not entirely related, but check out the concept of Kolmogorov Complexity. It should definitely interest you.


I would guess 80% of software developers would be better off spending their time:
* learning business
* improving their communication skills
* talking with their customers


"I'm a non-CS major and am much unlike many of you mathematical whiz kids who make up this board"

Are there any math whiz kids on HN? (other than cperciva)

There are many smart people on HN, but I don't think there are math whiz kids. This is a hacker forum, not a math forum like AoPS. Sometimes we discuss stuff like complexity and algorithms, but most of the time we rant about Ruby vs. Python, or why kids these days want to build social networks for dogs instead of sending astronauts to Mars, or why big corporations are evil and small startups rock, or why PG is a supreme master and every single essay of his is nothing less than the truth. We are such cool kids we even have our own hacker-ghetto slang... including hideous terms like "ramen-profitable" ;-)

I know a forum where there's even a Fields medalist, and it's about quantitative finance. Whoever says that HN is made of math whiz kids probably knows very little math.


What is a math whiz kid? I know there are a few professional mathematicians here -- professors, post-docs, etc.

...not a math forum like AoPS

AoPS seems to be about math competitions. Math competitions have about as much in common with real math as the space marines from DOOM (demons, plasma guns) have in common with real marines (marching, painting stuff, acronyms). Real math is about building theories that are as general and powerful as possible, or making tools, or finding and solving interesting problems in the real world.

Math competitions are about how many reductive insights you can come up with in two hours' time while sitting at a desk you're not allowed to leave except perhaps to go to the bathroom.


I agree with everything you wrote. However, it seems to me that AoPS is populated with math enthusiasts, while HN is populated with hackers. AoPS is indeed mostly about math competitions, and yes, math competitions have little to do with real math, but just count the number of great mathematicians who performed amazingly well in the IMO: Terry Tao, Misha Gromov, Grisha Perelman, etc. My point was not that AoPS is about real math, but rather that it's more likely to have math whiz kids than HN.


choose any difficult profession and any high school or college level competition, and you can exhibit people who did well in both. does that mean one is a predictor of the other? I don't think so.


That's quite true. I try not to fall under the spell of selection bias.

I know a bunch of math grad students who competed in the IMO and the Putnam, and I know another (smaller) bunch of math grad students who despise math competitions. It's a matter of personality, I guess. Some are problem solvers, others are theory builders. We need both.

All in all, the point was that math competitions, per se, attract math "whiz kids". Hence, AoPS should be populated by at least some math "whiz kids".


> Whoever says that HN is made of math whiz kids probably knows very little math.

To generalize that: the term "whiz kid" is only used when you know so little about something that you believe it to be a black.


...black box. I have no idea how that happened, honestly. (Now I can't correct it, either.)


Funny that people upvoted your comment even though it made little sense ;-) Now it makes sense.



