It is designed to estimate the amount of time it takes to run an algorithm using a data structure like a binary tree instead of a different one like a linked list, which would be O(n) for the linked list. I am trying to learn it myself, so bear with me if I make any mistakes.

They throw around words like "polynomial", and I've asked people with comp sci degrees to define it. I get back replies like "We learned it as a polynomial and never got a definition of it." OK look, here is what it means, and some people with a comp sci PhD cannot even tell me: poly means many, right, and nomial means something like numbers or formulas, or rather, in computer science, an algorithm. You have to study how algorithms work. I asked this on Hacker News and many tried to bury the story because most couldn't even answer it, and thanks to those who did. I am glad to see someone else bring it up and hope it sheds more light on it.

Consider this: you really cannot get a good estimate of how long any algorithm will run, because you have other factors in play. You have multicore CPUs now, and some of the work is offloaded to GPUs as well (like Bitcoin mining programs using GPUs; now algorithms can use GPUs for some of the processing). Plus you have caches on processors that you didn't have in 1971. Not only that, but the laws of thermodynamics apply to computers and electronics, and many people forget that: if your CPU is overheating it could slow down, and if you have a variable fan it could cool the CPU so it runs faster for a bit, then heats up and slows down again. Most operating systems are multitasking, so the program that runs the algorithm might take longer if the Apache HTTPD server is seeing heavy traffic on the same machine. Not only that, but Anonymous might have sent a DDoS attack at the server the algorithm is running on. I hope those with a PhD in computer science figure in those factors when they explore this issue.




>It is designed to estimate the amount of time it takes to run an algorithm using a data structure like a binary tree instead of a different one like a linked list.

Wat?? Excuse me, but it seems you have no idea what you're talking about.

>Consider this, that you really cannot get a good estimate of how long any algorithm can run because you got other factors in play

That has absolutely nothing to do with the complexity of the algorithm. Also, it _can_ be estimated.

I wonder if you're trolling or just clueless


> I wonder if you're trolling or just clueless

Given the curious interpretation of ‘polynomial’ by means of an only somewhat correct translation, I’d tend to the latter.


Well I admit I don't understand it fully. That is no reason to accuse me of trolling or downvote my comments.

http://en.wikipedia.org/wiki/Polynomial

Is Wikipedia not correct on this issue? I was told on one hand to look stuff up on Wikipedia, and on another I found out it is not always correct.


‘poly’ means many, yes, and ‘nomial’ in this context can stand for ‘terms’ or ‘expressions’ (not really numbers, but maybe formulae as you said). However, ‘polynomial’ does not mean ‘many terms’ when used by mathematicians and, in extension, computer scientists.

A polynomial function, in this context, means a function that is/can be expressed as a sum of powers (usually restricted to a finite number of addends) of its input variables with appropriate prefactors. For example,

  f(x) = 1
is a polynomial function, but it certainly does not have many terms (nor numbers). That the finiteness of the sum is an important requirement when it comes to complexity can be seen by the fact that the exponential function can be expressed as an infinite polynomial:

  exp(x) = 1 + x + 1/2 x² + 1/6 x³ + …
In other areas, where people are more concerned with nondivergent or differentiable functions, this requirement is often dropped and nearly everything that doesn’t have poles in x is called ‘polynomial’.
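If it helps, here is a rough sketch in Python (my own toy numbers, nothing rigorous) of why complexity people care so much about the polynomial/exponential distinction: any fixed power of n is eventually dwarfed by 2^n.

  # Compare a polynomial bound (n^3) with an exponential one (2^n).
  # The exact values are only illustrative.
  for n in [5, 10, 20, 40, 80]:
      poly = n ** 3   # polynomial growth
      expo = 2 ** n   # exponential growth
      print(f"n={n:>3}  n^3={poly:>12,}  2^n={expo:>28,}")

By n = 80 the polynomial sits at 512,000 while the exponential is already around 1.2 × 10^24.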

Furthermore, you didn’t only get downvoted for misinterpreting ‘polynomial’, but also for the strange mixing of theoretical CS, which is concerned with the complexity of the algorithm at hand, and real-world effects such as slowdowns due to other programs, multitasking, CPU caches and so on, which are rarely taken into account when deciding whether a given algorithm is remotely feasible to implement (runs in time, say, n^10 or faster) or just too slow.


Ah I see, I am exploring theoretical areas of computer science for which there are no clear answers yet. Downvotes for you, bad HN user for making us try to think of new theories. :)

So a polynomial doesn't need to be many of them, it can be just one of them. The "poly" in the name threw me off, I admit. I would have assumed that one term would be a monomial and two of them a binomial, but apparently "polynomial" is used even for only one or two? I hope you understand my confusion there. Thank you for clearing that up.


Ah I see, I am exploring theoretical areas of computer science for which there are no clear answers yet. Downvotes for you, bad HN user for making us try to think of new theories. :)

In science in general, including computer science, "theoretical" doesn't mean fringe or unexplored, it means foundational and abstract. CS theory uses the word "theory" in the same sense as music theory. CS theory deals with well-defined, rigorous analysis of computation in abstract or mathematical terms.

I would have assumed that one term would be a monomial and two of them a binomial, but apparently "polynomial" is used even for only one or two?

A two-termed polynomial is also called a binomial. "Binomial" and "monomial" are special cases of the more general term "polynomial".


Just as poly- and mono- are prefixes of Greek origin[0], you’d probably use ‘di-’ rather than ‘bi-’ here (binomials are a little different, again), just as you have monosaccharides, disaccharides and polysaccharides[1] (aka carbohydrates).

The relevant point is that it usually doesn’t matter whether you have one, five or 500 terms in a polynomial, as the largest one will certainly dominate for sufficiently large input sizes[2] and all terms in a polynomial essentially behave the same way (being differentiable, having no poles etc.).
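To see that dominance numerically, here is a quick Python sketch along the same lines as [2] (my own throwaway example):

  # Ratio of the leading term x^20 to the full polynomial x^20 + x^19 + ... + 1.
  # As x grows, the ratio approaches 1: the largest term dominates.
  def full_poly(x):
      return sum(x ** k for k in range(21))

  for x in [2, 10, 100, 1000]:
      print(x, x ** 20 / full_poly(x))   # roughly 0.5, 0.9, 0.99, 0.999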

[0] The only thing wrong with homosexuality is smashing a Greek prefix onto a Latin root.

[1] Latin: uno-, bi-, pauc-, multi-, Greek: mono-, di-, oligo-, poly-

[2] cf. http://www.wolframalpha.com/input/?i=x%5E20%3B+x%5E20%2Bx%5E...


Well, I am trying to understand it better. But instead of explaining it to me, people just downvote.

Please explain it further and tell me how you can estimate the time when a DDoS attack is being done on the system, or the CPU is overheating, I'd really like to know how you estimate that. Apparently I'm clueless and in need of many clues.


Your comment was not phrased along the lines of "Can anybody please explain this?" It was phrased as though meant to impart wisdom. That's why it got downvoted and why people are correcting you — because you (apparently) meant to ask a question but ended up making a bunch of erroneous claims instead.


Big O notation is an expression of a theoretical concept that applies to the algorithm itself. It is independent of any hardware that might be used to implement an algorithm.

Specifically, asymptotic complexity is an expression of the number of abstract operations or units of memory an algorithm will require. What counts as an "operation" or "unit of memory" depends on the algorithm. In the case of comparison sorting algorithms, for example, the operation being counted is a comparison.

Even if the CPU time allotted to a running algorithm changes due to system load or overheating, the algorithm still performs the same number of operations. Actual running time under system load can be calculated, but this is unrelated to big-O notation or complexity.
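To make that concrete, here is a rough sketch (Python; my own toy example, not anything from the thread) that counts the comparisons made by a simple selection sort. The count depends only on the input size, never on how fast or how loaded the machine is.

  # Count comparisons made by selection sort on a list of length n.
  # The count is a property of the algorithm: n*(n-1)/2 comparisons,
  # i.e. O(n^2), no matter what hardware (or how loaded it is) runs it.
  def selection_sort_comparisons(items):
      items = list(items)
      comparisons = 0
      for i in range(len(items)):
          smallest = i
          for j in range(i + 1, len(items)):
              comparisons += 1
              if items[j] < items[smallest]:
                  smallest = j
          items[i], items[smallest] = items[smallest], items[i]
      return comparisons

  print(selection_sort_comparisons(range(10)))    # 45 == 10*9/2
  print(selection_sort_comparisons(range(100)))   # 4950 == 100*99/2

Sorting 10 items always costs 45 comparisons with this algorithm and sorting 100 items always costs 4,950, i.e. n(n-1)/2, which is why we call it O(n^2).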

--------

Regarding your explanation, it is excessively wordy and uses awkward phrasing. Your subsequent complaint about the terminology used in responses to your previous questions about P=NP demonstrates a serious lack of prerequisite knowledge. You learn the definition of "polynomial" in algebra; teaching you algebra is far beyond the scope of a discussion on HN.

As someone who is primarily self-taught but also finished most of a college CS education, I can understand your frustration. In order to understand one thing you need to learn three other things, and there's no map that leads you from what you know now to what you want to know. Until someone makes that map, or you know enough to make it up as you read Wikipedia, you'll have to learn things in the order they're taught in school.

So here's what you should do if you want to understand what people are talking about on HN:

1. (Re)learn basic algebra from start to finish. Use Khan Academy. You absolutely need this to understand what a polynomial is.

2. Accept Wikipedia as a source of information. Use a dictionary and other Wikipedia articles to look up terms you don't understand.

3. Related to #2, try to study cognitive biases, critical thinking, and logical fallacies. Studying these concepts will help your brain process information, such as difficult-to-understand Wikipedia articles. Check out lesswrong.com.

4. Study basic CS. Look for an Introduction to Computer Science course on Coursera or elsewhere.

5. Study algorithms. Take an algorithms course on Coursera or elsewhere.


I studied computer science at the University of Missouri Rolla. My instructor, Dr. Pyron, said that this stuff wasn't needed to learn computer science and that most of it was hokey and kind of hinky. But he also told our freshman class that "ethics don't matter." So I think I was robbed of the opportunity to learn it. BTW, I feel that ethics do matter and told him my opinion, and he said it wouldn't work in the computer business.

I had a TA for College Algebra who couldn't teach it, and the professor was nowhere to be found. They made Catch-22 jokes that he was like Major Major: you could only see him if he wasn't in his office. It made me feel better but did not help me learn. I was in the Delta Tau Delta fraternity and they helped me out, but they claimed the TA had messed up some of the problems, which could only be solved with calculus I hadn't learned yet. To make matters worse, I got caught up in the party-college thing and underage drinking. I eventually quit. But at least I didn't get forced to meet "Alice" like some unfortunate souls. I still have my Intro to Pascal and Pascal with Data Structures books in my basement; I should re-read them and try stuff with Free Pascal.

In 2005 I had a car accident and almost died, was in a coma, and lost some memories as a result. I may just have to relearn everything all over again. Since I am on disability, free online classes and sources are my only hope. Thanks.


With 25 years of experience in the computer industry, you should be explaining to us all how we can estimate the time when a DDoS attack is being done. :)

Anyway, we are talking here about the complexity of the algorithm, not its actual speed or performance, which will depend on a lot of factors. Complexity tells you, for example, how many steps it will take for the algorithm to solve the problem. The algorithm can be run by humans or by supercomputers, so performance can vary from days to microseconds, but the complexity stays the same.
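As a toy illustration (Python, my own made-up example, nothing from the article): linear search through n items takes up to n steps, binary search on sorted items takes about log2(n) steps, and those step counts are fixed by the algorithm whether a human or a supercomputer performs each step.

  # Step counts for linear search vs. binary search over n sorted items,
  # looking for an item that isn't there (the worst case).
  def linear_search_steps(items, target):
      steps = 0
      for x in items:
          steps += 1
          if x == target:
              break
      return steps

  def binary_search_steps(items, target):
      steps, lo, hi = 0, 0, len(items) - 1
      while lo <= hi:
          steps += 1
          mid = (lo + hi) // 2
          if items[mid] == target:
              break
          elif items[mid] < target:
              lo = mid + 1
          else:
              hi = mid - 1
      return steps

  data = list(range(1_000_000))
  print(linear_search_steps(data, -1))   # 1,000,000 steps
  print(binary_search_steps(data, -1))   # about 20 steps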


I think I covered that in this ebook: http://www.smashwords.com/books/view/314107 :) BTW, it is a joke written with MIT's SciGen software, to show how hard it is to tell whether a thesis is serious or a parody written by clever AI software. Even though I mark it as parody and computer science fiction, some people are taking it seriously. Even when it says stuff like "added 27 25GHz Intel 386 chips to the network." :) The 386 chip does not run that fast.

OK, complexity. Then why is "polynomial time" used to describe it? I know some compilers can optimize machine code better than others; how does that affect the complexity? If the performance can vary from days to microseconds, well, that is a very big gap. Video gamers always complain about "lag", and days, well, that is one heck of a lag! :)


The point of all this isn't to estimate exactly how long it will take to do a task. It's to compare how efficient one way is against another. A newer, faster computer will do them faster. A DDoS or an overheating CPU will slow them down. But that's not the point.

Let's try a crap example I just made up. Imagine you want to go with your friends to see a movie. Here's a bad way to organise that. You drive to your first friend's house and ask what they want to see. Then you drive to each of your other friends' houses and tell them what friend 1 wants to see. Then you drive to friend 2's house and ask them, and then drive to each of your friends' houses (including friend 1) to tell them all what friend 2 wants. Can you see how, with a large group of friends, that would be insane? It really doesn't matter if you have a fast or a slow car. It's just crazy. A better plan would be to drive to each friend's house, ask what they want to see, and write it down on a piece of paper. Then at the end you can drive around again and tell everyone what movie you are all going to see.

If you've got 2 friends, in the first plan you get to visit 4 houses, and in the second plan you also visit 4 houses. With 3 friends it's 9 and 6. With 4 friends, 16 and 8. With 5, it's 25 and 10. With 10, it's 100 and 20.

Now let's say the person doing plan 1 discovers this amazing thing called the telephone. Plan 2 person is still using a car. Person 1 can do things so much faster. They don't have to leave home and can call in a matter of seconds.

But let's think it through. 4 calls vs driving to 4 people. 25 calls vs driving to 10 people. Still a big win for plan 1 at the moment. But how about 10,000 calls vs 200 car trips? 1,000,000 calls vs 2,000 car trips?
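If it helps to see it in code, here is a quick sketch (Python; the "50x faster" phone factor is just a number I made up): even a huge constant speed-up per contact can't save the n*n plan once n gets large.

  # Plan 1: for each of n friends, contact all n friends again -> about n*n contacts.
  # Plan 2: one round to ask, one round to tell -> about 2*n contacts.
  # Give plan 1 a phone that is 50x faster per contact than a car trip.
  for n in [2, 5, 10, 100, 1000]:
      plan1_time = (n * n) * 1    # 1 time unit per phone call
      plan2_time = (2 * n) * 50   # 50 time units per car trip
      winner = "plan 1" if plan1_time < plan2_time else "plan 2"
      print(f"{n} friends: plan 1 takes {plan1_time}, plan 2 takes {plan2_time} -> {winner} wins")

With a handful of friends the phone wins easily, but around 100 friends the car-based plan 2 catches up, and past that it wins forever after.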

Does that help?



