Khan Academy: Algorithms (khanacademy.org)
358 points by lobo_tuerto on Mar 15, 2015 | 69 comments



When I first tried this course a few months ago, the little helpers (think Clippy) were driving me nuts!

Halfway through writing a line of code they would pop up and tell me that my code couldn't execute, maybe I should add a semicolon. It's really distracting, and of course code can't execute when you are in the middle of typing an if-statement. It is not helpful at all.

On top of that there's no "submit" button (because it is supposed to test continuously) so sometimes when I was done it wouldn't test the code and made it impossible to get a "completion" on parts.


Yes, that's truly annoying. I have a few code examples that are complete but it refuses to accept them without giving any reason. If I try to step forward it says I have to go back.

Also, in the first example it complained when I used Math.round for the index calculation and only accepted Math.floor. (Although not applicable in this case, I think Math.floor is not the best choice, since it won't behave like integer division in negative cases.)
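The floor-vs-truncation point can be seen concretely. A quick sketch in Python, whose math.floor behaves like the JS function (floor rounds toward negative infinity, while converting to int truncates toward zero):

```python
import math

# floor rounds toward negative infinity; int() truncates toward zero.
# They agree for non-negative midpoints, but not below zero.
print(math.floor(7 / 2), int(7 / 2))    # 3 3
print(math.floor(-7 / 2), int(-7 / 2))  # -4 -3

# round() is different again: Python rounds halves to even
# (round(2.5) == 2), which is exactly why a grader expecting
# floor-based indices would reject a round-based one.
```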


Edit: Why the downvote? Disagree? I would love to hear more about why this attitude is helpful.

I love algorithms, AND I think they are necessary in interviews, but I feel like the tech community has become ridiculous in how they interview for them lately.

Example problem: Given two words in a dictionary, transform one word into another, changing one character at a time, using only words in the dictionary.

A good interviewer would recognize that most people have no clue how to solve this, give hints, and still pass candidates who make a reasonable attempt but fail. Now it seems that most places will just fail you if you don't know the most optimal answer and can't write it out without any syntax mistakes.

This is ridiculous, and I have experienced it and this attitude around major tech companies. What this has resulted in is Algorithm trivia. Your ability to succeed is your ability to memorize every algorithm question and approach in existence.


> Edit: Why the downvote? Disagree?

Please don't do this. The HN guidelines ask you to "resist commenting about being downvoted" because "it makes for boring reading and never does any good". It isn't that we don't understand how provocative it can be to have a comment downvoted for no apparent reason—we know the feeling all too well. Hence the word "resist".

https://news.ycombinator.com/newsguidelines.html


It's easy to see why: algorithms are foundational, and they're the kind of right-or-wrong test that gives binary, merit-based substance to the nebulous process of assessing a candidate.

Still, I agree that they're overused. The truth is, the vast majority of modern programming doesn't even involve intensive algorithm writing and is more focused on the design and construction of apps that are nothing more than CRUD operations and business logic.


I think this is actually not a problem of overusing algorithms, but a problem of correctly separating the job titles related to software development.

There are software developers whose work actually requires no knowledge of algorithms, mathematics, or even CS besides their language, framework, and some architectural concepts.

On the other hand, there are "software developers" whose primary work is to develop algorithms and computational methods.

For historical reasons those roles are not always properly and clearly separated.

This can be one of the reasons why we have intimidating algorithmic interviews that are irrelevant to the actual work of the developers who are being interviewed.


If you don't have a library of algorithms and methods memorized, you won't be as productive in solving real-world problems as the one who does. Often the difference can be dramatic.

Many good algorithms are quite counter-intuitive and non-obvious, so there is very little chance that you would come up with one yourself during the course of working on a problem. In practice, people who haven't spent enough time memorizing algorithms tend to choose the wrong path while solving real-world problems, as they can't even anticipate the direction in which the correct solution lies because, again, many good algorithms are non-obvious and can't be thought out on the spot.


I think it's more important to be aware of the kinds of problems for which you don't need to reinvent the wheel, and to know the terminology and domain keywords that will help you evaluate the correct approach in a given situation when you encounter it. The useful knowledge is not the specific implementation details of the algorithm, but the ability to look at a real problem and realize 'this is like looking for the longest matching substring' or 'this is similar to finding a convex hull'. Then you can go and find the best way to implement a convex hull calculation in your particular circumstances. That you don't know an optimal convex hull algorithm off the top of your head doesn't tell me much.

What I'm looking for in asking somewhat algorithmic questions in an interview is the ability of the coder to reduce a problem to one which has probably already been solved. A separate but interesting problem is, given such a solution, how well can you implement it - but that's more of a fizzbuzz exercise in many cases.


Sure, I agree about reduction to known problems, but still, understanding algorithms completely is more powerful than manipulating them as "black boxes": the former allows you to build upon the ideas behind algorithms and produce variations of them, and generally gives a lot of insight into patterns of algorithmic problem solving. The relation is kind of like below:

(No algo) < (Blackbox algo) < (Whitebox algo)


Is this supposed to be a bad interview question?

To me this seems like a good one, if only because it immediately gave me some talking points to put on a whiteboard.

- Build a graph with vertices being words and edges between words differing by one character

- Search the graph for the shortest path between the given two words

Implementation would be another matter as there are a number of complexity gotchas. This would be a job for hitting CLRS.

This seems like something most people could come up with to start, as it seems pretty obvious, but the single graph theory course I took some years ago must have biased me a bit.
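That whiteboard plan can be sketched in a few lines of Python. This version skips building the explicit edge list and generates one-substitution neighbors on the fly during BFS (assumes lowercase words of equal length):

```python
from collections import deque

def word_ladder(start, goal, dictionary):
    """Shortest transformation changing one character at a time,
    using only words in the dictionary. Returns the path or None."""
    alphabet = 'abcdefghijklmnopqrstuvwxyz'
    words = set(dictionary) | {start, goal}
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        word = path[-1]
        if word == goal:
            return path
        # neighbors: every dictionary word one substitution away
        for i in range(len(word)):
            for c in alphabet:
                nxt = word[:i] + c + word[i+1:]
                if nxt in words and nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
    return None
```

BFS guarantees the first path found is a shortest one; the complexity gotcha the parent mentions is that generating neighbors this way costs O(word length × 26) per word, which beats the O(n^2) all-pairs comparison for large dictionaries.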


> Implementation would be another matter as there are a number of complexity gotchas.

I interviewed at an internet darling company. They had the main interviewer (who was a bit rude) and another shadow interviewer who takes notes on everything you do and say. All their interviews are like this.

The coding gotchas quickly make you super nervous, as everything you do and say is recorded. You have 15 minutes. Plus, a whiteboard is not an editor. I don't code linearly, like writing a textbook.

I have come to a realization that fresh out of university me performs better at interviews than 7 year experienced me.


I just took a shot at solving this puzzle for the /usr/share/dict wordlist (70k words when filtered to strictly alpha), using Python.

Assuming the "first build a graph" approach is the most efficient one, it's definitely taking me longer than 15 minutes to get a reasonably efficient solution. Just for the graph-building part. I'm taking a two-step approach, first connect the matching words that differ in length by one, so filtering for (a == b[1:] or a == b[:-1]), assuming a is always the shorter word of the pair. Checking all words of length N vs length N+1 that way took a few minutes on my (not very powerful) netbook. Not entirely happy with it, but no way I could optimize that further in 15 minutes anyway (if there is an even better algorithm), and that part of the calculation is done.

edit: just thought of a better way

    aa = sorted([words of length N])
    bb = [words of length N+1]

    # now make a sorted list of the longer words, truncated on either side,
    # (paired with the original for retrieval later)
    bb1 = sorted([(w[1:], w) for w in bb] 
        + [(w[:-1], w) for w in bb])

    # now we can do a merge operation on aa and bb1, and find 
    # the matches in O(len(aa)+len(bb1)) instead of O(len(aa)*len(bb))
    #
    # a merge operation that I can't be arsed to write out here, but I
    # promise it'll involve some super clever usage of the itertools module
    #
Now for step 2, words of equal length...
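A different way to dodge the O(len(aa)*len(bb)) pairwise comparison entirely (not the commenter's merge, just an alternative sketch) is to bucket words by wildcard and one-character-deletion patterns, which handles both the equal-length and length±1 cases in one pass:

```python
from collections import defaultdict

def build_edges(words):
    """Edges between words one substitution, insertion, or deletion apart."""
    words = set(words)
    edges = defaultdict(set)
    # substitution: equal-length words sharing a wildcard pattern
    buckets = defaultdict(list)
    for w in words:
        for i in range(len(w)):
            buckets[w[:i] + '*' + w[i+1:]].append(w)
    for group in buckets.values():
        for a in group:
            for b in group:
                if a != b:
                    edges[a].add(b)
    # insertion/deletion: a word of length N equals a word of
    # length N+1 with one character removed
    for b in words:
        for i in range(len(b)):
            a = b[:i] + b[i+1:]
            if a in words:
                edges[a].add(b)
                edges[b].add(a)
    return edges
```

Total work is roughly O(total characters) plus the size of the output, at the cost of the extra bucket dictionary.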


I think the reason you may have been downvoted is that this is just an instructional course on algorithms. It's not a discussion about obnoxious interview questions. There are plenty of such discussions on Hacker News, but here it seems very inappropriate.

I would agree with that. You're contributing to a discussion, to be sure, just the wrong one; I'd rather read about this somewhere else.


>Given two words in a dictionary, transform one word into another using only words in the dictionary.

What's that supposed to mean? Search the dictionary for other words that contain the letters you want and put them into the source word, while also taking out unwanted letters from the source word?


It is indeed an interesting question. The solution which comes to mind first is creating a graph with words as vertices, and an edge only where two words differ by a single character. Once this graph is created, it's a matter of running a shortest path algo between the two vertices. However, creating the graph itself could be a bitch, depending on the size of the dictionary.


You have the additional requirement that you change one character at a time, and end up building a graph of neighbors and doing BFS on it.


I've never heard this question before, but I assume it's finding words with the lowest Levenshtein distance between wordA and wordB and then wayfinding the route from wordA to WordB using the least steps possible?


Correct. I think it is actually a good interview problem for discussion and don't mind it. What I do mind is the attitude of some of my colleagues that if a candidate can't immediately jump to the optimal answer, then they are a pass.


Sounds like something I'd likely fail. Without putting pen to paper, I can get from taking the two words and expanding Lev distances until we find a word in common, then mapping a route from A to that word, then from that word to B, but whatever I wrote would probably be horribly inefficient, at best.

I would also wager that whoever was asking that sort of question would probably be looking for some optimal form of the answer, and having never encountered it, oh well.


Can you enlighten me on how edit distance helps here? I don't see why insert/remove/move is relevant in this case.


I'm in college right now and I've been wondering...do professional programmers actually think about sorting algorithms on a day-to-day basis? How relevant is this knowledge when it comes to actually making something?


Ask a simple question, get a simple answer: no. While there are some programmers who develop libraries, the majority will just consume well-understood sorting algorithms which are baked into their languages/standard libraries/etc of choice. If you find yourself coding mergesort and you are not in a job interview, a nightmare, or a nightmare of a job interview, you are probably doing something wrong.

That said, most of the reason we teach sorting algorithms is to teach algorithms in general, and particularly the consequential lesson that "There are often many methods to doing something that work; these methods are often different in extraordinarily consequential ways; the right method for certain problems -- even ones which look trivial on the outside! -- may require literally years of R&D to develop."


I personally think distributed networks with smart contracts are going to be "a thing" soon, and in that kind of environment virtual cpu cycles are constrained to a level where an Intel 8080 processor looks like a speed demon.

In that type of environment, you need to do a lot of thinking about basic algorithm science, including exploring the many types of sorting algos.

(Here's some code I'm working on right now in that vein, which helps to perform efficient percentile calculations in a smart contract https://github.com/drcode/ethereum-order-statistic-tree)


Even ones that look trivial on the inside might take years of R&D to develop.


Any examples?


Boyer-Moore (finding a string in another string) is a good example: http://en.wikipedia.org/wiki/Boyer%E2%80%93Moore_string_sear...




I had a professor who told the class that if they ever wrote a sorting algorithm professionally, they should be fired. Depending on where you go, college is some cross between academia and training to get a job, usually biased towards academia. While not directly relevant to daily programming, understanding time complexity and what's going on under the hood is very useful for understanding many things that I do use on a regular basis.

For instance, adding an index to a database table is something I regularly do for performance, and the reason is based on principles I learned in college. The runtime complexity of an indexed table is way better. Any monkey coder can add the index too; the difference is that I know what's going on under the hood (finding things in an ordered tree data structure is way faster than scanning through the whole table).
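The ordered-structure point can be seen even without a database. A toy sketch contrasting a full table scan with binary search over sorted keys (bisect standing in for the index; the "table" and its names are made up for illustration):

```python
import bisect

# a toy "table": sorted ids stand in for an indexed column
rows = list(range(0, 200_000, 2))   # even ids only

def full_scan(key):
    # no index: O(n), may touch every row
    return any(r == key for r in rows)

def index_lookup(key):
    # an index is essentially this: O(log n) binary search over keys
    # kept in order (a B-tree does the same thing on disk pages)
    i = bisect.bisect_left(rows, key)
    return i < len(rows) and rows[i] == key
```

Both return the same answers; the second touches ~17 keys instead of up to 100,000.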

I was grumpy learning some things in college early on, and the more I advance my career, the more I appreciate the things I learned even if I don't code them up, or even use them, on a regular basis.


> I had a professor who told the class that if they ever wrote a sorting algorithm professionally, they should be fired.

For straight up sorting, yes.

But, some other algorithms have exactly the same structure as sorting.

As a toy example, take the famous skyline problem. See https://briangordon.github.io/2014/08/the-skyline-problem.ht... for the details:

"You are given a set of n rectangles in no particular order. They have varying widths and heights, but their bottom edges are collinear, so that they look like buildings on a skyline. For each rectangle, you’re given the x position of the left edge, the x position of the right edge, and the height. Your task is to draw an outline around the set of rectangles so that you can see what the skyline would look like when silhouetted at night."

One approach to solving the problem is equivalent to merge sort: you write a merge function that merges two lists of in-order non-overlapping rectangles, and go from there.
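A sketch of that merge approach in Python, assuming buildings are given as (left, right, height) triples and the skyline is returned as (x, new_height) key points:

```python
def skyline(buildings):
    """buildings: (left, right, height) triples.
    Returns (x, new_height) key points, merge-sort style."""
    if not buildings:
        return []
    if len(buildings) == 1:
        left, right, height = buildings[0]
        return [(left, height), (right, 0)]
    mid = len(buildings) // 2
    return merge(skyline(buildings[:mid]), skyline(buildings[mid:]))

def merge(a, b):
    """Merge two skylines, tracking the current height of each."""
    ha = hb = 0
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        x = min(a[i][0], b[j][0])
        if a[i][0] == x:
            ha = a[i][1]; i += 1
        if b[j][0] == x:
            hb = b[j][1]; j += 1
        h = max(ha, hb)
        if not out or out[-1][1] != h:
            out.append((x, h))
    for x, h in a[i:] + b[j:]:       # only one tail is non-empty
        if not out or out[-1][1] != h:
            out.append((x, h))
    return out
```

Exactly the mergesort recurrence: T(n) = 2T(n/2) + O(n), so O(n log n) overall.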


Which turns out not to be a toy problem at all - with the x axis representing time, and the y axis representing things like 'resource utilization' or 'seat reservations' or 'room bookings', all of a sudden you've got the basis for any number of resource optimization or allocation problems.

A related problem (not the same problem, but amenable to the same sort of approaches), for example, comes up when displaying overlapping appointments in a calendar timeline view.


Indeed. And the merge sort approach to solving these interval-as-a-data-structure problems parallelizes nicely.


>>I had a professor who told the class that if they ever wrote a sorting algorithm professionally, they should be fired.

Not many out-of-the-box libraries can find the N smallest items in a stream/iterator, compute the moving median, moving min/max, and so on. You won't find a red-black tree with linked next nodes to do the task efficiently. So sometimes there is that.
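For the moving min/max case specifically, a monotonic deque does the job in O(1) amortized per element; a sketch in Python:

```python
from collections import deque

def moving_min(values, k):
    """Minimum over each length-k window in O(1) amortized per element,
    via a deque of candidate indices kept in increasing value order."""
    dq = deque()
    out = []
    for i, x in enumerate(values):
        while dq and values[dq[-1]] >= x:
            dq.pop()          # dominated: can never be a window min again
        dq.append(i)
        if dq[0] <= i - k:
            dq.popleft()      # slid out of the window
        if i >= k - 1:
            out.append(values[dq[0]])
    return out
```

Each index enters and leaves the deque at most once, hence the amortized constant cost.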


I realize that you chose one example off the top of your head, but it seems like an incredibly rare case. The majority of developers can understand "Queries involving this column are slow, it needs an index". Slightly more sophisticated developers can use an EXPLAIN statement (or its equivalent) to see how many rows are being scanned and apply indexes where necessary. How many hours of coursework is it worth to obtain the knowledge "finding things in an ordered tree data structure is way faster than scanning through the whole table"?

I'm not sure if I've ever run into anyone who said "The things I learned in college have turned out to be pretty much worthless in my professional career. I wasted a lot of time learning irrelevant information." It's interesting that the majority of every CS education is always valuable, to every person, with no exceptions.


Eh, you'd be surprised by the number of devs that simply don't know what indexes are, other than magic-go-fast juice. I've seen people put a separate index on every column so that "it'd be fast for all queries"...without thinking that most of their queries used at least two predicates. I've seen people go on to spend tons of time and money on things that fundamentally cannot work because they simply are incapable of thinking through the complexity of what they're asking. The fact that modern systems are so fast can let someone get pretty far before hitting a brick wall.

OTOH, I'm not sure full coursework is needed for that. Just make your way through Sedgewick's algorithms book in a month or two and you're probably in the top 1%.


Most places where I have worked, this is simply not the case. N in this case is somewhere between 100 and 300. Database indexing SPECIFICALLY is something that very few people grasp. Even Senior-Senior people.


You make a good point and I think it's because sometimes the objective is not clear.

When you study sorting you really do it because:

1) It's a simple, frequent and clear problem that everybody understands

2) People just need to think about it for a while to figure out a correct algorithm, normally O(n^2), and think about optimizations from there (which will probably still be O(n^2)).

3) You can study different techniques to solve that problem: like divide-and-conquer (mergesort), using a data structure (heapsort), divide-and-conquer+randomization (quicksort), not going for comparisons but using the structure of the data (radixsort).

4) You can learn to apply big-O notation for efficiency and compare different algorithms

5) You can study the limits of a problem (not an algorithm), like the Omega(n log n) lower bound of comparison sorts and the Omega(n) lower bound of sorting in general.

This also happens with the less clear but also rich problem of the Minimum Spanning Tree, which has two famous algorithms (Prim's and Kruskal's) that can be implemented with different data structures, with a great impact on efficiency.

So the real problem is that sometimes teachers just focus on teaching sorting but don't explain (and sometimes they don't have it clear either) that it's not sorting but a frame of mind that you want to give them. Sorting is normally already implemented in the libraries of popular (and not so popular) programming languages.
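Point 3's divide-and-conquer technique, for instance, fits in a few lines; a minimal mergesort sketch in Python:

```python
def merge_sort(a):
    """Divide and conquer: split, sort each half, merge in O(n)."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]   # one tail is already sorted
```

The point of teaching it is the recurrence (T(n) = 2T(n/2) + O(n) gives O(n log n)), not the sort itself.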


I agree with the reasons the parent provided, especially that teachers teach the content without providing a "why?"

What I suspect generally that I cannot prove (yet): When teachers teach things that are easy to teach but not directly important to learn, students are distracted by the surface irrelevance.

Of course the underlying concepts and designs of sorting are important to understand. But, that the GP asked, "Do professional programmers actually think about this? Is this relevant?" means the curriculum has a problem. The problem is: students are asking meta-questions that should've been answered by the "why?" mentioned above.

In sum, I agree with the parent, especially with:

> So the real problem is that sometimes teachers just focus on teaching sorting but don't explain (and sometimes they don't have it clear either) that it's not sorting but a frame of mind that you want to give them.

It's easy (possibly... lazy? Again, this is what I suspect that I cannot prove) for a CS department to declare "Students will learn [list of topics] by examining and implementing sorting algorithms."

By contrast, it's hard to 1. interest students by presenting them with problems, not fossilized exercises, and 2. assure students' parents, taxpayers, and employers that they know the "basics|fundamentals|theory"


In most complex jobs (that is, jobs with a theoretical component someone writes books about), the theory part mostly comes to the fore when something goes wrong, and you have to diagnose and fix the problem.

So, in this case, programmers would use this stuff when a sort takes up too much time or too much RAM for what it's meant to do. Then you need to know what big-O is and how to fix it.

It seems a bit odd to spend all this time preparing for things that don't happen every day, but, really, that's what people pay the big bucks for: Someone who can smooth over life's little difficulties, at least in that specific realm.


This blog post by Joel Spolsky makes a case for teaching low-level stuff that I find very compelling.

http://www.joelonsoftware.com/articles/fog0000000319.html

Someone who hasn't thought about sorting algorithms is likely to implement a "Shlemiel the painter" solution without even realizing that their problem is fundamentally sorting related.
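Spolsky's Shlemiel is a C strcat loop; a rough Python analogue for contrast (note CPython sometimes optimizes the naive version in place, which is exactly why relying on it across implementations is fragile):

```python
def shlemiel_build(chunks):
    # each concatenation copies everything built so far:
    # O(n^2) copying in the general case
    result = ''
    for c in chunks:
        result = result + c
    return result

def sensible_build(chunks):
    # ''.join sizes the result once and copies each chunk once: O(n)
    return ''.join(chunks)
```

Same output either way; only the amount of copying differs.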


In most universities there is a distinction between software engineering and computer science. Computer science deals more with things like writing and optimising compilers, and the interface between hardware and software, and writing an optimised sorting algorithm is a good introduction to that kind of thinking.

Software engineers (like engineers in other fields) are in the business of taking stuff made by scientists and building things out of it that businesses and consumers use (like spotify and bridges and dialysis machines). They wouldn't do much in the realm of specific algorithm design, it's mostly about research and system architecture.


So I come from the other end of the spectrum. Studied non-CS in college, and after graduating I became a self-taught developer and am now working as a "professional programmer".

I had always believed that I wouldn't need to know the CS theory because I was able to write code that worked and didn't find it too complicated. When I decided to become a developer I learned by launching a few websites, which performed horribly, but I didn't realize until I got a few users. It took me a while to even figure out what was wrong, but eventually I learned about Big-O and realized my algorithms were far from optimal. Started scrambling to teach myself all CS theory I could find online, just so I could build things properly. Then there is a phase where you are constantly worried there is something else from CS school you are missing which will cause your software to blow up, but eventually you get more confident (with shipped code/and a better understanding of CS theory).

So in the end, I think the CS theory in school saves you from a lot of head banging and ignorance after you graduate. However, you will probably do a lot of this in your undergrad, so I guess it all balances out in the end!


No, but we do think about efficient computational solutions to our problems. These sorting algorithms are simply starting points for you to start thinking how algorithms solve problems, how to optimize them, how to construct data structures, and how to apply asymptotic analysis on your computational solutions. If all goes well with these algo courses, you should end up thinking like a computer scientist ;)


no, but you need the type of thinking that goes into predicting/implementing/evaluating/debugging the performance of a sorting algorithm (unless you are doing some kind of simple CRUD app for years, which is a perfectly fine way to make a living)


Yes, it does matter, though it's not usually very grand. Think about it this way: most of those algorithms you learned in college took someone in academia years to write. The algorithms I write in my day job are usually MUCH simpler.

For example, I remember one developer (who now works at Amazon, funny enough) who used nested for loops to iterate through data and find values in multiple places in our application. This caused a noticeable UI slowdown. I ended up just loading the data into a hashmap and it became instantaneous.
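A toy reconstruction of that fix (the field names are made up for illustration, and the fast version assumes every order has a matching customer):

```python
def match_nested(orders, customers):
    # the original shape: O(len(orders) * len(customers)) comparisons
    out = []
    for o in orders:
        for c in customers:
            if c['id'] == o['customer_id']:
                out.append((o['id'], c['name']))
    return out

def match_hashed(orders, customers):
    # build the hashmap once, then each lookup is O(1) on average
    by_id = {c['id']: c for c in customers}
    return [(o['id'], by_id[o['customer_id']]['name']) for o in orders]
```

With a few thousand rows on each side, that's millions of comparisons versus thousands of hash lookups.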


The majority of my data work for the last dozen or so years has been via SQL, and I am only concerned with sorting from a performance perspective; larger hardware (read: minis and mainframes) usually has fully developed databases, so the real work is done by the OS/db engine.

It then becomes a game of ensuring adequate access paths and the like and using a query analyzer to see if what you put in executes as you like. I am more than amazed at the tricks a good query engine can pull but readily take advice it offers when a new index/access path is required.

Since this covers the sorting, the next issue is making sure the project requests are asking for the right data. A lot of the time we gain efficiency by analyzing what they want and making their request better fit it (see, Bob, you really didn't need that million-record request when all you wanted was this set here).


In the same way that mathematicians seldom solve quadratic equations, programmers seldom implement sorting algorithms. They are still a hugely important building block, and if you don't know how they work you will have little hope of creating good algorithms for situations when the off-the-shelf libraries can't help you anymore. Loads of programmers don't aspire to technically challenging work though and they will likely do fine, every now and then littering with accidentally quadratic code.


You can generally use them to model a situation or improve it. For instance, if you're building a dictionary, you'll be looking into a trie or a Bloom filter, and that's generally how I use them.

Skiena's Algorithm Design Manual is great for the former approach, and has a cookbook feel to it, since most of it is just an index of different algorithms to use for different situations.


On a typical day, no I do not think about sorting algorithms in particular.

Generally speaking though, I do think about Big-O notation when writing something that isn't trivial. If I do have to implement some particular known algorithm, then I typically look to the language itself for a solution. Failing that I look for a suitable library, and only after exhausting those potential solutions do I write my own.


We don't think about sorting specifically, but we do think about performance and the structure of data all the time. How data is stored very much affects how quickly it can be retrieved. This is pretty universal – Twitter needs different storage than Google – and you'll be surprised how often you think in these terms.


It's very important, but I wouldn't implement a sort; I would be making sure that I'm using the right one for the moment.

The same goes for data-structures. For example, I might need a set of unique values, but, when I code, I would choose an enum-set or a hash-set based on what problem I'm solving.


I've been doing this for a living for about 15 years now and have had to implement a sorting algorithm from scratch a couple times for special-case scenarios.

Graph algorithms, and linear algebra complexities, on the other hand, I've had to worry about a bunch.


I will occasionally think "okay, so this bit is obviously going to cause O(n^2) database queries. Let's fix that." and add in a select_related(). But most of my performance improving comes after gathering actual data.


Nope, there are ready-made libraries for every fairly common language out there.


It's about time; this is great progress. It seems like they've updated a lot of their computer science stuff -- it used to be just a basic introduction to JS and not really anything theoretical.


For others who may be confused (as I was), note in the sidebar that this course is for "introductory computer science algorithms." While I suppose it is most certainly a collection of algorithms, I would generally expect such topics to be listed under "data structures" or "discrete math" at most universities.

In my experience, "algorithms" denotes a higher level, more advanced course (OBSTs, flows, stable marriage, ILP/LP...)


While the algorithms course at my university consisted mostly of calculating complexities and was much more difficult in general, there is also Sedgewick's Algorithms book, which covers many of the topics in this course. So "algorithms" is certainly the correct term.


That was my thought exactly! My data structures & algorithms course covered most of these topics and more, and that was the 3rd CS class required in the curriculum. The 3000-level algorithms course I took as part of my CS program included Big O notation, NP-completeness, P vs NP, advanced uses of graph algorithms, dynamic programming, and reducibility.


Taught by Cormen, no less


Gee, I saw quicksort but not heap sort!

Heap sort is O(n log n) in all cases, including the worst, while quicksort is O(n^2) in the worst case. O(n log n) is the Gleason bound, the fastest a sort can be on average when comparing pairs of keys, so in that sense heap sort is asymptotically the fastest possible. Dropping the technique of comparing pairs lets us consider radix sort, which can be faster still. The lecture does cover merge sort; good.
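For reference, heap sort falls out of a binary heap almost for free; a sketch with Python's heapq:

```python
import heapq

def heap_sort(items):
    """O(n log n) in every case: O(n) heapify, then n pops of O(log n) each."""
    heap = list(items)
    heapq.heapify(heap)            # sift the whole list into heap order
    return [heapq.heappop(heap) for _ in range(len(heap))]
```

(Unlike textbook in-place heapsort this copies the input, but the cost structure is the same.)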

For graphs, the min cost network flow problem is linear programming, and it turns out that if the arc capacities are integers and you start with a basic integer feasible solution, then the simplex algorithm maintains integer solutions and, if an optimal solution exists at all, will find an optimal integer solution.

It turns out that this fact is in practice one of the best tools for linear integer programming, which in general is NP-complete. This progress on integer programming is worth noting.

For cycling in the simplex algorithm for the min cost flow problem on networks, W. Cunningham, long at Waterloo, has a nice solution based on what he calls strongly feasible basic solutions.

For the simplex algorithm on the network flow problem there are, compared with the simplex algorithm in general, some enormous simplifications -- e.g., a feasible basis corresponds to a spanning tree of arcs -- and, thus, astoundingly high performance on astoundingly large problems. This fact should be noted.

Maybe I missed it, but I didn't see trees mentioned. Both AVL trees and red-black trees are good examples, that is, balanced binary trees good for collection classes. And B-trees, multi-way branched balanced trees, long important on hard disk (and likely now on solid state disks) for databases, should also be mentioned.

Since they need to teach heap sort, they should also mention that the heap data structure is good for an easy implementation of fast priority queues, e.g., look at 1 billion numbers one at a time, in any order, and end up with the 100 largest, efficiently. Also, there is a modification of the heap data structure that has good locality of reference when doing a lot of virtual memory paging.
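That priority-queue use case (keeping the 100 largest of a huge stream) is a bounded min-heap; a sketch with Python's heapq:

```python
import heapq

def largest_k(stream, k=100):
    """Keep a min-heap of the k largest seen so far:
    O(n log k) time, O(k) memory, input in any order."""
    heap = []
    for x in stream:
        if len(heap) < k:
            heapq.heappush(heap, x)
        elif x > heap[0]:
            heapq.heappushpop(heap, x)   # evict the smallest of the current k
    return sorted(heap, reverse=True)
```

Memory stays at k items no matter how long the stream is, which is the whole point for the billion-number case.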

And for hashing, the clean solution is

Ronald Fagin, Jurg Nievergelt, Nicholas Pippenger, H. Raymond Strong, "Extendible hashing - a fast access method for dynamic files", ACM Transactions on Database Systems, ISSN 0362-5915, Volume 4, Issue 3, September 1979, pages 315-344.

I'm not thrilled with recursion since in practice it can strain implementations of the call stack.

Course over. Hope you enjoyed it!


graycat: you may be hellbanned. (I don't see anything obnoxious in your comment history which is why I mention it.)


For some reason when I read "Khan Academy" somewhere I instantly associate it with Star Trek and I can't take it seriously.


Well done, you've noticed a tremendous character flaw and now you can strive to correct it.

Honestly, how are you going to live life if any reminder of pop culture can distract you from a serious subject? And how are you going to interact with people if noticing such a thing (even if it's tremendously obvious) makes you feel the need to shout about it? Best to knock it on the head now.


Dude, you need to get something for your soreness. I don't take everything seriously, and comments on the internet are definitely one of those things. You are like a loser Klingon captain whose ship is about to blow up because the Enterprise is firing its photon torpedoes at him. And no, I don't care about you downvoting either.


Jesus dude. It was just an anecdote. It's a hell of a thing to call that a "tremendous character flaw" based on like, literally no evidence other than your assumptions.


Just for clarity: genuinely being incapable of taking seriously anything that sounds like a pop culture reference would indeed be a tremendously damaging flaw.

Of course I don't really believe the parent poster has that problem: I was making a little fun as mild chastisement for the terrible comment. I honestly think you could have interpreted it correctly if you hadn't been so quick to take offence.


If you don't actually believe the poster has that problem, why respond the way that you did? Doesn't that make your comment at least as terrible?


Just a heads up: The HN community tends to dislike jokes, especially jokes meant to do nothing more than make another laugh. In fact, I have found the HN community weighted towards discouraging humor even when it is used not only in a one-note joke, but also as an effort to further a discussion (as good satire should).

This naturally seems to stem from their conscious efforts to promote a 'serious'/on-topic culture and humor's nature as being subjective and often hard to detect.


Well, Bill Gates took it seriously when his daughter was using it, seriously enough to give them a few million dollars. Perhaps you should rethink things.




