Users of my iOS Game Teach Me a Lesson MIT Didn’t (aaroniba.net)
186 points by aaroniba on July 6, 2011 | 77 comments



The author is one cocky guy. This article mostly annoyed me.

>I figured that I, the author of the game, with my degree in mathematics from MIT, and years of puzzle-solving experience, would be much better at solving my own puzzles than random users downloading my app would be.

>I thought there was a mistake because some users were solving puzzles faster than I was! Were they cheating?


Anyone who knows Aaron will tell you that he's actually pretty humble. I think it's easy to imagine the thought process without it being especially cocky: "I built this game and have spent more time solving these puzzles than any of these guys. How are they so much faster?"


OK, Aaron.


I think it's safe to say he was being tongue-in-cheek.


I actually got the exact opposite feeling: he was cocky at first, but then realized he was wrong. And even wrote a blog post about it. That's pretty humble. :)


His claim seems reasonable. I'd bet on him ;)

I liked the article a lot. I'm an "iterate and repair" guy but am trying to use careful thought, logic and planning up front more often. I think that the value of the article is that it shows that neither approach is perfect by itself.


tl;dr: MIT doesn't teach everything. Author discovers rapid iteration, the value of throwing away code, and possibly source version control because users of his game solved it faster than he did with superior "logical" reasoning.


Don't take this so lightly. I've seen a lot of brilliant engineers, especially new CS grads, get stuck in this hole. They have the intelligence and knowledge to "correctly" solve problems, and are driven to always get them right the first time.

These grads can have a hard time "just fucking doing it", especially when it comes to getting started on very hard problems. It can take a few years of real world experience for them to get comfortable with doing something that's known to be wrong and moving from there.


Excellent point. The comment wasn't a slight against the author. Far from it, I wish I had a mind even 25% as analytical as his.


I appreciate the tl;dr though since I've got the gist of the post and decided it's not one I want to read right now.


I don't understand. They know how to do things, yet they can't do things?


When theory meets practice, it's expected that the application doesn't come immediately.


Think about a brilliant poet and essayist who has written absolutely brilliant poems and essays, but is scared by the thought of writing a book. How could he ever write something so long!

This is what we're talking about, a problem of practice.


I'm not surprised. Even my no-name state university did not teach source control or any editor/IDE skills. I picked all of that up from the other students and learned even more by coding myself into a corner, fun!

But that's all because I knew I wanted to be a practicing programmer, not just a computer scientist.

A lot of university cs programs have this attitude that practical knowledge (and I'd argue that source control is beyond practical, it's essential) is yucky and should be left to the technical and vocational schools.

But things like iterating and heavy editing and re-writing are in no way programming specific, the exact same principles apply to writing plain old English. Does anyone know if English departments teach these things?


Iterating, heavy editing, and rewriting should be taught before you get to college.

Really they are the most important part of any sort of English composition. Getting words on paper is easy, getting the right words on paper is hard. As soon as you begin writing papers for school (in middle school, perhaps), you should be learning all of those skills.

If you don't know them by college (which is entirely possible), they might be offered in a remedial 'English composition' course.


What he calls "Iterate and Repair" is basically a greedy algorithm. Greedy algorithms are great where they work, but there are also situations where they break spectacularly. I guess the algorithms course is no longer required at MIT?!


Most greedy algorithms I've seen do not have a repair step.  But whether I call it "iterate-and-repair" or "greedy" doesn't change the message, does it?  Or are you saying that I shouldn't be surprised that iterate-and-repair works so well on this puzzle?


Really?

OP isn't right that this is greedy either. What your users are doing is an iterative search with a backoff strategy. They pursue one path through the search space, and when they don't find the solution, they remove some links (backing off) and try a new path through the search space.

The clever bit is whether you can automate their decision mechanism for how they're proceeding through the space step by step, and what points you can fall back to when something goes wrong (and how to recognize when you've gone wrong).

The fact that people can do this somewhat by the seat of their pants shouldn't come as a surprise to people who study cognitive psychology, but it certainly might be a surprise to engineers. After all, that's why everyone who's not a social scientist loves Malcolm Gladwell's books.


Iterative search with a backoff strategy is called backtracking, no? Seems an obvious way of solving the problem, especially since if I were to write a solver I'd definitely do it in Prolog.
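Sketched in Python rather than Prolog, the backtracking approach looks something like this (the toy 4-node graph is my own example, not the game's actual board):

```python
def hamiltonian_cycle(adj):
    """Backtracking search for a Hamiltonian cycle in an undirected
    graph given as {node: set_of_neighbors}. Returns the cycle as a
    list of nodes (first == last), or None if there is none."""
    nodes = list(adj)
    start = nodes[0]
    path, visited = [start], {start}

    def extend():
        if len(path) == len(nodes):
            # Every node placed: succeed if the last node closes the loop.
            return start in adj[path[-1]]
        for nxt in adj[path[-1]]:
            if nxt not in visited:
                path.append(nxt)       # tentative move
                visited.add(nxt)
                if extend():
                    return True
                path.pop()             # dead end: back off, try another
                visited.remove(nxt)
        return False

    return path + [start] if extend() else None

# 4-node square: 0-1-2-3-0
square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(hamiltonian_cycle(square))  # a valid cycle, e.g. [0, 1, 2, 3, 0]
```

The `path.pop()` line is exactly the "remove some links and try a new path" step described above.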


I've heard it called a bunch of stuff, but yep, that's one of 'em! Sorry, I was writing while exhausted :)


To pick a nit, I don't think the human, "cognitive" strategy really has any "back-off."

In my experience (playing Monorail!) the "fast" approach is basically guess, repair, repair, repair, ... If you're trying to keep a mental stack of tentative moves to "undo," as many people do, e.g. when doing a Sudoku, that's a different, more "logical" approach, to me. The fast approach has no state except the current state of the board.


What I'm saying is that you shouldn't draw such general conclusions from seeing a simple algorithm work in a particular case. Your problem-game turned out to be algorithmically simple. This probably makes it more appealing as a game. But most things in life will not be as simple, and this includes large coding projects.

What you call the repair step I see as running the greedy algorithm again, from a different node in the search graph.


The conclusion I drew from this experience was that I should get over my fear of making mistakes. I could have been more clear on that in the post.

There's probably some optimal level of fear-of-mistakes for any given project. Monorail made me realize my brain was tuned too far in the fear direction. I stand by that generalization.


"It turns out to be easier and faster to iterate from an existing but wrong solution, than to deduce a correct solution from scratch. Even if you have to occasionally press the “clear” button to start over."

This is what I was referring to. Yes, if you tackle coding projects that are challenging mostly by size and not by algorithmic difficulty, simply plowing ahead is the right attitude. But sometimes you will encounter real challenges, and for those you will need all the top-down design and logical tricks that you can muster. To the extent that my personal experience is relevant, for my PhD thesis I've designed and implemented a novel algorithm. I went through three non-functional versions before I realized that I actually needed to spell everything out on paper first, before writing a single line of code. After that, I was done in two weeks.


Something tells me trying and failing three times contributed much more to your solution than writing everything out on paper.


Can you define "algorithmically simple"? This looks like Hamiltonian cycle with an extra constraint that there is one bounded region. I wouldn't be surprised if this problem were NP-hard.


You're right, there are probably instances of this problem for which greedy algorithms are quite slow.


"iterate and repair" is a pretty standard form of local optimum search, like in http://en.wikipedia.org/wiki/Hill_climbing or simulated annealing. Why are you surprised that works for exploring the decision tree of a puzzle game?
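For reference, a bare-bones hill climber with random restarts (the restart plays the role of the "clear" button; the integer toy problem and all names here are mine, not anything from the game):

```python
import random

def hill_climb(score, neighbors, random_state, restarts=20):
    """Hill climbing with random restarts: greedily move to the best
    neighbor; when stuck at a local maximum, restart from a fresh
    random state (the 'clear' button) and keep the best state seen."""
    best = None
    for _ in range(restarts):
        cur = random_state()
        while True:
            cand = max(neighbors(cur), key=score)
            if score(cand) <= score(cur):
                break  # local maximum: no neighbor improves
            cur = cand
        if best is None or score(cur) > score(best):
            best = cur
    return best

# Toy example: maximize -(x - 3)^2 over the integers.
best = hill_climb(
    score=lambda x: -(x - 3) ** 2,
    neighbors=lambda x: [x - 1, x + 1],
    random_state=lambda: random.randint(-50, 50),
)
print(best)  # 3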


Regular expression search is often greedy, and it also repairs: when a match fails, the engine backs off to a less greedy partial solution.
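A concrete illustration of that match-then-back-off behaviour, using Python's `re` module:

```python
import re

# The greedy star in 'a.*b' first swallows the whole rest of the string,
# then the engine "repairs" by backing off until the trailing 'b' matches.
greedy = re.match(r"a.*b", "a-b-b-XYZ").group()
lazy = re.match(r"a.*?b", "a-b-b-XYZ").group()
print(greedy)  # a-b-b  (backed off from the end to the last 'b')
print(lazy)    # a-b    (lazy version stops at the first 'b')
```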


How is it a greedy algorithm? He didn't say anything about the "intuitive" decisions maximizing immediate gain at every decision point.


I'm sure the users he describes can be trusted to make locally optimal choices.


Greedy algorithms run the same way every time given the same choices. Why? Because, if options are received in a deterministic order, it'll make the locally optimal choice.

With iterate and repair, people are using an internal heuristic to make a choice and refining that over time. These are not necessarily the locally optimal choice.


They won't run the same way if you start from a different node in the search graph, which is probably what the players are doing.


I was being nice before, but it's obvious that you don't know what a greedy algorithm is.

Please don't make offensive comments like, "I guess the algorithms course is no longer required at MIT?!"


A greedy algorithm is an algorithm that makes locally optimal choices. The classic example, if you've taken an algorithms course, is the travelling salesman problem with the nearest neighbor heuristic. This starts at an arbitrary node, and will usually generate different solutions based on the node that is chosen to start at.
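A minimal sketch of that nearest-neighbor heuristic (the toy coordinates are mine; note how changing `start` can change the tour):

```python
import math

def nearest_neighbor_tour(points, start=0):
    """Greedy TSP heuristic: from the current city, always visit the
    nearest unvisited city. Different start nodes usually yield
    different (and rarely optimal) tours."""
    dist = lambda a, b: math.dist(points[a], points[b])
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(tour[-1], c))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (0, 1), (2, 0), (3, 1)]
print(nearest_neighbor_tour(cities, start=0))  # [0, 1, 2, 3]
```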

The very presence of the "internal heuristic" that you mention is usually enough to make the algorithm greedy.

Otherwise, thanks for being nice, and try to take less offense at what people say on the Internet.


It is not the presence of an internal heuristic that makes an algorithm greedy, it is the choosing of locally optimal solutions. For your claim to hold, you would need to demonstrate that people make locally optimal choices, at least in the context of this problem.

That is most likely a wrong assumption. Greedy algorithms usually do not find globally optimal solutions, yet the people solving these puzzles are finding -the- solution. That is enough to suggest that the heuristic people are using is much more complex than merely choosing local optima.


It was interesting to read both of you guys. I can see a lot of ego on both sides... But that's a good thing :)

No one is wrong or right. It just depends on what you believe users are doing. One thing is certain: users use "local search" techniques as opposed to "systematic" search. That is, the algorithm operates on a single current node and moves only to neighbors of that node.

However, depending on the "local search" algorithm used, it may be greedy or not:

- "Hill climbing" is greedy. The algorithm gets trapped by local maxima, ridges, and plateaux.

- "Simulated annealing" is not greedy: it sometimes accepts worse neighbors instead of always picking the "best" one.

I believe the heuristics used by the users:

- are not very precise ("Well, it is better, right?")

- may change over time ("Oh, look at this!")

That's why I think it is not a "greedy" algorithm with deterministic results, but it is NOT complete either (hence the clear button).

I can't wait for you guys to kick my butt :) ps: I'm a business major...


"What's in a name? That which we call a rose by any other name would smell as sweet."

All you guys are trying to label, to taxonomize, the "lay person's" strategy into your hierarchy.

Humans are damn smart, and the cocksure MIT nerd got his ass handed to him.


This is true for the most part. Humans have always been the "try it, and if it kills somebody, try it another way" species. I.e., we used to eat tobacco leaves. Now we smoke them. Riddle me that.


Mentioning MIT makes this a nice, low-brow analog of "Worse is Better" by Richard Gabriel:

http://en.wikipedia.org/wiki/Worse_is_better


Every programmer could benefit from taking creative writing courses in college.

You are forced to purge your mind of every disorganized thought onto paper, and then go back through as often as necessary until it's right.

This may not be a great "programming" strategy, but it certainly makes the design process more fun and productive.


One of my students (CS) once took a creative writing class, and her professor, a man well-known in the creative writing community, told her he thought computer programmers make better writers. She was pretty sure this had a lot to do with understanding revision and outside input. When you write a program, the compiler (or possibly your unit tests) tells you "it's wrong! Fix it!" and doesn't always give a lot of guidance as to how, but you figure it out and come out the other side with a better product. A lot of the other students in the class took the critical comments of the professor (and peers) as almost personal attacks on their perfect creation.

All of which is to say that the Computer Science/Creative Writing benefits may go both ways.


Writing is 90% editing: so is programming.


As a designer (and soon-to-be programmer) I totally agree that iteration is key. Iteration is so important to producing a polished product with the best possible aesthetics and experience.

However, I really do not agree that the right approach is to not think about the best possible design beforehand. Ramming your fingers in there and fumbling around blindly is a terrible idea. Think beforehand about the best possible solution and then repair and iterate on that.


Yes - this is exactly the advice that BWK gives. Rule #2 specifically.

    Keep it simple.
    Build it in stages.
    Let someone else do the hard part.
Sourced from: http://rixstep.com/2/2/20080125,00.shtml


  > Let someone else do the hard part.
Obviously not everyone can follow this strategy or none of the hard parts would get done.


You're missing the meaning. You should be using libraries and other small tools, to assemble more complex and powerful tools. Not reinventing the wheel. That is the way of Unix.


Unless different parts are hard for different people.


As someone who has used trial and error processes my entire life to figure out problems, am I missing out by not being a deliberate planner?


When I was a kid I had the top score in Galaga at my local arcade. I held the spot for months. Then one day I came in to find my score doubled by someone. The arcade owner told me that it was a guy who went around to each arcade and ran up the scores. Like a game playing gunslinger.

Now on the web any time I find a fun game and play it for awhile and improve, I come online to find that the high score has been run up by some aspieoverlord to unwinnable levels.

So how do game designers overcome that demotivating unwinnable-high-score phenomenon?


There should be 3 levels of high score lists:

1. personal

2. friends

3. global

This actually applies anywhere you show merged group data related to a particular person.

I.e., your height, your friends' heights, and global heights.

Imagine a news article with interactive graphs that query G+ or FB and use that info to put a friends layer next to the global data.
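The three views can be carved out of one global score table; a sketch (the dict shape and all names are hypothetical):

```python
def leaderboards(scores, me, friends):
    """Three views over one global score table: personal best,
    friends-only, and global. `scores` maps player -> best score."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return {
        "personal": scores.get(me),
        "friends": [(p, s) for p, s in ranked if p in friends | {me}],
        "global": ranked,
    }

scores = {"me": 100, "ann": 250, "bob": 90, "pro": 9999}
views = leaderboards(scores, "me", {"ann"})
print(views["friends"])  # [('ann', 250), ('me', 100)]
```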


Instead of social circles, how about geographical locations?

1. local (city or neighborhood)

2. statewide

3. country

4. earth

It is more analogous to the parent comment.


Personally I'd love to see replays of "impossible" scores, similar to the speed-run videos for games like Super Mario on YouTube.


This reminds me of the famous Knuth quote: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."

In this case, I'd say the premature optimization was simply trying to prove the correctness of each move before it was made. In the bigger picture, programmers have a meta-algorithm for how they solve problems. It's tempting to sit down and figure out an O(1) or maybe O(n) algorithm - whatever is the fastest possible - and only then start to program. In reality, many of these optimizations, especially on a life-scale, can be counter-productive, because the cost of designing, implementing, and maintaining them far outweighs the benefits. One of the hardest parts of good engineering is true simplicity.

So I'd say Knuth's suggested philosophy applies to problem-solving at large, and this is one perspective on the author's epiphany.


Eliminating invalid moves is not a "premature optimization", but is at the heart of heuristic search. By eliminating impossible/invalid states, you eliminate them from the search space, which not only speeds things up, but prevents the discovery of an invalid "solution". Put another way, discarding invalid moves reduces the branching factor of the problem -- a fundamental optimization.
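A toy illustration of how checking a constraint per move, rather than only at complete candidates, shrinks the search tree (the adjacent-difference constraint here is an arbitrary stand-in, not the game's rule):

```python
def count_nodes(n, prune):
    """Count search-tree nodes when building permutations of range(n)
    where adjacent values must differ by at most 2. With prune=True the
    constraint is checked as each value is placed; with prune=False the
    full permutation tree is generated (validity would only be checked
    at the leaves)."""
    nodes = 0

    def extend(perm, used):
        nonlocal nodes
        nodes += 1
        if len(perm) == n:
            return
        for v in range(n):
            if v in used:
                continue
            if prune and perm and abs(perm[-1] - v) > 2:
                continue  # invalid move: cut off this whole subtree
            extend(perm + [v], used | {v})

    extend([], set())
    return nodes

print(count_nodes(7, prune=False), count_nodes(7, prune=True))
```

The pruned count is a small fraction of the unpruned one, which is the "reduced branching factor" in action.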


This is exactly how I solve everything, and it's why I'm so bad at programming on a whiteboard in interviews.


Reminds me of this strategy for finding Hamiltonian circuits in graph theory. http://alanhogan.com/asu/hamiltonian-circuit/


Good thing he didn't make a sudoku app or he wouldn't have learned the same lesson... good luck taking an iterative approach to that.

Seriously though, sometimes it's good to approach a problem using that advice from the chess teacher in Searching for Bobby Fischer: "Don't move until you see it"

http://www.youtube.com/watch?v=V36gt6_uJy4&t=7m34s


Oh, iterative approaches are often used when stuck in Sudoku: deduce everything you can, but once you can't, guess where the fewest options remain. Erase if you can't get there from here.


Sure, you can guess at Sudoku if you want (though it's often considered poor form to design a puzzle that requires a guess), but the (intended) point was that you don't jump right in and start iterating from the beginning.


This game is similar to Mortal Coil (flash frontend to a programmable backend)

http://hacker.org/coil/

But that allows automated solutions. As far as I know, all of the best solvers there use a greedy algo, although there are some cases where you can prune massively by using logical deductions as well.


It sounds a bit like simulated annealing.


Simulated annealing is a probabilistic algorithm.

The state space for this game is not stochastic. It's deterministic.

Standard single-agent search techniques apply here (i.e. A* and its variants) and essentially encapsulate this "iterate and repair" idea (especially IDA*).
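For reference, a generic IDA* skeleton (the tiny graph and the zero heuristic below are placeholder assumptions for illustration, not the game's actual state space):

```python
def ida_star(start, is_goal, successors, h):
    """IDA* sketch: depth-first search bounded by f = g + h, raising the
    bound to the smallest exceeded f-value on each iteration. Re-trying
    cheap prefixes over and over is a disciplined 'iterate and repair'."""
    INF = float("inf")

    def dfs(node, g, bound, path):
        f = g + h(node)
        if f > bound:
            return f, None            # over budget: report the overshoot
        if is_goal(node):
            return f, path
        smallest = INF
        for child, cost in successors(node):
            if child in path:
                continue              # avoid trivial cycles
            t, found = dfs(child, g + cost, bound, path + [child])
            if found is not None:
                return t, found
            smallest = min(smallest, t)
        return smallest, None

    bound = h(start)
    while True:
        bound, found = dfs(start, 0, bound, [start])
        if found is not None:
            return found
        if bound == INF:
            return None               # search space exhausted

# Toy graph with edge costs; h == 0 degenerates to uniform-cost deepening.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
path = ida_star("A", lambda n: n == "C", lambda n: graph[n], lambda n: 0)
print(path)  # ['A', 'B', 'C']
```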


It doesn't sound like the users mentioned are using a completely deterministic algorithm to determine the next solution to try, though - it sounds more like they're making random tweaks. Albeit the tweaks are guided by intuition, which makes it somewhere in between the two - neither completely deterministic nor completely unguided.


Sounds like MIT could have taught him a lesson about selecting the correct tool for a job, then -- there's no real sense to use a non-deterministic algorithm to solve this class of problem. Beyond the spirit of just experimenting, of course.


Except the non deterministic algorithm worked better. Did you read the blog post at all?


It's a false conclusion. His non-deterministic algorithm is decidedly less efficient than A*: it's essentially following a similar algorithm except it's randomly choosing which node to expand next, sometimes expanding the same node repeatedly.


The author of the blog post used a deterministic algorithm; he observed some of his users using a non-deterministic one that appeared to work better.

The fact that the algorithm is executed by a human matters, because the human processing system is optimised for highly parallel tasks, such as estimating which of many possible next moves is most likely to improve the solution.


This problem cannot be solved by A*. This is a combinatorial problem which is likely even NP-complete.

Also, we're talking about algorithms executed by humans.


The problem can most certainly be solved with A* or one of its variants.

Just because a problem is NP-complete doesn't preclude the use of deterministic algorithms. For example, you can solve small instances of TSPs with standard search techniques.

I'm not sure what distinction you're trying to make with your last statement.


Can you explain how you'd solve this using A* efficiently? You will see what I mean. A* finds the shortest path, this problem requires a Hamiltonian cycle, two entirely different problems.

Finding a Hamiltonian cycle has much more in common with solving a Sudoku puzzle than with finding the shortest path in a graph; both are problems for which no polynomial-time algorithm is known. Now, this problem is about finding a Hamiltonian cycle in a restricted class of graphs, so there is a chance that you could solve it efficiently, but it is non-obvious. If you know how, perhaps you should publish a paper on it: that might be a major breakthrough.


A-star is not limited to performing just pathfinding; consider it more a graph traversal algorithm (which fits well with this state space). I would use one variant in particular, IDA-star, for solving this game. If we consider one or more nodes connected together as a "cluster", starting with an "empty cluster", the start state is a set of clusters, and the end state is one single cluster that represents a cycle. A "move" would connect two clusters together.

There is nothing particularly publish-worthy although it would make an interesting state space for a graduate-level AI class assignment.

(Another state space -- game -- that we can use A-star to solve are sliding tile games, which have nothing outwardly to do with pathfinding per se.)


What you are describing is exhaustive search on the state space and is orders of magnitude slower than the randomized algorithm, especially when executed by humans (which is why I added the qualifier "efficiently"). Also, A* is for finding the shortest path, and the length of the path in this case is irrelevant. In fact the length of all paths is the same.


The method used reminds me of randomized sorting algorithms and how they're often more efficient than deterministic methods.
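The classic instance is quicksort with a random pivot; randomizing the pivot sidesteps the adversarial inputs that make a fixed-pivot version quadratic (a sketch, not a production sort):

```python
import random

def quicksort(xs):
    """Quicksort with a random pivot. Expected O(n log n) on every
    input; a fixed-pivot version can be forced into O(n^2) by a
    deliberately ordered input."""
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs)
    return (quicksort([x for x in xs if x < pivot])
            + [x for x in xs if x == pivot]
            + quicksort([x for x in xs if x > pivot]))

print(quicksort([5, 2, 9, 2, 7, 1]))  # [1, 2, 2, 5, 7, 9]
```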


Forgive me for I am a student of HtDP and not SICP, but when this author was attending MIT wasn't 6.001 still in place? The Scheme REPL, which I imagine was introduced to students of 6.001, encourages an iterative approach to problem solving.

Perhaps this author didn't learn about iteration from MIT, but implying that MIT is entirely ignorant of this seems unfair.

edit: formatting


Or possibly he wasn't being so pedantically literal.


I suppose MIT students don't learn guess and check.



