>I figured that I, the author of the game, with my degree in mathematics from MIT, and years of puzzle-solving experience, would be much better at solving my own puzzles than random users downloading my app would be.
>I thought there was a mistake because some users were solving puzzles faster than I was! Were they cheating?
I liked the article a lot. I'm an "iterate and repair" guy but am trying to use careful thought, logic and planning up front more often. I think that the value of the article is that it shows that neither approach is perfect by itself.
These grads can have a hard time "just fucking doing it", especially when it comes to getting started on very hard problems. It can take a few years of real world experience for them to get comfortable with doing something that's known to be wrong and moving from there.
This is what we're talking about, a problem of practice.
But that's all because I knew I wanted to be a practicing programmer, not just a computer scientist.
A lot of university cs programs have this attitude that practical knowledge (and I'd argue that source control is beyond practical, it's essential) is yucky and should be left to the technical and vocational schools.
But things like iterating and heavy editing and re-writing are in no way programming specific, the exact same principles apply to writing plain old English. Does anyone know if English departments teach these things?
Really they are the most important part of any sort of English composition. Getting words on paper is easy, getting the right words on paper is hard. As soon as you begin writing papers for school (in middle school, perhaps), you should be learning all of those skills.
If you don't know them by college (which is entirely possible), they might be offered in a remedial 'English composition' course.
OP isn't right that this is greedy either. What your users are doing is an iterative search with a backoff strategy. They pursue one path through the search space, and when they don't find the solution, they remove some links (backing off) and try a new path through the search space.
The clever bit is whether you can automate their decision mechanism: how they proceed through the space step by step, which points they fall back to when something goes wrong, and how they recognize that they've gone wrong.
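In code, that strategy might look something like this (a rough Python sketch of my reading of it; neighbors, is_solution, and the random backoff distance are all stand-ins for whatever users actually do in their heads):

    import random

    def solve_with_backoff(start, neighbors, is_solution, max_steps=10000):
        path = [start]                            # one path through the search space
        for _ in range(max_steps):
            state = path[-1]
            if is_solution(state):
                return path
            options = neighbors(state)
            if options:
                path.append(random.choice(options))   # pursue the current path
            elif len(path) > 1:
                k = random.randint(1, len(path) - 1)  # back off: remove some links
                del path[-k:]                         # ...and try a new path from there
            else:
                return None                           # backed off past the start; give up
        return None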
The fact that people can do this somewhat by the seat of their pants shouldn't come as a surprise to anyone who studies cognitive psychology, but it certainly might surprise engineers. After all, that's why everyone who's not a social scientist loves Malcolm Gladwell's books.
In my experience (playing Monorail!) the "fast" approach is basically guess, repair, repair, repair, ... If you're trying to keep a mental stack of tentative moves to "undo," as many people do when solving a Sudoku, that's a different, more "logical" approach to me. The fast approach has no state except the current state of the board.
What you call the repair step I see as running the greedy algorithm again, from a different node in the search graph.
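Written down, that's just greedy search with restarts (a toy sketch; random_state, greedy_step, and score are placeholders for the player's actual judgment):

    def greedy_with_restarts(random_state, greedy_step, score, restarts=100):
        best = None
        for _ in range(restarts):
            state = random_state()           # a different node in the search graph
            while True:
                nxt = greedy_step(state)     # run the greedy algorithm again
                if nxt is None or score(nxt) <= score(state):
                    break                    # no improving move: this run is stuck
                state = nxt
            if best is None or score(state) > score(best):
                best = state
        return best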
There's probably some optimal level of fear-of-mistakes for any given project. Monorail made me realize my brain was tuned too far in the fear direction. I stand by that generalization.
This is what I was referring to. Yes, if you tackle coding projects that are challenging mostly by size and not by algorithmic difficulty, simply plowing ahead is the right attitude. But sometimes you will encounter real challenges, and for those you will need all the top-down design and logical tricks that you can muster. To the extent that my personal experience is relevant, for my PhD thesis I've designed and implemented a novel algorithm. I went through three non-functional versions before I realized that I actually needed to spell everything out on paper first, before writing a single line of code. After that, I was done in two weeks.
With iterate and repair, people are using an internal heuristic to make a choice and refining that heuristic over time. The choices are not necessarily locally optimal.
Please don't make offensive comments like, "I guess the algorithms course is no longer required at MIT?!"
The very presence of the "internal heuristic" that you mention is usually enough to make the algorithm greedy.
Otherwise, thanks for being nice, and try to take less offense at what people say on the Internet.
That is most likely a wrong assumption. Greedy algorithms usually do not find globally optimal solutions, yet the people solving these problems are finding -the- solution. That is enough to suggest that the heuristic people are using is much more complex than merely choosing local optima.
No one is wrong or right. It just depends on what you believe users are doing. One thing is certain: users use "local search" techniques as opposed to "systematic" search. The traveling salesman problem is the classic illustration: a local search algorithm operates on a single current node and moves only to neighbors of that node.
However, depending on the "local search" algorithm used, those may be greedy or not:
- "Hill Climbing" is greedy. The algorithm is affected by local maxima, ridges and plateaux.
- "Simulated annealing" is not greedy. It uses "gradient descent" and does not always pick the "best" neighbors.
I believe the heuristics used by the users:
- are not very precise ("Well, it is better, right?")
- may change over time ("Oh, look at this!")
That's why I think it is not a "greedy" algorithm with deterministic results, but it is NOT complete either (hence all the clearing and starting over).
I can't wait for you guys to kick my butt :)
ps: I'm a business major...
All you guys are trying to label and taxonomize the "lay person's" strategy into your hierarchy.
Humans are damn smart, and the cocksure MIT nerd got his ass handed to him.
You are forced to purge your mind of every disorganized thought onto paper, and then go back through as often as necessary until it's right.
This may not be a great "programming" strategy, but it certainly makes the design process more fun and productive.
All of which is to say that the Computer Science/Creative Writing benefits may go both ways.
However, I really do not agree that the right approach is to skip thinking about the best possible design beforehand. Ramming your fingers in there and fumbling around blindly is a terrible idea. Think beforehand about the best possible solution, then repair and iterate on that.
Keep it simple.
Build it in stages.
Let someone else do the hard part.
> Let someone else do the hard part.
Now on the web, any time I find a fun game and play it for a while and improve, I come online to find that the high score has been run up by some aspieoverlord to unwinnable levels.
So how do game designers overcome that demotivating unwinnable high score phenomenon?
This actually applies anywhere you show merged group data related to a particular person.
E.g. your height, your friends' heights, and global heights.
Imagine a news article with interactive graphs that query G+ or FB and use the info to put a friends layer next to the global data.
1. local (city or neighborhood)
2. friends
3. global
It is more analogous to the parent comment.
In this case, I'd say the premature optimization was simply trying to prove the correctness of each move before it was made. In the bigger picture, programmers have a meta-algorithm for how they solve problems. It's tempting to sit down and figure out an O(1) or maybe O(n) algorithm - whatever is the fastest possible - and only then start to program. In reality, many of these optimizations, especially on a life-scale, can be counter-productive, because the cost of designing, implementing, and maintaining them far outweighs the benefits. One of the hardest parts of good engineering is true simplicity.
So I'd say Knuth's suggested philosophy applies to problem-solving at large, and this is one perspective on the author's epiphany.
Seriously though, sometimes it's good to approach a problem using that advice from the chess teacher in Searching for Bobby Fischer: "Don't move until you see it."
But that allows automated solutions. As far as I know, all of the best solvers there use a greedy algo, although there are some cases where you can prune massively by using logical deductions as well.
The state space for this game is not stochastic. It's deterministic.
Standard single-agent search techniques apply here (e.g. A* and its variants) and essentially encapsulate this "iterate and repair" idea (especially IDA*).
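For anyone who hasn't seen it, a minimal IDA* skeleton looks roughly like this (a sketch assuming unit move costs; neighbors, is_goal, and the admissible heuristic h are placeholders):

    import math

    def ida_star(start, h, neighbors, is_goal):
        # Each pass is a depth-first probe cut off at the current f = g + h
        # bound ("iterate"); raising the bound and re-searching is the "repair".
        def probe(state, g, bound, path):
            f = g + h(state)
            if f > bound:
                return f                     # too deep: report the overshoot
            if is_goal(state):
                return path
            smallest = math.inf
            for nxt in neighbors(state):
                if nxt in path:
                    continue                 # avoid trivial cycles
                result = probe(nxt, g + 1, bound, path + [nxt])
                if isinstance(result, list):
                    return result            # found a goal path
                smallest = min(smallest, result)
            return smallest                  # smallest overshoot sets the next bound

        bound = h(start)
        while True:
            result = probe(start, 0, bound, [start])
            if isinstance(result, list):
                return result
            if result == math.inf:
                return None                  # whole space exhausted
            bound = result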
The fact that the algorithm is executed by a human matters, because the human processing system is optimised for highly parallel tasks, such as estimating which of many possible next moves is most likely to improve the solution.
Also, we're talking about algorithms executed by humans.
Just because a problem is NP-complete doesn't preclude the use of deterministic algorithms. For example, you can solve small instances of the TSP with standard search techniques.
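Concretely, a brute-force exact solver is completely deterministic and fine for tiny instances (a sketch; dist is assumed to be a square distance matrix):

    from itertools import permutations

    def tsp_exact(dist):
        # Try every tour starting at city 0. Exponential, but the
        # NP-completeness only bites as n grows.
        n = len(dist)
        best_tour, best_len = None, float("inf")
        for rest in permutations(range(1, n)):
            tour = (0,) + rest
            length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
        return best_tour, best_len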
I'm not sure what distinction you're trying to make with your last statement.
Finding a Hamiltonian cycle has much more in common with solving a Sudoku puzzle than with finding the shortest path in a graph; both are problems for which no polynomial-time algorithm is known. Now, this problem is about finding a Hamiltonian cycle in a restricted class of graphs, so there is a chance that you could solve it efficiently, but it is non-obvious. If you know how, perhaps you should publish a paper on it: that might be a major breakthrough.
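For reference, the obvious exact method is plain backtracking, which is exponential in the worst case (a sketch, assuming the graph is given as an adjacency dict; that blow-up is exactly why a fast algorithm for the restricted class would be news):

    def hamiltonian_cycle(graph):
        # graph: dict mapping each vertex to an iterable of its neighbours.
        vertices = list(graph)
        start = vertices[0]

        def extend(path, seen):
            if len(path) == len(vertices):
                # All vertices used: success iff we can close the loop.
                return path + [start] if start in graph[path[-1]] else None
            for nxt in graph[path[-1]]:
                if nxt not in seen:
                    found = extend(path + [nxt], seen | {nxt})
                    if found:
                        return found
            return None

        return extend([start], {start})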
There is nothing particularly publish-worthy although it would make an interesting state space for a graduate-level AI class assignment.
(Another state space -- game -- that we can solve with A* is the sliding-tile puzzle, which outwardly has nothing to do with pathfinding per se.)
Perhaps this author didn't learn about iteration from MIT, but implying that MIT is entirely ignorant of this seems unfair.