
My intent was definitely not to portray learning as painful. Learning is a joyful thing. But it's not something you accomplish by passively staring at the world. You need to exert yourself if you want to do more than gain a passing acquaintance with things.


Yes; but playing is not passively staring at the world, nor does it involve a lack of exertion, nor is it limited to a mere "passing acquaintance". The point of play is that you can exert yourself without much of the hard work of willing it. A child with a new toy is highly engrossed, focused, and certainly not passive nor satisfied with a "passing acquaintance".

I guess I reacted to the screwed-up faces in particular; they looked like they were in pain.


Our problem here is the overloaded nature of the word "play". I do not disagree that children learn through play. But I'm using the word differently in my presentation: I'm using it to refer to activities that you (as an adult) pursue casually, as a way to relax or enjoy yourself. If you play the guitar very well, for instance, you might take an hour in the evening to just "play", singing along, etc. This serves many purposes, and is definitely valuable, well-spent time, but it does not serve to improve your guitar playing. To accomplish that, you'd need to spend some time working on guitar techniques that you are less accomplished at.

You can see my problem, hopefully. Ambiguity in language has been the downfall of more than one well-intentioned presenter!


You have overloaded play in a very specific way in your response: playing an instrument. I'd have preferred it if you'd chosen something else, because I think it introduces a third meaning for play (specific to instruments), distinct both from the focused play of a child and from something an adult pursues casually (which I doubt should be called play).

My hobby (as well as my only form of transport) is motorbikes. I do it for over an hour every day. I'm definitely getting better at it, because of an iterative process of analysis and experimentation. If I just looked at it as a means of transport, I wouldn't be so eager to stretch myself; I'd be content with getting from A to B in a safe and not very efficient manner.

It's completely different to programming, so in some ways it is relaxing; it's certainly highly enjoyable; but it is also simultaneously energizing and tiring. It uses a completely different part of my brain, and gets the adrenaline going.

I think if guitar-playing were my hobby, I'd be focused on improving and having an iterative experimentation / analysis cycle. But if guitar-playing were something I did to make music, perhaps to entertain other people, then the focus would be elsewhere. It would no longer be play; it would merely be doing.

What I'm getting at is that play (my definition of it, at least) necessarily implies stretch goals, doing things that you have room to improve at, because without challenge it's boring. What makes it different from work is intrinsic motivation, and hence the lack of any need to apply will.

But you're right in another respect: this is all coming down to a disagreement over the use of words, rather than a disagreement in concepts. The start of the presentation just rubbed me the wrong way, and I've made far too much out of this small thing...


For me, it is interesting because fairly simple means can produce complex (and to me, beautiful) results. It intrigues me to explore this and see what can be done with it.

It is also particularly interesting to me as a way to explore the features and syntax of a programming language. Maze algorithms are handy (and entertaining), but any algorithm would do. The idea is to implement the algorithm in the language of your choice (preferably one you are not experienced in) and see how the language lends itself to the implementation.

Naturally, not everyone will share either of these interests with me. But I'm okay with that. :)


Nearly all of the algorithms I described extend well into multiple dimensions. I'm not sure how Eller's would work in 3D, but there is probably a way. And the Binary Tree and Sidewinder algorithms seem like they ought to be possible to adapt to 3D, but I don't immediately see how. The others, though, are all trivially expandable to n-dimensions (just add "up", "down" and any other directions you like to the list of possible moves).
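
To make the n-dimensional point concrete, here's a rough Ruby sketch of a recursive backtracker on a 3D grid. The representation and names are just for this illustration, not taken from the article code:

    WIDTH, HEIGHT, DEPTH = 5, 5, 3

    # Each direction is just an offset; adding :up and :down is all that 3D needs.
    DIRECTIONS = {
      north: [0, -1, 0], south: [0, 1, 0],
      east:  [1, 0, 0],  west:  [-1, 0, 0],
      up:    [0, 0, 1],  down:  [0, 0, -1]
    }

    def carve_passages_from(cell, visited, passages)
      visited[cell] = true
      DIRECTIONS.to_a.shuffle.each do |_name, (dx, dy, dz)|
        neighbor = [cell[0] + dx, cell[1] + dy, cell[2] + dz]
        next unless (0...WIDTH).include?(neighbor[0]) &&
                    (0...HEIGHT).include?(neighbor[1]) &&
                    (0...DEPTH).include?(neighbor[2])
        next if visited[neighbor]
        passages << [cell, neighbor]            # carve a passage between the two cells
        carve_passages_from(neighbor, visited, passages)
      end
    end

    passages = []
    carve_passages_from([0, 0, 0], {}, passages)
    puts "carved #{passages.size} passages"     # a perfect maze has (cells - 1) passages

The only thing that changed from the 2D version is the contents of the direction table; the algorithm itself never needs to know how many dimensions it is walking through.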


You're absolutely right. This one and Aldous-Broder both have a worst case where the algorithm never terminates. As for whether the algorithm is "unacceptable", that depends on the application. For games? Yeah, this is probably far from your best option for generating mazes. But for cases where you absolutely must have a uniform spanning tree, your options are limited. Wilson's is much better than Aldous-Broder, but still not perfect. Robin Houston has described a variant that combines Aldous-Broder and Wilson's (doing AB until about 30% of the field is filled, and then switching to Wilson's), which empirically improves the odds quite a bit, but as long as you're doing a blind random walk, you're pretty much never guaranteed to finish.
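
To illustrate the "blind random walk" issue, here's a minimal Ruby sketch of Aldous-Broder (just an illustration, not the code from the articles). Notice that the loop has no bound other than "every cell has been visited", which is why the running time is unbounded even though it finishes with probability 1:

    GRID_W, GRID_H = 10, 10
    MOVES = [[1, 0], [-1, 0], [0, 1], [0, -1]]

    def aldous_broder
      passages = []
      cell     = [rand(GRID_W), rand(GRID_H)]
      visited  = { cell => true }
      steps    = 0

      until visited.size == GRID_W * GRID_H
        dx, dy = MOVES.sample
        neighbor = [cell[0] + dx, cell[1] + dy]
        next unless (0...GRID_W).include?(neighbor[0]) &&
                    (0...GRID_H).include?(neighbor[1])
        passages << [cell, neighbor] unless visited[neighbor]  # carve on first visit only
        visited[neighbor] = true
        cell = neighbor
        steps += 1
      end

      [passages, steps]
    end

    passages, steps = aldous_broder
    puts "#{passages.size} passages carved in #{steps} random steps"

Run it a few times and you'll see the step count vary wildly; the walk spends most of its time wandering through cells it has already visited.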


Ok... that's what I thought. Thanks.

By the way, thanks for this series of articles... I've found it most diverting. I guess I'm the sort of person who enjoys "recreational maze generation" :p


I don't believe Test::Unit does this intentionally; it's just a side-effect of the implementation (load all tests into an array, and iterate over the array).
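
Something along these lines (not Test::Unit's actual source, just the shape of the pattern, so the run order is simply whatever order the array ends up in):

    class TinyRunner
      def initialize(test_class)
        @test_class = test_class
        # gather every method whose name starts with "test" into an array...
        @tests = test_class.public_instance_methods.map(&:to_s).grep(/^test/).sort
      end

      def run
        # ...and iterate; the array's order is the run order
        @tests.each { |name| @test_class.new.send(name) }
      end
    end

    class ExampleTests
      def test_alpha; puts "alpha"; end
      def test_beta;  puts "beta";  end
    end

    TinyRunner.new(ExampleTests).run   # always alpha, then beta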


We do have a CI server, and as you said it works well for catching failing tests. However, it requires that you commit and push your changes in order to test them, which means you are effectively publishing untested changes to your entire team. The same goes for any kind of distributed testing, unless you are using a shared volume to host your sandbox.

I'm running a Mac Pro with 8 cores, so there is a fair bit of parallelization I can do locally, too. Unfortunately, the tests all depend on the database, and while I can certainly use tools like deep-test to spin up separate DBs for each worker, I've found that doing so adds a full 60 seconds to the test run. I fear that until we eliminate the database from (most of) our tests, super-fast runs will continue to elude us.

CI and distributed tests are good things, no question, but I'm still looking for ways to make it possible to run my tests locally in TDD-fashion. I'm far from out of ideas; it's just a matter of making time to experiment.


This is totally a personal/team comfort question, but is there any reason why you can't have two remotes? "git push jamistest" might use a few bits on a spinning platter somewhere, but that is cheap, and there is no reason your team has to see it if you don't push it to the master repo, any more than they see changes you keep on your local repo.


Aside from me simply wanting to be able to quickly run my tests locally, you mean? :) Mostly it's just an issue of configuring that so it works for all the programmers. Each would need their own remote, and each would need to be hooked into CI. Definitely possible, it just hasn't been a priority.


Most of the time you need only some of your tests (the ones you're working on currently). Then preloading the framework becomes the real bottleneck. But that's solvable too, via spork (I believe there is a spork-testunit, but I've never tried it).

Most of my coding is done in minute loops (add test / watch tests fail / add couple of lines / watch tests succeed). YMMV.


spork is friggen awesome, and if you are on MRI/Yarv you really should use it.

http://spork.rubyforge.org/
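
The usual setup is a split test helper, roughly like this (a sketch from memory; the exact requires will vary by app):

    # test/test_helper.rb
    require 'spork'

    Spork.prefork do
      # Slow, rarely-changing stuff loads once, when the spork server boots.
      ENV['RAILS_ENV'] ||= 'test'
      require File.expand_path('../../config/environment', __FILE__)
      require 'test/unit'
    end

    Spork.each_run do
      # Anything that must be re-evaluated on every test run goes here.
    end

Then you run spork in one terminal and point your test runs at the server (spork-testunit ships a testdrb command for this, if I remember right), so each run only loads the test file, not the whole framework.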


Hey Jamis,

Your CI server should be able to accept a patchset and run it without committing it. TeamCity does this (or a roll-your-own CI server can do this too!).


http://www.astrolog.org/labyrnth/algrithm.htm#perfect says that both the Aldous-Broder and Wilson's algorithms will generate "all possible Mazes of a given size with equal probability". But neither meets your criterion of "efficient", since neither is even guaranteed to finish. I'm curious, too, whether there is an efficient algorithm with the same property (generating any valid maze with equal probability).


Good point. I've removed the bit about O(log n), since aside from being misleading, it really wasn't even relevant to the point of the article.


Yeah, the recursive backtracker is my favorite. Nicer results, and it's very flexible. The other algorithms that I'm going to review are interesting for various reasons, and you can learn a lot about the structure and "essence" of graphs by implementing them, but they aren't as generally useful for maze generation as the recursive backtracker.


You're right, of course, about the documentation being awful. I was actually working on fixing that at the end, but every time I'd spend a few evenings writing docs (which was a few more evenings where I didn't get to do what I wanted to do), people would ask questions on the list not covered by what I'd just documented... and I'd get discouraged all over again. Docs help, for sure, but I painted myself into a corner where there was too much to document in the amount of time I could afford to spend on it. My fault.

I actually used a website for handling feature requests and patches (lighthouseapp.com), and it worked great. But even the best-tested patch for a known bug still needs review. It needs to be applied and tested locally. It needs an update to the ChangeLog. And eventually it needs to be bundled and released, each new release requiring (at minimum) some release notes and a blog post announcing it.

It was a bunch of little things that got more and more annoying. I would have loved to distribute the load across more devs, but aside from a few who would review patches on specific topics (Scott Chacon, for instance, helped with git issues), it was all me, all the time.


"It was a bunch of little things that got more and more annoying."

What you need to do is write a script to automate those small but repetitive tasks! I heard about this great bit of software to do that for you called Capi.. oh.

I was going to add: I think a factor contributing to the lack of ongoing support you received is simply Capistrano's place in the stack. Unlike Rails or other libraries, it very much has a "deal with it once, then forget about it" usage pattern, which would seem to discourage ongoing participation. Or if there was some kind of missing feature "blocking" the envisaged deploy scenario, that would create a pressing need to "get this in NOW!" - and then it's forgotten about again.

The simple position of Capistrano as the deploy mechanism - a critical "bottleneck" through which one must pass but which is then safely forgotten - could be the reason for the "all me, all the time" dynamic.

Anyway, glad to see you don't seem bitter. Good luck and looking forward to future projects.

