It's not crazy at all; it greatly simplifies development to use callbacks for actions rather than manually encoding the necessary state into the URL. Techniques like this are what enable a single developer to be so productive: they automate the boring, time-consuming stuff.
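
Roughly, the technique works like this (a minimal Python sketch, not pg's actual Arc code; the helper names are invented, though the fnid name mirrors the parameter HN's own expiring links use):

    # Closure-callback sketch: each link registers a closure that captures
    # whatever state it needs, and the URL carries only an opaque id.
    import secrets

    callbacks = {}  # fnid -> closure; lives only in this process's memory

    def url_for(fn):
        """Register a closure and return a link that will invoke it later."""
        fnid = secrets.token_urlsafe(8)
        callbacks[fnid] = fn
        return f"/x?fnid={fnid}"

    def handle(fnid):
        fn = callbacks.get(fnid)
        if fn is None:
            return "Unknown or expired link."  # what users see once closures are purged
        return fn()

    # Rendering a "More" link: the page number is captured in the closure,
    # never spelled out in the URL.
    def render_page(page):
        more = url_for(lambda: render_page(page + 1))
        return f"...items for page {page}... <a href='{more}'>More</a>"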



Except it doesn't appear to work robustly, which makes it poor design. "Automating boring and time-consuming stuff" is all well and good if it actually produces a functional system, but that concern is secondary to robustness.


> but that concern is secondary to robustness.

To you, but that's a value judgement; it was obviously the other way around for pg. Had he not taken those shortcuts, there would be no Hacker News at all; be thankful he automated that boring stuff and bothered to build the site.


It'd work robustly enough if the links didn't expire, and if we believe other posts on this page, the links are expiring due to memory limits on the system. (The other possibility is a timeout, I guess, which is easily fixed.) If it's running out of memory to store the closures, it would run out of memory to store the interaction state.

In other words, there's a problem here, but it's not the programming model that pg chose.


Except the links do expire, so it's not robust. I expect that when I visit a web page, I can let it sit for an extended period of time before moving on to the next page and have it work. HN doesn't work.

Furthermore, the technique of holding important state authoritatively in memory like this is not a good web-development practice, for various reasons. Doubly so if it's state data which can be round-tripped. Links should not break when the web server or cache (I'm not sure which one it is) runs low on memory. So yes, there is a problem with the programming model that pg chose.


If he hadn't used that technique, there would be no Hacker News for you to use at all. You're entirely missing the point: this is a technique to make hobby programming more fun. It's not about being robust or following best practice; it's about making programming simple enough that pg found it worth his time to build this site in the first place.


blahedo's point was that the technique was not fundamentally a problem from a robustness point of view, and I disagree with that point. It is a problem, and I was pointing that out.

Your point seems to be that since Hacker News is a "hobby project," we may forgive sacrificing a bit of robustness to make the programming exercise more pleasant. That point was not clear to me from your original posting. Rather, the point seemed to be that the technique was good because it was clever and fun, and I disagreed with that sentiment.

PG seems to be saying elsewhere that it was used as a rapid prototyping technique. That seems to be a fair justification of the technique, in my estimation.


> If it's running out of memory to store the closures, it would run out of memory to store the interaction state.

Not necessarily. The way I would approach this is to keep the link cache in memory, but have the links contain the minimal necessary state to reconstruct the link from disk-based storage in the case where the cache is gone. That gives the same excellent median performance without any breakage.
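
Concretely, something like this (a hypothetical sketch building on the closure example above; the state kwargs, fallback encoding, and rebuild_from helper are all invented for illustration):

    # Hybrid sketch: the fast path hits the in-memory closure, but the URL
    # also carries the minimal state needed to rebuild the action if the
    # closure has been purged or the server restarted.
    import secrets

    callbacks = {}

    def url_for(fn, **state):
        fnid = secrets.token_urlsafe(8)
        callbacks[fnid] = fn
        fallback = "&".join(f"{k}={v}" for k, v in state.items())
        return f"/x?fnid={fnid}&{fallback}"

    def handle(fnid, params):
        fn = callbacks.get(fnid)
        if fn is not None:
            return fn()                  # median case: served straight from RAM
        return rebuild_from(params)      # slow path: reconstruct from URL state

    def rebuild_from(params):
        # e.g. re-render page N from disk-backed storage instead of erroring out
        return f"rebuilt page {params['page']}"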


> The way I would approach this is to keep the link cache in memory, but have the links contain the minimal necessary state to reconstruct the link from disk-based storage in the case where the cache is gone.

It's not a cache, and you clearly don't understand the issue. These are callbacks to closures; embedding the state in the URL is exactly what they attempt to avoid, because doing that is tedious.


Sorry, bad choice of words. I was just shooting from the hip here in response to the parent that suggests you can't both keep something in RAM for the majority of cases and still make it robust.

The fact that embedding state in the URL is tedious is neither here nor there; in any case, it's a pretty garbage excuse. You know what's more tedious than writing code to pass a few integers around in links? Thousands of people losing carefully written paragraphs of enlightened prose on a regular basis. In fact, the more carefully considered the text, the more likely it is to be lost. If it took 24 hours, or even 12, for links to expire, then maybe you could justify the approach, but it seems to be well under an hour on average before a given closure is purged. This site is hardly so complex as to gain much from a pure continuation approach, and if you can't ease this problem in a Lisp, then are all of us building services for non-hackers doomed to a life of bitter tedium?


You seem not to be aware of the architecture of this site: it's all run out of RAM, no database, just simple files lazily loaded on demand, on a single server running in a single process. Because of this, memory is tight; that's why those closures are purged. It simply means pg hasn't had the time to convert more of the prototype code into production stateless code that always works. But that's his prerogative; this site is a hobby for him, and he doesn't need to make excuses about anything. If you don't like his site, go somewhere else.


There's something on disk, isn't there? Or does a power outage mean poof it's all gone?

Anyway, you're totally right that it's his prerogative to build a site however he sees fit, and it's my prerogative to leave, but it's also my prerogative to complain about it and call it half-assed. I do build websites as well, so I'm not just armchair commenting.


> Or does a power outage mean poof it's all gone?

Poof, the closures are all gone; the state of the articles and comments is, of course, rebuilt from disk state on an as-needed basis.

> I do build websites as well, so I'm not just armchair commenting.

I appreciate that; I just use a similar framework and understand why one would choose to use callbacks and not bother ever replacing them. It has to matter enough to bother, and to pg it doesn't yet.


Simplifies development? This is not a complicated piece of software and the techniques to build it are well known. You wouldn't need any more state than the id number you need for the callback anyway.

Building broken software is always much easier than building robust correct software so this is hardly a good argument.


> You wouldn't need any more state than the id number you need for the callback anyway.

I don't think you grasp the issue... the link is expired because the callback no longer exists to link to.


No, I understand the issue. But you're presupposing a specific implementation here. If you were designing this in, for example, PHP, then you'd need just one piece of state: the page number (for "More...") or the parent comment id (for the comment), and so on.

The real issue is that there's a whole bunch of saved state on the server for operations that could be (and should be) completely stateless.
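
For contrast, a stateless version of the same links might look like this (an illustrative Python sketch rather than PHP; the route table and render helpers are invented stand-ins):

    # Stateless sketch: every link encodes its inputs, so any process (or the
    # same one after a restart) can serve it; there is nothing to expire.
    def render_page(page):               # stub standing in for real rendering
        return f"items for page {page}"

    def render_reply_form(parent_id):    # stub standing in for real rendering
        return f"reply form for comment {parent_id}"

    def more(params):
        return render_page(int(params["page"]))

    def reply(params):
        return render_reply_form(int(params["parent"]))

    routes = {"/more": more, "/reply": reply}

    def handle(path, params):
        handler = routes.get(path)
        return handler(params) if handler else "404"

    # Links are plain data: "/more?page=2", "/reply?parent=12345"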


No, the issue is time, specifically pg's time; using callbacks takes less programmer time than manually building every URL statelessly. Yes, it could be done another way, but the site wouldn't exist at all if he'd had to do that for every link, because it'd have taken too much of his time.


What is it about Arc or the architecture of HN that makes callbacks so easy and stateless URLs so hard? I've written a forum in under 3 hours the "traditional" way in PHP. I find the stateless concept a lot simpler in general. None of these links should require any server state at all.


Generated HTML; callbacks make for rapid prototyping by grabbing necessary state directly from the environment rather than making you specify the state field by field in the URL. Your 3-hour prototype doesn't come close to the features in this forum, so it's not comparable. No links ever require server state if you take the pains to manually specify routing and parameter information for your links, but that's something callbacks eliminate the need for by trading server state for programmer time.


I think you massively over-estimate the functionality of this site. I can actually see the immense value of using closures for maintaining state in an application that actually needs state (any site containing a progression of forms, for example). HN is ridiculously simple for a site, with few requests requiring much in the way of previous state.

I understand why it's designed this way -- he's got a tool meant for more complex tasks than what it's used for on this site. He used that tool because it's what he knows. But I really can't see it saving that much programmer effort in general.


I think you over-estimate what you can do in three hours and underestimate how much non-UI stuff there is: trying to keep out garbage, prevent voting rings, score karma, vary behavior based on that karma, and probably a dozen other small things.


I certainly don't think 3 hours is enough for this site, but then pg has put in a lot more time than just his initial prototype as well. Still, the functionality of closures and server-side state isn't needed for this kind of site at all, and nothing you mentioned above seems related to that functionality either.


It's not relevant whether they're needed or not; if you use closures as your default linking mechanism, it's because that's always the easier route. They're not needed for any site ever; they're just damn convenient. You go back through the app replacing closures with more verbose, direct linking as time permits; clearly pg hasn't found the time yet.


You could route URLs for this site with a moderately sized switch statement. The code to route everything would be smaller than the code needed just to route closures, before you've set up a single one.

Obviously, replacing closures with direct linking later is more difficult than just using direct linking in the first place.


Incorrect; routing closures is free, automated by the web framework, which is something no web framework does for direct linking. Look at Rails: every Rails programmer spends a large amount of time figuring out and managing routing. Look at Seaside, which uses callbacks just like this site: you can build an entire app without spending a second on routing URLs, because callbacks do it for free as part of the framework, completely automated.


You get crappy URLs with closures; no other web framework produces URLs that ugly, but if one did, it could do so with the same amount of effort. Routing in Rails, by contrast, is specifically designed to separate URL presentation from the underlying action.

Hell, you can get routing for free in PHP if you just name your files like: comment.php, topic.php, upvote.php, downvote.php, etc.


Users don't care what the URL looks like; Amazon does quite a bit of business with its crappy URLs, and closures allow linking to actions without having to create a resource for every action. This is extraordinarily useful when building complex applications.

Look, I use both styles daily, and I'm telling you it'll be over my dead body before I let someone take closure-style ugly links away from me. You aren't going to convince me that manually routed URLs are always preferable.


Users who bookmark, and search engines, care about non-random, non-expiring URLs. The URLs on HN are wrong in every way a URL can be wrong.

Closures create a resource for every closure instance, which isn't very scalable or efficient; it is, in fact, the problem with this very site. The technique might be useful for building complex applications, but it's just a liability and a waste for something as simple and busy as HN.

I'm not trying to convince you that manually routed URLs are always preferable, but they would be for a site like this.


It's not productive to have a site that randomly locks out visitors. No matter how clever the code design is, this is a product flaw.


No, it's a design choice: he favors ease of programming over user experience. You might not agree with that choice, but it's not a flaw; he did it on purpose and knew the consequences.


A better name might be technical debt. It is a flaw from the perspective of someone seeing the error message, but for the developer it is a way of saving development time, a debt that can be paid down later by fixing it.


Yes, this is a better way to say it.


It's not a feature, so it's a flaw.

You're right that pg did it this way in the beginning to save time, but years have gone by and the site is now more central to his business, especially as a tech demo. This back-and-forth argument presupposes that there isn't a better fix than the naive "use old-style code" solution.

Anyways, the discussion is worth having. Only by pointing out problems do you fix them.


You're justifying code that doesn't work on the grounds that it was quick to write. (Facetious comment: if the website doesn't have to work, I can write the whole thing in under a minute. Someone wrote an HN clone along these lines a few weeks back, but I don't know how long it took them.)

Given that the HN code was written by an increasingly busy man in his spare time, his use of an unreliable but quick-to-write implementation technique may be an acceptable trade-off. But it doesn't make the design any less crazy.


> You're justifying code that doesn't work on the grounds that it was quick to write.

No, the code works fine for an acceptable period of time; it's not that it doesn't work at all.



