Hacker News
Reminder: "Broken gets fixed, but shitty lasts forever" (jamesgolick.com)
169 points by abhay on Feb 16, 2011 | 34 comments


I think the responses here, and especially this blog post, are completely out of line. Yes, he's a community leader, but he still has the right to write lazy code to solve problems quickly. It's not like he published it as a gem, blogged about it, and tried to convince a bunch of people to use it. It was just a tweet about a fun hack.

I agree. Being "a leader" doesn't mean you aren't allowed to have fun anymore.

The blog post reads as, "I have a pet issue that I'm not important enough to get fixed, so someone more important than me should solve it for me." Well, sorry. The world doesn't work like that.

(Maybe you would become a leader if you didn't spend hours of your day thinking about how one-off scripts could be hacked by a malicious server?)

It's also a little cowardly to write a post like this and then disable comments on it.

I think it's a perfectly valid criticism.

One of Rails' biggest influences on the Ruby community (aside from making you feel like a jerk if you don't write tests) is that there is one "best" way to do common things and you should only deviate from the convention when necessary.

This is great most of the time, because it allows people to jump right into code they've never seen before (or haven't looked at in a while) and know exactly what's going on without having to follow a bunch of code paths and random classes. Having trouble with a poorly documented gem? Just jump into the code because all gems are laid out the exact same way.

However, often when you know something is pretty standard functionality (like doing an SSL request with net/http), you know it's been solved many times before and just find the first article on Google about it and do it that way. And you're about 100 times more likely to just blindly copy and paste the code when it comes from the guy who wrote the most popular Ruby HTML parser.

I see the same thing all the time in iOS development. Everyone just uses the same patterns that Apple puts in their documentation, which is why Joe Hewitt's Three20 framework was such a big deal.

Is "the guy who wrote the most popular Ruby HTML parser" actively promoting this, though? I know that everyone thinks Twitter is an advertising platform, but it could be he just thought it was a cool hack and wanted to show his friends. It's invigorating to get positive feedback about something you think is cool, even if the actual feat accomplished is relatively small, like flipping a flag on Ruby's SSL module.

He didn't write a tutorial based on this methodology, and I definitely don't see it pimped around anywhere. Gist is a pastebin, after all.

..which is why Joe Hewitt's Three20 framework was such a big deal.

Three20 wasn't a big deal because it was different from Apple. The noise behind Three20 is that it was (and still is, to a large degree) a cluttered mess of a framework that made it near impossible to use one part without importing the whole thing. I believe that Joe Hewitt has basically said as much on Twitter as well.

If you were to instead point out Apple's template code as a good example of code that is widely used and not very good, you would have a much better analogy.

Oh yeah, I agree that Three20 is a cluttered, monolithic mess. What I should have said was that Three20 was the first major attempt (that I know of) to break away from the strict nested-controller way of making data-driven apps by implementing a pretty nice URL-based application model.

Matt Gallagher has also done a good job of deconstructing the Apple Way and trying new approaches: http://cocoawithlove.com/2009/03/recreating-uitableviewcontr...

Does recreating UITableViewController to be slightly more flexible really constitute a new approach?

It's not slightly more flexible, it's hugely more flexible. As someone who has apps that consist of a complicated web of tableviews, Matt Gallagher's work is amazing and helped me immensely.

Although, I think this link is better: http://cocoawithlove.com/2010/12/uitableview-construction-dr...

If you remove "to be slightly more flexible" from my previous post, you'll find the real point I was trying to make.

I think he's referring to TTNavigator, which is a really cool idea (and an idea I stole for my use in Cydia.mm, fwiw).

I find it repulsive that poor diligence on the part of a programmer can be blamed on some ‘community leader’ setting a bad example. As a programmer, all code that goes into your project is your responsibility. When things go well you’re the only one who gets paid, therefore when they go wrong you’re the only one who should get the blame.

See also: http://enfranchisedmind.com/blog/posts/fyi-my-open-source-us...

Your approach is impractical, and I doubt you follow it yourself. Almost nobody who uses open-source libraries reads all of the code. As with journalism, science, and many other things in life, you have to trust others to deliver work that conforms to certain standards. Even though every newspaper article and scientific paper that you accept is 'your responsibility', you would be pretty miffed if an article in the free local paper was completely false.

If you give away toilets for free, it's your responsibility to make sure they aren't lined with sodium and will blow up after the first flush. If you give away a toilet for free, you are an asshole if you know it has a huge hidden hole and you don't warn takers about it. If you write a piece of software that claims to be doing something, but it actually contains malware, then you are violating the law (as much as when you give me a nice statue that contains a spy camera).

It has never been possible or allowed to give just anything you want away for free, without you being liable in any way. That hasn't changed with software and no license in the world can exempt you from certain basic responsibilities concerning your 'gift'.

Are you speaking morally? From my understanding the warranty disclaimer in many licenses (GPL, BSD, MIT) does exempt you from responsibility...

Claiming you are exempt from any responsibility does not actually legally exempt you from any responsibility. You can't legally bring a spy camera into my home by presenting me with a free gift that contains a hidden camera. You can't legally blow up my home, even if I accept a free device from you and press the buttons you told me to press, unless you were very clear about what the device would do (and no, claiming in the license that it is not meant for anything does not nullify the website and the examples of what the code is obviously supposed to do).

Going from the realm of the obvious to the realm of the somewhat less obvious: if you give me a free bicycle, knowing its front fork is starting to give way, you are responsible for my skull fracture if you don't tell me about it. The practical problem, usually insurmountable, is proving you knew about it. More broadly: the consequences of known defects that obviously have disastrous consequences are the responsibility of the provider. This also holds for software and doesn't depend on the license.

Now my point is not that this is the case here or that such cases are common or even important. The point is that the blanket statement of my parent is plainly false.

Of course, IANAL, but you don't need to be a lawyer to understand jurisprudence. There are plenty of known cases where people intentionally harmed others and tried to plead innocence based on not actually having performed the action that led to the damage. It doesn't fly.

Here's a reference/cheat sheet to using SSL with verification with Net::HTTP:


It shows different ways of providing Net::HTTP with verification certificates.
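For reference, the verified alternative to the VERIFY_NONE hack is only a couple of lines. This is a minimal sketch (the host is a placeholder, and the commented-out ca_file path is hypothetical, for systems where Ruby doesn't find the default CA bundle):

```ruby
require "net/https"
require "uri"

# Placeholder URL for illustration.
uri = URI.parse("https://example.com/")

http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
# The safe setting: actually verify the server's certificate chain.
http.verify_mode = OpenSSL::SSL::VERIFY_PEER
# If Ruby can't locate your system CA bundle, point at one explicitly
# (path is hypothetical):
# http.ca_file = "/path/to/ca-bundle.crt"

# http.request(Net::HTTP::Get.new(uri.request_uri)) will now fail loudly
# on an invalid certificate instead of silently accepting it.
```

The whole point is that this costs one line more than the VERIFY_NONE version.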

Thank you. This is the best example I've seen of how to make Net::HTTP do the right thing with SSL certificates.

The reason the VERIFY_NONE fix got so popular is that, by default, if you turn on SSL support in Net::HTTP in an otherwise working codebase, your calls fail with certificate errors. The docs (http://www.ruby-doc.org/stdlib/libdoc/net/http/rdoc/index.ht...) have absolutely no helpful information except for mentioning the default of VERIFY_PEER. If you're noodling around on your own you'll discover VERIFY_NONE works well before you discover how certificate stores work.
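The certificate-store route the docs never spell out looks roughly like this; a sketch, with a placeholder host, using OpenSSL's default CA paths so that VERIFY_PEER succeeds against real certificates:

```ruby
require "net/https"

# Build a certificate store from the system's default CA locations,
# so peer verification can actually succeed without resorting to
# VERIFY_NONE.
store = OpenSSL::X509::Store.new
store.set_default_paths

http = Net::HTTP.new("example.com", 443) # placeholder host
http.use_ssl = true
http.verify_mode = OpenSSL::SSL::VERIFY_PEER
http.cert_store = store
```

Nothing here is obscure; it's just undocumented enough that VERIFY_NONE wins the Google race.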

This isn't the first time I've heard this gem (also repeated as broken gets fixed, shoddy lives forever). Anyone have the original attribution for it?

While true, it has to be balanced with the fact that you are tasked with creating a product to earn (or save, etc) the company money. Perfection can never be reached, and you have to say 'Good Enough!' at some point and just ship.

And really, if it's crappy code and lasts forever... Isn't that good enough? If you never have to touch it again, then it was as perfect as it needed to be. If you DO have to touch it, it was broken and will get fixed.

I'm not advocating writing crappy code. I'm just saying that you have to draw a line somewhere. I've met amazing coders that will fiddle with the code forever and never finish because they just keep making it better, more readable, prettier, more extensible, or some other thing forever.

The point in general is valid, but picking someone out and showing them up in public was unnecessary.

Using sensible wrappers around Net::HTTP would resolve the issues with the unreasonable defaults.

Wrest (http://github.com/kaiwren/wrest) would be a good fit here: it defaults to VERIFY_PEER for SSL connections, has client-side caching support, and comes with a fluent API, redirect support, callbacks, a patron (curl) back-end, and more.

The project is being actively maintained, and we're working to add async support using EM for now.

The very best criticism of code is written in code.

Example code should be exemplary.

That's harder, of course, when the library has unreasonable defaults.

Do we have a term for this in the small?

Those small utilities you wrote in Perl over a couple of days a decade ago, when your company was tiny, that end up a staple of daily use ten years later, where any attempt to refactor or replace them is met with extreme resistance? What do we call that: crappy-enough software? (I say this as the author.)

(Actually, going back and looking at said code, it's not badly written at all: it's entirely readable, a few Perl scripts, no spaghetti, and it's clear how they interoperate just by reading them. So it's not crappy or badly written; it's just overly simplistic.)

Reminds me of something Gabe Newell (co-founder and managing director of Valve Software) once said: "a game won't be delayed forever, but a game can suck forever."

The fact that you can't delay "forever" is a cheap semantic trick. You can delay for so long that you run out of money / investor patience.

There's a vast graveyard of game companies which can attest to it.

I suppose one has to excuse Gabe Newell for not understanding this, because his company has been unsuccessfully trying to go out of business in exactly this fashion for most of its existence. Their failure to go bankrupt during any of their long delays is as notable and impressive as any of the games they've actually shipped.

Valve is an extraordinary company. That quote does not apply to non-extraordinary companies and hence is idiotic to repeat as a general rule.

It is as meaningless as if you quoted Feynman saying "think about a problem long enough and you'll solve it".

The implicit assumption in either case is one of an underlying quality process, and that is one helluva assumption to make.

Frankly it's instructive to have a high profile snafu.

And how can you get upset with someone with that twitter pic?

If James Golick considers this important enough to criticize other people and even write a blog post about it, why doesn't he write a fix?


