Hacker News
Do you smoke test? (samsaffron.com)
33 points by seanp2k2 on Feb 22, 2013 | 19 comments


The thing missing from this setup for me is a ramp up of changes on release, and automated rollback based on service metrics/KPIs.

Don't release to the world. Release to 1%, then 5%, then 20%, and so on. And if the load time starts going up, or the number of errors starts going up, or any of numerous other metrics degrades, the last release gets rolled back and alarms ring.

That way I don't break the world - I break a much smaller N% of the world that demonstrates the problem - which then promptly gets rolled back by the friendly neighbourhood release bots.

I write sucky code - so I write tests to help drive my design and catch errors.

But tests are code too - and I write sucky code - so I write release processes[1] that help save me when I write sucky test code.

If you do continuous delivery you need that second layer of metric-driven tracking of release quality, or writing sucky test code will catch you out at some point ;-)

[1] and yes - the release ramp up / roll back code is potentially sucky code also - but it's just one bit that does one thing so gets tested a lot more than all the new sucky code I write ;-)
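The ramp-up / rollback loop described above can be sketched in a few lines. Everything here (`get_error_rate`, `set_traffic`, `rollback`) is a hypothetical hook into your own deploy tooling and metrics store, not a real API:

```python
import time

def ramped_release(release, get_error_rate, set_traffic, rollback,
                   steps=(1, 5, 20, 50, 100),
                   error_threshold=0.01, soak_seconds=0):
    """Send an increasing slice of traffic to the new release,
    rolling back as soon as the error rate crosses the threshold."""
    for pct in steps:
        set_traffic(release, pct)
        time.sleep(soak_seconds)      # in real use, e.g. 300s per step
        if get_error_rate(release) > error_threshold:
            rollback(release)
            return False              # this is where the alarms ring
    return True                       # fully released

if __name__ == "__main__":
    # Toy demo: this release starts erroring once it sees 20% of traffic.
    state = {"pct": 0}
    ok = ramped_release(
        "v2",
        get_error_rate=lambda r: 0.05 if state["pct"] >= 20 else 0.0,
        set_traffic=lambda r, p: state.update(pct=p),
        rollback=lambda r: print("rolled back", r))
    print("released:", ok)
```

The nice property is that the only code you have to trust deeply is this one small loop, which is exactly the "one bit that does one thing" from the footnote.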


Yes, I love this approach. When we reach high scale we will surely take a similar one.


That is awesomely cool, but a lot of work to set up. For smaller companies it's not worth the effort.


> not worth the effort

Until it is, of course. Growing up is hard.


How does one set this up?


> You only get to see the "real" page after a pile of JavaScript work happens.

Fix that. If you can't test your config and your authoring by crawling server-rendered documents, others can't crawl them either, and they aren't really part of the World-Wide Web.
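One cheap way to enforce that is to fetch the raw HTML the way a JS-less crawler would and assert the real content is already in the markup. A stdlib Python sketch (the URL and marker strings are placeholders for your own pages):

```python
from urllib.request import urlopen

def content_is_server_rendered(html, must_contain):
    """True if every important marker is already in the raw markup,
    i.e. visible to a crawler that never executes JavaScript."""
    return all(marker in html for marker in must_contain)

def check_page(url, must_contain):
    """Fetch a page the way a dumb crawler would and verify it."""
    html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
    return content_is_server_rendered(html, must_contain)
```

If `check_page("https://example.com/post/1", ["Post title"])` fails while the page looks fine in a browser, the content only exists after client-side rendering.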


Googlebot executes JS now (has for a couple of years, I believe), and let's face it, it's really the only crawler that matters.


Would you happen to know what they're using to do that now?


Googlebot is Chrome, more or less.


Perhaps I'm missing the point, but I'd be wary of advocating pure automation as a replacement for a good old-fashioned manual eyeballing.

The point of a smoke test is that for all your care in testing and deployment, you still might have missed something. Stuff goes wrong, and not always in an immediately machine testable way. You don't need a full QA department, you just need to open up the bits you've changed after a deploy and have a play to make sure nothing's obviously out of place. It's a small step to make a non-optional part of your process and it has a potentially huge payoff.

Adding the automated integration test is a great thing to do, but complementing it with a human is even better.


> This is my fault, its 100% my fault.

> Often when we hit these kind of issues, as developers, we love assigning blame. Blame points my way … don’t do it again … move along nothing more to see.

I have never found this to be a constructive 'culture'. Instead of placing blame on others or feeling guilt myself, I try to see if there's a way of improving the process itself instead of relying on people not fucking up. Because that's going to happen to everyone. Bugs are still going to happen, but if the process (be it deployment, testing/QA, dev, w/e) eliminates potential points of failure, they will be less frequent and hopefully less severe.


Same here; I find that approach often poisonous. Embracing and eliminating the suck is far more productive.


Don't get me wrong, I get that PhantomJS and CasperJS are nice. But isn't it an issue that you're only actually running your tests on one engine? That's the cookie that keeps me with Selenium; although I'm eager to try out Browserling as well :)
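One way to hedge against the single-engine problem is to keep the check itself driver-agnostic and run it unchanged against several Selenium browsers. A rough sketch (the URL and expected title are placeholders for your own app):

```python
def smoke_check(driver, url="http://localhost:3000/", expected="My App"):
    """Load the page in the given browser and check the title appears.
    Works with any object exposing Selenium's get/title/quit."""
    try:
        driver.get(url)
        return expected in driver.title
    finally:
        driver.quit()

if __name__ == "__main__":
    try:  # requires the selenium package and the browsers themselves
        from selenium import webdriver
    except ImportError:
        webdriver = None
    if webdriver is not None:
        # The same check, once per engine instead of once total.
        for make in (webdriver.Firefox, webdriver.Chrome):
            print(make.__name__, "ok" if smoke_check(make()) else "FAILED")
```

The point isn't the five lines of check; it's the loop over engines, which is exactly what a Phantom-only setup never exercises.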


Isn't that what QA is for? My team is lucky enough to have a dedicated tester that has to approve every build in test and staging before we're allowed to deploy to production. It can be a pain but it does catch a lot of sloppy code.


I wonder what this does to the mindset/culture of the team knowing that "the tester will catch it". Does this lead to lots of sloppy code that the tester now has to catch? Vs not having the tester and having a mindset that "I better make sure my stuff works". Obviously there's a balance that must be met, but this is something we've debated at my current job. Thoughts?


I think people are going to try to avoid the tester experiencing errors. Why would anybody write something badly if it's just going to come back to them in a couple of hours as wrong? But any half-decent culture probably already thinks this way.


It is likely to be way cheaper and way more thorough to have a slew of automated tests... by no means am I saying fire all your QA and automate everything, but being able to run 10,000 UI tests in one hour has some major advantages.


So you didn't run your Selenium / Phantom tests against your staging environment before deploying to production?

That is testing 101...

Having a smoke test is still useful because often you don't want to wait for those automated tests to complete.
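And that quick check can be tiny: hit a handful of known URLs right after the deploy and make sure they respond, which takes seconds rather than the time a full suite needs. A minimal stdlib sketch (the endpoint list in real use is your own):

```python
from urllib.request import urlopen
from urllib.error import URLError

def smoke(urls, fetch=urlopen, timeout=5):
    """Return the URLs that fail a basic liveness check.
    `fetch` is injectable so the checker itself is testable."""
    failures = []
    for url in urls:
        try:
            if fetch(url, timeout=timeout).status != 200:
                failures.append(url)
        except URLError:
            failures.append(url)
    return failures

# After a deploy, something like:
#   failed = smoke(["https://example.com/", "https://example.com/login"])
#   if failed: roll back and page someone
```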


Good to know we will let you go this time. Next time you will be required to remove the Facebook login.





