Ditch all alerts that aren't actionable (gabrielweinberg.com)
95 points by pathdependent on May 8, 2012 | 26 comments



I feel much the same way as Gabriel does in this post.

Using email as a sort of alert queue is a terribly broken system. Yes, it's a system that I have to deal with every day. Yes, I get 100-200+ emails a day between alerts and recoveries... many of which are rolled up in "alert packages", which are just as useless.

There needs to be a separation for all the to-do-list, alert-spam bullshit that typically ends up in my email inbox. I'd really love it if I could get back to using my email, ya know, like an electronic replacement for mail. Little communications or thought-out letters. Not this inundation of crap that I currently deal with.

The startup that fixes this, wins.


I did the same thing when cutting over our old system to the new one: I fired an e-mail for every WARN in the system, including 404s that had always existed. 1k+ emails in one day, plus Gmail disabling SMTP from our server, were the sign that e-mail is meant for alerts, not logging.

We've since tried out and signed up with NewRelic with an extremely low error threshold of 0.01%. As a result, we have more data than we had in the past for our errors, notifications on when the threshold is crossed (not for each error), and, most importantly, less dev work to manage them.
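The distinction the commenter describes, notify once when a threshold is crossed rather than once per error, can be sketched in a few lines. This is a hedged illustration of the general pattern, not NewRelic's actual mechanism; the class name and threshold handling are assumptions for the example.

```python
class ErrorRateMonitor:
    """Fires a single notification when the error rate first crosses a threshold."""

    def __init__(self, threshold: float = 0.0001):  # 0.01%, as in the comment
        self.threshold = threshold
        self.requests = 0
        self.errors = 0
        self.alerted = False

    def record(self, is_error: bool) -> bool:
        """Returns True only at the moment the threshold is first crossed."""
        self.requests += 1
        self.errors += int(is_error)
        rate = self.errors / self.requests
        if rate > self.threshold and not self.alerted:
            self.alerted = True   # suppress notifications for subsequent errors
            return True
        return False
```

The key design choice is the `alerted` latch: individual errors still get counted (so you keep the data), but humans hear about the condition once, not once per occurrence.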


I'd like to see those other alerts brought into some sort of dashboard. I'm imagining an app that runs down the side strip of my right-most monitor, collating various business/social metrics, site/earning stats (Analytics, AdSense, etc), domain/billing reminders, etc.


Different email accounts?


I was expecting this to be a post about machine monitoring.

It's not, but the same rule applies.

Much of the past year has been spent tweaking Nagios and other alert settings. Some alerts have been disabled entirely, others simply set to much saner values (as in: if this value is hit, things are not sane).

My read:

If you can't take some action within an alert interval to change / make things better, you don't need to know about it.

If alert A means that condition B has already occurred (e.g.: site is down, whatever), then alert A is 1) useless and 2) will be polluting your (likely already overstressed) mental bandwidth / handheld device constraints at a time when it's least convenient.

E.g.: while you're trying to read a really, really crappy vendor manual in Android's PDF viewer, which helpfully resets you back to page 1 of 892 every time you navigate away from the document -- say, because a new mail/SMS alert has just arrived.

The anomalous condition should be logged for forensics, and explored for predictive ability, but if it's not triggered until the shit's already hit the fan, ditch the alerting, keep the logging.


Consequently, I'm often trying out newsletters and then ditching them after a week or two.

I remember you doing exactly that with Hacker Newsletter. :)

That is great advice though - I had several Google and Twitter alerts set up, but then I realized I was never doing anything with them, so I axed 'em. Another thing to watch out for is recurring to-do list items that you never actually do.


Conversely, it is the basis for pragmatic advice for companies or non-profits that manage to get someone to subscribe to a newsletter: Don't send a newsletter for the sake of sending one!

Non-profits that send monthly updates that merely highlight a member are particularly infuriating to me. Tell me of a new project when it launches or a big milestone reached. Letters sent for the sake of "just checking in" make me ignore subsequent emails. I get newsletter blindness almost as fast as ad blindness.


To wit:

    Date: Mon, 7 May 2012 18:01:40 +0000 (UTC)
    From: Codecademy Team <contact@codecademy.com>
    Subject: Get started with programming!
It appears these will be arriving weekly. Thanks for rewarding my early interest, Codecademy!


Interestingly, I remember hearing much the same advice given, except in regard to Starcraft strategy. In Starcraft (as in every strategy game) scouting has always been incredibly important, particularly in the early game - you want to know what your opponent is up to, so you can react to it. But Day9 (a popular Starcraft commentator and player) once pointed out that there's no point scouting so early that there's nothing you could detect that would cause you to change your plan.

Put mathematically, if P(X|Y) = P(X|~Y), then knowing Y really doesn't help.
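A quick sanity check of that identity with a toy simulation (not from the post; the probabilities are arbitrary): when X and Y are generated independently, the two conditional estimates agree up to sampling noise, i.e. observing Y tells you nothing about X.

```python
import random

random.seed(0)

# Y = "scouted early", X = "opponent rushes". Sampled independently,
# so the scout carries no information about the rush.
trials = [(random.random() < 0.3, random.random() < 0.5) for _ in range(100_000)]

def cond_prob(pairs, given):
    """Estimate P(X | Y == given) from (x, y) samples."""
    hits = [x for x, y in pairs if y == given]
    return sum(hits) / len(hits)

p_x_given_y = cond_prob(trials, True)
p_x_given_not_y = cond_prob(trials, False)
# Both estimates hover around 0.3: P(X|Y) ≈ P(X|~Y), so the early
# scout (or the alert) changes nothing about what you should do.
```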


One of those things that seems obvious, but that teams often fail to do.

At my last job, "errors" would get logged and the team would get emailed. We'd get 5+ error emails a day, but they were always expected and always ignored. Of course, when a real error came in, you'd just write it off as one of the expected errors. 30 minutes later users were screaming at you....


Email is meant for asynchronous communication (or at least it is supposed to be).

Any immediate alerting that relies on email communication is broken. You should not have to constantly watch your inbox, parsing messages to figure out which alerts are meaningful. Let the machines do that work for you by setting up and constantly refining your alert rules.


Yawn. Not a very good rehash of: http://teddziuba.com/2011/03/monitoring-theory.html


Well, yeah. For me it's:

- alert = something that I need to know right f. now

- passive email (doesn't generate a sound alert, etc.) = something that I need to know within the next few hours

- everything else: goes right into a file that I can look up if needed, but rarely do

- finally, there's always some program generating heavy logging activity with zero useful data in it, so if it's really overboard I disable it, just for the sake of grepping speed if I need to look at the archived logs.

didn't even need a blog post! ;)


This post, short and to the point, is excellent as a thought-provoker. I am currently struggling to find ways to keep the noise level down, due to the number of hats I wear and the large number of systems, and people, competing for my attention and time.

I am simultaneously running my own startup, doing freelance development work for clients, and running the infrastructure consulting arm of another successful small company. And that's not to mention my own personal sites and web projects. Each of these positions results in my getting a HUGE deluge of email, automated alerts, and spam, every single day.

I've tried a number of different methods for reducing the noise, and none of them have worked very well for me yet. I tried setting up folder routing rules in Outlook - it caused me to miss a couple of key alerts for client systems, which left very unhappy clients asking me, "Aren't you guys monitoring our systems...?" when they had to call to let me know their critical systems were down.

I've tried scheduling set times during the day for email (and ignoring it outside of those times) and, while this has been my most successful method to date, it still makes me dread the Email Hour in the morning and the other one in the afternoon, with the result that I subconsciously find excuses to skip "Email Hour"...

If any of my fellow HN'ers have suggestions in this regard, I'd love the feedback!


I would argue that "false-positive txt messages about server outages in the middle of the night" are actionable from the point-of-view of the recipient, as they are not immediately known to be false-positives in most cases.


Informational alerts are useful, even though you rarely act on them. I get a mail every few minutes from a script on my web server (it gets archived automatically). If the server explodes, I can see the status of that application just before it went down; that is when the informational alert becomes an actionable one.

My conclusion: use many alerts; you never know when they will become useful. But use the filter functions of your email client (auto-archive in Gmail, for instance) even more.


That's more logging than alerting, however.

You should have your logs sent to an accessible location, be it a specific email address or some other dedicated location.

Having the errors sent to your standard email address just clogs up your own inbox, and causes you to ignore serious errors of the kind "The server is about to go down" until you get a phone call telling you the server is down.


There is an analogous saying in programming: "Do not check for error conditions you don't know how to handle." Tongue-in-cheek but contains a grain of truth.


My friend Mike Rowe (great name for a computer company) says it this way: "You don't need an altimeter on your car, so you know how far it is to the ground when you drive off a cliff."


> Mike Rowe (great name for a computer company)

https://en.wikipedia.org/wiki/Microsoft_vs._MikeRoweSoft

This Mike Rowe? It seems that this name is no longer so great for a computer company.


Yeah man, if I can't find the verb, it's outta here.


After reading this, I think I may write some filters for my incoming email to keep just the actionable stuff. Now, determining exactly how to do that is going to take some time.


It'd be interesting to see a service that could act as a floodgate for notifications, with scheduled release times when you're ready for a deluge of distractions.


Agreed - I feel that as we get more filters, customisable searches, auto-folder moving and so on, we're actually creating a big headache of curation, categorisation and taxonomy-creation for the end recipient.

What would be great IMO is for some of this to happen a little more automatically, with the option for me to give the system feedback on how well it did with its filtering attempts. This would allow me to, over time, state my intentions (e.g. I really only want to be bothered with REAL problems) rather than implement the mechanisms to do so myself. I often feel I work for the machine, not the other way round.


Yeah I quite like the idea of using machine-learned automation to do categorization and filtering of information for us. It seems that this is an ideal role for the computer, an extension to the mind doing the hard/tedious work of curation and classification so we can just sit back and observe.


If I was sufficiently focused on productivity to follow this advice, I wouldn't spend so much time reading HackerNews.



