For example, a customer posted a nice video review of Improvely on YouTube today which I can find through Google limiting the date range to today with the "Search Tools" button. No e-mail from Google, despite the alert set up for the brand name.
On the other hand, I have one set up for "Surface Pro" and get daily e-mails when the big tech blogs mention it. Smaller blogs and forums, which are no doubt talking about Surface often too, never show up in those alerts. The e-mails even say "News" up top.
A few years ago, every mention would trigger an alert. Something did change. Third-party apps like Mention alert me more often.
But the blog post OP cites provides some convincing evidence that they were gone for at least a moment last July. http://googlesystem.blogspot.com/2013/07/google-alerts-drops...
I guess they changed their mind and brought them back? Weird.
That suggests that, when those feeds were originally created, the delivery mechanism somehow involved Google Reader.
I went back to Google Alerts and re-copied each of the feed URLs, and in the new URLs the path looks like:
Not definitive, but it hints at some kind of connection.
If anyone can recommend similar things, I won't mind :)
My company http://www.Alertification.com takes a more general approach and alerts you when something on any public website changes. For example, you'll get an email or text message when an Amazon price drop occurs, when a college class opens up, or even when concert tickets go on sale.
I made a one-off app for the Nexus 4 release to account for the low inventory, and got quite a bit of traffic for low/no effort.
Conventional search ranking algorithms give you a score between 0 and 1, and the only meaning of the score is that a document with a higher score is more likely to be relevant than one with a lower score. The results are usually good at the top and gradually get worse as you go down. You stop either when you're satisfied or when it feels like a waste of time.
Suppose, however, you wanted to search scientific papers or news articles about a topic and see the results ordered in time. All of a sudden the junky documents that were hidden are visible; the results are embarrassing even for world-class search engines.
You might say, "let's filter out documents that have a score less than, say, 0.8".
It doesn't work, at least not very well. You run into two problems: search engines that crush TREC evaluations have worse than 70% precision even when the score approaches 1, and you'll see plenty of cases that are obviously a direct hit yet score only 0.5.
The difficulty of the problem is one thing, but the academic approaches taken in IR are another part of the problem. The methods used for most TREC evaluations are designed NOT to give search engines credit for "knowing what they know": to score well on that, you need to do a superb job on easy queries and on recognizing that they are easy, and if you don't, how well you do on hard queries won't shine through.
Another one is the whole idea that you need to normalize scores from 0 to 1. You don't. A while back I developed a topic similarity scoring system that just counted the number of traits two things have in common, rather than using a dot product or K-L divergence or anything like that. It turned out that when the score was 40 you knew the results had to be good, because 40 pieces of evidence is a lot of evidence. If you had 4 pieces of evidence, it was clear things were iffy. I might have gotten "better" results in some sense with a more complex algorithm, but the scores from the simple count were meaningful -- from my point of view, the fancier algorithms are stupider because they erase their knowledge about their own confidence.
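The evidence-counting idea above is simple enough to sketch in a few lines. This is only an illustration of the principle, not the commenter's actual system; the function name and sample documents are hypothetical:

```python
def shared_trait_score(a, b):
    """Score two items by counting the traits they have in common.

    The raw count IS the score: 40 shared traits is strong evidence,
    4 shared traits is iffy. No normalization to [0, 1].
    """
    return len(set(a) & set(b))

# Hypothetical documents represented as sets of extracted traits
doc_a = {"physics", "quantum", "entanglement", "qubit"}
doc_b = {"physics", "quantum", "relativity", "qubit"}
print(shared_trait_score(doc_a, doc_b))  # prints 3
```

The point is that the unnormalized count carries calibration information a cosine similarity throws away: a score of 3 means three concrete pieces of evidence, regardless of document length.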
It's also a big problem in machine learning: often you use an SVM or naive Bayes or a neural network, get some score, and say that if the score is greater than some threshold the item is in the class, otherwise it isn't. Because these algorithms almost always get the wrong idea about the prior distribution, you can often make a "failing" machine learning algorithm very useful by doing logistic regression on the output and using that to convert the output into a probability score.
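That trick -- fitting a logistic regression on a classifier's raw scores -- is usually called Platt scaling. Here is a minimal sketch using scikit-learn on synthetic imbalanced data (the data, class means, and seed are all made up for illustration):

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic imbalanced data: 900 negatives, 100 positives
X = np.vstack([rng.normal(-1.0, 1.0, size=(900, 2)),
               rng.normal(1.0, 1.0, size=(100, 2))])
y = np.array([0] * 900 + [1] * 100)

# The SVM's decision_function gives uncalibrated margins, not probabilities
svm = LinearSVC().fit(X, y)
margins = svm.decision_function(X).reshape(-1, 1)

# Platt scaling: a logistic regression maps raw margins to probabilities
calibrator = LogisticRegression().fit(margins, y)
probs = calibrator.predict_proba(margins)[:, 1]
```

(In practice the calibrator should be fit on held-out data rather than the training set; this sketch skips that for brevity. scikit-learn's `CalibratedClassifierCV` packages the held-out version.)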
Anyhow, if you want to learn about this and stop making 'stupid' intelligent systems, stop what you're doing and read the issue of the IBM Systems Journal about IBM Watson, because that's what Watson is all about -- it converts all of the signals it gets into comparable probability estimates, then uses decision theory to take actions that maximize its utility function (i.e. "business value").
The IBM journal publication - is it this one? http://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=617771...
More info on how Watson is using UIMA: https://blogs.apache.org/foundation/entry/apache_innovation_...
How many web pages get created every day that contain the word "recipe"?
I'm certain you'd be buried in notifications if Google sent you an alert for every recipe, so it has to have some selection mechanism to send you particular recipes...
The problem with your similarity scoring idea is that it fails badly in adversarial conditions (as I'm sure you are aware). It's easy to work around that failure, but then you end up using something like dot-product. I'm not at all convinced that normalizing the scoring is throwing anything away at all.
On another point:
Would you mind explaining this a bit more: "Because these algorithms almost always get the wrong idea about the prior distribution, you often make a 'failing' machine learning algo very useful if you do logistic regression on the output and use that to convert the output into a probability score."
Adversarial IR is a problem that came with Google and will go away with Google.
Bing has the problem too, because they are trying to be Google.
If you accept Sturgeon's law,
and realize that it's more like "99.9% of the web is crap", you can look at it as a whitelisting problem rather than a blacklisting problem. If you search for "WOW Gold" you're going to get some article from Wired about how people are working 18 hours playing video games under horrific conditions in a Satanic mill somewhere in Shenzhen. And that's it.
Google can't whitelist because of business and political reasons. Smaller companies, particularly vertical focused, can.
As for the prior: I was working with Thorsten Joachims and an undergrad years ago on classifying papers from the physics arXiv. If you want to separate out astrophysics (which was the biggest category) from everything else, the negatives in your training set greatly outnumber the positives, and under this situation the SVM gets the idea that it's safer to bet against astrophysics than it really is. If, on the other hand, you balance the number of positive and negative examples, it also gets a wrong idea.
We tried using the SVM out of the box and got lousy results, and then Joachims told us to try
and we found we could tune the cutoff to get performance that was much more satisfying.
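One standard way to tune that cutoff is ROC analysis: sweep the decision threshold over the classifier's raw scores and pick the one with the best true-positive/false-positive trade-off, instead of trusting the SVM's default cutoff at 0. A sketch with scikit-learn on made-up scores (the labels, scores, and use of Youden's J as the selection criterion are illustrative assumptions, not the arXiv experiment itself):

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical decision scores for an imbalanced problem (7 negatives, 3 positives)
y_true = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1])
scores = np.array([-2.1, -1.7, -1.3, -0.9, -0.4, -0.2, 0.1, -0.1, 0.6, 1.4])

fpr, tpr, thresholds = roc_curve(y_true, scores, drop_intermediate=False)

# Pick the cutoff maximizing Youden's J (tpr - fpr) rather than the default 0
best = thresholds[np.argmax(tpr - fpr)]
print(best)  # prints -0.1: a cutoff below the SVM's naive 0
```

Here the imbalance has pushed scores down overall, so the best operating point sits below zero -- exactly the "safer to bet against the rare class than it really is" effect described above.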
Most machine learning books go on for hundreds of pages about kernel theory and whatnot, and spend two or three pages on ROC analysis (and its friends, like logistic regression -> probability scores).
A big problem with things like TREC and Kaggle is that they need to pick one definition of "accuracy" so that a whole crowd of intelligent but unwise people can fight over the last 0.2%, but it doesn't lead to applications, because in the real world some mistakes cost more than others, and you could use simple methods plus ROC/logistic analysis to make something that maximizes business value with 1/10 the effort.
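Once scores are calibrated probabilities, folding in unequal mistake costs is a one-liner of decision theory: flag an item when the expected cost of missing it exceeds the expected cost of a false alarm, which works out to a probability cutoff of cost_fp / (cost_fp + cost_fn). A tiny sketch (function name and costs are illustrative):

```python
def decision_threshold(cost_fp, cost_fn):
    """Probability cutoff that minimizes expected cost.

    Flag when p * cost_fn > (1 - p) * cost_fp,
    i.e. when p > cost_fp / (cost_fp + cost_fn).
    """
    return cost_fp / (cost_fp + cost_fn)

# If a missed detection costs 9x a false alarm, act on anything above p = 0.1
print(decision_threshold(cost_fp=1.0, cost_fn=9.0))  # prints 0.1
```

Equal costs recover the familiar 0.5 cutoff; asymmetric costs move it, which is exactly the knob a single fixed "accuracy" metric hides.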
Not sure if Google Alerts is useful at all this way. I can manually search every day, of course, but these are things I thought could be perfectly automated and handled by Google Alerts.
I had alerts like:
"This" -"Not that" -site:notnews.com
The filters stopped working for me. I removed and re-added the alerts. Now I'm pounded with results. They were amazingly effective before.
I can't say I'm surprised -- I don't see much of a business model in it -- but I was surprised that it happened to all of my alerts at once and I haven't seen a word about a change.
The new Google Trends UI is still useful for hot topics (Hey, Oracle just won the America's Cup!), but less useful for spammers.
Google certainly looks to be actively walling its data off.
See also the Google Keyword Tool, which has taken quite a step backwards recently, and the removal of organic keyword data ("(not provided)") from Google Analytics.
Bad times for marketers.
"Has Google Alerts been sending fewer results the past few years? Yes. Responding to rumors of its demise, I investigate the number of results in my personal Google Alerts notifications 2007-2013, and find no overall trend of decline until I look at a transition in mid-2011 where the results fall dramatically. I speculate about the cause and implications for Alerts’s future."