Google Alerts pretty much only alerts me of news stories. Unless it would show up in Google News, new links never make it to my e-mail.
For example, a customer posted a nice video review of Improvely on YouTube today, which I can find through Google by limiting the date range to today with the "Search Tools" button. No e-mail from Google, despite the alert I have set up for the brand name.
On the other hand, I have one set up for "Surface Pro" and get daily e-mails when the big tech blogs mention it. Smaller blogs and forums, which are no doubt talking about Surface often too, never show up in those alerts. The e-mails even say "News" up top.
A few years ago, every mention would trigger an alert. Something did change. Third-party apps like Mention alert me more often.
It may have had something to do with the Google Reader shutdown. The reason I say this is that I just recently launched an RSSOwl instance that I had populated with a list of feeds some time ago... including quite a few Google Alerts RSS feeds. Strangely, none of the Google Alerts were working, and when I started investigating, I found that all of them had a path that included something like
which suggests that, when those feeds were originally created, the delivery mechanism somehow involved GReader.
I went back to Google Alerts and re-copied each of the feed URLs and see that in the new URLs, the path looks like:
Not definitive, but it hints at some kind of connection.
As jrochkind1 says, they certainly did drop the RSS feeds, but restored them recently. I'll update that detail, though I can't help suspecting that I'll need to go back and update it again in the near future...
Like others have said, Google Alerts only notifies you of news articles.
My company http://www.Alertification.com takes a more general approach and alerts you when something on any public website changes. For example, you'll get an email or text message when an Amazon price drop occurs, when a college class opens up, or even when concert tickets go on sale.
One thing that I hope people don't miss is that the problem "Google Alerts" solves is an information retrieval problem that is still unsolved (at least in the open literature ;-)
Conventional search ranking algorithms give you a score between 0 and 1, and the only meaning of the score is that a document with a higher number is more likely to be relevant than one with a lower number. The results are usually good at the top and gradually get worse as you go down. You stop either when you're satisfied or when it feels like a waste of time.
Suppose, however, you wanted to search scientific papers or news articles about a topic and see the results ordered in time. All of a sudden the junky documents that were hidden are visible; the results are embarrassing even for world-class search engines.
You might say, "let's filter out documents that have a score less than, say, 0.8".
It doesn't work, at least not very well. You run into two problems. Search engines that crush TREC search evaluations have worse than 70% precision as the score approaches 1. Also, you'll see plenty of cases that are obviously a direct hit where the score is 0.5.
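A toy illustration of the point (scores and titles are invented for the example): a raw 0.8 cutoff can simultaneously keep a junk page that scored high and drop a direct hit that scored 0.5, because the scores aren't calibrated probabilities of relevance.

```python
# Invented ranked results: (title, raw ranking score, actually relevant?)
results = [
    ("keyword-stuffed aggregator page", 0.93, False),  # high score, junk
    ("official product announcement",   0.85, True),
    ("detailed forum review",           0.52, True),   # direct hit, low score
    ("unrelated press release",         0.47, False),
]

CUTOFF = 0.8
kept = [(title, rel) for title, score, rel in results if score >= CUTOFF]

junk_kept     = [t for t, rel in kept if not rel]
relevant_lost = [t for t, s, rel in results if rel and s < CUTOFF]

print(junk_kept)       # the 0.93 junk page survives the cutoff
print(relevant_lost)   # the 0.52 direct hit gets filtered out
```

Raising or lowering the cutoff just trades one failure mode for the other; the underlying problem is that the score isn't a probability.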
The difficulty of the problem is one thing, but the academic approaches people have taken in IR are another part of the problem. The methods used for most TREC evaluations are designed NOT to give search engines credit for "knowing what they know", because to score well on "knowing what you know" you need to do a super job on easy queries and at recognizing that they are easy queries, and if you don't do that, how well you do on hard queries won't shine through.
Another one is the whole idea that you need to normalize scores from 0 to 1. You don't. A while back I developed a topic similarity scoring system that just counted the number of traits two things have in common, rather than using a dot product or K-L divergence or anything like that. It turned out that when the score was 40, you knew the results had to be good, because 40 pieces of evidence is a lot of evidence. If you had 4 pieces of evidence, it was clear things were iffy. I might have gotten "better" results in some sense with a more complex algorithm, but the scores from the simple count were meaningful -- from my point of view, the better algorithms are stupider because they are erasing their knowledge about their own confidence.
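The counting idea above can be sketched in a few lines. This is a minimal illustration with invented trait sets, not the original system: the score is literally the number of shared traits, so "40" means 40 pieces of evidence rather than an uninterpretable normalized value.

```python
def shared_trait_count(traits_a, traits_b):
    """Similarity = size of the intersection of two trait sets.

    Unlike a normalized dot product, the raw count keeps its meaning:
    a bigger number is literally more evidence of similarity.
    """
    return len(set(traits_a) & set(traits_b))

# Invented example traits for two documents on roughly the same topic.
doc1 = {"google", "alerts", "rss", "search", "email", "news"}
doc2 = {"google", "alerts", "search", "ranking", "email"}

print(shared_trait_count(doc1, doc2))  # → 4
```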
It's also a big problem in machine learning: often you use an SVM, Bayes, or a neural network, you get some score, and you say the item is in the class if the score is greater than some threshold and out of it otherwise. Because these algorithms almost always get the wrong idea about the prior distribution, you can often make a "failing" machine learning algo very useful if you do logistic regression on the output and use that to convert the output into a probability score.
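A minimal sketch of that calibration step, with invented scores and labels: fit a one-variable logistic regression p = sigmoid(a·score + b) mapping raw classifier margins to probabilities (this is the idea behind Platt scaling).

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_calibration(scores, labels, lr=0.1, steps=5000):
    """Fit p(y=1 | score) = sigmoid(a*score + b) by gradient descent
    on the log loss. Returns the fitted (a, b)."""
    a, b = 0.0, 0.0
    n = len(scores)
    for _ in range(steps):
        grad_a = grad_b = 0.0
        for s, y in zip(scores, labels):
            err = sigmoid(a * s + b) - y  # gradient of log loss w.r.t. logit
            grad_a += err * s
            grad_b += err
        a -= lr * grad_a / n
        b -= lr * grad_b / n
    return a, b

# Invented SVM-style margins and true labels (one noisy point at 0.8).
scores = [-2.0, -1.5, -0.5, 0.2, 0.8, 1.5, 2.5]
labels = [0, 0, 0, 1, 0, 1, 1]

a, b = fit_calibration(scores, labels)

def prob(s):
    return sigmoid(a * s + b)

print(round(prob(2.0), 2), round(prob(-2.0), 2))  # high vs. low margin
```

The raw margins are unchanged; all the calibration does is turn them into numbers you can actually threshold on in probability terms.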
Anyhow, if you want to learn about this and stop making 'stupid' intelligent systems, stop what you're doing and read the issue of the IBM Systems Journal about IBM Watson, because that's what Watson is all about -- it converts all of the signals it gets into comparable probability estimates, and then uses decision theory to take actions that maximize its utility function (i.e. "business value").
Isn't Google Alerts simply based on keyword/phrase matches? So if I want to get an alert for the keyword "recipe", it'll give me web pages that are about recipes, as well as articles that simply contain the word 'recipe' (e.g. "Customer development is the recipe to startup success"). I don't think it ever marketed itself as a topic search alert system.
I thought Google Alerts were to tell you when specific phrases were encountered, like "MyMostlyUniqueFullName" or "MyCompanyName". "Recipe" doesn't seem that useful -- or at least doesn't match my M.O. for Google Alerts.
I'm guessing that most Google Alerts are "vanity alerts," as your examples suggest. The putative decline of the service could thus fit with the idea that such functions are meant to be subsumed by Google+.
I once tried to search for Avatar the movie, but the SERP returned many results for Avatar the video game. I added -game, but realized that if a blog post is titled "Why I like Avatar (the movie, not the game)", it would be filtered out by the -game keyword.
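The pitfall is easy to demonstrate with a naive exclusion filter (titles invented for the example): the relevant post mentions the excluded word, so it gets dropped along with the genuinely off-topic ones.

```python
# Toy corpus of page titles.
titles = [
    "Avatar movie review: a visual milestone",
    "Avatar the video game walkthrough",
    "Why I like Avatar (the movie, not the game)",  # relevant, but says "game"
]

def matches(title, term="avatar", excluded="game"):
    """Naive boolean query: require `term`, exclude any hit on `excluded`."""
    t = title.lower()
    return term in t and excluded not in t

kept = [t for t in titles if matches(t)]
print(kept)  # only the first title survives; the third is wrongly filtered
```

A topical ranker would keep the third title; literal negative keywords can't.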
Nice points. This is pretty much why the old Google blog search failed, too.
The problem with your similarity scoring idea is that it fails badly in adversarial conditions (as I'm sure you are aware). It's easy to work around that failure, but then you end up using something like a dot product. I'm not at all convinced that normalizing the scoring throws anything away.
On another point:
Would you mind explaining this a bit more: "Because these algorithms almost always get the wrong idea about the prior distribution, you often make a 'failing' machine learning algo very useful if you do logistic regression on the output and use that to convert the output into a probability score."
Once you realize that it's more like "99.9% of the web is crap", you can look at it as a whitelisting problem rather than a blacklisting problem. If you search for "WOW Gold" you're going to get some article from Wired about how people are working 18 hours playing video games under horrific conditions in a Satanic mill somewhere in Shenzhen. And that's it.
Google can't whitelist because of business and political reasons. Smaller companies, particularly vertical focused, can.
As for the prior: I was working with Thorsten Joachims and an undergrad years ago on classifying papers from the physics arXiv. If you want to separate out astrophysics (which was the biggest category) from everything else, the number of negatives in your training set will greatly outnumber the positives, and in this situation the SVM gets the idea that it's safer to bet against astrophysics than it really is. If, on the other hand, you have a balanced number of positive and negative examples, it's also getting a wrong idea.
We tried using the SVM out of the box and got lousy results, and then Joachims told us to try
and we found we could tune the cutoff to get performance that was much more satisfying.
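Tuning the cutoff can be sketched simply (data invented for illustration): with heavily imbalanced classes, the raw scores sit shifted, so the default cutoff of 0 misclassifies most positives; sweeping candidate cutoffs and scoring each one finds a much better operating point.

```python
def f1(tp, fp, fn):
    """F1 score from confusion counts; 0.0 when there are no true positives."""
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def best_cutoff(scores, labels):
    """Sweep every observed score as a candidate cutoff; return (best F1, cutoff)."""
    best = (0.0, None)
    for cut in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < cut and y == 1)
        score = f1(tp, fp, fn)
        if score > best[0]:
            best = (score, cut)
    return best

# Invented imbalanced data: many negatives drag all the margins down,
# so every positive example sits below the default cutoff of 0.
scores = [-3.0, -2.5, -2.0, -1.8, -1.5, -1.2, -0.9, -0.4, -0.2, 0.5]
labels = [  0,    0,    0,    0,    0,    0,    1,    1,    1,   1]

print(best_cutoff(scores, labels))  # → (1.0, -0.9)
```

At the default cutoff of 0, only one of the four positives is caught; at -0.9 the toy data separates perfectly.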
Most machine learning books go on for hundreds of pages about kernel theory and whatever, and spend two or three pages on ROC analysis (and its friends, like logistic regression -> probability score).
A big problem with things like TREC and Kaggle is that they need to pick one definition of "accuracy" so that a whole crowd of intelligent but unwise people can fight for the last 0.2%, but it doesn't lead to applications, because in the real world the cost of some mistakes is worse than others, and you could use simple methods plus ROC/logistic analysis to make something that maximizes business value with 1/10 the effort.
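The asymmetric-cost point can be made concrete with a small sketch (all numbers invented): instead of maximizing a symmetric accuracy metric, pick the cutoff that minimizes total cost when a miss is assumed to cost ten times a false alarm.

```python
FP_COST = 1.0    # assumed cost of acting on a bad hit
FN_COST = 10.0   # assumed cost of missing a good one (10x worse)

def total_cost(scores, labels, cut):
    """Total cost of thresholding at `cut` under the asymmetric costs above."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < cut and y == 1)
    return fp * FP_COST + fn * FN_COST

# Invented calibrated scores and true labels.
scores = [0.1, 0.3, 0.4, 0.55, 0.6, 0.7, 0.9]
labels = [0,   0,   1,   0,    1,   1,   1]

# Candidate cutoffs: every observed score, plus one above the max.
cuts = sorted(set(scores)) + [1.1]
best = min(cuts, key=lambda c: total_cost(scores, labels, c))
print(best, total_cost(scores, labels, best))  # → 0.4 1.0
```

Because misses are so expensive here, the cost-minimizing cutoff (0.4) sits well below where a symmetric metric would put it, accepting one false alarm to avoid any misses.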
I have been using Google Alerts for a few months now. I set up a few alerts for physiotherapy jobs with keywords like 'job', 'physiotherapy', 'vacancy', et cetera (in Dutch). But I only get an alert about once a month, and even then it only finds some sort of news article about the profession, nothing about jobs. I cannot imagine that so few jobs are posted online for this profession. Twitter search or a plain Google search gives me more relevant information.
I'm not sure Google Alerts is useful at all this way. I can manually search every day, of course, but this is exactly the kind of thing I thought could be perfectly automated by Google Alerts.
I created a service that performs a predefined search and notifies you via email if new results appear. You can search job-aggregation websites.
The service URL is http://vertascan.com
Email me at sasha vertalab com and I will help set up a scanner for you.
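The core of such a service is simple to sketch: re-run a saved query, diff against the URLs already seen, and alert only on the new ones. This is a hypothetical illustration, not the actual implementation; `run_search` is a stand-in for a real search API or scraper.

```python
def run_search(query):
    """Placeholder for a real search call against an engine or job site.
    Returns the current set of result URLs for `query`."""
    return {
        "http://example.com/jobs/physio-1",
        "http://example.com/jobs/physio-2",
    }

def check_for_new(query, seen):
    """Return URLs not seen before, plus the updated seen-set to persist."""
    current = run_search(query)
    new = current - seen
    return new, seen | current

# On a previous run we already alerted on physio-1.
seen = {"http://example.com/jobs/physio-1"}
new, seen = check_for_new("physiotherapy vacancy", seen)
print(sorted(new))  # only the URL we have not alerted on before
```

In a real deployment the seen-set would be persisted per saved query, and the new URLs would go out by email or text.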
I saw your study when it was released. Really excellent work. I suspect Trends will survive in some form but it has really changed over the last 24 months. It's becoming far less useful as a research tool, which seems to be Google's overall trend from transparent->opaque.
Not just auto spam; a lot of sites haunted the trends board and wrote stories. The most hilarious example was when 4chan bulk-searched for "justin bieber syphillis" and created a burst of stories across the web.
Yeah, I'm a little worried that I'm going to have a demarcation problem when I do the 5-year followup - where Trends will have changed so much that I won't know whether I should consider it alive or dead. Even while I was researching it for its entry, I was coming across all sorts of complaints from different time periods about how the functionality was being constantly chopped down.
"Has Google Alerts been sending fewer results the past few years? Yes. Responding to rumors of its demise, I investigate the number of results in my personal Google Alerts notifications 2007-2013, and find no overall trend of decline until I look at a transition in mid-2011 where the results fall dramatically. I speculate about the cause and implications for Alerts’s future."