Shifted 1 week: gpa calculator
Shifted 2 weeks: final grades
Shifted 3 weeks: academic suspension
Shifted 4 weeks: academic dismissal
I do note with distress, however, that:
searches for "Why is my poop green?" peaked in March 2010 before subsiding, and that it's correlated with "hiv symptoms in women" and "how to get a guy to ask you out".
Meanwhile, "why is my poop black?" is correlated with "How to say I love you in French"
That does it, I'm definitely getting a new monitor.
While we're on the subject, "how to propose" oddly enough seems to have a spike at the end of every year and a huge trough at the beginning, and it's correlated with "kenneth cole watches".
(C'mon guys, lift your game, if you're giving out an engagement ring you should be getting a Patek Philippe in return, not a Kenneth Cole.)
Oh, and one more thing: "divorce lawyers" has a spike in the middle of every year (including a particularly large spike last year) and troughs at either end. Maybe cold weather makes people want to nest and warm weather makes 'em want to leave?
That being the case, can anyone come up with an explanation for this? http://correlate.googlelabs.com/search?e=accident&t=week...
The other (non-mutually exclusive) explanation here is that different people are searching for the same thing (e.g. a news item they saw on TV about a recent car accident) but using different terms to do it (e.g. accident, car accident, fatal accident).
- small business development
- us copyright office
- education grants
- legal advice
A bug report: I get a 500 Internal Server Error when entering non-ASCII characters.
Maybe they could use a different (and probably more computationally intensive) correlation to fix this.
A great leading indicator of 'selling a home' seems to be 'european airfare' and 'florida apartments.' So here's what you do. Take out google adwords for these searches, offer 'great deals' in return for your zipcode and email address.
Then you can use these addresses to send inquiries about home sales and get in on sales before they hit the market!
There, go make money.
P.S. If this actually works, be nice enough to let me know :)
If that's too much to ask, it could at least provide a way of skipping the step of manually entering the returned search terms into Trends.
In our Approximate Nearest Neighbor (ANN) system, we achieve a good balance of precision and speed by using a two-pass hash-based system. In the first pass, we compute an approximate distance from the target series to a hash of each series in our database. In the second pass, we compute the exact distance function on the top results returned from the first pass.
Each query is described as a series in a high-dimensional space. For instance, for us-weekly, we use normalized weekly counts from January 2003 to present to represent each query in a 400+ dimensional space. For us-states, each query is represented as a 51-dimensional vector (50 states and the District of Columbia). Since the number of queries in the database is in the tens of millions, computing the exact correlation between the target series and each database series is costly. To make search feasible at a large scale, we employ an ANN system that allows fast and efficient search in high-dimensional spaces.
Traditional tree-based nearest-neighbor search methods are not appropriate for Google Correlate: the data are high-dimensional and therefore sparse, and with such data most of these methods degrade to brute-force linear search. For Google Correlate, we used a novel asymmetric hashing technique that applies the concept of projected quantization to reduce the search complexity. The core idea behind projected quantization is to exploit the clustered nature of the data, typically observed in real-world applications. At training time, the database query series are projected into a set of lower-dimensional spaces.
Each set of projections is further quantized using a clustering method such as K-means. K-means is appropriate when the distance between two series is given by Euclidean distance. Since Pearson correlation can be easily converted into Euclidean distance by normalizing each series to be a standard Gaussian (mean of zero, variance of one) followed by a simple scaling (for details, see appendix), K-means clustering gives good quantization performance with the Google Correlate data. Next, each series in the database is represented by the center of the corresponding cluster.
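The appendix with the normalization details isn't quoted above, but the identity it relies on is standard: if each series is shifted to zero mean, scaled to unit variance, and then divided by sqrt(n), the squared Euclidean distance between two series equals 2(1 - r), where r is their Pearson correlation. A minimal pure-Python sketch (function names are mine, not from the paper):

```python
import math

def standardize(series):
    """Zero mean, unit variance, then scale by 1/sqrt(n), so that
    squared Euclidean distance equals 2 * (1 - Pearson r)."""
    n = len(series)
    mean = sum(series) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in series) / n)
    return [(x - mean) / (sd * math.sqrt(n)) for x in series]

def sq_dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0]
y = [2.0, 7.0, 1.0, 8.0, 2.0, 8.0]
# Squared distance between standardized series matches 2 * (1 - r).
assert abs(sq_dist(standardize(x), standardize(y)) - 2 * (1 - pearson(x, y))) < 1e-9
```

This is why plain K-means (a Euclidean-distance method) works for a correlation search: ranking by Euclidean distance on normalized series is the same as ranking by correlation.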
This gives a very compact representation of the query series. For instance, if 256 clusters are generated, each query series can be represented by a unique ID from 0 to 255, which requires only 8 bits per vector. This process is repeated for each set of projections; in the above example, if there are m sets of projections, it yields an 8m-bit representation for each vector.
During the online search, given the target series, the most correlated database series are retrieved by asymmetric matching. The key concept in asymmetric matching is that the target query is not quantized but kept as the original series. It is compared against the quantized version of each database series. For instance, in our example, each database series is represented as an 8m bit code. While matching,
this code is expanded by replacing each 8-bit ID with the corresponding K-means center obtained at training time, and the Euclidean distance is computed between the target series and the expanded database series. The sum of the Euclidean distances between the target series and the database series across the m subspaces gives the approximate distance between the two, which is used to rank all the database series. Since the number of centers is usually small, matching the target series against all the database series can be done very quickly.
To further improve the precision, we take the top one thousand series from the database returned by our approximate search system (the first pass) and reorder those by doing exact correlation computation (the second pass). By combining asymmetric hashes and reordering, the system is able to achieve more than 99% precision for the top result at about 100 requests per second on O(100) machines, which is orders of magnitude faster than exact search.
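The whole pipeline described above can be sketched end to end in pure Python. This is a toy reconstruction under my own assumptions, not Google's code: "projections" are simplified to contiguous subvector slices, the codebook sizes are tiny, and the shortlist is 10 instead of 1000, but the structure (per-subspace K-means codebooks, asymmetric matching against an unquantized target, exact rerank of the shortlist) follows the description:

```python
import random

def sq(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def split(vec, m):
    """Cut a vector into m equal-length subvectors (stand-in for the
    paper's 'sets of projections')."""
    step = len(vec) // m
    return [vec[i * step:(i + 1) * step] for i in range(m)]

def kmeans(points, k, iters=15, seed=0):
    """Tiny Lloyd's k-means; returns k centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in points:
            buckets[min(range(k), key=lambda j: sq(p, centers[j]))].append(p)
        for j, b in enumerate(buckets):
            if b:
                centers[j] = [sum(col) / len(b) for col in zip(*b)]
    return centers

def train(database, m, k):
    """One codebook (k centers) per subspace."""
    return [kmeans([split(v, m)[s] for v in database], k) for s in range(m)]

def encode(vec, books):
    """Replace each subvector by the ID of its nearest center."""
    m = len(books)
    return [min(range(len(books[s])), key=lambda j: sq(sub, books[s][j]))
            for s, sub in enumerate(split(vec, m))]

def asymmetric_distance(target, code, books):
    """Target stays unquantized; each stored ID expands to its center."""
    m = len(books)
    return sum(sq(sub, books[s][code[s]])
               for s, sub in enumerate(split(target, m)))

def search(target, database, codes, books, shortlist=10):
    # Pass 1: rank everything by cheap approximate distance.
    approx = sorted(range(len(codes)),
                    key=lambda i: asymmetric_distance(target, codes[i], books))
    # Pass 2: exact distance on the shortlist only.
    return sorted(approx[:shortlist], key=lambda i: sq(target, database[i]))

# Toy demo: 200 random 16-dim series, m=4 subspaces, k=8 centers each,
# i.e. 4 x 3 bits per series instead of the full 16 floats.
rng = random.Random(42)
db = [[rng.gauss(0, 1) for _ in range(16)] for _ in range(200)]
books = train(db, m=4, k=8)
codes = [encode(v, books) for v in db]
result = search(db[17], db, codes, books)
```

Note the asymmetry doing the work: because the target is never quantized, the approximate distance from a series to its own code is just its quantization error, which is why the first pass reliably surfaces the true neighbors for the exact second pass to reorder.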
So many git commands...
"correlation is not causation"
I searched (in Portuguese): suporte (support), cadeira (chair), filho (son), barata (cockroach), coelho (rabbit), figueira (fig tree), orkut.
I don't understand what the correlation is here. Is this just matching queries by frequency of search?
So you could have completely random and unconnected search phrases/queries "correlating" because the quantity and time/date are matching?
Maybe people avoid searching for anything war-related around the holidays.
Also, there seems to have been a huge drop-off in this search over the last few months.
BTW, how does "google" correlate (0.98) with "kratom"???
This is strange:
US Web Search activity for losing weight and rental homes (r=0.9418)
Did hemorrhoids cause the GFC?
Seriously though - DFT -> key -> build giant R-tree. You can probably munge the key to get the week offset. Seems like a straightforward mapreduce problem :)
Is it as simple as more "normal" people use the internet?
county detention center
pain in back
el paso tx