Do you mean in school or in university?
Because if somebody is taking linear algebra at university and still hasn't understood that this is an extremely important topic, then maybe they should go back to school, or take a gap year or something to improve their general education.
From a university lecture in linear algebra, I expect that it carries me through as many important topics as possible while giving me the tools to form a good formal and intuitive understanding.
The motivation part should be solved by that time.
Sure, and unless your goal is to do pure mathematics research, part of that is providing some motivation in terms of understanding applications. There's nothing wrong with a class on LA (or any other topic, as far as that goes) spending some time motivating study of the topic by showing how it can be useful... even explaining how it might be used to make millions or billions of dollars.
Right now, nearly every math book bought by a college or high school student draws its motivations from physics. So a person could go through high school, get their undergraduate degree in CS, and then never figure out how math could be useful in their job at all. When students ask "when am I going to use this", you have already failed at teaching them how to apply it.
Once they have reached this point, either they will have a really bad impression of math and just give up, OR they will believe whatever you say afterwards and get rewarded ten years down the line.
I really liked it when we went over PageRank in one of my university lectures. It made me think "Wow, I can actually have an impact using the things I've learned here".
Because not every professor romanticizes the subject or discusses its importance; some teach nothing beyond the surface details of the problems.
I was that kid.
I had to take differential equations, linear algebra, and discrete systems for my CS undergrad. I loved math, and I did my best but I eventually got bogged down and lost interest.
Since then I’ve come to appreciate more why you would want that advanced math as a programmer, but my experience at the time was that I could brute force almost anything with enough for-loops.
It just didn’t occur to me that there were interesting problems that were too complex to brute force, or that brute force would lead to such tangled code in some cases that it wouldn’t be debuggable.
Frankly, I doubt more liberal arts would’ve convinced me. I needed to get knee deep in big bad code and domain specific problems before I’d realize what I missed.
Also, is search considered a solved problem?
In any case, PageRank is a method for estimating the quality of a page based on the number of inbound links, not a solution to all of search.
But it's a property of the web at the time, not something universal to the search problem, e.g. it's not a statistic that exists if you want to search books.
I think the work being done on question answering (given a question and a document that answers the question, provide a concise answer) is a place where a lot of interesting work is being done, both in academia and at Google with the snippets of web pages it provides.
In order to rank papers, I think you would instead have to rank people, so that the people who wrote Paper X can, in Paper Z, reference the authors of Paper Y.
But of course it would need a CiteRank of equivalent quality to PageRank to be at all useful.
A paper X has been updated to reference a response or later work Y, which itself referenced X from the start in such a way as to make the 'version' of X in the reference unknown. Citation-trawling software might bite hard on a loop like that :P
Anyway, I also wonder why having cycles makes PageRank useful and lacking them makes it less so -- you can still count inbound links and such with a DAG, and huge amounts of the content of the web would exist in DAG-equivalent subtrees, wouldn't they? I could have this pretty wrong; I haven't looked at the paper in years and should go do so!
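For what it's worth, the textbook power-iteration version of PageRank runs fine whether or not the graph has cycles; the damping factor keeps the scores well-defined either way. Here's a minimal sketch (the graphs, names, and parameters below are invented for illustration, and this is the classic published algorithm, not Google's production code):

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page gets a baseline of (1 - d) / n from random jumps.
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank uniformly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A DAG still produces a sensible ranking -- cycles aren't required:
dag = {"a": ["b", "c"], "b": ["c"], "c": []}
print(pagerank(dag))

# A cycle just feeds rank back around; damping keeps it from blowing up:
cyclic = {"x": ["y"], "y": ["z"], "z": ["x"]}
print(pagerank(cyclic))
```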
The next (very unsolved) problem is being able to "understand" natural language queries and "understand" source materials such that a user can ask for something and get it.
"Understand" is in quotes because it means something rather specific.
Ha, not only is search not a solved problem, I would posit that search is getting WORSE.
Computer knowledge is a particularly good example for how search is degrading with time.
Try to figure out how to do X on the Beaglebone Black (I presume the Raspberry Pi has a similar problem, but it's not something I'm that familiar with).
The problem is that the Linux implementation for the Beaglebone went from a weird distribution (Angstrom) to mainline Debian with kernel 3.8 -> 4.4 -> 4.14 in a VERY short time, so the number of links to the new stuff stayed flat.
Consequently, the old Angstrom stuff almost always fills the top search positions even though it's completely useless.
This is occurring in other things, as well. Stack Overflow, for example, has no way to mark an answer as "This was correct 5 years ago but is now wrong."
Effectively, the web is becoming sclerotic and search engines are following it.
I REALLY miss old AltaVista's feature where it would give you a graphical representation of the clusters in your search so you could drill down into a less popular grouping. The fact that nobody has recreated this makes me wonder ...
Not counting comments? What more could you ask for?
What more could I ask for?
Which comment is the correct one? There are always multiple "No, that isn't correct, this is the one true way" comments. One posted 5 years after the flurry is unlikely to get many votes.
How about bad information not showing up in my search at all?
How about ageing out votes so it makes sense to come back to a topic and revote?
And this doesn't even account for the information that is simply wrong but nobody cares enough (or has enough karma) to fix.
Curation isn't always bad.
Interesting point - perhaps a hybrid scheme to decide the ordering of the answers, that balances upvotes and submission date.
It would need to be carefully tuned though - for some questions, answers will age badly ("What's the best way to do parallel stream processing in Java?"), but for others, they essentially won't age at all ("Why is there a small numerical error in this floating-point calculation?").
Perhaps it could be tuned by tag, as a means to estimate how the answers will age.
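As a rough sketch of what that hybrid could look like (the decay function, the per-tag constants, and the tag names are made-up placeholders, not anything StackOverflow actually uses):

```python
from datetime import datetime, timezone

# Toy hybrid score: upvotes discounted by answer age. The gravity values
# are invented knobs; fast-moving tags punish old answers harder.
GRAVITY_BY_TAG = {"java": 1.8, "floating-point": 0.1}  # hypothetical

def answer_score(upvotes, posted_at, tag, now=None):
    now = now or datetime.now(timezone.utc)
    age_years = (now - posted_at).days / 365.25
    gravity = GRAVITY_BY_TAG.get(tag, 1.0)
    return upvotes / (1.0 + age_years) ** gravity

# An old Java answer loses most of its weight; a floating-point answer
# of the same age barely decays at all.
old = datetime(2015, 1, 1, tzinfo=timezone.utc)
print(answer_score(100, old, "java"), answer_score(100, old, "floating-point"))
```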
> How about bad information not showing up in my search at all?
You don't want the system to be overly sensitive to undeserved downvotes.
> Curation isn't always bad.
Of course, but traditional curation isn't on the cards simply because of scale - StackOverflow isn't like an academic journal - and we're whining about a system that works incredibly well.
Look at YouTube comments, or Yahoo Answers, and you see what a shitshow it can be when the Internet tries to have a conversation. It's a small miracle that intellectually worthwhile forums like this one can ever work. StackOverflow does a lot right.
A ton has happened! Since PageRank, there have been a ton of advances around NLP that have changed the way queries are processed prior to information retrieval. For example, Google's RankBrain seems to do a lot of the heavy lifting around word similarity.
I certainly wouldn't, since I still encounter things that I know are on the internet but Google can't find. It's possible that the next advance won't be actually indexing the web, but rather figuring out what the user wants as opposed to what they literally requested.
Amusing anecdote regarding this issue.
- I teach an introductory online chemistry class.
- If the students are determined enough, they can/do cheat on their quizzes.
- In one of my quizzes, I give the students a formula for a pretend material and ask them to compute its molar mass.
- If you perform the calculation, the molar mass works out to something like 108 grams / mole.
- If you try to Google the answer, Google is smart enough to know that my compound is unstable.
- Instead, Google provides the molar mass for a _related_ material (86 grams / mole)
- Each semester, I find a handful of students who dutifully tell me the answer is 86 g / mole.
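For anyone curious, the honest calculation is just a weighted sum over the formula; here's a sketch (the compound below is a stand-in, since I'm obviously not going to post the actual quiz formula):

```python
import re

# Rounded standard atomic masses in g/mol.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999, "N": 14.007}

def molar_mass(formula):
    """Sum atomic masses for a simple formula like 'C3H8O3' (no parentheses)."""
    return sum(ATOMIC_MASS[el] * (int(n) if n else 1)
               for el, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula))

print(molar_mass("C3H8O3"))  # glycerol: ~92.09 g/mol
```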
Reminds me of my metalwork craft classes in school. We were making our own wrenches from scratch, and that requires a bit of geometry and drafting to make the blueprint (and the template). A couple of guys decided to cheat by pressing an existing wrench against the paper and tracing its shape with a pencil. Sounds like a fine idea in theory, but the result is obviously fake, and also distorted enough to be unusable (the pencil-to-surface angle and the pencil's thickness won't give you exact measurements, and the inability to maintain a stable angle distorts the shape). Didn't end well for them; they nearly got kicked out.
Google eventually learned to give me documentation in response to stuff like 'bullet collision', but for a while it was big on youtube links and gun ranges.
Part of it is learning which phrases to use when searching, but I’m sure a big part of it is also Google figuring out what you want.
To get an intuition of why quantum mechanics looks the way it does, Scott Aaronson is extremely helpful: https://www.scottaaronson.com/democritus/lec9.html
The link you posted looks interesting and fun to read, I'm looking forward to going through it after work. Thank you!
P.S. Even this statement is formally quite weak, but hopefully I have clarified myself enough to transfer my intended meaning. Apologies.
Wait a sec... everything already is a category.
Although there haven't been any major announcements, from daily anecdotal evidence I can confirm it's still a major factor in getting you onto the front pages.
I'd say the major changes since PageRank was deprecated in practice are:
1. Much higher dependence on CTR and bounce rate once you start showing up in the SERPs
2. Much higher influence of a notion of "trust" on links (not just the quantity but the quality counts now; too much quantity without quality can actually hurt)
3. PageRank is much more disconnected from the domain level. A few years ago, you could rank with pretty much anything if your DomainPop was high enough. By now, Google has gotten much smarter about folders that don't have anything to do with what your main site ranks for, and it's harder to get them ranking. On the bright side, it also got smarter about the negative SEO influence of subdomains or domain changeovers, which won't cost you as dearly.
So TL;DR: PageRank still exists, but it's been reduced to an input vector.
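To make the "input vector" point concrete, think of the final ranking as some function over a bunch of signals, of which PageRank is just one. This is a toy illustration only; the signal names, weights, and linear form are pure invention, since Google's real model is secret:

```python
# Purely illustrative weighted combination; not Google's actual formula.
def toy_rank_score(features):
    weights = {
        "pagerank": 0.15,
        "link_trust": 0.30,    # quality of inbound links, not just count
        "ctr": 0.25,           # click-through rate in the SERPs
        "bounce_rate": -0.20,  # high bounce hurts
        "topical_match": 0.10,
    }
    return sum(w * features.get(name, 0.0) for name, w in weights.items())

print(toy_rank_score({"pagerank": 0.8, "link_trust": 0.6, "ctr": 0.4,
                      "bounce_rate": 0.7, "topical_match": 0.9}))
```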
It outranks national newspapers and all sorts of huge sites, as it should really; the info in the post is pretty comprehensive and sourced.
Somehow Google worked that out for itself based on way more than the old PageRank backlink-weighted algorithm.
Yes indeed. In particular how multiplication distributes over addition. It is powerful technology.
But in all seriousness, Google is notoriously tight-lipped about how PageRank works, so I doubt we're going to get any more information than this. I'd love to be proven wrong, though!