W3Schools is a cancer. They have old, outdated information. Their website, their content, and their business model are all a throwback to the last decade. They sit atop the search results, doing nothing, raking in cash while providing a poor service. Meanwhile, their very dominance sucks all the oxygen (page views, ad dollars) away, ensuring that better competitors can't grow.
What redeeming feature do they have? If you aren't easy to use, accurate, comprehensive, or responsive...how good a reference site are you?
I've yet to see anyone bash W3schools for no reason, but I see plenty of people defending them for no reason.
HTML Dog (http://htmldog.com/) is a good resource. I know this is a stretch, but there are some really good books out there for quick referencing. It's not as fast as opening a new tab, but some of them, like the "Definitive Guide" series, are great.
Those are not accidental. No one types "www.jigsaw". That's intentionally affiliating the URL with the word "jigsaw" that comes from the W3's validator. Look at the other subdomains. Those can't be accidents. It's not a wildcard thing at all.
That looks like wildcard DNS to me. The question is where the linkbacks are coming from. It is possible w3schools is creating those themselves, in which case that does seem like an unethical SEO trick. But if not, well, they can't be responsible for how people link to them.
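For anyone unfamiliar with the mechanism being debated here: a single wildcard record in a DNS zone makes every otherwise-undefined subdomain resolve to the same host, so jigsaw.w3schools.com, www1.w3schools.com, and any other name would all answer without anyone creating them individually. A minimal zone-file sketch (the IP and TTL are made up for illustration):

```
; hypothetical zone fragment for example purposes only
; any name not explicitly defined matches the wildcard:
*.w3schools.com.   3600  IN  A  203.0.113.10
```

A quick way to test for this is to request a random subdomain (e.g. `dig +short asdf1234.w3schools.com`); if a nonsense name resolves, the zone almost certainly has a wildcard record.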
Looks around guiltily - I still use their site too when I need something quick and I know they have it. Sure, some of the info is old. But some of the stuff doesn't change and is just as relevant now as it ever was. If they come up first in a google result and it's for a query I know they have the answer to, you're darn right I'll click it :)
>They have as much power in web dev search results as Wikipedia has in regular search results. That is just wrong.
Except they've been providing useful info for longer than Wikipedia has been around. Sounds kind of right to me. They may not be as current as they used to be, but for the 12 years that I've used them they have helped fill in my knowledge cracks and I've appreciated it.
I don't understand the hate. If you really want someone to hate, hate the W3C for making their info so cryptic or non-existent for years that places like W3Schools had a market.
This is honestly why I use DuckDuckGo on any of my dev computers. They don't always have the best results, but for stuff like java(script) not only do they have the !js/!java syntax for going straight to the docs, they also don't have over-SEO'd websites cluttering up their results.
I actually prefer to not go straight to the docs because I find the Mozilla developer site to be somewhat slow (and occasionally really slow), and the search page to not be the greatest. So unless I get the name of what I'm looking up exactly right, there's just too much latency. I forget who taught me this (maybe on here), but when I google I just add a "MDN" to all my js searches and the specific Mozilla page for that topic is (usually) right at the top.
I guess it's kind of like a bang, and way better than having to write "site:developer.mozilla.org"
The trick is to always use DDG, but just add !g to search google when you're not satisfied (or !gi to search google images). You start to use google less and less after you get used to the DDG layout: At first it seems (for some reason) like you're getting less useful results on DDG - but after a while you can see you're getting pretty much the same results, in a different format!
I don't have as many problems with W3Schools as the author of this and many comments below here. What you say is true, but their target audience is mainly newbies, not professionals who know what they are doing. They might provide some outdated information, some things may even be false. But do they deserve to be called a cancer? (Read it below in the other comments, and that wasn't the first time I read it.) After all, doesn't every website with lots of information make mistakes?
Don't get me wrong though, I do agree that their duplicate subdomains are a bad practice, and I'd fully agree if Google decided to punish them by blocking or lowering results for *.w3schools.com for a couple of months. I also agree that the Mozilla Developer Network may be a much better resource, both for newbies and professionals. But if you are that much against W3Schools, why don't you use the search function of MDN instead of Google's general search?
As you say, they contain outdated and outright false information.
Now, it's not merely being wrong once in a blue moon or one or two articles being a bit behind the curve, but the sheer egregiousness of its mistakes plus the lack of action despite being shown to be incorrect (see: w3fools.com) which in the eyes of many qualifies it to be called a "cancer"; certainly in the realm of web development resources, that label seems rather valid.
Having two or three domains with the same content should be OK. After all, many sites have a www and simply a top level domain with the same content. But having more than that should be penalized by the search engine gnomes.
I think Google classifies sites in different buckets. There's the "trusted" sites - the brand names that will only get penalized by their algorithms if they do something very very wrong, and another bucket, where you get less leeway. W3Schools is in the first.
Of course, big brands do get penalized, like JCPenney and Forbes, but this often happens only when they're called out by the media. For the most part, Google has said since Panda that they want to rely more on algorithms than on manual intervention.
2) if they got it because lots of (outdated) sites are linking to them, why do their alternate subdomains get a similar bonus?
3) this whole privileged "authority" bucket crap stinks. It used to be that a really good sub-page on somebody's Geocities site could be the go-to first result for a certain search topic. Why? Because loads of people who knew it was good would link to it, because it was just that good a comprehensive resource. Hearing more and more about this new ranking method makes me wonder whether it's even possible for a small guy to come out on top in the results like that. And it's not so much that I feel for this small guy, but rather that I know I'm missing out on a lot of honest, good web content that Google simply isn't showing me.
> this new ranking method makes me wonder whether it's even possible for a small guy to come up top in the results like that
If you search for "Remote Unix" on Google (at least for me), a post on my humble blog is the second result. This isn't exactly the narrowest search term, and I'm certainly no juggernaut of a site, so I take that to mean that small random-ass pages can still rank well for a query.
> because loads of people that knew it was good would link to it because it was just that good of a comprehensive resource
This is exactly why w3schools is at the top of search results, and exactly why http://w3fools.com was started -- because there are a ton of developers out there who think that w3schools is authoritative, correct, and kept up to date, and who link to it and use it. At one time it seemed to fill a niche, but no longer, and the cycle needs to be broken. In the meantime, unless you do some kind of sentiment analysis (and make arbitrary changes based on the leanings of Google, which you seem to be classifying as a bad thing), PageRank at least would probably conclude that the internet still loves w3schools.
> whether it's even possible for a small guy to come up top in the results like that
w3schools is actually about as small guy as you can get. They may have ubiquity, but it's run by like two people.
> I know I'm missing out on a lot of honest good web content that Google simply isn't showing me.
Yes, but there is too much good web content out there for you to see in your lifetime anyway. Meanwhile, defining "good" based on a nebulous query is kind of the crux of the whole problem, isn't it? I'm not convinced there is an answer.
oh yes, subdomains gone wild have to do with SEO, but not in the way outlined by the author.
duplicate (sub)domains resulting in duplicate sites with duplicate pages mostly have a negative impact on the performance of a webproperty in the SERPs.
a link to www1.example.com does not automatically count as a vote for www.example.com, and a link to www.example.com does not count as a vote for www1.example.com. that means they now have two websites, each with one vote, instead of one website with two votes.
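the vote-splitting argument above can be sketched in a few lines of python (a toy illustration, not how any search engine actually counts links): the same two inbound links, tallied per exact host versus after mapping every subdomain onto one canonical host.

```python
from collections import Counter
from urllib.parse import urlsplit

# two hypothetical inbound links to the "same" page on two subdomains
links = ["http://www1.example.com/page", "http://www.example.com/page"]

# counted per exact host: the votes split across two sites
per_host = Counter(urlsplit(u).hostname for u in links)
print(per_host)        # two hosts, one "vote" each

# after canonicalizing every subdomain to one host: the votes combine
canonicalized = Counter("www.example.com" for _ in links)
print(canonicalized)   # one host, two "votes"
```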
additionally, you now have two websites competing with each other, and each of these websites has duplicated pages competing with each other. both the sites and the pages usually perform worse if google is in doubt about which of these duplicated pages on these duplicate sites is the best one to point the user to. (that said, google is pretty good at stripping that doubt out of the equation for subdomain duplicate-page issues.)
if you have a webproperty with a "subdomains gone wild" issue, it is best practice to canonicalize them to one (sub)domain, either via the canonical tag or via an HTTP 301 redirect. it almost always results in better performance for the canonicalized webproperty (depending on how big the issue was), and it definitely helps the site on the (organic) linkbuilding front.
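a minimal sketch of the 301 approach, assuming nginx and a made-up example.com (the canonical-tag alternative is just `<link rel="canonical" href="https://www.example.com/page">` in each page's head):

```
# hypothetical nginx sketch: send every stray subdomain to one
# canonical host so inbound links consolidate on a single site.
server {
    listen 80;
    server_name *.example.com example.com;
    return 301 http://www.example.com$request_uri;
}
server {
    listen 80;
    server_name www.example.com;   # exact match wins over the wildcard
    # ... actual site config ...
}
```

nginx prefers the exact `server_name` match, so only the non-canonical hosts hit the redirect block.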
there are a thousand reasons why a webproperty can end up with a subdomains-gone-wild issue. once upon a time it was even a common black hat practice to spam google with duplicate web-properties of competitors' sites, which is possible whenever a site has a wildcard subdomain setting (i.e. wildcard.w3schools.com).
but in most cases "subdomains gone wild" does not have a positive impact on the SERPs. (blocking results is not one of those cases.)
but yeah, what are the real issues with w3schools? why does this sh/t rank so well?
well, first of all it should be said: "you are not statistically significant." just because you (in this case we, the HN readers) are not happy with w3schools does not mean the average searcher (searching for HTML web dev stuff) is not happy with it. they are happy (hey, they don't know better) and they use it like crazy. they are happy with what they find. as google measures the SERP "long click" very effectively, they know exactly how well the average searcher uses a page/site. if users clicked back immediately and did not stay long on the page, w3schools would not be as dominant as it currently is.
secondly: links - i just did a backlink check on some obscure w3schools URLs, and they all have links. now you might say: oh, they buy links... it does not look like it; they have links from old forums, new forums, blogs, .edu domains, ...
and: where is the competition? mdn does a great job, but it is a resource for developers, for people who know what they are doing. w3schools is a resource for people who do not know what they are doing or what they are looking for - and w3schools has a dedicated page for every single thing/tag that we don't even bother to mention anymore. it's outdated, it's old, some say ugly, but it's there, and every page has a unique description text and a unique example. that means they have a page for most of the people on the internet who are interested in HTML, people who think of HTML as a "programming language" - for them w3schools is the perfect product, no competition in sight.
what's the solution:
in the xoogler book "I'm Feeling Lucky" the author describes a case where a wrong product kept showing up again and again for popular product searches. they tuned the algorithm again and again, the results got better and better, but the product kept appearing. the engineers didn't know what to do. then one day it was gone. what happened? one engineer had simply bought the product. it wasn't listed in the store anymore. the issue was fixed.