Though it seems mostly cancelled or sporadic now, it had a lot of interesting people presenting on academic uses of the Semantic Web, RDF, etc.
A couple of times when I was there, Tim Berners-Lee himself attended too. He's an interesting guy to meet.
Overall, though, I think for business reasons (companies really are not incentivized to share) it has mostly caught on in academia. The shining exception is microformats, which gained adoption because companies like Google embraced them as a way of gathering (as opposed to sharing) data.
Personally, I found a lot of aspects useful, but others not all that well thought out when it comes to practical specs. The community has a tendency to build taxonomies that aim for completeness rather than long-term usability, so they go stale. Friend of a Friend (FOAF), for example, is nice but very narrowly specced in places: there is a property for an AOL Instant Messenger ID but none for Facebook.
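As a quick sketch of that unevenness, using rdflib (a real Python RDF library): the foaf: terms below are genuine FOAF vocabulary, while the person IRI and account name are made up.

from rdflib import Graph, Namespace, URIRef, Literal, BNode, RDF

FOAF = Namespace("http://xmlns.com/foaf/0.1/")
g = Graph()
me = URIRef("http://example.com/me")

# AIM got a dedicated property in the FOAF spec...
g.add((me, FOAF.aimChatID, Literal("cooldude2004")))

# ...while Facebook has to go through the generic OnlineAccount escape hatch.
account = BNode()
g.add((me, FOAF.account, account))
g.add((account, RDF.type, FOAF.OnlineAccount))
g.add((account, FOAF.accountServiceHomepage, URIRef("https://www.facebook.com/")))
g.add((account, FOAF.accountName, Literal("cooldude2004")))

print(g.serialize(format="turtle"))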
Microformats have some similar issues, though not as bad.
However, we got something kind of cool out of the RDF model that underlies it, once some sufficiently opinionated developers identified the bad parts of RDF and dumped them: JSON-LD, a way for APIs to describe themselves that remains compatible with RDF's data model. For what I mean by "sufficiently opinionated developers", I recommend reading Manu Sporny's "JSON-LD and Why I Hate the Semantic Web", a wonderful title for the article behind the main reason the Semantic Web is still relevant.
Google makes use of JSON-LD in real situations: for example, an airline that uses JSON-LD can send you an e-mail that Google Assistant can use to update you on the status of your flight, and that Gmail can use to give you a simple check-in button.
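Here's a rough sketch of what that markup looks like. The FlightReservation type and its properties are real schema.org terms, but the reservation itself is invented, and note that rdflib needs network access here to fetch the schema.org context.

import json
from rdflib import Graph

# Roughly the shape of the schema.org JSON-LD an airline embeds in an e-mail.
reservation = {
    "@context": "https://schema.org",
    "@type": "FlightReservation",
    "reservationNumber": "RXJ34P",
    "underName": {"@type": "Person", "name": "Eva Green"},
    "reservationFor": {
        "@type": "Flight",
        "flightNumber": "110",
        "departureAirport": {"@type": "Airport", "iataCode": "SFO"},
        "arrivalAirport": {"@type": "Airport", "iataCode": "JFK"},
    },
}

# rdflib (6+) parses JSON-LD straight into RDF triples: the "compatible with
# RDF's data model" part is what makes this more than just another JSON format.
g = Graph().parse(data=json.dumps(reservation), format="json-ld")
for s, p, o in g:
    print(s, p, o)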
Because of this, there's just not much utility in taking the time to generate semantic markup; it'll be sloppy and incomplete even when done by a PhD student specializing in the subject.
If you just have a vocabulary where everyone can freely define concepts and their relationships in a fuzzy way, the original goal will never be tractable. There needs to be some sort of unambiguous shared concept space between disparate sites (which, in my estimation, appears not to be achievable in any practical sense, due to the difficulty of finding "one true way" to build ontologies).
Also, I think there were ways to "map" different ontologies, though I never really explored that.
The test for any proposed Internet standard or system should be "what happens when 4chan hears about it?" I don't see semantic web ideas taking off outside of closed forums and walled gardens like academic research or the military. On the public Internet you'll rapidly end up with Donald Trump mapping to "small penis," etc.
But OWL is not based on OOP; it's based on description logic, which is a much more powerful abstraction than OOP, and it lets you easily represent things that are very hard to express in something like Java. OWL includes the concept of a complex class, in which you define the logical constraints of the class and membership is then inferred automatically by a reasoner. This means that you can build really complex multidimensional hierarchies pretty easily.
For example, you can solve the circle/ellipse problem this way: the class circle is a complex class that is the intersection of the class ellipse and the class of two-dimensional geometric shapes whose major and minor axes have the same length. Any object that satisfies those constraints is a circle!
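A minimal sketch of that inference with rdflib plus the owlrl package (an OWL 2 RL reasoner). OWL can't directly compare two axis lengths, so the equal-axes constraint is modelled here as a named class, and every IRI is made up:

from rdflib import Graph, Namespace, RDF
from owlrl import DeductiveClosure, OWLRL_Semantics

EX = Namespace("http://example.com/geo#")
g = Graph()
g.parse(data="""
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix ex:  <http://example.com/geo#> .

# Circle is *defined* as the intersection: Ellipse AND EqualAxesShape.
ex:Circle owl:equivalentClass [
    a owl:Class ;
    owl:intersectionOf ( ex:Ellipse ex:EqualAxesShape )
] .

# Only the two constraints are asserted about this individual...
ex:unitShape a ex:Ellipse, ex:EqualAxesShape .
""", format="turtle")

DeductiveClosure(OWLRL_Semantics).expand(g)

# ...and the reasoner infers that it is a Circle.
print((EX.unitShape, RDF.type, EX.Circle) in g)  # True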
As for the Greek problem: you declare that the class human and the complex class that results from the intersection of the classes two-legged animal and featherless animal are equivalent (the same owl:equivalentClass / owl:intersectionOf pattern as in the sketch above). It means that every human is a two-legged featherless animal and vice versa.
You can even declare equivalences between ontologies, which lets you build conceptual bridges.
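For instance, here's a minimal sketch of such a bridge, again with rdflib and owlrl; the FOAF-to-schema.org alignment is my own illustration, not an official mapping:

from rdflib import Graph, Namespace, RDF
from owlrl import DeductiveClosure, OWLRL_Semantics

EX = Namespace("http://example.com/")
SDO = Namespace("https://schema.org/")

g = Graph()
g.parse(data="""
@prefix owl:    <http://www.w3.org/2002/07/owl#> .
@prefix foaf:   <http://xmlns.com/foaf/0.1/> .
@prefix schema: <https://schema.org/> .
@prefix ex:     <http://example.com/> .

# The bridge: two independently developed vocabularies declared equivalent.
foaf:Person owl:equivalentClass    schema:Person .
foaf:name   owl:equivalentProperty schema:name .

ex:ada a foaf:Person ; foaf:name "Ada" .
""", format="turtle")

DeductiveClosure(OWLRL_Semantics).expand(g)

# Data written in FOAF terms is now also visible through schema.org terms.
print((EX.ada, RDF.type, SDO.Person) in g)  # True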
OWL has problems related to the maturity and performance of its implementations, and it remains to be seen whether it's possible to treat the web as a gigantic Prolog program, but its conceptual model is powerful and sound.
I think they do, but finding the right mathematical model is very difficult. If it were easy, everyone would be a mathematics PhD.
Learning to program is becoming efficient at recognizing the right spherical cow in any given situation, because such shortcuts are essential to getting shit done.
(To pick one question where something doesn't map onto a hierarchy simply)
Programming is actually great for discovering this. Especially OOP, with its introductory examples of animal taxonomy and shapes.
There are many open upper-level ontologies available (I counted 16 when I did a review a few years ago: http://www.acutesoftware.com.au/aikif/ontology.html), but the really complete ones are not publicly available (the full version of Cyc, Google's internal ontology, and the countless others held on corporate servers).
A visible example is when you search for an organisation and a classification is shown against it.
E.g., google IBM and it's called a "Computer manufacturing company"; these classifications are different from many of the standards for specific sets of data.
I googled IBM, and did not see it classified as such.
For IBM it says
Computer manufacturing company
IBM is an American multinational technology company headquartered in Armonk, New York, United States, with operations in over 170 countries. Wikipedia
Stock price: IBM (NYSE) USD155.39 +2.71 (+1.77%)
Founder: Charles Ranlett Flint
Founded: 16 June 1911, New York City, New York, United States
Headquarters: Armonk, North Castle, New York, United States
Subsidiaries: Trusteer, FileNet, IBM Global Services, Ustream, MORE
Executives: Ginni Rometty (CEO, President, Chairperson), MORE
Did you know: IBM is the world's eighth-largest information technology company by revenue.
Let's keep using FOAF as an example. The facts about who knows who in FOAF are just bare RDF triples. There's nothing about who's allowed to know who knows who. There isn't even room to specify who's allowed to know who knows who. If any significant number of people had described their friends and relationships with FOAF, all of it would quickly have been slurped into a marketing database.
There's also no room in traditional Semantic Web ontologies to keep track of the provenance of why you believe something, and to disbelieve something that comes from an unreliable source. Every triple is supposed to be a statement of fact that you can derive things from as if it is 100% true. You could use FOAF to say you're married to Tim Berners-Lee, and not even Sir Tim would have a way to say "no you're not".
I will do so again:
<http://example.com/smadge> <http://schema.org/name> "smadge" .
<http://example.com/smadge> <http://schema.org/spouse> <http://example.com/timbl> .
<http://example.com/smadge> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://schema.org/Person> .
Second, although again I am out of my area of expertise, I think you can make provenance statements about statements using reification in Semantic Web technologies. I don't know if this is a good source, but it seems to suggest it's possible.
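Here's a sketch of what reification looks like in practice. The rdf:Statement / rdf:subject / rdf:predicate / rdf:object vocabulary is standard RDF; the claimedBy/disputedBy properties are invented for illustration.

from rdflib import Graph, Namespace, BNode, RDF

SDO = Namespace("http://schema.org/")
EX = Namespace("http://example.com/")

g = Graph()
claim = BNode()

# Describe the *statement* instead of asserting it as a fact.
g.add((claim, RDF.type, RDF.Statement))
g.add((claim, RDF.subject, EX.smadge))
g.add((claim, RDF.predicate, SDO.spouse))
g.add((claim, RDF.object, EX.timbl))

# Provenance (and disagreement) attach to the reified statement.
g.add((claim, EX.claimedBy, EX.smadge))
g.add((claim, EX.disputedBy, EX.timbl))  # Sir Tim's "no you're not"

# Crucially, the spouse triple itself was never asserted as fact:
print((EX.smadge, SDO.spouse, EX.timbl) in g)  # False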
To continue this example: you asserted, in English, "I am married to Tim Berners-Lee". In English, anyone can respond "No you're not".
Then you said it again in RDF, in a way that hypothetically a computer system would use to draw conclusions. And there is no way to say "no you're not" in RDF.
https://web.archive.org/web/20160713021037/http://dig.csail....
https://developers.facebook.com/docs/graph-api/overview/
https://news.ycombinator.com/item?id=13729525#13740110
https://news.ycombinator.com/item?id=12206846#12207459
https://news.ycombinator.com/item?id=12345693#12346371
The semantic web was a great idea, but in the period from 2000 to 2010 people advertised it as a kind of AGI that would solve all hard problems with junk data.
It is still used in biology, for example in the Gene Ontology, but the main use case (people are lazy) is "if your research cannot find interesting stuff, just query the Gene Ontology".
I wonder if Facebook won't someday be forced to publish its social graph data in FOAF format the same way Microsoft was forced to publish its Office document specs as part of an anti-trust decision.
Speaking of Facebook, the OpenGraph tags are another example of widely-used semantic data on the web, maybe the most widely-used, since all kinds of sites pull in page summaries, images, and other data from those tags. So while Facebook doesn't make social network data available, it did popularize a format for sharing other types of data (about companies, articles, websites, etc.).
At our company we still use Semantic Web (or rather, RDF) for inference and annotation with medical ontologies (UMLS, Gene Ontology, Human Phenotype Ontology, etc). The ease of use of triples + SPARQL (basically a PROLOG-ish unification scheme) is really powerful (and quite performant when using Jena/Fuseki with Lucene as a text index). But it's a far cry from the "dream" of semantic web like federated queries and OpenAnnotations (now just W3C Annotations). Still, every time someone implements an EAV scheme without even considering an RDF triple store I cringe a bit.
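To make the triples + SPARQL point concrete, a toy sketch with rdflib; the disease/phenotype data and the ex: vocabulary are invented, and a real setup would load UMLS/HPO into a store like Jena/Fuseki instead.

from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.com/med#> .

ex:marfan         ex:hasPhenotype ex:tallStature, ex:lensDislocation .
ex:homocystinuria ex:hasPhenotype ex:tallStature, ex:lensDislocation .
ex:achondroplasia ex:hasPhenotype ex:shortStature .
""", format="turtle")

# Prolog-style unification: which other diseases share a phenotype with Marfan?
q = """
PREFIX ex: <http://example.com/med#>
SELECT DISTINCT ?other WHERE {
    ex:marfan ex:hasPhenotype ?p .
    ?other    ex:hasPhenotype ?p .
    FILTER (?other != ex:marfan)
}
"""
for row in g.query(q):
    print(row.other)  # http://example.com/med#homocystinuria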
As others have said, classification is difficult under the best of circumstances. And it just doesn't fit with the way the Internet has evolved. We have Wikipedia, not the Encyclopedia Galactica.
We got meta tags that tell us the published date, author and type of web page.
We got schema for job ads.
We got schema for recipes.
We got schema for thumbnails and images associated with a webpage.
We got schema for ecommerce products.
No one knows it's called the semantic web these days. It's just what you have to do to have your page get picked up and highly ranked by Google, and to get more links from direct product traffic.
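For the curious, a sketch of the kind of markup being described here: a schema.org Recipe in a JSON-LD script tag, which crawlers read for rich results. The property names are real schema.org terms; the page and recipe are invented.

import json

page_snippet = """
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Five-Minute Tomato Soup",
  "author": {"@type": "Person", "name": "A. Cook"},
  "datePublished": "2018-03-10",
  "image": "https://example.com/soup.jpg",
  "cookTime": "PT5M",
  "recipeIngredient": ["4 tomatoes", "1 cup vegetable stock"]
}
</script>
"""

# A crawler extracts and parses the JSON payload; done by hand here.
start = page_snippet.index(">") + 1
end = page_snippet.rindex("</script>")
recipe = json.loads(page_snippet[start:end])
print(recipe["@type"], "-", recipe["name"])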
2. We've settled on extracting information from "raw" text (with everything from regexes to recognize flight info in e-mails to getting word statistics from terabytes of garbage) and duct-taping that with special-purpose APIs.
Perception, culture, linguistics, time, reality.
One nice demo of the latest advances is how you can query Wikidata client-side without downloading the whole database for queries like "Directors of movies starring Brad Pitt": http://ldfclient.wmflabs.org
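The demo runs the query client-side over Linked Data Fragments; as a plain sketch of the same question, here it is sent to Wikidata's public SPARQL endpoint with the SPARQLWrapper package instead. P161 is "cast member" and P57 is "director"; Q35332 should be Brad Pitt's item, though double-check the IDs.

from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://query.wikidata.org/sparql",
                       agent="example-sketch/0.1")  # Wikidata asks for a UA
sparql.setQuery("""
SELECT DISTINCT ?directorLabel WHERE {
  ?movie wdt:P161 wd:Q35332 ;   # movie has cast member Brad Pitt
         wdt:P57  ?director .   # movie has a director
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["directorLabel"]["value"])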
At its core were SEO hucksters trying to pass off page-rank hacks as a business model for consulting work, during the post-dot-com-bust period, when money was scarce and web design couldn't pay the bills anymore.
Many ascended to the priesthood of RESTful web microservice development, where they pooh-pooh and tsk-tsk improper path grammar and noodle with JSON objects, in between periods of intense navel-gazing.
So, knowledge engineering scales badly, but there were other problems. There was a big debate in the EU community about what kind of reasoner to use, and for some god-awful reason F-logic was chosen; at the time we thought that reasoners like Otter wouldn't be able to scale and do FOL tractably. It's a shame that answer sets and MCMC probabilistic reasoners came 10 years later; I think the weak reasoning and poor representation systems were big gaps.
The other problem was institutional: the way that EU semantic web funding worked, and the way that the projects developed. A lot of money was spent, and then there was no money; there was no self-sustaining legacy.
I see this tech as a supposed replacement for paper documents and the medium for government information exchange. The only obstacles to using it everywhere are the structural complexity of NIEM and the lack of tools. I've spent a bit of time hacking on it with XML queries and my mind is blown.
You can interpret NIEM as a type system, similar to types in programming languages but for composing electronic documents; it could be integrated with payment systems. I think progress will go two ways: new documents will be composed with NIEM, and older docs could be converted with natural language processing.
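A small sketch of that idea: a document composed from NIEM's typed, reusable elements. The nc:Person/nc:PersonName structure follows NIEM Core, but treat the exact element names and namespace URI as approximate.

import xml.etree.ElementTree as ET

doc = """
<nc:Person xmlns:nc="http://release.niem.gov/niem/niem-core/4.0/">
  <nc:PersonName>
    <nc:PersonGivenName>Ada</nc:PersonGivenName>
    <nc:PersonSurName>Lovelace</nc:PersonSurName>
  </nc:PersonName>
</nc:Person>
"""

ns = {"nc": "http://release.niem.gov/niem/niem-core/4.0/"}
person = ET.fromstring(doc)
given = person.find("./nc:PersonName/nc:PersonGivenName", ns)
print(given.text)  # "Ada"; XSD schema validation is what enforces the "types"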
The latest version, 4.0, is dated 2017, and the US has spent lots of money to build an XML representation of real-life objects.
The utopia is more than this, but I assume that few people will use these tools directly.
One of the most useful aspects of the semantic web is how it enhances the search for information. Some web citizens have become conditioned to see Google as the pinnacle of what we can achieve through search, but we can do a lot better. Let's use an example to illustrate this. Imagine a presidential election is taking place and you want to understand the positions of the candidates on topics that matter to you. Say foreign policy is something you're interested in, including the candidates' proclivity for war. By allowing search over a richer set of metadata, you can more easily access information about the positions of these candidates, without the distortions of Google's PageRank algorithms. Think of it as treating the information of the web as a database you can query more directly. That's the main promise of the semantic web.
If you're not consuming a lot of public data or providing data to the public, it is not very useful, other than having a bunch of better graph databases associated with it.
Turns out that keeping presentation and data separate is much, much easier. Hopefully HTML 6 will get rid of everything except for div, span, and form elements.
What happened is that it was pointless.
Build something people want, not the semantic web.
A lot of the solution-in-search-of-a-problem work that went into the semantic web just shifted to the cryptocurrency space.