I worked on scaling and generalizing ontologies at university and had already switched to working with Big Data / ML at a big company when Palantir tried to recruit me. I talked to some of their senior engineers about their tech and made the point that it sounded just like ontologies. I tried to get them to admit what it was so I could be sure I was having an honest conversation with them. They flatly denied it and made the whole thing out to be their great new idea. I was unimpressed.
Still, I was interested in working for them. Access to hard, interesting problems can be difficult to come by. In the end I couldn't take their legendary arrogance and insecurity - to me these are bright red flags of a toxic corporate culture. And they lowballed me. I would have temporarily put up with the toxic culture for large piles of money.
Smart decision. As far as ontologies go, the Cyc project to create common sense in machines was my favorite at the time. It used an ontological knowledge base, if my broken memory is accurate. I was and still am firmly convinced that finding an architecture suitable for solving that problem is a prerequisite for the AIs we really want. Deep learning is approximating it, but it's closer to how the brain does vision than to common sense. Minsky noted at one point that he could count the number of researchers doing common sense on a single hand or so. That's a hard problem if you want one. Also unbelievably hard to get funded. (sighs)
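For anyone who never looked at this stuff: the core idea behind that kind of knowledge base is a hierarchy of concepts where properties are inherited downward and exceptions override. Here's a minimal sketch of that mechanism - the concept names and properties are my own illustrative picks, not Cyc's actual ontology or its (much richer) representation language:

```python
# Toy sketch of an ontological knowledge base: concepts in an is-a
# hierarchy, with property inheritance and local exceptions.
# Illustrative only - real systems like Cyc use far richer logic.

class Concept:
    def __init__(self, name, parent=None, **properties):
        self.name = name
        self.parent = parent          # is-a link to a more general concept
        self.properties = properties  # facts asserted directly on this concept

    def lookup(self, prop):
        """Walk up the is-a chain; the most specific assertion wins."""
        node = self
        while node is not None:
            if prop in node.properties:
                return node.properties[prop]
            node = node.parent
        return None  # nothing in the hierarchy asserts this property

# A tiny common-sense hierarchy with an exception.
animal = Concept("Animal", breathes=True)
bird = Concept("Bird", parent=animal, can_fly=True)
penguin = Concept("Penguin", parent=bird, can_fly=False)

print(bird.lookup("breathes"))    # True - inherited from Animal
print(penguin.lookup("can_fly"))  # False - local exception overrides Bird
```

Even this toy shows why the problem is hard: default inheritance with exceptions already takes you outside plain first-order logic, and that's before you deal with context, time, or contradictory sources.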