
Sure, I'll give a few examples.

A guy from BP told me in 2013 that Palantir forked IPython, replaced all the IPython references with their own branding, and tried to sell it to BP for $500K p.a.

As for me: back in the day (2010) they were less secretive about their technology, which was essentially an ontological reasoner. This was before the Big Data hype boom - and AFAIK Palantir has never really been about Big Data. Ontological reasoners have problems that prevent them from scaling or generalizing, so they generally fail. Thanks to that long, long history of failure, ontological systems have a very bad name. But they look good in guided demos and have a ton of academic backing, so they're easy to sell - as long as you call them something else - which is what Palantir did. If you do want to use ontologies, a better open source alternative is Protege. But for the problems Palantir targets I'd recommend standard machine learning technology, where all the good stuff is open sourced.

As an aside, Peter Thiel also helped found Quid, a start-up that ripped off the Gephi layout engine and charged people $20K p.a. per seat. They've since rebuilt it, but like Palantir they're still not solving people's problems, and they've evolved into a consulting firm.




They apparently admit to the ontology approach right here:

https://www.palantir.com/palantir-gotham/technologies/

That's especially hilarious given that that approach's failures are what led to investment in machine learning in the first place. Such approaches tend to assume precise information, variables, and rules about the world. Most problems Palantir wants to address - the hard ones - are imprecise, with hidden variables and relationships. Machine learning techniques did very well on those kinds of messy problems, so research shifted.

If Palantir is using ontologies for that stuff, then that would certainly be a sign for buyers to run. I still encourage academics to look into such approaches with simple, probabilistic methods in case any advances come up. Fuzzy logic was the main one in my day. Just today I stumbled on a claim that a drone AI achieved human-level performance using it. Some corroboration for R&D on underdog solutions, though not for production apps. Haha.


Huh, so they are open about it now. They definitely were not when I talked to them. I guess enough time has passed that people have forgotten the hard-won lessons of the past.

I worked on scaling and generalizing ontologies at university and had already switched to working with Big Data / ML at a big company when Palantir tried to recruit me. I talked to some of their senior engineers about their tech and made the point that their tech sounded just like ontologies. I tried to get them to admit what it was so I could be sure I was having an honest conversation with them. They flatly denied it and made it out like the whole thing was their great new idea. I was unimpressed.

I was still interested in working for them. Access to hard, interesting problems can be hard to come by. In the end I couldn't take their legendary arrogance and insecurities - to me these are bright red flags of a toxic corporate culture. And they lowballed me. I would have temporarily put up with the toxic culture for large piles of money.


"I was still interested in working for them. Access to hard interesting problems can be hard to come by. In the end I couldn't take their legendary arrogance and insecurities - to me these are bright red flags of a toxic corporate culture. And they low balled me. I would have temporarily put up with the toxic culture for large piles of money."

Smart decision. As far as ontologies go, the Cyc project to create common sense in machines was my favorite at the time. It used an ontological knowledge base, if my spotty memory is accurate. I was and still am firmly convinced that finding an architecture suitable for solving that problem is a prerequisite for the AIs we really want. Deep learning is approximating it, but it's closer to how the brain does vision than to common sense. Minsky noted at one point that he could count the number of researchers working on common sense on a single hand or so. That's a hard problem if you want one. Also unbelievably hard to get funded. (sighs)


Thanks. There is some interesting work being done mixing Deep Learning with Bayesian models and external databases of facts. The training is apparently ungodly slow, though. My work these days is visual, so CNNs work well enough. I'm not an authoritative expert in this field though, so I'm taking 6 months off to study it intensely, after which I hope to be able to make some meaningful contributions.


There is nothing wrong with the ontologies approach. Palantir is not aiming for a fully automated approach. Palantir's core product is an entity graph, mostly used and manipulated by humans for analysis. Basically, their software is only meant to augment human analysis.

That being said, from my conversations with them, they also have a traditional machine learning team for whenever that approach is needed for a product. But their core product is meant to only help analysis that is mostly done by humans.


There is usually something wrong with the ontologies approach, as it rarely works. There are roughly two decades of evidence for this for anyone who cares to look - five decades if you loosen the definition to include the family of logic and constraint programming (see AI Winter). There is nothing new about these ideas. It always looks and feels like it's going to work, which is why humanity has persisted with it for so long and will likely continue to persist for some time to come.

There is a whole generation of better techniques that have come out of machine learning that totally eclipse ontologies, and I know Palantir isn't using them. Their corporate culture isn't set up to foster that kind of applied research.

No-one is advocating a fully automated approach. I don't know where that notion came from.

My view is that Palantir is a consulting company pretending to be a tooling company. And their consultants are not worth the money they charge. Just one of many Silicon Valley frauds.


Is this ontologies within the field of AI not working, or more generally?

Do you have references to any specific discussions on this?

Curious as I'm doing some work of my own (well outside AI) in which developing ontologies strikes me as useful, though I'd prefer not falling into any well-worn traps.

(My use is largely coming up with useful descriptive models of otherwise hairy concepts.)


What bazqux2 said is accurate. I'll go further and say that the kind of work Palantir is involved in is mostly probabilistic - especially intelligence work. So, using models that require certainty or straight logic in areas rife with uncertainty and degrees of truth seems set up to fail outside of easy inferences. One can encode the logical stuff in probability models, but it's much harder to do the reverse. Hence, their underlying tech should be probabilistic, fuzzy logic, or something similar for best results instead of just some results.
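To make that concrete, here's a toy sketch (the scenario and numbers are invented): a hard logical rule is just the special case where a likelihood is pinned to 1.0, so logic embeds cleanly in a Bayesian model, while degrees of truth have nowhere to live in pure logic.

    # Toy Bayes update; the rule and numbers are purely illustrative.
    def posterior(prior, p_e_given_h, p_e_given_not_h):
        """P(H | E) via Bayes' rule."""
        num = p_e_given_h * prior
        den = num + p_e_given_not_h * (1 - prior)
        return num / den

    # "Couriers always visit the safehouse" as a hard rule (p = 1.0):
    print(posterior(0.10, 1.0, 0.05))   # ~0.69
    # The same relationship held only as a tendency (p = 0.8):
    print(posterior(0.10, 0.8, 0.05))   # ~0.64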

As far as ontologies in general go, they have a mixed track record. They take a lot of work to create. Then they have to be mapped to real-world inputs and outputs. One way they got applied is in so-called business rules engines or business process management - something like a subset of the ontology approaches of the past. Here's a company that uses the real thing for enterprise software, with the Mercury language for the execution part:

http://www.missioncriticalit.com/development.html

Also, Franz Inc, of Allegro Common LISP fame, covers many of the same use cases as Palantir with their ontological tooling.

http://allegrograph.com/solutions-by-use/

So there are definitely companies that have used it for long periods of time on real-world use cases. Palantir just seemed to be mixing it with hype and secrecy to maximize their sale price later. ;)


I meant that they are generally not useful. Sometimes they are. It depends on the purpose, what you want to build, and who it's for.

Given that you're building a descriptive model, it depends on whether you're working with facts or with probabilities. If it's facts, then ontologies should work fine; for probabilities I'd recommend Bayesian techniques.

The inputs for these are usually small. From the sound of it you're generating the input yourself, so you should be safe.
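To illustrate the split with a toy sketch (entities and observations invented): crisp facts sit happily in an ontology-style triple store, while uncertain relations want a belief you can update as evidence arrives.

    # Toy illustration; entity names and observations are invented.
    # Crisp facts: triples work fine.
    facts = {
        ("Acme Corp", "subsidiary_of", "Globex"),
        ("Globex", "headquartered_in", "Springfield"),
    }
    assert ("Acme Corp", "subsidiary_of", "Globex") in facts

    # Uncertain relations: keep a belief per edge and update it with
    # evidence (a simple Beta-Bernoulli posterior mean here).
    from collections import defaultdict
    evidence = defaultdict(lambda: [1, 1])    # Beta(1, 1) prior per edge

    def observe(edge, supports):
        evidence[edge][0 if supports else 1] += 1

    def belief(edge):
        a, b = evidence[edge]
        return a / (a + b)

    e = ("Acme Corp", "funds", "Shell Co")
    for s in (True, True, False, True):
        observe(e, s)
    print(round(belief(e), 2))   # 0.67 after 3 supporting, 1 opposing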


An ontology of technological mechanisms (or dynamics):

https://ello.co/dredmorbius/post/klsjjjzzl9plqxz-ms8nww

Particularly in economic and policy discussion, technology is just "technology". A black box. In economics, Solow's residual is described, by Solow himself, as "the measure of our ignorance" of the influences on factor productivity growth -- it's quite literally, statistically, what's left over after accounting for labour and capital.

I see a few quite evident classifications which strike me as useful:

1. Fuels. Apply more energy to something and it tends to happen faster. Wood, plant and animal oils, fossil fuels, nuclear fission, possibly fusion.

2. Material properties. Some things are highly dependent on specific material properties. Conductivity of gold, silver, copper, and aluminium. Ferromagnetism. Hardness of diamond. Softness of graphite. Semiconducting behaviour of silicon. Fertilising properties of nitrogen, phosphorus, and potassium. Many others. Point being, you're now locked into the availability and other properties of that material.

3. Specific process knowledge. What used to be called "arts". Most of what's now considered "technology", from agriculture to zymurgy (though zymurgy's actually fairly close to agriculture...). These approach theoretical efficiency limits.

4. What seem to be dendritic or web structured aspects. Computer chips and Moore's law are today's classic example, but I'd count communications, transport, and trade networks, cities and urbanisations, knowledge itself, and other elements among these. What they have in common is an increasing rate of progress with greater accumulation, modulo retarding factors.

There are several other elements. Sensing and measurement increase various capabilities -- navigation and fine metal machining come to mind. Symbolic processing, from speech and writing to abstract maths and programming. Organisation -- of people, states, business, and finance.

The final element, and one which popped out at me whilst devising the ontology, was the concept of hygiene or pollution factors. They're a distinct class of phenomena which, if not addressed, tend to put a damper on further growth: everything from infectious disease in cities to heavy metal pollution, salination of croplands, traffic congestion, and spam and fraud in communications and business networks. It's a superset of common categories such as "pollution" or "disease" or "social breakdown".

Anyhow, that's what I'm working on. I find it a useful organising tool, still developing the idea.


1. In semiconductors, we get more out of the stuff while putting in less energy, thanks to shrinking the transistors. Even increasing the transistor count on the same node doesn't always result in more work, since bigger chips have slower clock rates. I think you need to look at inputs, which include time, more than fuel, given that fuel doesn't apply to a lot of things. Even the human body, as you increase fuel, will work slower from being gorged, and will eventually die with an exploded stomach.

2. This is true. It's worth noting such dependencies.

3. Elaborate on that.

4. That's true. There's a lot of work on that topic already that you can draw on. I remember some showing that the way cities grew was similar to the way bacterial colonies look. Weird stuff.

Re waste: you can model it as a separate quantity that goes up when certain actions happen, then starts dragging those actions back down. Definitely should be considered.
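A toy dynamical sketch of that (my own parameterisation, purely illustrative): activity grows, emits waste in proportion, and the accumulated waste stock feeds back as a drag on further growth unless abated.

    # Toy model; all rates are invented for illustration.
    r, emit, drag, abate = 0.05, 0.1, 0.02, 0.01
    A, W = 1.0, 0.0                  # activity level, waste stock
    for t in range(200):
        A += r * A - drag * W * A    # growth minus pollution drag
        W += emit * A - abate * W    # waste accumulates, partly abated
    print(round(A, 2), round(W, 2))
    # A first grows, then is pulled well below its unconstrained
    # path once W builds up -- the "damper on growth" effect.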


Semiconductors are a case of #4. To contrast fuel vs. dendritic structures:

Fuels feed processes in which energy is crucial. Food and metabolism, almost all ore refining and metalworking, heating and cooking, and transport. Air travel (at any significant level) and Earth-to-orbit space launch are both entirely dependent on fuel-driven processes.

I didn't mention energy transmission and transformation, which is another set of mechanisms, ranging from projectiles (force-at-a-distance) to the simple machines (lever, ramp, screw, pulley, gears), and the linear-to-rotary and rotary-to-reciprocating transforms. Electricity, in this ontology, is for the most part an energy transmission and transformation mechanism: to heat, motion, light, sound, etc.

3. See the Ello link for a list. The key is that the understanding is of how to do a process, which approaches some theoretical maximum efficiency. There's probably a learning curve associated; see J. Doyne Farmer and Wright's Law (related to Moore's) on process improvement.

4. You're likely thinking of Geoffrey West. There's a lot of Santa Fe Institute thinking in this idea generally.

The hygiene factors are more than just waste.

An early realisation of this came when I was considering Metcalfe's Law and the Tilly-Odlyzko refutation of network effects. What I realised was that while, yes, additional nodes tend to produce lesser value, each node also tends to impose a cost on the others, and that cost is roughly constant. In a message or information network, you could consider this the "is this worth reading or not" cost associated with any given message.

See: https://www.reddit.com/r/dredmorbius/comments/1yzvh3/refutat...

(If you have Reddit's RES installed, set to view images, as there's a set of graphs illustrating the cost function.)

Applying that to various group communication settings, you can estimate the cost constant, and it turns out that the maximum supportable group size is a function of that constant. Among other things, Facebook manages to scale to a billion or several members by keeping that cost constant really, really low.
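A back-of-envelope sketch of that relationship (the functional forms and numbers are my assumptions, not from the linked post): take Odlyzko-style n*log(n) total value, so the n-th member adds roughly log(n) in value, while imposing a constant cost c on each existing member. Growth stops paying once marginal cost overtakes marginal value, and the break-even size explodes as c shrinks.

    # Assumed model: marginal value ~ log(n) + 1 (d/dn of n*log n),
    # marginal cost ~ c * (n - 1), imposed on the existing members.
    import math

    def max_group_size(c, cap=1_000_000):
        n = 2
        while n < cap:
            if c * (n - 1) > math.log(n) + 1:
                return n - 1
            n += 1
        return cap

    for c in (0.1, 0.01, 0.001):
        print(c, max_group_size(c))
    # Each tenfold drop in the per-member cost buys more than a
    # tenfold rise in supportable group size.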

That's just one instance.

More generally, there are other phenomena which show examples of cost:

1. The Silk Road increased trade but also created a "commerce" in disease from China to Europe and vice versa. Similarly for interactions with the New World (smallpox, syphilis).

2. Greek and Roman city engineers were conscious of location, especially as regarded water flow and its associations with disease. Clean in, dirty out. And no diesel pumps.

3. Indoor fire gives heat and cooking, but contributes to air pollution. Chimneys help.

4. Disease and epidemics limited city sizes. Circa 1800, London could not sustain its own population through births, given the death rate; constant in-migration was essential, and the life expectancy of new arrivals was frightfully low. This improved tremendously with the creation of sewers. By the end of the 19th century, solid waste, sewage, and horse metabolites (solid and liquid) were a crisis for many large cities, which had populations of hundreds of thousands of horses alone. The automobile solved a crushing pollution problem. But you also got sewage systems, freshwater, sanitation, etc.

5. Reducing the cost of something inevitably increases the amount of undesirable activity it enables. You need highly differentiated reward/punishment systems to limit these: highway congestion, cruising, fraud, spam, advertising, etc.

6. Systemic disruptions. Here the issue is effects which operate in difficult-to-foresee, systemic ways. CO2 and global warming, CFCs and ozone, asbestos, endocrine disrupters, nonnative species introductions, and light pollution disrupting wildlife are all examples.

Some of this overlaps with various other areas -- pollution, ecological principles, health and sanitation, etc. But I think the concept may be more general than any of these, and in terms of a technological dynamic, it has its own space, where the factors act to limit growth unless themselves specifically addressed.


Again, Palantir is not an AI company. They are a data visualization and analytics company. So all your perfectly fine points about ontologies and AI winter are not relevant.


It is an ontology company - see their website. This is how they derive their analytics and visualizations. So my points are relevant.


Hey -- I wrote that BuzzFeed article in May. I'm always looking to learn new things. If anyone on here wants to chat and compare notes, off the record, please don't hesitate to reach out: will.alden@buzzfeed.com


What does "p.a." mean?


per annum = per year


Why do people replace transparent English phrases with opaque foreign abbreviations? We're not writing with quills on vellum anymore.

Even "$500k/yr" is an improvement on "$500k p.a.".


As far as I know, "per annum" is actually part of the English language. I've typically seen it used in the context of pricing or finance, which seems to be borne out. [1] I don't disagree with you about "p.a." being less clear than "/yr", though.

1. http://english.stackexchange.com/questions/3933/what-is-the-...


I'd argue "per annum" or "p.a." isn't a foreign term, just standard English. English is full of Latin words - transparent, opaque, foreign: those are all words of Latin origin.


> I'd argue "per annum" or p.a. isn't a foreign term and just standard English.

Did you notice that a fluent English speaker had to ask what "p.a." meant? Do you think that would have happened if it had been written "per year" instead?


The same could be argued for any jargon outside of one's own field.


As a counterpoint: I'm not a native English speaker, I do understand "yr" just fine, but it takes me a split second longer to read that abbreviation.

Because p.a. is also part of my language.


How could one tell the OP was fluent?


Complaining about p.a. in a finance context makes about as much sense as complaining about "etc".


Well it makes sense to complain about etc when we have etcd now. Embrace the future!


Per annum, per centum, per mille, per capita, et cetera, et cetera.

These are Latin phrases, borrowed especially in British English as Great Britain was occupied by Latin speakers for nearly 400 years -- 43 CE through 410 CE. Latin continued to be the language of diplomacy, religion, philosophy, and science through the 18th and 19th centuries.

It is, for all practical intents, proper British English.

https://en.m.wiktionary.org/wiki/per

Though yes, $<value>/yr. is more frequently seen especially in American English.


> borrowed especially in British English as Great Britain was occupied by Latin speakers for nearly 400 years -- 43 CE through 410 CE.

Can you explain how Latin loanwords were loaned into English during this period? In your explanation, please make use of the facts that (1) there were no English speakers in Great Britain before 410 CE, and moreover (2) there was, by definition, no such language as English until Anglo-Saxon migrations into Great Britain (around 450 CE) established a distinct West Germanic linguistic community on the island.


You're more than welcome to explore this yourself.

You are, otherwise, being what the modern English derivative of the Sumerian ansu describes.


You are being worse than Palantir.



