
There is usually something wrong with the ontologies approach, as it rarely works. There are roughly two decades of evidence for this for anyone who cares to look. Five decades if you loosen the definition to include the family of logic and constraint programming; see AI Winter. There is nothing new about these ideas. It always looks and feels like it's going to work, which is why humanity has persisted with it for so long and will likely continue to persist for some time to come.

There is a whole generation of better techniques that have come out of machine learning that totally eclipse ontologies, and I know Palantir isn't using them. Their corporate culture isn't set up to foster that kind of applied research.

No one is advocating for fully automated approaches. I don't know where that notion came from.

My view is that Palantir is a consulting company pretending to be a tooling company, and that their consultants are not worth the money they charge. Just one of many Silicon Valley-based frauds.




Is this about ontologies not working within the field of AI, or more generally?

Do you have references to any specific discussions on this?

Curious, as I'm doing some work of my own (well outside AI) in which developing ontologies strikes me as useful, though I'd prefer not to fall into any well-worn traps.

(My use is largely coming up with useful descriptive models of otherwise hairy concepts.)


What bazqux2 said is accurate. I'll go further and say that the kinds of work Palantir is involved in are mostly probabilistic, especially intelligence work. So using models that require certainty or straight logic in areas rife with uncertainty and degrees of truth seems set up to fail outside of easy inferences. One can encode the logical stuff in probability models, but it's harder to do the reverse. Hence, their underlying tech should be probabilistic, fuzzy logic, or something similar to get the best results instead of just some results.
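A minimal sketch of that last point, with made-up numbers: under Bayes' rule, a hard logical rule is just the degenerate case of a probabilistic one, with likelihoods of 0 and 1, which is why the probability model subsumes the logic but not the reverse.

    # Hypothetical numbers throughout; an illustration, not anyone's actual model.
    def bayes_update(prior, p_e_given_h, p_e_given_not_h):
        """Posterior P(H|E) from prior P(H) and the two likelihoods."""
        joint_h = prior * p_e_given_h
        joint_not_h = (1 - prior) * p_e_given_not_h
        return joint_h / (joint_h + joint_not_h)

    # "Logical" rule: evidence E occurs if and only if hypothesis H holds.
    print(bayes_update(0.5, 1.0, 0.0))  # 1.0 -- certainty, straight logic

    # Probabilistic rule: E is merely suggestive of H (noisy source, hearsay).
    print(bayes_update(0.5, 0.8, 0.3))  # ~0.73 -- a degree of belief, not a fact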

As far as ontologies in general go, they have a mixed track record. They take a lot of work to create. Then they have to be mapped to real-world inputs and outputs. One way they got applied is in so-called business rules engines or business process management, which is like a subset of the ontology approaches of the past. Here's a company that uses the real thing for enterprise software, with the Mercury language for the execution part:

http://www.missioncriticalit.com/development.html

Also, Franz Inc., of Allegro Common Lisp, covers many of the same use cases as Palantir with their ontological tooling:

http://allegrograph.com/solutions-by-use/

So there are definitely companies that have been using it over long periods of time for real-world use cases. Palantir just seemed to be mixing it with hype and secrecy to maximize their sale price later. ;)


I meant that they are generally not useful. Sometimes they are. It depends on the purpose, what you want to build, and who it's for.

Given that you're building a descriptive model, it would depend on whether you're working with facts or with probabilities. If it's facts, then ontologies should work fine; for probabilities I'd recommend Bayesian techniques.
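To make the facts case concrete, here's a toy sketch (the concepts are invented for illustration; real systems use RDF/OWL triple stores): a small is-a hierarchy as plain data, queried by walking the links. No probabilities are needed when every assertion is simply true.

    # Toy is-a ontology: child concept -> parent concept.
    IS_A = {
        "sparrow": "bird",
        "penguin": "bird",
        "bird": "vertebrate",
        "vertebrate": "animal",
    }

    def ancestors(concept):
        """Every class a concept belongs to, following is-a links upward."""
        out = []
        while concept in IS_A:
            concept = IS_A[concept]
            out.append(concept)
        return out

    print(ancestors("sparrow"))  # ['bird', 'vertebrate', 'animal']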

The inputs for these are usually small. From the sounds of it, you're generating the input yourself, so you should be safe.


An ontology of technological mechanisms (or dynamics):

https://ello.co/dredmorbius/post/klsjjjzzl9plqxz-ms8nww

Particularly in economic and policy discussion, technology is just "technology": a black box. In economics, Solow's Residual is described, by Solow himself, as "the measure of our ignorance" of factor productivity growth influences -- it's quite literally, statistically, what's left over after accounting for labour and capital.

I see a few quite evident classifications which strike me as useful:

1. Fuels. Apply more energy to something, it tends to happen faster. Wood, plant and animal oils, fossil fuels, nuclear fission, possibly fusion.

2. Material properties. Some things are highly dependent on specific material properties. Conductivity of gold, silver, copper, and aluminium. Ferromagnetism. Hardness of diamond. Softness of graphite. Semiconductivity of silicon. Fertilising properties of nitrogen, phosphorus, and potassium. Many others. Point being, you're now locked into the availability and other properties of that material.

3. Specific process knowledge. What used to be called "arts". Most of what's now considered "technology", from agriculture to zymurgy (though zymurgy's actually fairly close to agriculture...). These approach theoretical efficiency limits.

4. What seem to be dendritic or web structured aspects. Computer chips and Moore's law are today's classic example, but I'd count communications, transport, and trade networks, cities and urbanisations, knowledge itself, and other elements among these. What they have in common is an increasing rate of progress with greater accumulation, modulo retarding factors.

There are several other elements. Sensing and measurement increase various capabilities -- navigation and fine metal machining come to mind. Symbolic processing, from speech and writing to abstract maths and programming. Organisation -- of people, states, business, and finance.

The final element, and one which popped out at me whilst devising the ontology, was the concept of hygiene or pollution factors. They're a distinct class of phenomena which, if not addressed, tend to put a damper on further growth: everything from infectious disease in cities to heavy metal pollution, salinisation of croplands, traffic congestion, and spam and fraud in communications and business networks. It's a superset of common categories such as "pollution" or "disease" or "social breakdown".

Anyhow, that's what I'm working on. I find it a useful organising tool, still developing the idea.


1. In semiconductors, we get more out of stuff when we put in less energy, due to shrinking the transistors. Even increasing transistors in the same node doesn't always result in more work, since bigger chips have slower clock rates. I think you need to look at inputs, which include time, more than fuel, given that fuel doesn't apply to a lot of things. Even the human body will work slower as you increase fuel, due to being gorged, and will eventually die with an exploded stomach.

2. This is true. It's worth noting such dependencies.

3. Elaborate on that.

4. That's true. There's a lot of work on that topic already that you can draw on. I remember some work showing that the way cities grew was similar to the way bacterial colonies grow. Weird stuff.

Re waste. You can model it as a separate thing that goes up when certain actions happen, then starts bringing them down. Definitely should be considered.


Semiconductors are a case of #4. To contrast fuel vs. dendritic structures:

Fuels feed processes in which energy is crucial. Food and metabolism, almost all ore refining and metalworking, heating and cooking, and transport. Air travel (at any significant level) and Earth-to-orbit space launch are both entirely dependent on fuel-driven processes.

I didn't mention energy transmission and transformation, which is another set of mechanisms, ranging from projectiles (force-at-a-distance) to the simple machines (lever, ramp, screw, pulley, gears) and the linear-to-rotary and rotary-to-reciprocating transforms. Electricity, in this ontology, is for the most part an energy transmission and transformation mechanism: to heat, motion, light, sound, etc.

3. See the Ello link for a list. The key is that the understanding is of how to do a process, and that understanding approaches some theoretical maximum efficiency. There's probably a learning curve associated; see J. Doyne Farmer and Wright's Law (related to Moore's) of process improvement.
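For reference, Wright's Law says unit cost falls by a fixed fraction with each doubling of cumulative production. A tiny illustration with assumed numbers (a 20% learning curve and a 100-unit first cost; nothing here is from Farmer's data):

    import math

    def wright_cost(n, first_unit_cost=100.0, learning_rate=0.20):
        """Cost of the n-th unit: cost(n) = cost(1) * n**-b."""
        b = -math.log2(1.0 - learning_rate)  # ~0.322 for a 20% curve
        return first_unit_cost * n ** -b

    for n in (1, 2, 4, 8, 16):
        print(n, round(wright_cost(n), 1))
    # 1 100.0, 2 80.0, 4 64.0, 8 51.2, 16 41.0 -- each doubling cuts 20%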

4. You're likely thinking of Geoffrey West. There's a lot of Santa Fe Institute thinking in this idea generally.

The hygiene factors are more than just waste.

An early realisation of this came when I was considering Metcalfe's Law and the Tilly-Odlyzko refutation of network effects. What I realised was that while, yes, additional nodes tended to produce lesser value, each node also had a tendency to impose a cost on others, and that cost was roughly constant. In a message or information network, you could consider this to be the "is this worth reading or not" cost associated with any given message.

See: https://www.reddit.com/r/dredmorbius/comments/1yzvh3/refutat...

(If you have Reddit's RES installed, set to view images, as there's a set of graphs illustrating the cost function.)

Applying that to various group communication settings, you can estimate the cost constant, and it turns out that the maximum supportable group size is a function of that constant. Among other things, Facebook manages to scale to a billion or several billion members by keeping that per-member cost constant really, really low.
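A back-of-the-envelope version of that argument, with assumed functional forms (Zipf-valued contacts per the refutation paper, a constant per-member attention cost c; none of the numbers come from actual measurements):

    import math

    def net_value_per_member(n, v=1.0, c=0.001):
        """Zipf-valued contacts give each member ~ v*ln(n); each of the
        other n-1 members imposes a constant attention cost c."""
        return v * math.log(n) - c * (n - 1)

    # Marginal value of the n-th member is roughly v/n - c, so net value
    # peaks near n = v/c: the supportable group size is set by the cost
    # constant. Cut c by 10x and the peak group size grows ~10x.
    for c in (0.01, 0.001, 0.0001):
        n_peak = round(1.0 / c)
        print(f"c={c}: peak near n={n_peak}, "
              f"net/member={net_value_per_member(n_peak, c=c):.2f}")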

That's just one instance.

More generally, there are other phenomena which show examples of cost:

1. The Silk Road increased trade, but also created a "commerce" in disease from China to Europe and vice versa. Similar for interactions with the New World (smallpox, syphilis).

2. Greek and Roman city engineers were conscious of location, especially as regards water flow and its associations with disease. Clean water in, dirty water out. And no diesel pumps.

3. Indoor fire gives heat and cooking, but contributes to air pollution. Chimneys help.

4. Disease and epidemics limited city sizes. ~1800 London could not sustain its own population through births, given the death rate; constant in-migration was essential, and life expectancy of new arrivals was frightfully low. This improved tremendously with the creation of sewers. By the end of the 19th century, solid waste, sewage, and horse metabolites (solid and liquid) were a crisis for many large cities, which had populations of hundreds of thousands of horses alone. The automobile solved a crushing pollution problem; sewers, freshwater supplies, and sanitation addressed the rest.

5. Reducing the cost of something inevitably increases the amount of undesirable activity it enables: highway congestion, cruising, fraud, spam, advertising, etc. You need highly differentiated reward/punishment systems to limit these.

6. Systemic disruptions. Here the issue is effects which operate in difficult-to-foresee, systemic ways. CO2 and global warming, CFCs and ozone, asbestos, endocrine disrupters, non-native species introduction, and light pollution with its wildlife disruption are all examples.

Some of this overlaps with various other areas -- pollution, ecological principles, health and sanitation, etc. But I think the concept may be more general than any of these, and in terms of a technological dynamic, it has its own space, where the factors act to limit growth unless themselves specifically addressed.


Again, Palantir is not an AI company. They are a data visualization and analytics company. So all your perfectly fine points about ontologies and the AI winter are not relevant.


It is an ontology company - see their website. This is how they derive their analytics and visualizations. So my points are relevant.


Hey -- I wrote that BuzzFeed article in May. I'm always looking to learn new things. If anyone on here wants to chat and compare notes, off the record, please don't hesitate to reach out: will.alden@buzzfeed.com



