
Cyc - mdszy
https://en.wikipedia.org/wiki/Cyc
======
catpolice
I worked for Cycorp for a few years recently. AMA, I guess? I obviously won't
give away any secrets (e.g. business partners, finer grained details of how
the inference engine works), but I can talk about the company culture, some
high-level technical things, and the interpretations of the project that
different people at the company have that make it seem more viable than you
might guess from the outside.

There were some big positives. Everyone there is very smart and depending on
your tastes, it can be pretty fun to be in meetings where you try to explain
Davidsonian ontology to perplexed business people. I suspect a decent fraction
of the technical staff are reading this comment thread. There are also some
genuine technical advances (which I wish were more publicly shared) in
inference engine architecture or generally stemming from treating symbolic
reasoning as a practical engineering project and giving up on things like
completeness in favor of being able to get an answer most of the time.

There were also some big negatives, mostly structural ones. Within Cycorp
different people have very different pictures of what the ultimate goals of
the project are, what true AI is, and how (and whether) Cyc is going to make
strides along the path to true AI. The company has been around for a long time
and these disagreements never really resolve - they just sort of hang around
and affect how different segments of the company work. There's also a very
flat organizational structure which makes for a very anarchic and shifting map
of who is responsible or accountable for what. And there's a huge disconnect
between what the higher ups understand the company and technology to be doing,
the projects they actually work on, and the low-level day-to-day work done by
programmers and ontologists there.

I was initially pretty skeptical of the continued feasibility of symbolic AI
when I went in to interview, but Doug Lenat gave me a pitch that essentially
assured me that the project had found a way around many of the concerns I had.
In particular, they were doing deep reasoning from common sense principles
using heuristics and not just doing the thing Prolog often devolved into where
you end up basically writing a logical system to emulate a procedural
algorithm to solve problems.

It turns out there's a kind of reality distortion field around the management
there, despite their best intentions - partially maintained by the
management's own steadfast belief in the idea that what Cyc does is what it
ought to be doing, but partially maintained by a layer of people that actively
isolate the management from understanding the dirty work that goes into
actually making projects work or appear to. So while a certain amount of
"common sense" knowledge factors into the reasoning processes, a great amount
of Cyc's output at the project level really comes from hand-crafted algorithms
implemented either in the inference engine or the ontology.

Also the codebase is the biggest mess I have ever seen by an order of
magnitude. I spent some entire days just scrolling through different versions
of entire systems that duplicate massive chunks of functionality, written 20
years apart, with no indication of which (if any) still worked or were the
preferred way to do things.

~~~
dmix
Two easy ones for you:

1) How did they manage to make money for so long to keep things afloat? I'm
guessing through some self-sustainable projects like the few business
relationships listed in the wiki?

2) What's the tech stack like? (Language, deployment, etc)

~~~
catpolice
1) The money situation has changed over the years, and they've had times where
things have boomed or busted - it's been a while since I left but I think
they're still in a "boom" phase. There are a lot more projects with different
companies and organizations than the ones listed on the wiki, but they tend to
be pretty secretive and I won't name names.

The categories of projects that I was familiar with were basically proof of
concept work for companies or government R&D contracts. There are lots of big
companies that will throw a few million at a long-shot AI project just to see
if it pays off, even if they don't always have a very clear idea of what they
ultimately want or a concrete plan to build a product around it. Sometimes
these would pay off, sometimes they wouldn't but we'd get by on the initial
investment for proof of concept work. Similarly, organizations like DARPA will
fund multiple speculative projects around a similar goal (e.g. education -
that's where "Mathcraft" came from IIRC) to evaluate the most promising
direction.

There have been a few big hits in the company's history, most of which I can't
talk about. The hits have basically been in very circumscribed knowledge
domains where there's a lot of data, a lot of opportunity for simple common
sense inferences (e.g. if Alice worked for the ABC team of company A at the
same time Bob worked for the XYZ team of company B and companies A and B were
collaborating on a project involving the ABC and XYZ teams at that same time,
then Alice and Bob have probably met) and you have reason to follow all those
connections looking for patterns, but it's just too much data for a human to
make a map of. Cyc can answer questions about probable business or knowledge
relationships between individuals in large sets of people in a few seconds,
which would be weeks of human research and certain institutions pay a high
premium for that kind of thing.
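The kind of inference described here can be sketched as a join over structured records. Everything below (names, record shapes, dates) is invented for illustration and bears no relation to Cyc's actual machinery:

```python
# Sketch of the "Alice and Bob have probably met" inference as a join
# over employment and collaboration records. All data is made up.

from dataclasses import dataclass

@dataclass(frozen=True)
class Employment:
    person: str
    company: str
    team: str
    start: int  # year
    end: int    # year

@dataclass(frozen=True)
class Collaboration:
    company_a: str
    team_a: str
    company_b: str
    team_b: str
    start: int
    end: int

def overlap(a_start, a_end, b_start, b_end):
    """True if the two [start, end] year ranges intersect."""
    return max(a_start, b_start) <= min(a_end, b_end)

def probably_met(employments, collaborations):
    """Yield pairs of people who likely met through a joint project."""
    for c in collaborations:
        for e1 in employments:
            for e2 in employments:
                if (e1.company == c.company_a and e1.team == c.team_a
                        and e2.company == c.company_b and e2.team == c.team_b
                        and overlap(e1.start, e1.end, c.start, c.end)
                        and overlap(e2.start, e2.end, c.start, c.end)
                        and overlap(e1.start, e1.end, e2.start, e2.end)):
                    yield (e1.person, e2.person)

employments = [
    Employment("Alice", "A", "ABC", 2010, 2014),
    Employment("Bob", "B", "XYZ", 2011, 2013),
]
collaborations = [Collaboration("A", "ABC", "B", "XYZ", 2012, 2013)]

print(sorted(set(probably_met(employments, collaborations))))
# [('Alice', 'Bob')]
```

The value a symbolic system adds over this hand-written join is that the rule is stated once, declaratively, and composes with thousands of others.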

2) Oh god. Get ready. Here's a 10k foot overview of a crazy thing. All this is
apparent if you use OpenCyc so I feel pretty safe talking about it. Cyc is
divided into the inference engine and the knowledge base. Both are expressed
in different custom LISPy dialects. The knowledge base language is like a
layer on top of the inference engine language.

The inference engine language has LISPy syntax but is crucially very un-LISPy
in certain ways (way more procedural, no lambdas, reading it makes me want to
die). To build the inference engine, you run a process that translates the
inference code into Java and compiles that. Read that closely - it doesn't
compile to JVM bytecode, it transpiles to Java source files, which are then
compiled. This process was created before languages other than Java targeting
the JVM were really a thing. There was a push to transition to Clojure or
something for the next version of Cyc, but I don't know how far it got off the
ground because of 30 years of technical debt.

The knowledge base itself is basically a set of images running on servers that
periodically serialize their state in a way that can be restarted - individual
ontologists can boot up their own images, make changes and transmit those to
the central images. This model predates modern version control, and things can
get hairy when different images get too out of sync. Again, there was an
effort to build a kind of git-equivalent to ease those pains, which I think
was mostly finished but not widely adopted.
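The image-sync pain can be pictured with a set-based sketch: if each image's KB is a set of assertions, a three-way merge against a common ancestor snapshot is straightforward until both sides touch the same assertion. Everything here is invented; Cyc's actual images and its internal git-equivalent surely work differently:

```python
# Three-way merge of two KB "images", each modeled as a set of assertion
# tuples, against their common ancestor snapshot. Toy model only: a real
# merge tool also has to detect conflicts, i.e. both sides changing the
# same assertion in different ways.

base = {("isa", "Fido", "Dog"), ("isa", "Dog", "Mammal")}
image_a = base | {("owns", "Alice", "Fido")}                             # A added a fact
image_b = (base - {("isa", "Fido", "Dog")}) | {("isa", "Fido", "Wolf")}  # B edited one

def merge(base, a, b):
    """Keep everything neither side removed, plus both sides' additions."""
    removed = (base - a) | (base - b)
    added = (a - base) | (b - base)
    return (base - removed) | added

merged = merge(base, image_a, image_b)
print(("owns", "Alice", "Fido") in merged)  # True: A's addition survives
print(("isa", "Fido", "Dog") in merged)     # False: B's removal survives
```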

There are project-specific knowledge base branches that get deployed in their
own images to customers, and specific knowledge base subsets used for
different things.

~~~
perl4ever
"certain institutions pay a high premium for that kind of thing"

Applications in litigation support/e-discovery?

------
stereolambda
Knowledge bases should work in principle. There are many issues with filling
them manually: a) the schema/ontology/conceptual framework is not guaranteed
to be useful, especially when done with no specific application in mind; b) the
high cost of adding each fact with little marginal benefit; etc. But I don't
think these outweigh the issues of "pure" machine learning that much: poor
introspection, capriciousness in what you will get, and if you want really
structured and semi-reliable information you will probably have to rely, at
some point, on something like Wikipedia meta-information (DBpedia). Which is
really a knowledge base with its own issues.

I think what really stopped Cyc from gaining wider traction is its closed
nature[0]. People do use Princeton WordNet, which you can get for free, even
though it's a mess in many respects. The issue and mentality here are similar
to commercial Common Lisp implementations, and the underlying culture is
similar (old-school 80s AI). These projects were shaped by a mindset that major
progress in computing would happen through huge government grants and plans[1].
However you interpret the last 30 years, that was not exactly true. It's
possible that all these companies earn money for their owners, but they have
no industry-wide impact.

I was half-tempted once or twice to use something like Cyc in some project,
but it would probably be too much organizational hassle. Especially if it
turned out to be something commercial I wouldn't want to be dependent on
someone's licensing and financial whims, especially if it can be avoided.

[0] There was OpenCyc for a time, but it was scrapped.

[1] Compare
[https://news.ycombinator.com/item?id=20569098](https://news.ycombinator.com/item?id=20569098)

~~~
emw
> if you want to have really structured and semi-reliable information you will
> probably have to rely, at some point, on something like Wikipedia meta-
> information (DBpedia).

Wikidata is also worth considering for that task. It is:

* Directly linked from Wikipedia [1]

* The data source for many infoboxes [2]

* Seeded with data from Wikipedia

* Supported by a more active, integrated community

* Larger in total number of concepts

Wikidata also has initiatives in lexicographic data [3] and images [4, 5].

On the subject of Cyc: the CycL "generalization" (#$genls) predicate inspired
Wikidata's "subclass of" property [6], which now links together Wikidata's
tree of knowledge.
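The transitive behavior that genls and "subclass of" give a knowledge graph can be sketched as reachability over subclass edges. The classes below are toy examples invented for illustration; real KBs store such links as triples:

```python
from collections import deque

# Each class maps to its direct superclasses (multiple parents allowed),
# mimicking CycL's #$genls / Wikidata's P279 at toy scale.
subclass_of = {
    "dog": {"mammal", "pet"},
    "mammal": {"vertebrate"},
    "pet": {"domestic animal"},
    "vertebrate": {"animal"},
    "domestic animal": {"animal"},
}

def is_subclass(sub, sup):
    """True if sup is reachable from sub by following subclass edges upward."""
    if sub == sup:
        return True
    queue, seen = deque([sub]), {sub}
    while queue:
        for parent in subclass_of.get(queue.popleft(), ()):
            if parent == sup:
                return True
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return False

print(is_subclass("dog", "animal"))  # True  (dog -> mammal -> vertebrate -> animal)
print(is_subclass("animal", "dog"))  # False (edges only point upward)
```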

\---

1\. See "Wikidata" link at left in all articles, e.g.
[https://en.wikipedia.org/wiki/Knowledge_base](https://en.wikipedia.org/wiki/Knowledge_base)

2\.
[https://en.wikipedia.org/wiki/Category:Infobox_templates_usi...](https://en.wikipedia.org/wiki/Category:Infobox_templates_using_Wikidata)

3\.
[https://www.wikidata.org/wiki/Wikidata:Lexicographical_data/...](https://www.wikidata.org/wiki/Wikidata:Lexicographical_data/Documentation#Introduction)

4\.
[https://www.wikidata.org/wiki/Wikidata:Wikimedia_Commons/Dev...](https://www.wikidata.org/wiki/Wikidata:Wikimedia_Commons/Development#Statistics)

5\. See "Structured data" tab in image details on Wikimedia Commons, e.g.
[https://commons.wikimedia.org/wiki/File:Mona_Lisa,_by_Leonar...](https://commons.wikimedia.org/wiki/File:Mona_Lisa,_by_Leonardo_da_Vinci,_from_C2RMF_retouched.jpg#ooui-
php-1)

6\.
[https://www.wikidata.org/wiki/Property_talk:P279#Archived_cr...](https://www.wikidata.org/wiki/Property_talk:P279#Archived_creation_discussion)

------
wrnr
The following utterance sort of looks like the triple data structure used in
graph/knowledge databases:

"Alice loves Bob"

What do you know? Nothing. Was it Alice who said she loves Bob, or was it Bob
who said it is Alice who loves him? Maybe Carol saw the way Alice looks at Bob
and then concluded she must love him. What is love anyway? How exactly is the
love Alice has for Bob different from my love of chocolate? It might register
similar brain activity in an MRI scan, and yet we humans recognise them as
qualitatively different.

A knowledge base is useless if you can't judge whether a fact is true or false.
The semantic web community's response to this problem was to introduce a
provenance ontology, but every attempt to reason over statements about
statements seems to go nowhere. IMHO you can't solve the problem of AGI without
also having a way for a rational agent to embody its thoughts in the physical
world.
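To make "statements about statements" concrete: reifying a triple turns the statement itself into an object that provenance can attach to, but each level of nesting multiplies the contexts an inference rule must consider. A minimal sketch, with an invented structure rather than any actual semantic-web API:

```python
# Reifying a triple so that provenance can attach to the statement itself.

from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: object
    predicate: str
    obj: object

base = Triple("Alice", "loves", "Bob")

# Statements about the statement: who asserted it, with what confidence.
meta = [
    Triple("Carol", "asserted", base),
    Triple(base, "confidence", 0.6),
]

# A triple can itself be the subject or object of another triple, so the
# structure nests without limit -- which is exactly what makes reasoning
# over it hard: each level adds contexts a rule must consider.
doubt = Triple("Dave", "doubts", meta[0])
print(doubt.obj.obj == base)  # True: Dave doubts Carol's assertion about base
```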

~~~
Jeff_Brown
Agreed. Human thinking is arbitrarily high-order -- we use statements about
statements about statements with no particular natural complexity limit. This
seems to me the big limitation of knowledge graphs: the majority of real-world
information, just like the majority of natural-language sentences, consists of
highly nested relationships among relationships.

That was my motivation for writing Hode[1], the Higher-Order Data Editor. It
lets you represent arbitrarily nested relationships, of any arity (number of
members). It lets you cursor around data to view neighboring data, and it
offers a query language that is, I believe, as close as possible to ordinary
natural language.

(Hode has no inference engine, and I don't call it an AI project -- but it
seems relevant enough to warrant a plug.)

[1]
[https://github.com/JeffreyBenjaminBrown/hode](https://github.com/JeffreyBenjaminBrown/hode)
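Hode's actual data model aside (see the repo for the real thing), the core idea of arbitrary-arity, arbitrarily-nested relationships can be approximated with plain tuples whose members may themselves be relationships. A hypothetical sketch with invented example data:

```python
# A relationship is a tuple: (relation-name, member, member, ...).
# Members may themselves be relationships, so nesting is unbounded,
# and arity is simply the number of members.

believes = ("believes", "Ann",
            ("causes",
             ("eats", "Bob", "sugar"),
             ("is", "Bob", "hyperactive")))

def depth(rel):
    """Nesting depth: 1 for a flat relationship, +1 per nested level."""
    return 1 + max((depth(m) for m in rel[1:] if isinstance(m, tuple)),
                   default=0)

def arity(rel):
    """Number of members (excluding the relation name)."""
    return len(rel) - 1

print(depth(believes))  # 3: believes > causes > eats / is
print(arity(believes))  # 2: Ann, and the nested causes-relationship
```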

------
mblackstone
Is it possible for a non-academic to get a ResearchCyc license? I’ve used
OpenCyc in the past; now that I’m retired I’d like to look deeper into Cyc.

------
jes5199
last time I looked at OpenCyc's knowledge base, the information encoded was
all strangely specific academic stuff - like very fine classifications and
relationships between species of tapeworms and of fungus. There was very
little daily-life common-sense knowledge, even though that's often the hook in
interviews and articles about Cyc's purpose. I'm not sure why that's true -
maybe it's hard to decide what the 'facts' are about normal human life, but
the more academic something is, the more there's a consensus, rationalized
'reality'.

~~~
brundolf
Employee of Cycorp here. A few thoughts:

\- At least right now, we have a good amount of common-sense information about
the world (I don't know when "last time" was for you).

\- That said, we have _a lot_ of highly specialized knowledge in various
domains, so if you took a random sample of the knowledge base (KB) it may not
be as common-sense-centric as you'd hope. But the KB is also incredibly large,
so that doesn't mean we _don't_ have much common-sense, just that we have
even more other stuff.

\- Often for contracts we get paid to construct lots of domain-specific
knowledge, even if the project also uses the more general knowledge, so this
biases the distribution some.

\- Information that's already well-taxonomized is low-hanging fruit for this
kind of system; its representation doesn't take nearly as much extra thought
and consideration, so it's a faster process, which also biases the
distribution some.

~~~
kick
OpenCyc hasn't been a thing for something like a decade, so even if "last
time" was yesterday, it'd still be on outdated information (because Cycorp
keeps things proprietary, hidden and unauditable). Do you know when they last
pushed it out? It's been a while.

~~~
_bxg1
It has indeed been a while, unfortunately. I don't know the exact date. Some
of us here are trying to push for a revival of OpenCyc or something similar,
to democratize things and get third-party developers playing with the system,
but for now OpenCyc is not really supported.

~~~
kick
Good luck! That's a really genuinely exciting prospect. I hope you succeed!

------
Tistel
I am familiar with Prolog and know (roughly) what it takes to make an old-
school expert system. I have heard about this project. Are there any demos of
the system, like a video sales pitch? I have always wanted to see it in action.

------
joveian
I can't find it, but I distinctly remember that part of an episode of 3-2-1
Contact in the 80s covered what must have been an early version of this
system. It was the exact same thing brundolf mentions* about the common-sense
system and how they set up the system to ask questions when contradictions
arose. An example they used was that it had asked whether a human is still
human when shaving. It is interesting that the system still exists.

Of course, I don't recall them mentioning any of the more dystopian things it
could be (and sounds like has been) used for :/.

* [https://news.ycombinator.com/item?id=21784105](https://news.ycombinator.com/item?id=21784105)

On second thought, it might have been an Alan Kay presentation. I couldn't
find that either but looking I did find this interesting Wired article from
2016:

[https://www.wired.com/2016/03/doug-lenat-artificial-
intellig...](https://www.wired.com/2016/03/doug-lenat-artificial-intelligence-
common-sense-engine/)

~~~
tgbugs
Amusingly, or sadly, depending on your perspective, in practical settings the
default answer to that question depends on exactly what you want "human" to
mean in the vocabulary of the local use case, because continuants are a very
leaky abstraction when used to type biological systems. While to our 'common
sense' the types of "human" and "human shaving" should obviously be the same,
once you reach questions about whether seemingly insignificant numerical
differences in rates of catalysis constitute differences in type, the
distinction between "protein" and "protein wiggling slightly faster than
usual", or "protein binding molecule a" (think "human holding shaver"),
suddenly becomes very much not obvious, depending on exactly what question you
want to answer. In the protein example, if you black-box the system, they can
be fundamentally different. And if "human" means "predator", and your question
is how dangerous this human is, then "human" and "human holding razor" become
"agentous thing" and "agentous thing with sharp-edged object": practically
different things in very important ways if you are trying not to be filleted.

------
one_electron
In my experience, most people dismiss Cyc as a failed science experiment. This
shouldn't be! After all, many important deep learning concepts have their
roots in the 80s, and it is possible that Cyc could be revived too.

~~~
jacquesm
CYC effectively died the day OpenCYC died. There is no way that an entity that
tries to catalogue human knowledge in this way will thrive on a closed set of
data, there are only so many people working there.

Just like the Encyclopaedia Britannica found its match in Wikipedia, so CYC
will find its match in something open. The engine - if the comments here are
to be believed as still currently relevant - is a possibly valuable core plus
a huge number of domain-specific hacks. Let's hope sooner or later CYC
management comes to their senses and revives OpenCYC.

~~~
rademaker
That is the SUMO ontology
([http://ontologyportal.org](http://ontologyportal.org))! It is open, on
GitHub, and people can contribute.

------
dex_tec
1) What do you think about hybrid approach: hypergraphs + large-scale NLP
models (transformers)?

2) How far are we from real self-evolving cognitive architectures with self-
awareness features? Is it a question of years, months, or is it already a
solved problem?

3) Does it make sense to use embeddings like
[https://github.com/facebookresearch/PyTorch-
BigGraph](https://github.com/facebookresearch/PyTorch-BigGraph) to achieve
better results?

4) Why did Cycorp decide to limit communication and collaboration with the
scientific community / AI enthusiasts at some point?

5) Did you try to solve GLUE / SUPERGLUE / SQUAD challenges with your system?

6) Does Douglas Lenat still contribute actively to the project?

Thanks

~~~
choamnomsky
Doug Lenat is very much still active in the project. He doesn't do as much
work building the ontology, but he plays a role in how various projects
develop and provides a lot of feedback.

~~~
The_rationalist
How do you compare with SOAR and opencog/atomspace?

Which is the most promising AGI project, according to you?

------
musicale
I've always thought that being able to model the physical world at multiple
levels of abstraction was pretty essential for trying to interact with it in a
less brittle way.

Moreover, having models of things that are interesting and relevant to humans
seems pretty important for any system that interacts with humans.

And it always seemed reasonable that any system that aims to use natural
language should be able to represent the meaning of the sentences it uses in a
clear and understandable format.

Also "organizing the world's information" should make it usable in an
automated fashion based on semantic models.

------
gavanwoolery
I immediately recognized the headline even though it's been 15 years since I
last read up on Cyc.

I still think the potential of lambda calculus in knowledge representation and
logical deduction is high and under-represented in research.

Just theorizing, but I think a large part of the problem is the difficulty in
interfacing this knowledge base with manual, human entry. Another pitfall is
the difficulty in determining strange or unanticipated logical outcomes, and
developing a framework to catch or validate these.

~~~
rademaker
I have been working on that direction with Lean Theorem Prover
([https://leanprover.github.io](https://leanprover.github.io)). There is also
work using Coq
([https://link.springer.com/chapter/10.1007/978-3-642-35786-2_...](https://link.springer.com/chapter/10.1007/978-3-642-35786-2_11))

------
PeterStuer
Cyc was the last holdout of GOFAI in the 90's, its premise being that the
traditional symbolic AI paradigm wasn't wrong, but that it was just a matter
of scale.

~~~
The_rationalist
Almost all AGI projects use symbolic AI. It is a misconception to believe that
connectionism has won; it only leads in narrow tasks, which help to build the
higher-level thing that is AGI.

------
spirographer
My 2 cents is that I can ask just about any question I can think of and absorb
and internalize an amazing answer in 5 minutes of reading. Many of those same
questions can be automatically asked and answered too. The web and search
engines are realizing the promise far better than anything else.

~~~
Jeff_Brown
God I wish I felt this way. It's true that I can get an amazing amount of
information in a short time, but I don't know how accurate or complete it is,
and I don't even know how to find out.

------
earenndil
How does this compare to wolfram alpha?

------
MauiWarrior
I used to read about cyc here [https://www.cyc.com/cycl-
translations](https://www.cyc.com/cycl-translations). But it says now "coming
soon". Since we have folks from Cyc here, any ideas how soon?

------
coldcode
While this approach might seem dated and strange, at some point something will
begin to approach the ability to do general learning like a human. I just
wonder how long we have to wait.

~~~
yters
What if general learning is uncomputable?

~~~
Rerarom
General learning is uncomputable; it's called Solomonoff induction. You don't
need general learning, you need something at least as powerful as the mess in
a human brain.

~~~
lorepieri
Can you provide some references on "General learning is uncomputable"? Thanks.

~~~
Rerarom
[https://en.wikipedia.org/wiki/AIXI](https://en.wikipedia.org/wiki/AIXI)

------
doctorphil
For me it would be helpful to see some more examples of how to formulate a
query and what the reasoning would look like. Could someone share some
examples of "common knowledge" that they think are cool?

Here are some common-knowledge questions in English that I would love to see
the system answer.

\- Is a dog owner likely to own a rope-like object? (Yes, they likely own a
leash.)

\- Does the average North American own more than 1 shoestring? (Yes, most
people have at least 2 shoes, and most shoes have shoestrings.)

\- Is it safer to fly or to travel by car?
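Questions like the first two can be sketched as forward chaining over hedged rules. All facts, rules, and the "likely" qualifier below are invented toy examples; a real system would also record the justification for each conclusion:

```python
# Naive forward chaining: apply rules until no new facts appear.
# Facts and rules are toy examples invented for illustration.

facts = {("owns", "Pat", "dog")}

# Each rule: (premise over the fact set, conclusion to add when it holds).
rules = [
    # Dog owners likely own a leash.
    (lambda fs: ("owns", "Pat", "dog") in fs,
     ("likely-owns", "Pat", "leash")),
    # A leash is a rope-like object.
    (lambda fs: ("likely-owns", "Pat", "leash") in fs,
     ("likely-owns", "Pat", "rope-like object")),
]

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise(facts) and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(("likely-owns", "Pat", "rope-like object") in facts)  # True
```

The shoestring question would chain similarly (people own shoes, shoes usually have shoestrings), with some counting layered on top.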

------
vsskanth
Is it possible that the inherent context-dependent ambiguities of human
language make knowledge-based inference so difficult, since most current
knowledge is stored in human language?

Tangential question: is there a standard language for "knowledge", like how we
use math for "computation"?

Is a part of our brains essentially a compiler from human language to an
internal representation of "knowledge" that leads to consciousness?

~~~
sp332
There are a bunch of "standards" for representing knowledge. E.g.
[https://en.wikipedia.org/wiki/Semantic_Web](https://en.wikipedia.org/wiki/Semantic_Web)

[Edit] Here's a wider overview:
[https://en.wikipedia.org/wiki/Knowledge_representation_and_r...](https://en.wikipedia.org/wiki/Knowledge_representation_and_reasoning)

~~~
vsskanth
Thanks for the link. It seems to describe knowledge-graph-style links between
entities. These are, however, in a human language (here, English). I am
interested in knowing if there's something analogous to "math" for
representing knowledge.

~~~
sp332
The knowledge is represented in the links. Words don't inherently mean
anything. The meaning of a word is how it relates to other words. The "math"
of knowledge representation is in manipulating and searching the graph. The
nodes can be named anything, because the names aren't knowledge, they're just
tags on parts of the knowledge.
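The claim that names are just tags can be demonstrated directly: rename every node in a graph and queries still give the same answers, because the knowledge lives in the edge structure. The graph and labels below are invented:

```python
# A tiny knowledge graph as adjacency sets. The knowledge is in the edges;
# node names are arbitrary tags.
graph = {"dog": {"mammal"}, "mammal": {"animal"}, "animal": set()}

def reachable(g, a, b):
    """Is b reachable from a (or equal to a)? Simple frontier expansion."""
    frontier, seen = {a}, {a}
    while frontier:
        frontier = {n for m in frontier for n in g[m]} - seen
        seen |= frontier
    return b in seen

# Rename every node to an opaque tag; structure (and answers) survive.
tags = {name: f"node{i}" for i, name in enumerate(graph)}
renamed = {tags[k]: {tags[v] for v in vs} for k, vs in graph.items()}

print(reachable(graph, "dog", "animal"))                # True
print(reachable(renamed, tags["dog"], tags["animal"]))  # True, same structure
```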

------
lonelappde
> require between 1000 and 3000 person-years [to input all the relevant facts
> in the world]

Which is laughably small in retrospect. I wonder what current estimates are.

------
jsonbourne
[http://conceptnet.io/](http://conceptnet.io/)

------
et2o
This Wikipedia article is clearly not neutral in tone. It reads like the CEO
wrote it.

------
yters
Why is there never any fundamental research into whether human intelligence is
even computable? All these huge, expensive projects are based on an untested
premise.

~~~
radeklew
Why wouldn't it be? It seems to me that at worst we would have to wait for
computers to become as powerful and complex as a human brain, and then
simulating human intelligence would be a matter of accurately modelling the
connections.

Is there doubt as to whether a neuron can be represented computationally?

~~~
yters
The mind may be nonphysical.

~~~
13415
That's one position, but there are three problems with it:

1\. You have to solve the interaction problem (how does the mind interact with
the physical world?)

2\. You need to explain why the world is not physically closed without
blatantly violating physical theory / natural laws.

3\. From the fact that the mind is nonphysical, it does not follow that
computationalism is false. On the contrary, I'd say that computationalism is
still the best explanation of how human thinking works even for a dualist.
(All the alternatives are quite mystical, except maybe for
hypercomputationalism.)

~~~
yters
1\. No I don't. I don't have to explain how gravity works to know that it does
and make scientific claims about its operation. Likewise, I can scientifically
demonstrate the mind is nonphysical and interacts with our physical world
without explaining how.

2\. If the world is not physically closed then physical theory and natural
laws are not violated, since they would not apply to anything beyond the
physical world.

3\. True, but if the mind can be shown to perform physically uncomputable
tasks, then we can infer the mind is not physical. In which case we can also
apply Occam's razor and infer the mind is doing something uncomputable as
opposed to having access to vast immaterial computational resources.

Finally, calling a position names, such as 'mystical', does nothing to
determine the veracity of the position. At best it is counter productive by
distracting from the logic of the argument.

~~~
13415
I wasn't trying to argue with you, I merely laid out what is commonly thought
about the subject matter. Sorry if that sounds patronizing (it's really not
meant to). Anyway, if you want to publish a paper defending a dualist position
nowadays in any reputable journal, you'll have to address points 1&2 in one
way or another, whether you believe you have to or not. It's not as if that
problem hadn't been discussed during the past 60 years or so. There are whole
journals dedicated to it.

> _if the mind can be shown to perform physically uncomputable tasks_

That's true. Many people have tried that and many people _believe_ they can
show it. Roger Penrose, for example. These arguments are usually based on
complexity theory or the Halting Problem and involve certain views about what
mathematicians can and cannot do. As I've said, I've personally not been
convinced by any of those arguments.

Your mileage may differ. Fair enough. Just make sure that you do not "know the
answer" already when starting to think about the problem, because that's what
many people seem to do when they think about these kinds of problems, and it's
a pity.

> _calling a position names, such as 'mystical', does nothing to determine the
> veracity of the position. At best it is counter productive by distracting
> from the logic of the argument._

That wasn't my intention, I use "mystical" in this context in the sense of
"does not provide any better understanding or scientifically acceptable
explanation." Many of the (modern) arguments in this area are inferences to
the best explanation.

By the way, correctly formulated computationalism does not presume
physicalism. It is fully compatible with dualism.

~~~
yters
Yes, I understand computationalism does not imply physicalism, but physicalism
does imply computationalism. Thus, if computationalism is empirically refuted,
then physicalism is false.

I know the Lucas-style Gödel incompleteness arguments. Whether successful or
not, the counterarguments are certainly fallacious. E.g. just because I form a
halting problem for myself does not mean I am not a halting oracle for
uncomputable problems.

But I have developed a more empirical approach, something that can be solved
by the average person, without dealing with whether they can find the Gödel
sentence for a logic system.

Also, there is a lot of interesting research showing that humans are very
effective at approximating solutions to NP complete problems, apparently
better than the best known algorithms. While not conclusive proof in itself,
such examples are very surprising if there is nothing super computational
about the human mind, and less so if there is.

At any rate, there are a number of lines of evidence I'm aware of that makes
the uncomputable mind a much more plausible explanation for what we see humans
do, ignoring the whole problem of consciousness. I'm just concerned with
empirical results, not philosophy or math. As such, I don't really care what
some journal's idea of the burden of proof is. I care about making discoveries
and moving our scientific knowledge and technology forward.

Additionally, this is not some academic speculation. If the uncomputable mind
thesis is true, then there are technological gains to be made, such as through
human in the loop approaches to computation. Arguably, that is where all the
successful AI and ML is going these days, so that serves as yet one more line
of evidence for the uncomputable mind thesis.

~~~
inimino
> physicalism does imply computationalism

That's not true either.

There are plenty of materialists who think the universe is not computable,
thus it's totally possible to believe that the mind is not computable despite
being entirely physical.

~~~
yters
It's possible, so I should qualify it: our current understanding of physics
implies computationalism.

So, if a macro-level phenomenon, i.e. the human mind, is uncomputable, then it
is not emergent from the low-level, computable physical substrate.

~~~
inimino
If the mind were found to be uncomputable, I think you'd find vastly more
physicists would take that as evidence the universe is uncomputable than that
the mind is nonphysical.

~~~
yters
So they may, but that would not follow logically. If the lowest level of
physics is all computable then the higher physical levels must also be
computable. Thus, if a higher level is not computable, it is not physical. We
have never found anything at the lowest level that is not computable. None of
it is even at the level of a Turing machine, unlike human-produced computers.

~~~
inimino
Any chaotic system (highly sensitive to initial conditions) is practically
uncomputable for us, because we have neither the computational power nor the
ability to measure the initial conditions sufficiently accurately. Whether
there is some lowest level at which everything is quantized, or it's real
numbers all the way down, is an open question.
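A quick way to see this practical uncomputability: under the logistic map with r = 4 (a standard textbook chaotic system), two trajectories starting 10^-12 apart diverge roughly exponentially, so predicting far ahead would require absurdly precise knowledge of the initial condition:

```python
# Logistic map x -> 4x(1-x). A perturbation of 1e-12 in the initial
# condition roughly doubles each step (the map's Lyapunov exponent is
# ln 2), so within a few dozen steps the two trajectories are
# macroscopically different and long-range prediction is hopeless.

def logistic(x, steps):
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

a, b = 0.3, 0.3 + 1e-12
for n in (0, 10, 30, 50):
    print(n, abs(logistic(a, n) - logistic(b, n)))
```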

I don't think your argument will seem compelling to anyone who doesn't already
have a strong prior belief that the mind is non-physical.

~~~
yters
I would argue it is the other way around. If people were truly unbiased about
whether we are computable or not, they would give my argument consideration.
It is those with an a priori computational bias who will not be fazed by what
I say.

~~~
inimino
You're right, but people tend to have strong priors one way or the other,
often unconsciously. This is one of those classic cases where people with
strong, divergent priors will disagree more strongly after seeing the same
evidence. So if you want to convince people you'll have to try harder than
most to find common ground.

~~~
yters
And that's why I'm not concerned with convincing anyone. The proof is in the
pudding. If I'm right, I should be able to get results. If not, then my
argument doesn't matter.

------
vanniv
I can't believe this is still a thing.

Then again, I felt the same way when I studied it at university almost 20
years ago. It was pretty obviously a pipe dream then, too.

~~~
magnifique
I thought so on first glance too, but from what I've heard from someone who
worked there, it's working software and it makes a lot of money. That's the
opposite of a pipe dream, even if far short of AGI.

~~~
yters
Is it making real money, or speculative money on the moon shot premise that
AGI will rule it all if successful?

I worked at an AI company before, and it was the latter.

~~~
aidenn0
My understanding is that it's mostly the latter, but definitely also some of
the former; that's based off of "I worked there, but our customer list isn't
public so I can't tell you who" type statements like you'll see elsewhere.

If I had to guess what it's been actually used for, I'd wager it's money
laundering or counter-terrorism type stuff; it's fairly well suited to finding
connections between people and entities given a large data-set, and unlike
many ML models, it can tell you why it thinks someone is suspicious, which
might be needed to justify further investigation. This is a completely wild-
ass guess, though, so take it with a giant grain of salt.

~~~
yters
Yes, that was the sort of use case at the AI company I worked for: similar
data mining, competing with Palantir.

------
lidHanteyk
I think that this particular topic is evergreen because people are perennially
surprised that this technology, which seems so reasonable and advanced at
first blush, has failed to be useful in practice.

~~~
xamuel
I spent a year doing an ontology postdoc. I can't speak for Cyc, but from what
I saw of the ontology world, there are a lot of charlatans and people who are
using it as a buzzword to make grant proposals sexier. Whether or not there's
real potential in the technology, that kind of environment surely isn't
conducive to achieving said potential. An outsider trying to peek into the
field is immediately swamped by vast oceans of garbage, and everyone in the
field is an expert at marketing themselves, so if there are legitimate
research gems in the field, I don't know how you'd actually find them amid all
the noise.

~~~
Jeff_Brown
This supply-side opacity is definitely a problem. There seems to be a
corresponding demand-side problem, that clients often don't know quite what
they want. If there were an easy way of generating hard tests with clear-cut
answers, maybe there would be an easy way for a winner to distinguish
themselves.

