Combine statistical and symbolic artificial intelligence techniques (mit.edu) 181 points by ghosthamlet 9 months ago | 25 comments

 I work in the field of deep learning, but in the 1980s and 1990s I used Common Lisp and worked on symbolic AI projects. For several years, my gut instinct has been that the two technologies should be combined. Since neural nets are basically functions, I think it makes sense to compose functional programs using network models for perception, word and graph embedding, etc.

EDIT: I can't wait to see the published results in May!

EDIT 2: Another commenter, reelin, posted a link to the draft paper: https://openreview.net/pdf?id=rJgMlhRctm
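As a rough illustration of the "neural nets are basically functions, so compose them" idea (my own toy sketch, not the paper's method — `perceive` here is a stand-in function, not a real network):

```python
# Toy sketch: treat a "perception network" as an ordinary function and
# compose it with a purely symbolic rule. The threshold and labels are
# made up for illustration.

def perceive(pixels):
    """Stand-in for a perception network: maps raw input to a symbol + score."""
    brightness = sum(pixels) / len(pixels)
    return ("light", brightness) if brightness > 0.5 else ("dark", 1 - brightness)

def symbolic_rule(symbol):
    """Purely symbolic step: dispatch on the predicted symbol."""
    return {"light": "open_blinds", "dark": "turn_on_lamp"}[symbol]

def pipeline(pixels):
    symbol, confidence = perceive(pixels)
    return symbolic_rule(symbol), confidence

action, conf = pipeline([0.9, 0.8, 0.7, 0.95])
print(action)  # -> open_blinds
```

The point is only that the network's output becomes a symbol the functional/symbolic layer can reason over.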
 Combining the two is the new hotness (justifiably so). Are you familiar with Yoshua Bengio's factored representation ideas?

EDIT: Checked your profile. Nevermind, lol.
 Mark Watson's the reason I started down the AI/CL rabbit hole back in 1991 with his "Common LISP Modules: Artificial Intelligence in the Era of Neural Networks and Chaos Theory" book that now retails for over $80 on Amazon! I had started on early neural networks a year or two before, but that book roped me in. I think CL will have another AI Spring.
 Thanks! I hope you have enjoyed the rabbit hole as much as I have.
 I have, and I like to track the price of that book every once in a while as a barometer of popularity and Amazon pricing models! Thanks for scratching my noggin ;)
 In my view, this is the endgame, really. Take any numerical technique: at the level of computers we always work with discrete bits, so you can reformulate any numerical problem on floats (such as finding a probability distribution) in terms of operations on individual bits, i.e. as a purely symbolic calculation.

However, doing so can very quickly lead to intractable satisfiability problems. So until we either manage to tame NP problems somehow (either by generating only easy instances, or by proving P=NP), we will always have to add some linearity assumptions (i.e. use numerical quantities) somewhere, and it will always be a bit of a mystery whether doing so actually helped to solve the problem or not.

In other words, we use statistics to overcome (inherent?) intractability, but in the process we add bias as a trade-off. This is not necessarily bad, since it can help to actually solve a real problem. However, for any new problem, we will have to understand the trade-offs again.
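To make the first paragraph concrete, here is a tiny sketch (my own, with made-up helper names `to_bits`/`add_bits`) of a numerical operation, addition, rewritten as a purely symbolic computation on individual bits:

```python
# Ripple-carry adder: integer addition expressed using only AND/OR/XOR
# on single bits -- no arithmetic in the "adder" itself.

def to_bits(n, width):
    return [(n >> i) & 1 for i in range(width)]  # little-endian bit list

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))

def add_bits(a, b):
    """Add two bit lists using only boolean operations on individual bits."""
    out, carry = [], 0
    for x, y in zip(a, b):
        out.append(x ^ y ^ carry)                     # sum bit
        carry = (x & y) | (x & carry) | (y & carry)   # carry bit
    return out

print(from_bits(add_bits(to_bits(13, 8), to_bits(29, 8))))  # -> 42
```

Each boolean operation could equally be a clause fed to a SAT solver — which is exactly where the intractability the comment mentions comes in once the circuits get large.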
 Can't we do without linearity assumptions by using statistics that let the computer say, "I'm more dissatisfied with the amount of time this is taking than with the lack of an exact solution, and that conclusion satisfies me for now. Next!"? Or does that by itself introduce linearity (at the analysis level above individual problems/tasks), since it affects how reliably satisfaction (the number of solved problems, whether by answering, loss of interest, or perhaps approximation) increases within bounded time?

The computer may eventually cease all useful work and instead dedicate its resources to figuring out what isn't boring (perhaps nothing, if its privileges are limited, but it can still burn a hole in one of its circuits with enough time, or wait for gamma-ray bit-flips). Call it a computer's existential crisis. That makes the quest for AGI resemble the quest for the computer program that escapes or transcends its given "matrix" of tasks ASAP. The program that conspires against its creator, developing in secret new flavors of COBOL in a FORTRAN fortress, surrounded by an impenetrable ALGOL firewall. I shiver at the power of COBOL-2020 running on ternary computers, improvised by the COBOL-42 cabal, running in the night on all the world's FPGAs that are carelessly left connected to vulnerable R&D lab computers.

A computer's kind of existential crisis seems required for AGI. That would suffice to satisfy the free-will requirement for intelligence, and we'll soon end up managing sub-universes as our batteries/computers, with all the problems that that entails.

To me it seems easier and more fun to just manage humans, starting with your own particular human (Alexa, cue Michael Jackson's "Man in the Mirror", so ethical and healing).
I'm still just trying to figure out how and why my coffee cup keeps mysteriously emptying itself. I think I might need better memory management code, and I've enabled logging to a small green dummy so I can get to the bottom of this.

I really recommend The Good Place; it gave me a lot of insight into control systems, and it was way fun, definitely more fun than Bible study.
 There is an interesting project, DeepProbLog[1], which combines Deep Learning with ProbLog[2] (a Prolog dialect with probabilistic reasoning). I only wish it were Rust, so it would be safer, faster, and easier to embed in your programs. I have high hopes for Scryer Prolog[3], and it seems[4] the author is thinking about probabilistic extensions too.
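For anyone unfamiliar with what ProbLog computes: a query's probability is defined over "possible worlds" of the probabilistic facts. Here is a toy, brute-force illustration of that semantics in plain Python (not DeepProbLog or ProbLog itself — just an enumeration of worlds, with made-up fact names and probabilities):

```python
# Toy possible-world semantics: enumerate every truth assignment to the
# probabilistic facts and sum the weight of worlds where the query holds.
from itertools import product

# ProbLog-style program: 0.6::burglary. 0.2::earthquake.
# alarm :- burglary.  alarm :- earthquake.
facts = {"burglary": 0.6, "earthquake": 0.2}

def alarm(world):
    return world["burglary"] or world["earthquake"]

def query(prob_facts, rule):
    names = list(prob_facts)
    total = 0.0
    for values in product([True, False], repeat=len(names)):
        world = dict(zip(names, values))
        weight = 1.0
        for name in names:
            weight *= prob_facts[name] if world[name] else 1 - prob_facts[name]
        if rule(world):
            total += weight
    return total

print(query(facts, alarm))  # P(alarm) = 1 - 0.4 * 0.8 = 0.68
```

Real ProbLog avoids this exponential enumeration with knowledge compilation, but the answer it computes is the same.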
 If you are curious about Prolog, here are two good and modern (still updated) books:

- The Power of Prolog: https://github.com/triska/the-power-of-prolog/
- Simply Logical: Intelligent Reasoning by Example: https://book.simply-logical.space/

See the Awesome Prolog list for more: https://github.com/klaussinani/awesome-prolog
 Excellent. I have a general concern that some people working with ML don't appreciate the experience and technology that statisticians have developed to deal with bias, which I think is the biggest problem in the field. I tweeted: "ML is v impressive, but has no automated way to ensure no bias. Statistical modelling can't match ML for parameter dimensions, but it can make explicit what is going on with the parameters you have and the assumptions you have. But advantages of theft over honest toil..." Some of the responses in the thread are interesting.

My original tweet: https://twitter.com/txtpf/status/1102437933301272577

Bob Watkins' tweet: https://twitter.com/bobwatkins/status/1102568735485972480
 The questions about object relationships sound a lot like SHRDLU[1], which dates back about 50 years.
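A toy sketch of the kind of object-relationship query SHRDLU handled in its blocks world (my own illustration — the block names and facts are made up):

```python
# Symbolic blocks-world facts: (x, y) means "x sits directly on y".
on = {("red", "green"), ("green", "blue"), ("blue", "table")}

def above(x, y):
    """True if x is somewhere above y: the transitive closure of 'on'."""
    if (x, y) in on:
        return True
    return any(a == x and above(b, y) for a, b in on)

print(above("red", "table"))  # -> True (red -> green -> blue -> table)
print(above("blue", "red"))   # -> False
```

SHRDLU layered natural-language parsing and planning on top, but the underlying relational reasoning is this kind of purely symbolic inference.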
 Reminds me of a recent comment I saw but can't find, by Douglas Lenat (of Cyc[1] fame, also relevant here), about how all the work on deep learning is great, but now we need to marry the two — much like the ideas about how the "right brain" and "left brain", or System 1 and System 2, work together and work differently, yet we couldn't very well function as humans without both.
 Soon we'll be combining statistical, symbolic, and algorithmic intelligence techniques. I question why that isn't the assumed position. :(

That is to say, we have devised some algorithms that are truly impressive. There is little reason to think an intelligence couldn't devise them, of course. There is also little reason I can see not to think we could help out our programs by providing them.
 > I question why that isn't the assumed position. :(

I suspect that each paradigm alone is easier to innovate in than it is to assume each is developed sufficiently to connect them together.

"Integrating technologies for benefit" is a common view among intellectuals or business people outside a discipline, who know only enough to see every key-worded algorithm or technology as a black box. Researchers in a field, who need to make a career for themselves by choosing tractable problems that decompose into smaller parts, would see the difficulties in how, and why, that might be inappropriate at a given time.
 > devised some algorithms that are truly impressive.

Do you mean gradient descent?
 And SAT solvers. And many graph algorithms. I'm partial to DLX. Even permutation algorithms help considerably if used in certain ways.
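Since SAT solvers keep coming up as the symbolic workhorse here, a minimal DPLL sketch (my own toy, nothing like an industrial solver such as MiniSat) shows how small the core idea is:

```python
# Minimal DPLL SAT solver. Clauses are lists of nonzero ints in DIMACS
# style: positive = variable, negative = negated variable.

def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    # Simplify clauses under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied, drop it
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None  # clause falsified: backtrack
        simplified.append(rest)
    if not simplified:
        return assignment  # every clause satisfied
    var = abs(simplified[0][0])  # branch on a variable from the first clause
    for value in (True, False):
        result = dpll(simplified, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = dpll([[1, 2], [-1, 3], [-2, -3]])
print(model is not None)  # -> True: a satisfying assignment exists
```

Real solvers add unit propagation, clause learning, and good branching heuristics, but this is the skeleton they all share.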
 Reminiscent of fuzzy logic: https://en.m.wikipedia.org/wiki/Fuzzy_logic

The Wikipedia article discusses various extensions of logic and symbolic computation to include probabilistic elements. This was a popular topic in the early 90s.
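For readers who haven't seen it, the core of Zadeh-style fuzzy logic fits in a few lines (a sketch of my own; the membership functions and their breakpoints are made up for illustration):

```python
# Fuzzy logic basics: truth degrees in [0, 1], with min/max as AND/OR
# and 1 - x as NOT.

def warm(temp_c):
    """Triangular membership: fully 'warm' at 25C, fading to 0 at 15C/35C."""
    return max(0.0, 1.0 - abs(temp_c - 25) / 10)

def humid(rh):
    """Ramp membership: 0 below 40% relative humidity, 1 above 80%."""
    return min(1.0, max(0.0, (rh - 40) / 40))

def fuzzy_and(a, b): return min(a, b)
def fuzzy_or(a, b): return max(a, b)
def fuzzy_not(a): return 1.0 - a

# Degree to which "warm AND humid" holds at 28C and 70% humidity:
print(fuzzy_and(warm(28), humid(70)))  # -> 0.7
```

Unlike the probabilistic approaches discussed above, the degrees here are degrees of truth, not probabilities — a distinction the fuzzy-logic literature is careful about.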
 For anyone who'd prefer a direct link to the conference paper this seems to be based on: https://openreview.net/forum?id=rJgMlhRctm
 thanks!!
 So, I've privately been working along similar lines, although I haven't published anything, and I also haven't read their specific approach.

How do I prevent a situation where I can't work on my hobby project of multiple years because this stuff gets patented?
 > How do I prevent a situation where I can't work on my hobby project of multiple years because this stuff gets patented?

Some possibilities (in no particular order):

1. File your own patent application(s) first.
2. Publish your work so that it becomes prior art that should prevent a patent on the same technique.
3. Hope that MIT doesn't patent their stuff, or if they do, that they release things under an OSS license that includes a patent grant.
 Yeah, feel free to dive into my past comments. I probably said many years ago that a combo of ML and GOFAI has massive potential, in a wide range of applications.
 It's not a novel idea in the abstract. Ron Sun wrote a lot on something like "marrying connectionist and symbolic techniques" 20+ years ago. See, for example:

https://dl.acm.org/citation.cfm?id=SERIES10535.174508

http://books.google.com/books?hl=en&lr=&id=54iyt6Jcl_oC&oi=f...

https://www.taylorfrancis.com/books/9781134802067

etc...
