
We are chasing the wrong things. Our conceptualization of the problem domain is fundamentally insufficient. Even if we took our current state of the art and scaled it up 1,000,000x, we would still be missing entire aspects of intelligence.

The AI revolution will very likely require a fundamental reset of our understanding of the problem domain. We need to find a way to attack the problem such that we can incrementally scale all aspects of intelligence.

The only paradigm I am aware of that seems to hint at parts of this incremental-intelligence concept is the relational calculus (the foundation of SQL). If you think very abstractly about what a relational modeling paradigm accomplishes, it might be able to provide the foundation for a very powerful artificial intelligence. Assuming your domain data is perfectly normalized, SQL is capable of exploring the global space of functions as they pertain to the types. This declarative+functional+relational interface into arbitrary datasets would be an excellent "lower brain", providing a persistence & functional layer. Then you could throw a neural network on top of this to provide DSP capabilities in and out (ML is just fancy multidimensional DSP).

If you know SQL you can do a lot of damage. Even if you aren't a data scientist and don't have a farm of Nvidia GPUs, you can still write ridiculously powerful queries against domain data and receive powerful output almost instantaneously. The devil is in the modeling details: you need to normalize everything very strictly. A go/no-go decision derived from 20-30 dimensions of data can be written in about the same number of lines of SQL, if the schema is good. How hard would this be on the best-case ML setup? Why can't we just make the ML write the SQL? How hard would it be for this arrangement to alter its own schema over time autonomously?
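
As a minimal sketch of what that kind of query might look like (the schema, tables, and thresholds here are invented for illustration, not from any real system):

    -- Hypothetical normalized schema: applicant(id),
    -- income(applicant_id, amount),
    -- credit_event(applicant_id, severity, occurred_at).
    WITH inc AS (  -- dimension 1: total income per applicant
        SELECT applicant_id, SUM(amount) AS total_income
        FROM income
        GROUP BY applicant_id
    ),
    ce AS (        -- dimensions 2-3: recent credit events and worst severity
        SELECT applicant_id,
               COUNT(*)      AS recent_events,
               MAX(severity) AS worst_severity
        FROM credit_event
        WHERE occurred_at > CURRENT_DATE - INTERVAL '2' YEAR
        GROUP BY applicant_id
    )
    SELECT a.id,
           CASE
               WHEN COALESCE(inc.total_income, 0)  >= 50000
                AND COALESCE(ce.recent_events, 0)  <= 2
                AND COALESCE(ce.worst_severity, 0) <  3
               THEN 'go' ELSE 'no-go'
           END AS decision
    FROM applicant a
    LEFT JOIN inc ON inc.applicant_id = a.id
    LEFT JOIN ce  ON ce.applicant_id  = a.id;

Each additional dimension is roughly one more derived column and one more predicate, which is the sense in which the line count scales with the number of dimensions.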




You're talking about logic - SQL is basically a "logic language"; it's just not entirely evident on the surface.
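
To make that concrete, here is the classic logic-programming example of transitive closure (ancestor/2 in Datalog) expressed as a recursive SQL query; the parent(par, child) table is invented for illustration:

    -- Datalog:  ancestor(X,Y) :- parent(X,Y).
    --           ancestor(X,Y) :- parent(X,Z), ancestor(Z,Y).
    WITH RECURSIVE ancestor(par, descendant) AS (
        SELECT par, child FROM parent      -- base rule
        UNION
        SELECT p.par, a.descendant         -- recursive rule
        FROM parent p
        JOIN ancestor a ON a.par = p.child
    )
    SELECT * FROM ancestor;

The two Datalog rules map one-to-one onto the two branches of the UNION.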

Logic programming was the dominant AI paradigm for much of the 20th century, and it has since fallen out of favor.

Many people have talked about combining the neural net/extrapolation/brute-force approach with the logic approach. That hasn't borne fruit yet, but who knows.


There is somewhat renewed interest in hybrid approaches. See, for example, DeepProbLog [1][2][3] - a combination of deep learning and probabilistic logic.

[1] https://arxiv.org/abs/1805.10872

[2] https://arxiv.org/abs/1907.08194

[3] https://bitbucket.org/problog/deepproblog/src/master/


I don't think there has ever not been interest in hybrid approaches - each time I've looked over the past ten or more years, there has been at least one hybrid thing around (Neural Turing Machines come to mind). I think the problem is that no one has figured out a way to make them "work".

Or rather, it's not even that they don't function, but that you need a way to demonstrate that such things are "really good" - that they solve real problems that neither "business logic systems" (the actually-existing remnant of GOFAI) nor neural networks can solve. And the key advantage both logic systems and neural networks have is that they are pretty standardized: logic systems are like regular programming, and neural networks have a well-understood train/test/verify cycle (and even with that, they're probably overused/misused at this point, given the hype).


BPE (byte-pair encoding) as used in NLP might count as a successful hybrid approach. Maybe the hybrid approaches that work out will be really special-purpose like that for a while.


Bullshit. Write me a better cat detector in SQL, or a protein folder, or a super-human board game player. And, by the way, ML does write SQL [1].

Deep learning has all these disadvantages and difficulties because we moved the goalposts too many times and now want so much more out of it than regular software. A model has to be accurate, but also unbiased, updated in a timely manner, explainable, and verified in detail; it also has to be efficient in terms of energy, data, memory, time, and reuse in related tasks (fine-tuning).

We already want from an AI what even an average human can't do, especially the bias part - all humans are biased, but models must be better. And recently models have been made responsible for fixing societal issues as well, so they've become a political battleground for factions with different values - see the recent SJW scandals at Google.

With such expectations it's easy to pile on ML, but ML is just a tool under active development, with practical limitations, while human problems and expectations are unbounded.

[1] a neural SQL patent: https://patentimages.storage.googleapis.com/af/78/be/92ee342...


> We are chasing the wrong things...

It isn't obvious we are chasing anything. The graphics card industry was chasing more performance and the field of AI research was pushed along by that.

The field is full of smart people doing impressive work, but there haven't been any fundamental breakthroughs that aren't hardware-driven.


It's not all just because we have more compute. Algorithms have also gotten better and more efficient, by a factor of roughly 10-50x.

https://openai.com/blog/ai-and-efficiency/


If you follow John Carmack’s foray into AI then it appears he’s also approaching it from the “scaling compute” perspective.


I think the inhibitor to scale is actually model compression. You’re right that scaling up 1Mx won’t cut it. That’s because fidelity is still too high. We already know the brain is a very efficient machine for storing heuristics and compressed models, which is also related to why it’s prone to error. Information theory is the right framework here, imo. Other related concerns: hierarchical organization of information and model comparison.


> Our conceptualization of the problem domain is fundamentally insufficient.

There have been people in the AI community saying this since at least the 80s/90s (e.g. Hofstadter). It's an old idea that has been difficult to get much traction on, partially because it's a long way from applications. NNs, SVMs, etc., for all of their limitations, can draw that line pretty easily.


What is DSP?


Digital signal processing maybe?


Correct, in this context I meant Digital Signal Processing.


> We are chasing the wrong things. Our conceptualization of the problem domain is fundamentally insufficient. What we really need is GOFAI.

Back at ya mate :D.



