You are right too, of course, about the investor situation. That one is really bad because nothing says "employ a technology you don't consider a good fit for your business" like $1M...
I once had to switch to microservices prematurely due to investor pressure; the company wasn't ready, and you can imagine the rest.
Google results being as manipulated as they are these days, what is the best course/book for teaching yourself artificial intelligence? I'd like to spend a few months familiarizing myself with the concepts and writing programs with what I learn, to see if anything useful comes of it.
edit: Found this, which answers the question perfectly: https://news.ycombinator.com/item?id=15689399
Until you can answer these questions, you're not doing science. You're just throwing data at TensorFlow or Keras until you guess it's good enough. And then someone dies in a self-driving car accident because the network didn't learn that a truck crossing the road or a pedestrian was an obstacle.
ML does impressive stuff, but as several recent HN articles have pointed out, it's basically just curve fitting. And voodoo curve fitting at that.
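To make the curve-fitting point concrete, here's a minimal sketch (plain numpy; the architecture and hyperparameters are arbitrary choices of mine, not anyone's production setup). A one-hidden-layer network is nudged by gradient descent until it fits a noisy sine curve. Nothing in the loop "knows" anything about sine; it just bends a flexible parametric curve until the error is small - which is exactly the curve-fitting complaint.

    import numpy as np

    # Toy data: noisy samples of a sine curve. Everything here is illustrative.
    rng = np.random.default_rng(0)
    x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
    y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)

    # One hidden layer of tanh units; the sizes are arbitrary.
    W1 = 0.5 * rng.standard_normal((1, 16)); b1 = np.zeros(16)
    W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)

    lr, n = 0.05, len(x)
    for step in range(5000):
        h = np.tanh(x @ W1 + b1)          # forward pass
        pred = h @ W2 + b2
        err = pred - y                    # gradient of squared error, up to a constant
        dW2 = h.T @ err / n;  db2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h**2)    # tanh' = 1 - tanh^2
        dW1 = x.T @ dh / n;   db1 = dh.mean(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1    # gradient descent step
        W2 -= lr * dW2; b2 -= lr * db2

    print("final MSE:", float((err**2).mean()))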
>"hit something of a brick wall decades ago"
This is true. Why do you disagree? What improvements did "old-school, mostly symbolic AI" bring to the current field of research?
Sure, ML has failures - but those failures are in applications and fields where old-school symbolic AI can't even reasonably be applied. We have to start somewhere, and symbolic AI alone falls far short of the requirements we have today.
>"How many layers do you need and why? How many training cases do you need and why? What has the network learned and how do you know that? What important things has the network not learned? When will it fail?"
Many of these issues have been addressed in recent papers, a number of which focus specifically on understandable/explainable machine learning - an overarching topic that covers all of your questions.
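To give a flavor of what that literature does, here is one of the simplest attribution techniques, occlusion sensitivity: knock out one input at a time and watch how much the output moves. The sketch below is my own toy illustration, not from any particular paper; the model and names are made up.

    import numpy as np

    def occlusion_sensitivity(model, x, baseline=0.0):
        """Score each input feature by how much the model's output changes
        when that feature is replaced by a baseline value. `model` is any
        callable mapping a 1-D feature vector to a scalar."""
        base_out = model(x)
        scores = np.zeros_like(x)
        for i in range(len(x)):
            occluded = x.copy()
            occluded[i] = baseline
            scores[i] = abs(base_out - model(occluded))
        return scores

    # Toy "model": only the first two features actually matter.
    model = lambda v: 3.0 * v[0] - 2.0 * v[1]
    x = np.array([1.0, 1.0, 1.0, 1.0])
    print(occlusion_sensitivity(model, x))  # -> [3. 2. 0. 0.]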
>"Until you can answer these questions, you're not doing science."
So you are essentially saying a large part of CS academia is not doing "science". I'm not sure what kind of "science" you have done to make such comments, but I'm pretty sure there are plenty of researchers out there - far more expert in this field than you are - who would wholly disagree with you.
Neural nets are also showing a lot of promise for symbolic tasks like automated theorem proving. So I would predict that they eventually become a standard technique in what would otherwise be the domain of older techniques.
Considering we use neural networks to encode audio (in Opus), transcribe our speech, secure our homes and much more, this most recent AI wave has been quite productive.
It does not really translate to the final objective though.
Early programming language parsing research was the direct product of researchers working on natural language processing; in fact, the underlying grammar formalism was developed for natural languages and only later adapted and improved, as BNF, for programming languages.
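For context: BNF is nothing more than a notation for grammar rewrite rules, and each rule translates almost mechanically into parsing code. A toy sketch (the grammar and parser below are my own illustrative example, not from any particular source):

    import re

    # A tiny expression grammar in BNF; each rule becomes one function:
    #   <expr>   ::= <term> (("+" | "-") <term>)*
    #   <term>   ::= <factor> (("*" | "/") <factor>)*
    #   <factor> ::= NUMBER | "(" <expr> ")"

    def tokenize(src):
        return re.findall(r"\d+|[+\-*/()]", src)

    def parse_expr(toks):
        val = parse_term(toks)
        while toks and toks[0] in "+-":
            op = toks.pop(0)
            rhs = parse_term(toks)
            val = val + rhs if op == "+" else val - rhs
        return val

    def parse_term(toks):
        val = parse_factor(toks)
        while toks and toks[0] in "*/":
            op = toks.pop(0)
            rhs = parse_factor(toks)
            val = val * rhs if op == "*" else val / rhs
        return val

    def parse_factor(toks):
        tok = toks.pop(0)
        if tok == "(":
            val = parse_expr(toks)
            toks.pop(0)  # consume the closing ")"
            return val
        return int(tok)

    print(parse_expr(tokenize("2*(3+4)-5")))  # -> 9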
The idea of logic programming, with Prolog and friends, comes directly from AI research.
Most of the search algorithms we unknowingly use in various machines have their origins in the first AI wave.
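A* is the classic example - it came out of the Shakey robot project at SRI in the late 1960s and now quietly powers routing and pathfinding everywhere. A minimal grid version (the grid, unit costs, and heuristic here are my own toy choices):

    import heapq

    def astar(grid, start, goal):
        """Shortest path length on a 2-D grid of 0 (free) / 1 (wall),
        4-connected moves, Manhattan distance as the admissible heuristic."""
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        frontier = [(h(start), 0, start)]   # entries are (f = g + h, g, cell)
        best_g = {start: 0}
        while frontier:
            _, g, cur = heapq.heappop(frontier)
            if cur == goal:
                return g                    # cost of a shortest path
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cur[0] + dr, cur[1] + dc)
                if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                        and grid[nxt[0]][nxt[1]] == 0
                        and g + 1 < best_g.get(nxt, float("inf"))):
                    best_g[nxt] = g + 1
                    heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
        return None                         # goal unreachable

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(astar(grid, (0, 0), (2, 0)))      # -> 6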
Human-computer interaction research directly drove the development of fundamental ideas in speech synthesis, graphical user interfaces, and computer graphics.
All in all, the fields of applied AI and computing developed together, and a lot of early ideas spearheaded by AI transferred into fundamental, general-purpose computing - ideas so trivial we do not even think about them now. But they were not so trivial when they were developed, specifically for AI.
That of course went nowhere. The current AI revolution has produced a lot of tangible results but is - as far as I understand it - not much closer to AGI than the first one was. And some are - again - overpromising and under-delivering, which risks a second AI winter, though this time with less justification.
All in all, it would be nice if people would stop making these claims; it isn't helping at all.
Also, the first line of the wiki article mentions that this entry needs to be updated.
The level of funding is as hot as it’s ever been. NOAA predicts a winter with temperatures above normal. So I would go with that.
The term AI doesn't actually imply human-level intelligence.