An Epidemic of AI Misinformation (thegradient.pub)
123 points by hughzhang 4 days ago | 59 comments





Very hard to critique an article you overwhelmingly agree with!

The key point I kept picking up was the extent to which a press willing to laud a discovery was reluctant to own the climb-down.

Maybe peer review in ML journals should be tighter? If you solve a limited subset of the three-body problem you can't claim to have solved "the three-body problem", and if you apply a well-known Rubik's cube solving algorithm you didn't learn it, you had it baked in.


Typically the actual papers are reasonably honest about everything, and the misinformation comes in elsewhere, often in university press releases summarizing the results.

Do most of the hyped papers even have peer review? The field moves too fast for that; press releases are issued as soon as the first draft is on arXiv.

Apologies, but I see a lot of “trash” published on arXiv (also some really good stuff). Peer review could fix some things but would also slow some other things down.

I like to think of arxiv as the holding area for anything that COULD be peer reviewed.

Advantage is we get to see stuff now, not after it gets accepted by a conference or journal. Disadvantage, we have to filter on our own.

Twitter and sites like arxiv sanity preserver are helpful for filtering.


When I saw this item, I thought not of misinformation produced about AI, but misinformation produced by AI.

Some of the Google searches I have done recently turn up mostly automatically written articles that plug in numbers and facts but make no sense at all, because there is no real understanding. (Yes, this is probably not "real" AI, but it's the sort of thing people are and will be trying to apply AI to)

It seems to me that there is no way to get to a serene, well-functioning society that utilizes intelligent machines, because in the process of developing software that can digest and write intelligently about topics humans are interested in, we will inevitably come up with less intelligent software first, that produces plausible misinformation far cheaper than humans. And economics will drive out, is already driving out, real research and journalism. Human society would be drowned by automatically generated misinformation by the time a machine is truly intelligent, whether that is in a couple years or a decade or longer.

I'm increasingly thinking that the people who are worried about artificial intelligence ending the world as we know it have slightly missed the mark - superhuman intelligence is a singularity that we are approaching, but we are going to be ripped apart by the tides beforehand and maybe that will prevent the endpoint from ever happening.


> I see this as a version of the tragedy of the commons, in which (for example) many people overfish a particular set of waters [...] if and when the public, governments, and investment community recognize that they have been sold an unrealistic picture of AI’s strengths and weaknesses that doesn't match reality, a new AI winter may commence.

I think this is not only inevitable, but necessary! This time around it has been a lot more useful, due in no small part to the advances in hardware since the 1970s.

Unfortunately this has caused many important people to believe far too much of the hype and not see its current limitations. As a result they have started integrating it into important parts of our societies. I find this alarming, not for the reason most people find it alarming, i.e. "because it's too smart", but because it's far, far too dumb in combination with people assuming it's very smart. I think a lot of this problem stems from inappropriately anthropomorphising ML with terms like "AI" when we are nowhere near the stage where we need to have the philosophical debate about whether something is sentient or "intelligent". The ML we are doing with NNs is still at the "tiny-chunk-of-very-specifically-engineered-piece-of-brain" stage. It's important people understand this before we start integrating what are essentially basic statistical mechanisms into our societies.

For those in pursuit of better ML and things like real AI, aka AGI, I also think having the hype blow over will do more good in the form of clarity and lack of noise than it will do harm in the form of lost funding.


OpenAI hype is definitely exaggerated. Seems like there might be a connection with Elon Musk somehow because all of his projects seem to get a massive amount of hype also.

The radiology thing, is it really the case that there are no startups in that area with useful AI software? Seems like he overstated that.

Part of this is a worldview difference. Many people truly believe that AGI is just around the corner, and that even before then, the narrow AI applications will significantly alter the world as they are deployed. And since no one can actually predict the future, it's an area where it's easy to have different worldviews.

Personally I think that it's true that there is a lot of poor reporting and companies that overhype results, but it also seems like people like Gary Marcus are really not keeping up to date with the true capabilities of DL systems. If he was up to date, why would he be so pessimistic about applications like radiology? There seem to already be a lot of strong results.


I think there are certainly issues with ML in radiology (i.e. lots of publications of low-power studies or overfit models validated incorrectly), but I agree with you that Marcus was being unfair. It is in principle a straightforward problem to solve with deep learning. The biggest reasons "no actual radiologists have been replaced" are probably just political/social. Hinton cannot be blamed for those hurdles.

The funny thing for me is that if The Economist had just hacked the input into GPT-2 so that it was an ongoing conversation they would have found that it's OK at holding a conversation, better than I expected when I did so.

The conversation that I posted at:

https://medium.com/@scottlegrand/my-interview-with-the-world...

is performed in one shot. And I think it shows both the abilities and the limitations of GPT-2 and similar models. I am 100% role playing with the language model and prompting it to go in the directions it goes, but it surprised me several times, and eventually it all fell apart because I didn't perform any transfer learning on the model; I just used the raw GPT-2 XL model to measure whether further work would be worth the effort, and I would conclude that yes, it would be.

The first thing I need to do is dramatically increase the length of its input context. It's pretty good at running with an ensuing script because I suspect much of its training data was formatted that way. But since I ran out of context symbols, it suffers from several incidents of amnesia and eventually an effective multiple personality disorder. It also contradicts itself several times, but then no more than the typical thought leader or politician does IMO.

What The Economist did was effectively erase the thing's memory between questions. So they were starting fresh with each question. And I think that's why they had to do the best of 5.
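For the curious, here is roughly what I mean by keeping the memory versus erasing it, as a minimal sketch assuming the Hugging Face transformers library and the raw GPT-2 XL weights (the prompt format and sampling settings are just illustrative, not what The Economist or I actually used):

    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tok = GPT2Tokenizer.from_pretrained("gpt2-xl")
    model = GPT2LMHeadModel.from_pretrained("gpt2-xl")

    history = ""  # the accumulated interview so far

    def ask(question, max_new=150):
        global history
        history += "Q: " + question + "\nA:"
        # GPT-2 only sees ~1024 tokens, so keep only the most recent history
        ids = tok.encode(history, return_tensors="pt")[:, -(1024 - max_new):]
        out = model.generate(ids, max_length=ids.shape[1] + max_new,
                             do_sample=True, top_k=40,
                             pad_token_id=tok.eos_token_id)
        answer = tok.decode(out[0, ids.shape[1]:]).split("Q:")[0].strip()
        history += " " + answer + "\n"
        return answer

    # Asking each question against the running history is the "ongoing
    # conversation" setup; resetting history to "" before every call is
    # the memory-erased, best-of-5 setup The Economist used.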


Interesting that a few big examples of hype-driven articles came from OpenAI. This is dangerous and will lead to additional misinformation and public backlash. Scientists should be unbiased and not market-driven, but when OpenAI made these releases I shook my head along with a couple of others.

Hype lasts the next quarter but is replaced with distrust. Science is a long game of incremental discoveries. Breakthroughs usually are an understanding of some interesting outcome that needed more interpretation.


I think this is what happens when you have various stakeholders and some are driven by KPIs such as reach / views. To that end those specific stakeholders (content writers / marketing) will bias towards sensationalism. That's not to excuse the organizational culture which allows for this behavior / outcome. There does need to be course correction.

Let us remember that AI is at the center of the cloud wars. While Google, Amazon and Microsoft produce a ton of research, their sales teams need to market those R&D investments to get customers interested in their tech. We are at a stage where AI is still a buzzword for many companies; once we evolve and deploy more and more AI use cases, we will see fewer and fewer articles promising things that can't be implemented.

That's probably a part of it. Note how the Google TPUs aren't for sale. If you want them you have to use the Google Cloud. The cloud is expensive and slow ... I think everyone is shocked when they first see perf numbers coming off Azure.

I don't know if GCE is better, but the temptation to overload the hardware is always there: hardware rental is fundamentally a business with low barriers to entry. Anyone can buy some machines, bring up a Kubernetes or OpenShift cluster and start renting it out. So the big 3 are always looking for proprietary advantage and dedicated AI chips are something other firms can't easily do at the moment, making it a good source of lockin.

Do many people need it though? Deep learning is pretty useless for most business apps, unless you happen to need an image classifier or something else pre-canned. Classical ML is often sufficient, or better, human written logic. The latter can be explained, debugged, rapidly improved and in the best case requires no training data at all!


I work for a company that uses bare metal, OpenStack, Azure, AWS, and GCE.

GCE perf is significantly better (and more consistent) than Azure. Even with Windows instances. :/

But I agree that “cloud is slow” compared to bare metal. It's also more financially costly, especially for the same performance, because it is slower. But the gains in flexibility are immeasurable.


Their TPUs are for sale: https://coral.ai/products/

These are Edge TPUs, for inference, not for training

I thought a single powerful GPU would be enough for any inference?

Ugh. I dislike the author's permanent negativity, but he's right about a lot. I think it’s worth asking why people feel the need to lie about the future of AI? If they are confident about its future (and I don’t know of a fundamental reason why they would not be) then there is no reason to rush half-assed results out the door and overcompensate (like GPT-2). There are plenty of theoretical questions and answers to debate publicly instead. I don’t know if it's the fear of some impending tech recession or fear of their own incompetence.

He's also wrong about a lot. For all his insistence on accuracy, he himself is misleading or ignorant. Like his slam of the Dartmouth project - even if you knew nothing about it, all you have to do is click through to see the claims of 'solving vision' are sheer projection and urban legend. And he's happy to make up claims out of whole cloth: for example, when he says "AlphaGo works fine on a 19x19 board, but would need to be retrained to play on a rectangular board; the lack of transfer is telling.", for which he provides precisely zero evidence, he ignores the fact that AG training works fine on a mixture of board sizes and such progressive growing/curriculum training in fact seems to accelerate training (https://arxiv.org/abs/1902.10565) and that rectangular convolutions are a thing that exist, and there would be plenty of transfer if anyone tried. If no one has tried that exact thing, it's because it would be pointless and it's obvious to everyone not named 'Marcus' that it'd work fine, being rectangular doesn't mystically stop it from working anymore than using a 13x13 rather than 19x19 board makes AG-style training stop working. (This is not the first time Marcus has claimed that something didn't work; I first realized that he doesn't actually keep up with the AI literature when I pointed out to him on Twitter that plenty of knowledge graphs were in use combined with DL, and Google Search was the biggest example of this, and he had no idea what I was talking about.)

No matter what DL or DRL does, or how little his own preferred paradigm does, Marcus will never ever admit anything. AlphaGo beats humans? Well, it just copied humans. AlphaZero learns from scratch? Well, he wrote a whole paper explaining how akshully it still copies humans because the tree search encodes the rules. MuZero throws out even the tree search's knowledge of rules? Crickets and essays about 'misinformation'.


Gary Marcus' tone amounts to base trolling and it's a shame that's how he chooses to carry out his criticism. I understand your frustration.

Regarding the lack of transfer, yes, AlphaGo, AlphaZero and most of their variants have boards of fixed size and shape hard-coded in their architecture (as they have the types of piece moves hard-coded) and need architectural modifications and re-training before they can play on different boards or with different pieces (e.g. AlphaGo can't play Chess and Shogi unmodified). The KataGo paper (the paper you linked) is one exception to this. Personally, I don't know of others. Anyway, general game-playing is a hard task and nobody claims it's solved by AlphaGo.

Regarding KataGo, its main contribution is a significant reduction in the cost of training an AlphaGo variant while maintaining competitive performance. This is very promising: after Deep Blue, chess engines became cheaper and cheaper until they could run on a smartphone. We are far from that with computer Go players.

However, in the KataGo paper, major gains are claimed to come from a) game-playing specific or MCTS-specific improvements (playout cap randomisation, forced playouts and policy target pruning) or architecture-specific improvements (global pooling) or, b) domain-specific improvements (auxiliary ownership and score targets). Finally, KataGo has a few game-specific features (liberties, pass-alive regions and ladder features).

The KataGo paper itself says it very clearly. I quote, snipping for brevity:

Second, our work serves as a case study that there is still a significant efficiency gap between AlphaZero's methods and what is possible from self-play. We find nontrivial further gains from some domain-specific methods (...) We also find that a set of standard game-specific input features still significantly accelerates learning, showing that AlphaZero does not yet obsolete even simple additional tuning.

Finally, "it would obviously work so nobody tried" would make sense if it wasn't for the extremely competitive nature of machine learning research where every novel result is presented as a big breakthrough. Also, if something is obvious but never seems to make it to publication the chances are someone has tried and it didn't work as expected so they shelved the paper. We all know what happens to negative results in machine learning.


> The KataGo paper (the paper you linked) is one exception to this.

And my point is that KataGo shows that if you make the relatively minor architectural changes necessary to do this at all, it works just fine. None of the other tweaks it makes, useful as they may be, have anything to do with fixing transfer learning, because there's nothing to fix. It's pretty absurd to claim that a CNN which works fine on 19x19 will suddenly collapse and show no transfer on, say, 17x17, and KataGo demonstrates that this does not happen.
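To make that concrete, here is a tiny PyTorch sketch (nothing like AlphaGo's real network, just the structural point that a fully convolutional policy head does not care what shape the board is):

    import torch
    import torch.nn as nn

    # every layer is convolutional, so the same weights apply to any board shape
    policy = nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 1),  # one move logit per intersection
    )

    for h, w in [(19, 19), (17, 17), (19, 9)]:  # square or rectangular
        board = torch.zeros(1, 1, h, w)
        print((h, w), policy(board).shape)  # -> [1, 1, h, w], no retraining needed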

> Also, if something is obvious but never seems to make it to publication the chances are someone has tried and it didn't work as expected so they shelved the paper.

'What if Go but rectangular boards' is pretty dumb when you have chess and shogi and other domains showing that A0 works, so I feel confident that no one like DM seriously tried and simply buried their failures. (Publication bias requires there to be a literature that can be differentially published, and competition presumes the existence of >0 entities competing; there is no active field of 'rectangular Go'.)


Regarding MuZero: I confess to not having read the paper very carefully, but I am confused by its claim that the new system achieves superhuman performance without knowing the game rules.

Specifically, MuZero uses MCTS and MCTS needs to have at the very least a move generator in order to produce actions that can then be evaluated for their results. The trained MuZero model learns the transition function and evaluation function but I don't see in the paper where it learns what actions are legal in the domain. And I don't understand how any architecture could model the possible moves in a game without observing examples of external play (i.e. not self-play).

MuZero reuses the AlphaZero architecture, so most likely the moves of the pieces for Chess, Shogi and Go are hard-coded in the architecture, as they are in AlphaZero. There's also probably some similar hard-coding of Atari actions, which I'm probably missing in the paper.


> but I am confused by its claim ... without knowing the game rules.

> probably some similar hard-coding of Atari actions

Nope, no hard coding.

Consider trying to do MCTS on an Atari game. You have to "learn to predict" the <next frame, action> pairs. Initially this guess is very bad, but eventually your predictions are good enough that rolling out a tree of predictions improves your action selection.

For Go and chess, we twist ourselves into NOT using the game rules in the simulator, e.g. for each move, just indicate GAME LOSS or WIN.

Whether this paper is worthy of a new Nature hype cycle is a separate debate.


>> You have to "learn to predict" the <next frame, action> pairs.

But where do the actions come from?

For example, if I play chess, I could pick up a piece and throw it at my opponent's head. Similarly, if I play Atari I could chuck the controller at the monitor. These are actions I can perform that are available to me because of my basic human anatomy and because of the laws of physics (I can grab and throw, and a thrown object flies through the air until it hits a target or gravity wins).

In the case of MuZero, what actions can the system perform and where do they come from? I don't see where that is described in the paper.

>> For Go and chess, we twist ourselves into NOT using the game rules in the simulator, e.g. for each move, just indicate GAME LOSS or WIN.

Similarly - what determines "each move"?

EDIT: I can see in the MuZero paper that "Final outcomes {lose,draw,win} in board games are treated as rewards $u_t \in \{-1,0,+1\}$ occurring at the final step of the episode" but I also can't see where these come from, what tells the model that a loss, draw or win has occurred at the end of an episode.

I mean, if you're telling the model what actions can be performed and what the end-state values are, then what game rules are you _not_ giving to the system?


As someone patiently explained to me 2 yrs ago...

For the ATARI, the "real world" is the present frame, and a fixed set of 4 buttons and 4 directions. This of course is the game pre-programmed into the ALE ROM.

You can take any action, and get the next frame. But you can't "undo" an action, and you can't restart a game from a fixed state (see the Go-Explore controversy). And you can't explore 4 different actions in an interesting frame.

So now, if you learn a network which predicts the next frame, you can enter the world of model-based learning, where we do a simulated move tree roll-out (i.e. not calling the ATARI), try a gazillion moves, and only then select an action and get the next sample.
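In rough Python, the idea looks something like this (a toy sketch, not MuZero's actual MCTS: `dynamics` and `value` stand in for the learned networks, and I'm using flat random rollouts instead of a real search tree):

    import random

    # pick an action by imagining futures with a learned model,
    # never calling the real ATARI environment
    def plan(state, dynamics, value, actions, depth=5, n_rollouts=100):
        best_action, best_return = None, float("-inf")
        for _ in range(n_rollouts):
            first = random.choice(actions)
            s, a, total = state, first, 0.0
            for _ in range(depth):
                s, r = dynamics(s, a)   # predicted next state and reward
                total += r
                a = random.choice(actions)
            total += value(s)           # bootstrap with the learned value net
            if total > best_return:
                best_action, best_return = first, total
        return best_action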

In a formally defined synthetic domain such as chess or logic programming, it is not clear whether this is helpful. We are simply trading one kind of CPU time (calling the environment) for another (running our own learned, imprecise model of the environment).

Of course DM has a chess function which does encode the rules of the next move. It can return a LOSS if you try an illegal move. But this function is NOT called for the tree roll-out.


Thanks for your patience but this is still confusing. It's clear from your explanation that the moves and end-game states are given at the start of learning (now that you mention it, I remember the bit about illegal actions leading to a game loss). So training does not start from scratch without knowing anything about the game. The system knows what moves are _legal_ (not just possible) and it knows when the game ends, and how to score it. I don't see how this supports the claim of "no rules".

I appreciate that someone explained this to you at some point, but I'm going with what I've read in some of the published papers, and the ones I've read really leave a lot to the imagination. That is no way to present and support such big claims as "no rules", "no hands", especially when this is the central claim in a paper. Why fudge this so much when it's such an important aspect of the whole contribution? [1] You (general you) make a claim? Support the claim.

I didn't get what you mean about logic programming? Where does that come in?

________________

[1] Oh, I know why. It's the whole silly game with machine learning publications where they never tell you everything and you have to figure it out yourself. Well I like to play the other game, where I call bullshit unless it's explained clearly. In the paper. Not on Twitter and not by kind colleagues.

Silly games don't advance the science though.


>> Of course DM has a chess function which does encode the rules of the next move. It can return a LOSS if you try an illegal move. But this function is NOT called for the tree roll-out.

I see what you mean: the chess function computes the results of actions returned by the system. But if you do rollouts you need to have a set of actions from which to choose and an internal representation of states resulting from those actions. MuZero learns to predict those actions and states, but that means it selects from sets of possible actions and states. The paper does not explain where these sets come from.

For ATARI I get it, there's the physical-ish controls and video frames. For the board games however, I remember very clearly from the AlphaZero paper that there was an encoding of "knight moves" and "queen moves". I also remember less clearly that the structure of the network's layers mirrored the layout of a chessboard. That's what I mean by hard-coding, and in the MuZero paper there are many references to reusing the AlphaZero architecture and no explanation of how the same components (board states, moves) are represented in MuZero.


> Specifically, MuZero uses MCTS and MCTS needs to have at the very least a move generator in order to produce actions that can then be evaluated for their results.

You are confusing the two phases. The MuZero training does not use MCTS, it merely observes sequences of moves/states/rewards. This can be done using observations from anywhere: human games, AG games, A0 games, random games. This is where it does the actual learning of what moves are valid and what makes moves good (because invalid moves will not be represented in the dataset of valid games). It does not need MCTS or any access to an oracle about move validity, which is Marcus's complaint. This is no more cheating than observing the real world to infer its physics.

The second phase, where new games are generated, may use MCTS. But it doesn't have to. So it can learn by simply training on a game corpus, and then generating a new game corpus by self-play using only its internal implicit tree search and something like illegal moves = instant loss. It will rapidly learn to not make illegal moves and play just as validly as a MCTS-structured tree search, and then its implicit learned tree search achieves the same or greater playing strength.


>> The MuZero training does not use MCTS, it merely observes sequences of moves/states/rewards.

I'm sorry, I read the paper a bit more carefully since we're discussing it and I don't think this is right. It's true that it's a while since I read the AlphaZero paper and the details are a bit fuzzy in my memory, but in the MuZero paper it's clear that MCTS is used to generate a policy and estimated value for a current hidden state, and to select an action to take at the current real game state (the "environment"), then the observed state and reward are later reused as past observations to train the model, together with future actions, also selected by MCTS. So it seems to me that MCTS is pretty central to the training process.

The paper does say that any MDP could be used in place of MCTS but I don't think anyone seriously plans on using something else than MCTS for board games in the foreseeable future.

I'm confused by your use of the term "implicit tree search". Could you clarify?


> I think it’s worth asking why people feel the need to lie about the future of AI?

I wonder if it's because it's so vague and fuzzy, or because the techniques are so general. Like with self-driving cars. They will someday probably be safer than people, and that's huge. People get excited. We want that! Self-driving cars save lives! But in those four sentences we went from "will someday probably" to let's do it now. The class of thing we're talking about now and the class of thing in the future are one and the same, so it's hard to talk about the future versions as distinctly separate from today's.

AI will probably be able to talk to people well. We have AI today that talks to people. They're not the same thing, but these two sentences don't make that clear, because it's all AI.

In principle, polynomials can learn any function! And we have polynomials today! We can learn anything! Rinse and repeat with Fourier series or (as a totally random example) deep learning and it sounds like tomorrow's techniques are the same as today's, so we're done, right?
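(To belabor the polynomial example with a toy numpy sketch of my own, purely illustrative:)

    import numpy as np

    x = np.linspace(0, 1, 50)
    y = np.sin(2 * np.pi * x)              # the "task"
    coeffs = np.polyfit(x, y, deg=9)       # "polynomials can learn any function!"
    print(np.abs(np.polyval(coeffs, x) - y).max())            # small on the data it saw
    print(abs(np.polyval(coeffs, 2.0) - np.sin(4 * np.pi)))   # huge outside it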

Or maybe it's down to lay people's poor math and stats skills and lack of understanding of the simple stuff. If I tell lay people I do stats, they think I'm taking an average with a lot of bureaucracy they don't really get. They won't think I'm using simple logistic regression to do really cool stuff like classify documents. They didn't know "stats" could do that! So they might be even more misled about what I do if I call it "statistics" than if I call it "AI." If they're misled whatever I call it, we're already screwed.
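To be concrete about the kind of "simple stats" I mean, here is a minimal scikit-learn sketch on made-up toy data (not a real project of mine):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    docs = ["invoice overdue please remit payment", "team lunch on friday",
            "final payment notice attached", "company picnic rsvp by monday"]
    labels = ["finance", "social", "finance", "social"]

    # plain "stats": tf-idf features + logistic regression, nothing deep
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(docs, labels)
    print(clf.predict(["second payment reminder"]))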


It's the bullshit process by which people get funding. Make something that has some potentially interesting engineering value, hype it up to hell and back, and funding agencies, investors etc. are more likely to back you. Probably this wasn't everyone's first plan all along, but when people who make a marginally better hot-dog / not-hot-dog classifier put so much spin on it, then everyone else has to just to remain visible. Moreover, people have to publish findings before their competition does, meaning sloppier and less interesting work. It's the snake-eating-its-own-tail plague that impacts so much of academia.

I'd say it's the job of academic researchers to dream up high-risk ideas to work on, and to hype up incremental advances as potential proof-of-concept for those ideas (when they're decades away still, with many potentially unsolvable hurdles left). Generally they are careful to give all sides though, since their reputation among their peers is the basis for their career. The media is a very different story. They just need to catch the interest of the mostly-ignorant, then move on to something new for the next story.

> impacts so much of academia

I think the trend started in industry, but you're right, unwarranted self-promotion is very pervasive in academia, and it's sad that it works!


>Ugh. I dislike the author's permanent negativity, but he's right about a lot.

Yes, in a way it must suck to always act like the grumpy party-crasher, but Gary Marcus is absolutely spot on when it comes to the facts. That interview with The Economist had me shaking my head; everyone had to know within five seconds that the chance that this was uncurated is zero.


I dislike how, when dealing with a general industry of spruikers and boosters, speaking the truth is perceived as permanent negativity :P

He's not negative in this article.

He's right.


It's kind of funny just how much complaining about AI hype and bad coverage there is (on Twitter, and on this very subreddit); it feels like there is more of that now than actual bad coverage. This article suggests some actions to take at the very bottom, which amount to including some discussion of the limitations of the work in the paper -- not a bad idea, but then again the misleading coverage usually does not stem from claims in the papers themselves in the first place.

Shameless but relevant plug warning: I run an effort, Skynet Today, with the aim of letting AI researchers/experts present various advancements/topics to a general audience without hype and with context. Everyone on the team is volunteering their time for free and we can always use more people to help us tackle various subjects, so if you care about this issue feel free to take a look here: https://www.skynettoday.com/contribute


Skynet Today is very good. I wish you had more original articles because the ones you have, I enjoy a lot (your editorials and briefs). I see you are calling for contributions, I hope you get some good ones.

Writing good articles- there's no other way to combat bad coverage. Keep it up please and all the best.


It happened to VR a few years ago. Development slowed down when companies saw the actual sales. Unlike VR, AI is not a physical good that users/customers can purchase and test directly. It is just a buzzword. Companies will use it to death at every opportunity to market themselves. This is unstoppable. We will hear more about it, maybe until people get sick of hearing "another crappy AI product".

AI is more of a datacenter technology. People already use it (Siri, Alexa, Google search) and they apparently like it already. It was easy to see that any VR headset is bulky and agonizing, but those new services work fine. That said, the "not hot dog" apps are going to be disappointing.

An AI winter will not happen. We are past the point of no return. The vast majority of research is net positive research.

The only risk is misappropriation of investment, but that has already been smoothed over quickly.

I'd get the complaint if AI were not doing anything useful, as it wasn't in '74, but it simply is.

For example, plate recognition, drones and facial recognition. These are brutally efficient technologies. To imagine any government under investing in these is a total misunderstanding of the value that AI brings to military, security, and economic dominance.

It’s a race between nations and the only danger is the computers taking over (not in a general AI way, but in a way that everything is run by algorithms no human can understand)


You sure?

Without a doubt, there's an investment bubble for AI companies in industry: a huge number of companies have added "AI" to their proposition, where the AI is only marginally useful if not useful at all, for the purpose of marketing and inflating valuation.

On the research side, it's now clear the current AI paradigm will not produce the kind of massive, society-shifting promises that were made to investors, which is the focus of the article.

Yes deep learning is here to stay for narrow tasks, but the investment contraction for the rest could certainly feel like a Winter.


> it's now clear the current AI paradigm will not produce the kind of massive, society-shifting promises that were made to investors

It's correct in that it won't produce the society-shifting promises. But there is a huge amount of money in non-society-shifting innovation.

Besides, I really doubt the media predictions are what actually was told to investors.


This is primarily deep learning, which is not general-purpose AI.

We’re talking about AI here. And deep learning is AI. If you go to university and research or study AI that’s what you’d currently be learning. AGI does not currently exist and is a word used mostly by laymen and the media who don’t understand the current state of AI.

The term AI winter is IMHO misrepresenting the true state of affairs. Since day one, the field's modus operandi has been one of overpromising and underdelivering. That has not changed. We see how Alphabet plows billions of dollars into DeepMind [1] and all they get in return is a series of game-playing bots. If unproductive activities are defunded, this creates an opportunity for productive ones to thrive. "Winter" is not a suitable word to describe this.

[1] https://www.wired.com/story/deepminds-losses-future-artifici...


This is a ridiculously reductive take on DeepMind... they produce a TON of great research, and have helped advance the state of RL considerably (speaking as an AI researcher here, having read many of their papers). Google should be commended for investing in largely basic AI research, even if once in a while they make a PR splash out of it, IMO.

The more I read articles like this (or pretty much ANY article recently), the more I'm convinced that what we really need is: transparency.

I don't mean just AI, but transparency on everything: Transparency from researchers, news outlets, governments, retailers, and YouTubers. It's a great shame that a lot of IT has been used for bringing more confusion and obfuscation to the people, whereas it could be used to provide a more coherent/honest view of the world. We should demand more transparency on every matter.


Good article, but it overstates its case.

>In 1966, the MIT AI lab famously assigned Gerald Sussman the problem of solving vision in a summer; as we all know, machine vision still hasn't been solved over five decades later.

It certainly took five extra decades, but it would be a massive shift of goalposts to say the problem of vision hasn't been sufficiently solved today.

>In November 2016, in the pages of Harvard Business Review, Andrew Ng, another well-known figure in deep learning, wrote that “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.” A more realistic appraisal is that whether or not something can be automated depends very much on the nature of the problem, and the data that can be gathered, and the relation between the two. For closed-end problems like board games, in which a massive amount of data can be gathered through simulation, Ng’s claim has proven prophetic; in open-ended problems, like conversational understanding, which cannot be fully simulated, Ng’s claim thus far has proven incorrect. Business leaders and policy-makers would be well-served to understand the difference between those problems that are amenable to current techniques and those that are not; Ng’s words obscured this. (Rebooting AI gives some discussion.)

It takes significantly longer than a second to actually understand spoken conversation (rather than provide a conditioned response or match against expected statements, both of which computers are fully capable of doing).

>I just wish that were the norm rather than the exception. When it’s not, policy-makers and the general public can easily find themselves confused; because the bias tends to be towards overreporting rather than underreporting results, the public starts fearing a kind of AI (replacing many jobs) that does not and will not exist in the foreseeable future.

Robotic manufacturing has already eliminated massive swaths of high paying jobs. Likewise, software has eliminated massive swaths of data entry and customer service jobs (with software being a particularly poor replacement for the latter, but still being put into widespread use to cut costs). And contrary to beliefs that new jobs will be created in IT, software is able to massively eliminate low skilled tech jobs as well, as e.g. automated testing did to India's IT industry.

As with existing jobs that have been automated away, companies won't need generalized AI to eliminate many more jobs. Many jobs don't rely on unconstrained complex deduction and thinking, and will be ripe for replacement with deep learning algorithms. And we can be reliably assured that corporations will engage in such replacements even when the outcomes are not up to par.


The buzzword back in the 70s-80s (after AI over-promised in the 1960s) was 'expert systems'. https://en.wikipedia.org/wiki/Expert_system

(To the extent that I have kept up with it) modern AI skips the 'knowledge base' part of ES, in favor of pattern-recognition based on 'training'.

Today's (Indeterministic, trained, n-net) AI has clearly saved a lot of time/effort in creating 'knowledge bases'. I suspect it appeals more to singular fantasies about 'more human than human' intelligence. (Sorry Ray)

Question is: Is today's AI even an order of magnitude better than (deterministic) ES insofar as extensibility and verifiability? What if we had spent those decades refining the ES approach instead?


You should use both. Download Wikidata and use the manually curated, interconnected data available. You could argue it was compiled by the largest pool of neural networks ever assembled. Wikipedia data was curated content that was hierarchically and painstakingly interconnected by humans, the most advanced NN we know of :).

There's now discussion about how neural networks can succumb to data poisoning / adversarial attacks, because there are no immutable facts. Adding a mostly immutable fact table can help keep things grounded in reason. Most of these engines support complex inference abilities that can lead to unexpected connections.
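As a concrete sketch of what I mean by a curated fact table, here is a quick query against the public Wikidata SPARQL endpoint (the particular query is just an example I made up):

    import requests

    # five items Wikidata curators have marked as "instance of (P31)
    # programming language (Q9143)" - manually curated, mostly stable facts
    query = """
    SELECT ?item ?itemLabel WHERE {
      ?item wdt:P31 wd:Q9143 .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    } LIMIT 5
    """
    resp = requests.get("https://query.wikidata.org/sparql",
                        params={"query": query, "format": "json"},
                        headers={"User-Agent": "kb-demo/0.1 (example)"})
    for row in resp.json()["results"]["bindings"]:
        print(row["itemLabel"]["value"])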

ES is not really dead. It feels like many rules engines changed their names to "AI Intelligent Agents"-type wording to describe their product. The Rete algorithm is a similar rule-based calculation and is still used to calculate FICO scores, which you could say fits into the class of problems that might be better served by the latest neural network models. AllegroGraph lets you query using Prolog and is often used for governance and compliance tools. RDFox is one of the latest inference engines and has made major advancements in turning first-order logic in Datalog into parallel computation.

I'd imagine if you can build a neural network that can successfully interact with an ES knowledge base, you could easily make a neural network as good as the one that won at Jeopardy.


I mean, you extend a DL system by giving it examples. Yes, in fact by harvesting example or annotation data you can extend models many orders of magnitude faster and more comprehensively than through manual analysis. Effectively the analysis is done automatically. At this point, what comes out of it is usually poorly factored and entangled and not human-parseable, but that doesn't prevent it from being applied effectively to a range of narrow tasks that expert systems cannot approach.

We do still rely on expert systems for things that we want to be carefully parsed, verified and analyzed by people though, such as the rules that oversee most self-driving cars on top of perception handled by neural networks. However, not all self-driving systems lean as heavily on rules at the top level as others.


They're not even competing with each other. How would you compare an ES to a deep neural network? According to what benchmark? What do you mean by verifiable? Modern success stories with machine learning often deal with problems (NLP, vision, RL) that have little to do with the problems that were being solved by expert systems.

Also, a hypothesis: if (big if) you somehow managed to have an expert system outperform deep learning for vision, I bet that it won't be any more verifiable than a deep neural net is today.

To me the complaint that modern deep learning is unverifiable is a bit dumb, in the sense that any perception algorithm working with low level signals (vision, sound) will not be transparent to a human. 15 years ago, an image classification pipeline looked like: bag of SIFT features + SVM classifier. Try explaining the decision made by that algorithm in an intuitive way!
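For anyone who never had the pleasure, that pipeline looked roughly like this (a sketch assuming OpenCV and scikit-learn; `images`, a list of grayscale arrays, and `labels` are hypothetical placeholders for your dataset):

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    sift = cv2.SIFT_create()

    def sift_descriptors(img):  # img: grayscale uint8 array (placeholder data)
        _, des = sift.detectAndCompute(img, None)
        return des if des is not None else np.empty((0, 128), np.float32)

    # 1. cluster SIFT descriptors into a "visual vocabulary"
    codebook = KMeans(n_clusters=256).fit(
        np.vstack([sift_descriptors(im) for im in images]))

    # 2. represent each image as a histogram over visual words
    def bow_histogram(img):
        return np.bincount(codebook.predict(sift_descriptors(img)), minlength=256)

    # 3. train an SVM on the histograms
    clf = SVC(kernel="rbf").fit([bow_histogram(im) for im in images], labels)

Good luck explaining why that assigns a particular label to a particular image.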


I dunno if the GPT-2 examples are misinformation, more like exaggeration for PR, which is admittedly a separate problem in AI (marketing hype vs. actual falsity).

Hype and exaggeration is just another form of falsification.

Exaggerated Information + Trusting & Uneducated Consumers = Misinformation

Sigh.

> The Economist [...] said that GPT-2’s answers were “unedited”, when in reality each answer that was published was selected from five options

> [Erik Brynjolfsson] tweeted that the interview was “impressive” and that “the answers are more coherent than those of many humans.” In fact the apparent coherence of the interview stemmed from (a) the enormous corpus of human writing that the system drew from and (b) the filtering for coherence that was done by the human journalist.

If your success rate is ≥20%, the coherence is coming from the model, not the selection process. This is just basic statistics.
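To make the arithmetic concrete (my own numbers, purely illustrative): if each independent sample is coherent with probability p, picking the best of 5 surfaces at least one coherent answer with probability 1 - (1 - p)^5, so selection can only filter coherence the model already produces at a decent rate; it can't manufacture it from a model that almost never produces it.

    # chance that at least one of 5 independent samples is coherent
    for p in (0.05, 0.2, 0.5):
        print(p, round(1 - (1 - p) ** 5, 3))
    # 0.05 -> 0.226, 0.2 -> 0.672, 0.5 -> 0.969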

> OpenAI created a pair of neural networks that allowed a robot to learn to manipulate a custom-built Rubik's cube

Jeez, I've already corrected you here... well, why not have to do it again?

> publicized it with a somewhat misleading video and blog that led many to think that the system had learned the cognitive aspects of cube-solving

The side not stated: OpenAI said explicitly in the blog that they used an unlearned algorithm for this, and sent a correction to a publisher that got this wrong.

> the cube was instrumented with Bluetooth sensors

During training, but they ended up with a fully vision-based system.

> even in the best case only 20% of fully-scrambled cubes were solved

No, 60% of fully scrambled cubes were solved. 20% of maximally difficult scrambles were solved.

> one report claimed that “A neural net solves the three-body problem 100 million times faster” [...] but the network did no solving in the classical sense, it did approximation

All solvers for this problem are approximators, and vice-versa. The article you complain about states the accuracy (“error of just 10^(-5)”) in the body of text.

> and it approximated only a highly simplified two degree-of-freedom problem

As reported: “Breen and co first simplify the problem by limiting it to those involving three equal-mass particles in a plane, each with zero velocity to start with.”

> MIT AI lab famously assigned Gerald Sussman the problem of solving vision in a summer [https://dspace.mit.edu/handle/1721.1/6125]

I... sigh

“The original document outlined a plan to do some kind of basic foreground/background segmentation, followed by a subgoal of analysing scenes with simple non-overlapping objects, with distinct uniform colour and texture and homogeneous backgrounds. A further subgoal was to extend the system to more complex objects.

So it would seem that Computer Vision was never a summer project for a single student, nor did it aim to make a complete working vision system.”

http://www.lyndonhill.com/opinion-cvlegends.html

> Geoff Hinton [said] that the company (again The Guardian’s paraphrase), “is on the brink of developing algorithms with the capacity for logic, natural conversation and even flirtation.” Four years later, we are still a long way from machines that can hold natural conversations absent human intervention

‘Four years later’ to natural conversation is not a reasonable point of criticism when the only timeline given was ‘within a decade’ for a specified subset of the problem.

> [In 2016 Hinton said] “We should stop training radiologists now. It’s just completely obvious that within five years, deep learning is going to do better than radiologists.” [...] but thus far no actual radiologists have been replaced

So Hinton actually said “People should stop training radiologists now. It’s just completely obvious that within five years, deep learning is going to do better than radiologists, because it's going to be able to get a lot more experience. It might be 10 years, but we've got plenty of radiologists already.”

2019 is not 2026. “thus far no actual radiologists have been replaced” is thus not a counterargument.

> Andrew Ng, another well-known figure in deep learning, wrote that “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.” [...] Ng’s claim thus far has proven incorrect.

I agree. This quote captures the wrong nuance of the issue.

Well, finally finding one point by Gary Marcus that isn't misleading, I think I'm going to call this a day.


I published some minor corrections/clarifications to this on Reddit. HN won't let me edit, so here's a link:

https://www.reddit.com/r/MachineLearning/comments/e453kl/d_a...


I bet AI is going to become like coding, where everyone thinks they're an expert.


