AI Update, Late 2019 (piekniewski.info)
138 points by buzzier 18 days ago | 47 comments



This blog has been getting passed around a lot recently. While I do draw value from its thorough observations of developments, the amount of text the author spends on shallow negativity can feel like the same waste of time as the overhyping PR machine he is reacting to.

There is without doubt something novel in the successes of convnets for sensory perception, deep Q-learning for decades-old and new game problems, artificial curiosity, recent machine translation, generative models and their various applications. Recent models have also found their way into for-profit companies. It's legitimate to be fascinated by this, and I'd rather stand on the side that doesn't remain in its cave.

AI research may have picked all the current low-hanging fruit, or it may hit a wall soon or in ten years; nobody can know yet, so there is no reason to run around predicting a future painted in an only positive or only negative light.


We need contrarian voices, both for spotting issues we might have overlooked and for reassuring ourselves that we know better.

It's still better than what I read from "LinkedIn influencers" in my feed, like "Logistic regression is still the best" or "Self-driving cars will never work because of the long tail"...


It's a good read, but the negativity makes it appear irrational. It would be better if he left out the ranting and focused on a realistic recap without the PR hype.

Note that AI has a history of being stalled by overly pessimistic evaluations (Minsky / Papert on the perceptron, the Lighthill report).


AI also has a history of being stalled by believing its own hype (the AI winter of the 1990s). Too much money can kill AI as fast as too little.


Seems to be happening to VR right now...


Interesting read. I'm glad to see others beginning to call the AI hype machine's bluff. The summary is excellent:

> The whole field of AI resembles a giant collective of wizards of Oz. A lot of effort is put in to convincing gullible public that AI is magic, where in fact it is really just a bunch of smoke and mirrors. The wizards use certain magical language, avoiding carefully to say anything that would indicate their stuff is not magic. I bet many of these wizards in their narcissistic psyche do indeed believe wholeheartedly they have magical powers...


Agreed. I would much prefer we called it statistical intelligence.

Although "artificial intelligence" is actually spot on; we just understand the wrong side of the ambiguity. It's not really intelligence that we have reproduced artificially - since it isn't intelligence - but a fake intelligence, the artificial kind. We've created the artifice of intelligence, through statistics, but not intelligence.

People knew long before Newton that an apple would drop to the ground when released. Statistical experience allowed us to know this very early on. But it took Newton to explain what was going on, so that instead of predicting through experience, he could predict by reason and logic, thus saving himself many lifetimes of accumulating experience to make the next prediction ever more precise.

"Statistical intelligence" allow us to do a bunch of neat things though. Many problems are best approached statistically (because noise, lack of formal understanding etc), and these some of these methods achieve impressive results in a wide range of situations.


The use of the word "intelligence" overall is problematic. To most people, "intelligence" means human inductive reasoning. We see "intelligent" creatures in mass media and books--creatures that act just like humans except that they aren't biological. We think of Commander Data from Star Trek. Proponents of AI know most people interpret the term that way and gladly use it as a way of implying the same magic we see in media.


So advances in RL (DeepMind) are not merely statistical intelligence; those are true advancements in AI (not only ML), i.e. cases where a machine can train on its own data.


True, but I'll argue a bit. They statistically maximize reward. As far as I'm aware, the engineer is still designing the reward function. She's also designing the statistical method to converge to the optimal solution (as quickly as possible).

So an RL chess algorithm statistically tells you a move (action) from a state S to a new state S' such that you are expected to maximize your reward. Whereas a chess master (probably) designs his next sequence of moves based on logic (my opponent will respond in such a way because, etc.). This is different from « statistically, this move right now has the best odds of leading to a win » a la Monte Carlo. Now what is surprising is that statistical algos are better than our best logicians at this particular task. But the action at a given state is still statistically designed.

Finally, you need the data you mine to be representative of the underlying distribution you are trying to model. So you need your simulator to be as close to real as possible, whereas in most useful cases (landing a plane, for instance) simulators are in fact approximations.

So for instance, if you want an algo to design the flight path of a rocket landing on an asteroid, you could recreate a simulator modeling spacetime from observations and model its dynamics from Einstein's equations, but then what's the RL for? Why not just use an off-the-shelf optimization algorithm like we have had for decades? [1]

The Bellman equation and DQNs are nice and all, but they're still statistical algorithms, producing - in my mind - statistical intelligence about a particular system. An RL agent will not tell you WHY such an action was taken, but it'll tell you that, statistically, it is the action to take.
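
To make the "statistically designed" point concrete, a bare tabular Q-learning update looks roughly like this (a toy sketch with made-up state/action sizes, not any particular DQN implementation):

    #define N_STATES  64
    #define N_ACTIONS  4

    double Q[N_STATES][N_ACTIONS];   /* value estimates, start at zero */

    /* One Q-learning step after observing transition (s, a) -> reward r, next state s2:
       nudge Q(s, a) toward the sampled Bellman target r + gamma * max_a' Q(s2, a'). */
    void q_update(int s, int a, double r, int s2, double alpha, double gamma) {
        double best_next = Q[s2][0];
        for (int a2 = 1; a2 < N_ACTIONS; a2++)
            if (Q[s2][a2] > best_next)
                best_next = Q[s2][a2];
        Q[s][a] += alpha * (r + gamma * best_next - Q[s][a]);
    }

The engineer still picks the reward r, the learning rate alpha, the discount gamma and the exploration scheme; the table just accumulates statistics of observed returns.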

Very neat results in RL however.

[1] I worked on an RL-based agent to control traffic lights, and it wasn't clear whether our solution was better than a classical optimization one. Actually, classical optimization (minimizing an analytical model of the system) seemed to scale much better to larger meshes.


But then the guy says 'oh yeah but I'm using the stuff that works'


Nobody ever reads that far before commenting. :-)


Wait, but the Wizard of Oz actually helped the other characters, and he did so even after they had learned he had no magic powers.

I have no idea what the moral here is, if any.


Sure, take some pot shots - some are valid criticisms - but OpenAI's Rubik's cube solver being lame does not mean AI needs to be re-evaluated.

Sure, AI has its faults; the tantalizing cost savings of automation have created some negative feedback loops - might that be more deserving of the question "what in the hell are we trying to accomplish and how exactly did we get here in the first place?"

A Rubik's cube solver is the problem? Really?

OpenAI (of the now-infamous Rubik's cube failure :p) released a hide-and-seek demo a few months back that gave me literal goosebumps. Little AI agents facing off in a game of hide and seek start evolving seriously clever strategies. According to the author's bio (dynamic, time-aware ML systems, etc.) that sort of thing should be right up their alley!

Instead we get some sort of selective self-promotion hit piece - highlighting anecdotal failures while claiming some better AI-based robotics startup is coming soon(tm).


>Sure, take some pot shots - some are valid criticisms - but OpenAI's Rubik's cube solver being lame does not mean AI needs to be re-evaluated.

There may be genuine criticisms of that particular project, but 'only the actual solving is done via symbolic methods' is a non sequitur. The Rubik's cube is just a generic physical task that requires dexterity; they could have done the same research with dominoes or blocks or playing Tic-Tac-Toe with random pens in various adverse conditions -- the point wouldn't be whether the ML solves or doesn't solve the actual Tic-Tac-Toe!


Sure, there is a lot of hype in AI/ML right now, but this post reads like there is an axe to grind with all ML. It ignores true progress made in a lot of areas and denigrates the whole field.

To me it did not read like an objective post, but more like just an "all AI is bullshit" style blog post.


I agree. Some people are hyping things, but this is expected.

All new technology is overestimated short term, but underestimated long term.

We might not have autonomous driving that takes our kids to school, but we have cars that can recognize lanes and other cars and complain if something is dangerous. We also have facial recognition, voice recognition and almost Turing-level chatbots to sell you stuff.

I suspect AI might be analogous to computer graphics. Purists in the beginning said the holy grail was ray tracing. However, people still worked on the problem, marching the state of the art forward with smaller building blocks, and now that ray tracing is appearing, a practiced eye is needed to see the difference.


> All new technology is overestimated short term, but underestimated long term.

Well, no, some technology is overestimated short term and also overestimated long term.

For example, flying cars. Nuclear fusion (though that one could still come through). Gallium arsenide (still one of my favorite names for a speed-metal band, and still available as far as I know).

The question is, which category is AI going to be in? AI for specific tasks seems likely to be underestimated long term. AGI? My guess is that it's overestimated long term, because it isn't going to happen. That's a guess. Evidence? Don't have any. Guesses are like that.


I think we need a new word for this kind of post: anti-hype hype. A lot of people try to ride the anti-hype train to fame without bringing anything new to the table.

Anytime there's new progress in AI, you will see many comments or posts with some variation of "but humans do it more efficiently" (in some arbitrary dimension) or "what about the other problem AI didn't solve". More often than not these are just lazy layman criticisms that make the posters feel smart without offering anything new or substantial.


pg calls it "middlebrow dismissal" and tells HN commentators to avoid it.


Yeah, that's where I'm at. There's this general sentiment that if AI does not solve everything immediately, then it is worthless and hype. Especially the part about not being able to deal with corner cases, forgetting that all AI needs to do to be valuable is deal with such cases better than your average human, which isn't a very high bar.


No, I feel like it's exactly the other way around. There's a general sentiment that AI will solve everything immediately and lead to a massive breakthrough in every single aspect of our lives, when in fact what has been accomplished so far in this "new AI" boom is improvements in brute-force statistical fitting.


Hmm. That means that in order for a human to get paid, they're going to have to be able to do something better than an AI can...


It's a "Reverse AI Effect". If the AI Effect is that anything we actually understand cannot possibly constitute artificial intelligence, the Reverse AI Effect is that nothing can possibly constitute useful AI until it rises up and kills all humans.


A refreshing view of AI; I particularly enjoyed this excerpt:

> I mentioned in my previous half-year update, Open AI came up with a transformer based language model called GPT-2 and refused to release the full version fearing horrible consequence that may have to the future of humanity. Well, it did not take long before some dude - Aaron Gokaslan - managed to replicate the full model and released it in the name of science. Obviously the world did not implode, as the model is just a yet another gibberish manufacturing machine that understands nothing of what it generates. Gary Marcus in his crusade against AI hubris came down on GPT-2 to show just how pathetic it is. Nevertheless all those events eventually forced Open AI to release their own original model, and much to nobody's surprise, the world did not implode on that day either.


As someone who has posted a GPT-2 excerpt and accidentally had people mistake it for a human comment (I thought the context would make it obvious), calling a language model 'pathetic' for only occasionally getting math right hardly strikes me as a reasonable sort of complaint. Nor does disingenuously putting false words in OpenAI's mouth.


Just wait until your GPT-2-generated MBA homework gets you full points on the first try; then you'll either fail to compute, start weeping, shake rapidly or laugh like a madman. Automated essay scoring is already a reality, and now you get GPT-2-automated essays as well.

The HN crowd is often the intellectual elite; imagine regular people reading what GPT-2 produces when they can't even understand what a regular grad student writes. I could use e.g. talktotransformer.com to complete some quote like "Intel CEO said that the new 10nm CPUs will...", then post that to some Reddit thread; it would get picked up by search engines, and at some point somebody would use it in some serious work, or it would spread like wildfire on sites that don't check their references.


God forbid it takes more than a few days for decent chat bots based on these new models to appear on Reddit, run from a troll farm in Eastern Europe/China/wherever. Or has that already happened, and we're simply unaware?


Already done. It really feels like new dark times are upon us, this time not because of a lack of writing, but because of automated garbage arriving quickly. Previously one had to hire writers to produce crappy ad-driven garbage articles; soon you can run a one-person operation for that.


>John Carmack is going to take a shot at AI. Whatever he accomplishes in that field I hope it will be equally as entertaining as Quake and equally as smart as the fast inverse square root algorithm.

John Carmack did not invent the fast inverse square root algorithm. (I'm still rooting for him, though!)


Hm interesting, I knew it from Quake and implicitly assumed that it was Carmack's trick. But Wikipedia has some more history:

https://en.wikipedia.org/wiki/Fast_inverse_square_root#Histo...
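
For reference, the trick itself is tiny. A minimal sketch of the idea (not the verbatim Quake III source) looks something like this:

    #include <stdint.h>
    #include <string.h>

    /* Classic fast inverse square root: reinterpret the float's bits as an
       integer, use the magic constant 0x5f3759df to get a rough first guess,
       then refine it with one Newton-Raphson step. */
    float fast_inv_sqrt(float x) {
        float half = 0.5f * x;
        uint32_t i;
        memcpy(&i, &x, sizeof i);          /* type-pun without aliasing issues */
        i = 0x5f3759df - (i >> 1);         /* bit-level initial approximation */
        float y;
        memcpy(&y, &i, sizeof y);
        return y * (1.5f - half * y * y);  /* one refinement iteration */
    }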


> The whole field of AI resembles a giant collective of wizards of Oz. A lot of effort is put in to convincing gullible public that AI is magic, where in fact it is really just a bunch of smoke and mirrors.

No, you're generalizing from the marketing department at IBM to a deeply passionate, hard-working, brilliant community of scientists and engineers.


At least one source in the article is misleading: when talking about the limitations of Uber's self-driving tech, the author links to a source mentioning that Uber may be paying license fees to Waymo for their AV tech, insinuating that Uber is unable to move its AV program forward on its own. The linked article actually mentions Uber being court-ordered to pay for the tech (the Levandowski case).


Seems like people will keep being needlessly negative and dismissive of AI right up until the singularity.

But really, what did AI do to this guy? ML really does have real-world applications. Though many self-driving startups are overblown, my Tesla really does drive me to work every day.

As always, things are easy to criticize and hard to create.


>But really, what did AI do to this guy?

He's a founder of an ML startup with published papers.


So he's a masochist, or?

Seems to hate ML.


He hates ML as a pattern recognizer (i.e. something that has no real understanding of the world).

He does not hate symbolic ML, which is based on logic/knowledge (and does understand the world).


Reading his post, I don’t see anything suggesting that he hates ML as opposed to companies over-promising what the technology is capable of.


Nah, he just applies his ordinary Slavic pessimism and cynicism to it.


This post is full of non sequiturs, like the links to the PG&E wildfire-prevention shutoffs after talking about how model training (which happens offline in some datacenter) will always cost lots of energy (why would you build a data center north of the Bay, where it can be affected by wildfires and sky-high utility/real estate prices?). Maybe it is meant to be humorous and I just didn't get it.

Yeah, everything is harder than the first wave of hype made it seem; no, the fact that this list of ridiculous hype proved to be ridiculous doesn't mean it's all useless or doomed. From reading the about page I get the impression the author knows this, though, which makes me think I just missed the joke.


It has a fair tinge of typical Polish-style pessimistic humor.


There is a lot of dashcam footage of car crashes on YouTube. I wonder if that information can be salvaged in some way.


It's good to have some skepticism, but there are things that genuinely work, which the author alludes to at the end of his diatribe. Unfortunately they happen to be less trendy things like surveillance, military, retail, factory QA, and other strictly perceptual tasks that are a far cry from "self-driving" cars or "AGI".

I'd also like to point out that very little of what you can see in those Boston Dynamics videos is "AI". It's mostly just good old-fashioned control systems, just very sophisticated ones.
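
By "good old-fashioned control systems" I mean feedback loops in the spirit of a PID controller. Here is an illustrative sketch with made-up gains (not what Boston Dynamics actually runs; their controllers are far more elaborate):

    /* Minimal PID feedback loop. Gains and setpoint are for illustration only;
       a real legged-robot controller is vastly more involved. */
    typedef struct {
        double kp, ki, kd;   /* proportional, integral, derivative gains */
        double integral;     /* accumulated error */
        double prev_err;     /* error from the previous time step */
    } pid_ctrl;

    double pid_step(pid_ctrl *c, double setpoint, double measured, double dt) {
        double err = setpoint - measured;
        c->integral += err * dt;
        double deriv = (err - c->prev_err) / dt;
        c->prev_err = err;
        /* output is a weighted sum of present, accumulated, and trending error */
        return c->kp * err + c->ki * c->integral + c->kd * deriv;
    }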


As someone who has worked as a researcher in one of the big AI research labs, I completely agree with this post. There has been true progress in a few ML subfields over the past few years, most notably representation learning for image recognition and text/translation, but 99% of what you read in both scientific papers (which are more PR than ever) and the general media is nothing but hype. Especially over the last 2-3 years or so I haven't seen anything novel. IMO that's mostly a result of the confluence of perverse incentives at various levels:

- Academics need to create PR and hype to increase their chances for grants

- PhD students need to publish papers, and thus convince reviewers, with unnecessarily complex and hype-filled language, that their papers are good. They are also more incentivized than ever to create their own personal brand (via hype-filled blog posts or videos) to increase future employment opportunities. More PR also means more citations, which is a metric academics are often evaluated on. After all, if you work on something related, you're pretty much obligated to cite research that everyone has heard about, right?

- Startups, as has always been the case, jump on the latest trend to increase their chance of raising money from investors. They slap AI/ML onto their pitch decks to differentiate themselves from others, or to become eligible for AI-focused funds. In reality, none of them will ever use any of the new ML techniques, because they are too brittle to work in real-world products or require many orders of magnitude more data than the startup will ever have.

- Big companies want to brand themselves as "thought-leaders" in AI to drive up their share prices, hire better talent, improve their public image, convince investors, etc.

- The general media has no idea what they are talking about and wants to generate clicks. Same as always.

Put all this together and you get the current AI hype cycle. We've seen this happen with lots of other technologies in the past; what's kind of new this time is the entrance of academia into the cycle. When I first started in (ML) academia I was under the naive impression that I would be doing cold, hard science - I was so wrong. Everyone is optimizing for their own objective (grants, salary, publications, etc., see above), which makes most of the published research completely useless or simply wrong. One of the, sometimes unspoken, criteria for choosing ML projects in many of these labs is "how much PR will this create". This useless "research" is then treated as if it were a proven method and picked up by startups to convince clueless investors or customers with "look at this latest paper, it's amazing, we will monetize this, we're at the forefront of AI!", or by the general media to create more hype and drive clicks.

One important point that the blog post makes that is always overlooked is this:

> Now what this diagram does not show, is the amount of money which went into AI in corresponding time periods.

With all the hype over the last few years, just think about how many billions of dollars and how many tens of thousands of some of the smartest people on this planet got into the field, often to make a quick buck. With that many resources invested, would you expect there to be no progress? Obviously there will be some, but most of it is smoke and mirrors. People think that the progress comes from new AI techniques (neural nets), but in reality, if you took the same people and money and forced them to make progress on the same problems using some other technique, let's say probabilistic models or even rule-based or symbolic systems, they would have done just as well, if not better.


The author uses these ([1][2]) diagrams to argue that more compute has diminishing returns. But the 'diminishing returns' are on the accuracy of correctly picking the single right category for a photo out of one thousand. Photos may simply not carry enough information to meaningfully distinguish between categories at that level of accuracy; existing models already exceeded humans' ability at top-5 accuracy in 2015 [3]. It wouldn't be surprising if SOTA models exceeded humans at top-1 already.
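
(For anyone unfamiliar with the metrics: top-1 counts a prediction as correct only when the true label gets the single highest score, while top-5 counts it as correct when the true label is anywhere among the five highest-scored classes. A toy sketch, with made-up scores:)

    /* Returns 1 if the true label is among the k highest-scored classes. */
    int in_top_k(const double *scores, int n_classes, int true_label, int k) {
        int higher = 0;   /* classes scored strictly above the true label */
        for (int c = 0; c < n_classes; c++)
            if (scores[c] > scores[true_label])
                higher++;
        return higher < k;
    }

    /* Example: scores = {0.10, 0.40, 0.05, 0.30, 0.15}, true label = 3.
       in_top_k(scores, 5, 3, 1) == 0 (not top-1), in_top_k(scores, 5, 3, 3) == 1. */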

It's possible that the human baselines were bored and so performed sub-optimally when picking between the 1K classes. But the argument has now become a subtler one, much less clear cut.

As an example of categories that may be difficult to distinguish between, do you feel confident that you can reliably distinguish between the Norwich terrier [4] and the Norfolk terrier [5]? These are two separate categories in ImageNet1k.

[1] https://i0.wp.com/blog.piekniewski.info/wp-content/uploads/2...

The first diagram shows exponential growth in the compute usage of state of the art deep learning architectures.

[2] https://i1.wp.com/blog.piekniewski.info/wp-content/uploads/2...

The second diagram shows diminishing returns on ImageNet1k top-1 accuracy from doubling the size of ResNeXt.

[3] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.725...

[4] https://www.google.com/search?tbm=isch&as_q=norwich+terrier&...

[5] https://www.google.com/search?tbm=isch&as_q=norfolk+terrier&...


I can easily learn to distinguish between a Norwich terrier and a Norfolk terrier.

1. Google "difference between norfolk and norwich terrier".

2. Click first link: https://www.terrificpets.com/articles/10290165.asp.

3. "The Norwich terrier has prick ears, or ears that stand up, seemingly at alert, while the Norfolk has drop ears, or ears that seem to be folded over".

SOTA models are merely doing black-box pattern matching on who-knows-what, and are highly likely to fail dramatically outside of the training dataset confines.


Mehhhh. BERT is mind-blowing, and Waymo has started fully driverless service.




