I am worried about the recent trend of "ethical AI", "interpretable models", etc. IMO it attracts people who can't come up with SOTA advances on real problems; it's their "easier, vague target" to hit and finish their PhDs while getting published in top journals. Those same people will likely at some point call for a strict regulation of AI using their underwhelming models to keep their advantage, faking results of their interpretable models, then acting as arbiters and judges of the work of others, preventing future advancements of the field.



> IMO it attracts people who can't come up with SOTA advances on real problems; it's their "easier, vague target" to hit and finish their PhDs while getting published in top journals.

I'm also pretty wary of interpretability/explainability research in AI. Work on robustness and safety tends to be a bit better (those communities at least mathematically characterize their goals and contributions, and propose reasonable benchmarks).
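
For instance (my paraphrase of the standard setup, not a claim about any specific paper), adversarial robustness work usually states its goal as an explicit min-max objective, which pins down both the loss and the threat model:

    \min_\theta \; \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[ \max_{\|\delta\|\le\epsilon} \mathcal{L}\big(f_\theta(x+\delta),\, y\big) \Big]

i.e., minimize the expected worst-case loss over all perturbations within an epsilon-ball around each input. You can argue about whether that's the right threat model, but at least it's a precise, falsifiable target.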

But I'm also skeptical of a lot of modern deep learning research in general.

In particular, your critique goes both directions.

If I had a penny for every dissertation in the past few years that boiled down to "I built an absurdly over-fit/wrongly-fit model in domain D and claimed it beats SoTA in that domain. Unfortunately, I never took a course about D and ignored or wildly misused that domain's competitions/benchmarks. No one in that community took my amazing work seriously, so I submitted to NeurIPS/AAAI/ICML/IJCAI/... instead. On the Nth resubmission I got some reviewers who don't know anything about D but lose their minds over anything with the word deep (conv, residual, variational, adversarial, ... depending on the year) in the title. So, now I have a PhD in 'AI for D' but everyone doing research in D rolls their eyes at my work."

> Those same people will likely at some point call for a strict regulation of AI...

The most effectual calls for regulation of the software industry will not come from technologists. The call will come from politicians in the vein of, e.g., Josh Hawley or Elizabeth Warren. Those politicians have very specific goals and motivations which do not align with those of researchers doing interpretability/explainability research. If the tech industry is regulated, it's extremely unlikely that those regulations will be based upon proposals from STEM PhDs. At least in the USA.

> faking results of their interpretable models

Going from "this work is probably not valuable" to "this entire research community is a bunch of fraudsters" is a pretty big leap. Do you have any evidence of this happening?


> If I had a penny for every dissertation in the past few years that boiled down to...

This is very, very accurate. On the other hand, I oftentimes see field-specific papers from field experts with little ML experience using very basic and unnecessary ML techniques, which are then blown out of the water when serious DL researchers give the problem a shot.

One field that comes to mind where I have really noticed this problem is genomics.


> Those same people will likely at some point call for a strict regulation of AI using their underwhelming models to keep their advantage

While this is probably true (and IMO, possibly the right choice), this:

> faking results of their interpretable models

is taking a huge leap.

Why are they more likely to fake their results than companies selling a black box model?


Leaving interpretability and conspiracy theories about softy PhDs aside for a bit, "SOTA advances" are not progress. E.g., in the seven years since AlexNet, results have kept creeping upwards by tiny fractions on the same old datasets (or training times and costs have gone down), all of it achieved by slight tweaks to the basic CNN architecture, perhaps better training techniques, or, of course, more compute. But there has not been any substantial progress in fundamental algorithmic techniques: no new alternative to backpropagation, no radically new architectures that go beyond convolutional filters.

But I'll let Geoff Hinton himself explain why the reliance on state-of-the-art results is effectively hamstringing progress in the field:

GH: One big challenge the community faces is that if you want to get a paper published in machine learning now it's got to have a table in it, with all these different data sets across the top, and all these different methods along the side, and your method has to look like the best one. If it doesn’t look like that, it’s hard to get published. I don't think that's encouraging people to think about radically new ideas.

Now if you send in a paper that has a radically new idea, there's no chance in hell it will get accepted, because it's going to get some junior reviewer who doesn't understand it. Or it’s going to get a senior reviewer who's trying to review too many papers and doesn't understand it first time round and assumes it must be nonsense. Anything that makes the brain hurt is not going to get accepted. And I think that's really bad.

What we should be going for, particularly in the basic science conferences, is radically new ideas. Because we know a radically new idea in the long run is going to be much more influential than a tiny improvement. That's I think the main downside of the fact that we've got this inversion now, where you've got a few senior guys and a gazillion young guys.

https://www.wired.com/story/googles-ai-guru-computers-think-...


> no new alternative to backpropagation, no radically new architectures that go beyond convolutional filters.

Attention
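
For anyone who hasn't run into it, here's a rough sketch (plain numpy, not any particular library's implementation) of the scaled dot-product attention op that Transformers are built from; the point is that every position is mixed with every other position according to content similarity, rather than through a fixed local convolutional filter:

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Q, K, V: (seq_len, d) arrays. Each output row is a weighted
        # average of the rows of V, with weights from a softmax over
        # query-key similarities -- global, content-dependent mixing
        # instead of a fixed local filter.
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                    # (seq_len, seq_len)
        scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
        return weights @ V                               # (seq_len, d)

    x = np.random.default_rng(0).standard_normal((5, 8))    # toy "sequence"
    print(scaled_dot_product_attention(x, x, x).shape)       # (5, 8)

Whether that counts as "radically new" or as another incremental tweak is exactly the argument upthread.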

My point wasn't about a lack of investment in (or propagation of) fundamental research that isn't trendy; it was about the hijacking of what should be science by "softy PhDs" who found a niche in less demanding areas and will likely impose their will over the ones doing hard science rather than politics, like how CoCs were recently used by some fringe non-technical groups to take control over open source/free software licenses. It's a pattern that has repeated across industry and academia: the ones who move the field forward are often displaced by their "soft-skilled" and less capable peers.


In your view, it's better to answer the wrong questions optimally than the right questions suboptimally? That's not the kind of AI I want putting people in jail or denying them housing or healthcare or shooting missiles at them.


There are two parts to my opinion:

1) I would prefer something that works most of the time (a black box) to something interpretable that doesn't.

2) If my DL model converged to, e.g., solving some complex partial differential equation, how does it help me that I could "interpret" that, if 99.9999% of the human population has no clue what is going on regardless? "Your loan was rejected because the solution to this obscure PDE with these initial conditions said so; these Nobel-prize-winning economists created this model with these simplifications, and we deem it good enough to keep you where you are."



