This makes as much sense as using video of a developer's posture at work, paired with their code output, as a training set, and then wondering why the model doesn't generate useful apps when fed a live posture cam.
Am I missing something?
As a ukulelist, when I sit in with (say) a guitarist, I do a sequence-to-sequence translation of the chords I see my friend playing to chords I can play. If you know your instrument well, you know the sound that goes with certain visual configurations. So you can think of it as sequence-to-sequence translation with a very non-traditional input.
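To make the analogy concrete, here's a toy sketch (Python, with a chord table I invented for the example -- don't take the voicings seriously) of the degenerate, lookup-table version of that mapping. A learned seq2seq model would replace the table with an encoder-decoder, but the input/output structure is the same:

    # Toy illustration only: the chord table is invented for this example.
    GUITAR_TO_UKE = {
        "G": "G",        # same chord name, different fingering
        "Cadd9": "C",    # simplify extensions that don't fit four strings
        "D/F#": "D",     # drop the bass note; a uke has no low register
    }

    def translate(chords):
        """Map a guitar chord sequence to a playable ukulele sequence."""
        return [GUITAR_TO_UKE.get(c, c) for c in chords]  # unknowns pass through

    print(translate(["G", "Cadd9", "D/F#"]))  # -> ['G', 'C', 'D']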
NVIDIA GPUs are the primary platform on which deep learning training and inference are done. Not only that, NVIDIA is making that the core of what they do.
There was a whole chain of separate departments dealing with proletarian
literature, music, drama, and entertainment generally. Here were produced
rubbishy newspapers containing almost nothing except sport, crime and
astrology, sensational five-cent novelettes, films oozing with sex, and
sentimental songs which were composed entirely by mechanical means on a special
kind of kaleidoscope known as a versificator.
-- George Orwell, 1984
Edit: science requires reproducibility. Opening the code literally enables reproduction. When researchers refuse to do that, they refuse to be scientists. I find this especially irksome when public funds are funding the researchers.
If their paper is properly written, there should be enough information in it to recreate what they've done. If there isn't, and they refuse to provide the missing information when asked, then you have every right to complain.
There is a really interesting case study in the Collins Parser. Starting around 1999, Michael Collins published some really exciting results on parsing (recovering the grammatical structure of natural language). However, people could not replicate them--a "clean room" implementation of Collins' models didn't work nearly as well as the paper claimed it should.
Dan Bikel identified a number of apparently trivial implementation decisions that, taken together, accounted for the improved performance of Collins' models. There's a nice tech report describing this process here: http://repository.upenn.edu/cgi/viewcontent.cgi?article=1026...
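To give a flavor of how a decision that small shows up in measured results, here's a toy sketch in Python--pure illustration, nothing to do with the actual parser internals--where two taggers differ only in whether words are lowercased before lookup, yet tag the same sentence differently:

    # Toy illustration: two "implementations of the same model" that differ
    # only in one apparently trivial preprocessing decision.
    from collections import Counter, defaultdict

    # Tiny invented training set of (word, part-of-speech tag) pairs.
    TRAIN = [("The", "DT"), ("dog", "NN"), ("barks", "VBZ")]

    def build_tagger(normalize):
        counts = defaultdict(Counter)
        for word, tag in TRAIN:
            key = word.lower() if normalize else word
            counts[key][tag] += 1
        def tag(word):
            key = word.lower() if normalize else word
            # Unseen words fall back to "NN" -- itself another arbitrary choice.
            return counts[key].most_common(1)[0][0] if key in counts else "NN"
        return tag

    exact = build_tagger(normalize=False)
    lower = build_tagger(normalize=True)

    sentence = ["the", "dog", "barks"]
    print([exact(w) for w in sentence])  # ['NN', 'NN', 'VBZ'] -- "the" is missed
    print([lower(w) for w in sentence])  # ['DT', 'NN', 'VBZ']

Neither choice is obviously wrong, and neither would necessarily make it into the paper, but they give different numbers on the same input.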
This is true even for theoretical papers with mathematical proofs and the like. I don't think this is a very controversial opinion. I've often seen reports in the (mostly popular) press starting with "such and such team claims to have solved such-and-such longstanding problem in a new paper released..." etc.
In the popular press we tend to see claims made in papers reported as absolute fact: "These Danish boffins trained a deep managerial neural pixie network to recognise the sex of starfish" etc. The point that the paper reports the team's own results as the team understands them, and that other teams may interpret the same results differently, is often lost in translation.
The best outlets often include a few opinions from researchers not involved in the work; I tend to trust those more.
It's currently a bit fuzzy what the open data requirements would be, so before enforcing them they're trying to pin things down with regard to subject/patient confidentiality, industrial agreements, what to do about massive datasets (TB and up), and so on.
There is hope!
Yet refusing to open the code is not the same as refusing to enable reproduction.