The tensorflow and keras R packages are interesting and have near parity with the Python APIs (since the packages translate the commands to Python), but using them requires a more functional style than typical R/tidyverse workflows. And hopefully you don't hit a bug, as the extra layer makes debugging more difficult. (In contrast, sparklyr, an R interface to Spark, fits well into a dplyr ETL, although that's more due to the nature of Spark DataFrames.)
The tricks used in the explainer slides don't take much advantage of the R ecosystem, unfortunately. (And I say that as someone heavily invested in the R ecosystem who still switches to Python for anything deep learning.)
If you are referring to the act of mixing "rmarkdown, keras and kerasjs" to run deep learning models inside a slide as a hack, that would make much more sense.
Could you please clarify?
What interests me more is that one can pull this demo together fairly easily using the new tools that have just been made available in R. Those tools are far more interesting than the post itself.
Regarding 'which is compounded by the nature of the former': the argument in the comment seems to be that adding a layer over something makes things worse, but that is not necessarily the case. Abstraction layers are among the most powerful and reliable tools in software engineering. For instance, I would much rather write a layer of C++ code that wraps assembly code. Others can choose to create layers over C++, and so on; it's all good.
Worth mentioning that this model uses a simple feedforward network with a few dense layers trained on the MNIST dataset; as a result, its digit classification is not very accurate.
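For context, the forward pass of such a feedforward network is just a couple of matrix multiplications with nonlinearities. A minimal NumPy sketch (the 784 → 128 → 10 layer sizes are illustrative assumptions, not the demo's actual architecture, and the weights are random rather than trained):

```python
import numpy as np

def relu(x):
    # Elementwise rectified linear unit
    return np.maximum(0.0, x)

def softmax(x):
    # Numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Illustrative (untrained) weights for a 784 -> 128 -> 10 dense network
W1 = rng.standard_normal((784, 128)) * 0.01
b1 = np.zeros(128)
W2 = rng.standard_normal((128, 10)) * 0.01
b2 = np.zeros(10)

def predict(pixels):
    """pixels: flattened 28x28 image, values in [0, 1]."""
    h = relu(pixels @ W1 + b1)       # hidden dense layer
    return softmax(h @ W2 + b2)      # probabilities over digits 0-9

probs = predict(rng.random(784))
print(probs.shape)  # (10,)
```

With random weights like these the output is close to uniform over the ten digits; a trained model replaces `W1`, `b1`, `W2`, `b2` with learned values, and the accuracy of those learned values is what the in-slide demo is limited by.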
Chrome: Stayed smooth but did not recognise a single digit correctly.
Not sure what causes either of these problems :(