
Hey Ben, are these going to support arbitrary CUDA?



Also, does Kaggle have any IP rights to code written in a kernel?


Code made public on Kaggle Kernels is currently required to be under an Apache 2.0 license.

Beyond that, for private work the non-legalese TL;DR is no, at least not beyond what's required for us to operate the service. I'll refer you to https://www.kaggle.com/terms, and copy one relevant section below. If your work is in the context of making submissions to a specific machine learning competition, then that competition may have bespoke exceptions to this as well (which would be detailed in the rules of the competition).

"For all User Submissions, you grant Kaggle a license to translate, modify (for technical purposes, for example making sure your content is viewable on an iPhone as well as a computer) and reproduce and otherwise act with respect to such User Submissions, in each case to enable us to operate the Services, as described in more detail below. You acknowledge and agree that Kaggle, in performing the required technical steps to provide the Services to our users (including you), may need to make changes to your User Submissions to conform and adapt those User Submissions to the technical requirements of communication networks, devices, services, or media, and the licenses you grant under these Terms include the rights to do so. You also agree that all of the licenses you grant under these Terms are royalty-free, perpetual, irrevocable, and worldwide. These are licenses only — your ownership in User Submissions is not affected."


At the moment, we're focused on providing great support for the Python and R analytics/machine learning ecosystems. We'll likely expand this in the future, and in the meantime it's possible to hack through many other use cases we don't formally support well.


How do you handle custom environment requirements, whether it’s Python version, library version, or more complex things in the environment that some code might run on?

Basically, suppose I wanted everything that I could define in a Docker container to be available “as the environment” in which the notebook is running. How do I do that?

I ask because I’ve started to see an alarming proliferation of “notebook as a service” platforms that don’t offer that type of full environment spec, if they offer any configuration of the run time environment at all.

I’ve taught probability and data science at the university level and worked in machine learning at a variety of businesses, and I’d say for literally all use cases, from the quickest little pure-pedagogy prototype of a canned Keras model to a heavily customized setup with custom-compiled TensorFlow and different data assets for testing vs. ad hoc exploration vs. deployment, the absolute minimum needed before anything can be said to offer “reproducibility” is a complete specification of the run-time environment and artifacts.

The trend of convincing people that a little “poke around with scripts in a managed environment” offering is value-additive is dangerous, very similar to MATLAB’s approach of entwining all data exploration with the atrocious development habits facilitated by the console environment (and of specifically targeting university students with free licenses, a drug-dealer model that gets engineers hooked on MATLAB’s workflow and then uses that to pressure employers into buying and standardizing on abjectly bad MATLAB products).

Any time I meet young data scientists I try to encourage them to avoid junk like that. It’s vital to begin experiments with fully reproducible artifacts like thick archive files or containers, to structure code into meaningful reproducible units even for your first ad hoc explorations, and to always avoid linear scripting as an exploratory technique (it is terrible and ineffective for that task).

Kaggle Kernels seems like a cool idea, so long as the programmer must fully define artifacts that describe the entirety of the run-time environment, and nobody is sold on the Kool-Aid of just linear scripting in some other managed environment.

Each kernel, for example, could have a link back to a GitHub repo containing a Dockerfile and build scripts that define the precise environment the notebook runs in. Now that’s reproducible.
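As a rough illustration of one piece of that idea (this is not existing Kaggle functionality, and the file name is made up), a kernel could snapshot the exact package versions it ran against so the list can be committed alongside that Dockerfile:

    # Hypothetical sketch: record the exact package versions a notebook ran
    # against, to be committed next to the Dockerfile that defines the image.
    from importlib import metadata

    def freeze_environment(path="kernel-requirements.txt"):
        """Write a pip-style 'name==version' line for every installed distribution."""
        lines = sorted(
            f"{dist.metadata['Name']}=={dist.version}"
            for dist in metadata.distributions()
            if dist.metadata["Name"]  # skip distributions with broken metadata
        )
        with open(path, "w") as fh:
            fh.write("\n".join(lines) + "\n")
        return path

    freeze_environment()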


Here are the Kaggle Kernels Dockerfiles:

- Python: https://github.com/Kaggle/docker-python/blob/master/Dockerfi...

- R: https://github.com/Kaggle/docker-rstats/blob/master/Dockerfi...

https://mybinder.org builds containers (and launches free cloud instances) on demand with repo2docker from a repo URL pinned to a commit hash, branch, or tag: https://repo2docker.readthedocs.io/en/latest/config_files.ht...
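For example (a sketch only; the repo URL and commit are placeholders, and this assumes repo2docker is pip-installed and Docker is running locally), the same build can be reproduced on your own machine:

    import subprocess

    repo = "https://github.com/example/my-analysis"  # hypothetical repo with repo2docker config files
    ref = "a1b2c3d"  # pin to a specific commit for reproducibility

    # Build the image from the repo's declared environment without launching a notebook server.
    subprocess.run(["jupyter-repo2docker", "--ref", ref, "--no-run", repo], check=True)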


That’s a great first step! Adding the ability to customize on a per-notebook basis would be impressive.


Regarding "thick archive files or containers" for reproducibility: I'm curious what (at least in your view) the solution to reproducibility looked like prior to easily shareable containers like Docker? (I'm also not sure what a "thick archive" would be.)

For a brief window of time, I was aware of colleagues distributing Ubuntu VirtualBox VMs to provide complex software environments to students, which sounded like it mostly worked. Not sure whether the same approach was also used to package up reproducible research.


Before containers and even widespread VMs, “thick archives” basically just meant a tar file that contained all of the build tooling in addition to the project code.

So you might create an archive with a whole compiler toolchain and shell scripts / makefiles to invoke it locally on the host machine.

Usually a project would have a build system that auto-generated these archives for any combination of platform / compiler options targeted for support. So you’d choose the macOS archive if you use a Mac (maybe further separated by architecture/precision and which compiler, etc.).

It leads to a Chinese-menu multiplicity problem: archive files for X platforms times Y precisions times Z compilers, etc. (especially painful for embedded devices).

It’s a reasonable way to ship the entire set of build artifacts, though.
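A rough sketch of what generating those archives might look like (the paths, directory names, and platform tags here are purely illustrative):

    import tarfile
    from pathlib import Path

    def build_thick_archive(project_dir, platform_tag, out_dir="dist"):
        """Bundle source, build scripts, and toolchain config into one archive."""
        project = Path(project_dir).resolve()
        Path(out_dir).mkdir(exist_ok=True)
        archive_path = Path(out_dir) / f"{project.name}-{platform_tag}.tar.gz"
        with tarfile.open(archive_path, "w:gz") as tar:
            # Only files/directories that actually exist get added.
            for part in ("src", "scripts", "Makefile", f"toolchain-{platform_tag}"):
                path = project / part
                if path.exists():
                    tar.add(path, arcname=f"{project.name}/{part}")
        return archive_path

    # One archive per supported platform: the multiplicity problem described above.
    for tag in ("macos-x86_64", "linux-x86_64", "windows-x86_64"):
        build_thick_archive(".", tag)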

VMs are a perfectly good way to distribute reproducible research, though I think containers are currently the best option because of the usability of most container tooling (standard recipes and build experience, managed container registries, etc.).

In principle you could build convenience APIs around thick archives or VMs too, it just seems less common for whatever reason.


>avoid linear scripting as an exploratory technique

What do you recommend instead for exploratory {data analysis? science?}


The same thing you do for other types of development. Place separate units of logic into well-modularized functions / classes / units of organization; factor out any configuration; add a makefile or other build scripts.

An experiment would most often be the creation or modification of a config file followed by just invoking a build command.

No cell-by-cell evaluation, no commenting things out to run differently, no magic constants or big sequences of plotting code sprinkled all over.

The program that explores the data or fits a model might itself be imperative, but that doesn’t mean it should live in a single large functional unit that gets modified by commenting things out, re-running a notebook cell with different parameters, etc.

While there is obviously a trade-off in how much design effort to put into an experiment, most people put in essentially none, nowhere near the boundary where the trade-off starts to matter. Basic things like organizing separate functions and putting constants into a simple config file cost almost nothing but drastically improve usability and clarity, so those efforts are almost always worth it from the very start of a project.
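As a minimal sketch of that structure (the file names and config fields here are made up, not a prescription):

    import json
    from pathlib import Path

    def load_config(path="experiment.json"):
        """All tunable constants live in one small config file, not in the code."""
        return json.loads(Path(path).read_text())

    def load_data(cfg):
        """Read the dataset named in the config; placeholder implementation."""
        return Path(cfg["data_path"]).read_text().splitlines()

    def fit_model(rows, cfg):
        """Stand-in for the actual model fit; returns a trivial summary."""
        return {"n_rows": len(rows), "learning_rate": cfg["learning_rate"]}

    def main():
        cfg = load_config()
        result = fit_model(load_data(cfg), cfg)
        print(result)

    if __name__ == "__main__":
        main()

Changing an experiment then means editing experiment.json and re-running the entry point, rather than editing cells in place.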



