Really nice! When I was in college we had all sorts of online systems that automatically graded assignments based on how close your supplied answer was to the correct one. I think it'd be nice if in the future there was more overlap between science classes and programming classes (almost like a freshman/junior-level scientific computation class) where, instead of approaching problems from a purely theoretical perspective, we involved these kinds of computational approaches in parallel. Personally I have found that while I often know how to perform various theoretical computations, it's faster for me to just throw together a quick script to approximate the result (e.g. what's the expected value of the product of two Gaussians?)
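For instance, a quick Monte Carlo sketch of that last example (assuming the two Gaussians are independent, so the expected value of the product is just the product of the means; the means and standard deviations here are made up):

```python
import numpy as np

# Estimate E[X*Y] for two independent Gaussians by sampling.
# For independent X and Y, E[X*Y] = E[X] * E[Y], so with
# means 2 and 3 the estimate should land near 6.
rng = np.random.default_rng(0)
n = 1_000_000
x = rng.normal(loc=2.0, scale=1.0, size=n)
y = rng.normal(loc=3.0, scale=1.0, size=n)
estimate = np.mean(x * y)
print(estimate)  # close to 6.0
```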
I agree 100%. It seems that almost any real-world science problem these days requires a computational solution/approach at some level. Taking a computational approach in parallel would be super useful.
This was sort of how physics classes at my alma mater worked. Especially at the upper-division level, it was pretty much expected that you would be submitting a Mathematica notebook with all your homework/tests/labs.
It is already mentioned in Project Lovelace's About page, but people who like this and Project Euler may also like Rosalind, which is a programming problem site focused on bioinformatics and adjacent algorithms.
I like the eclectic selection of problems. I wish I had seen something like this when I was growing up; a manageable set of problems with the hint of depth.
It’s easy to snipe a nerd, but nerd sniping a non-nerd... that’s gold.
FWIW:
- No margins on the side on iPhone in portrait.
- Math formulas are clipped on the top in landscape.
I don't use Go so it might take a while unless someone decides to swoop in and add support for it!
Supporting new languages takes some effort since we need to be able to run arbitrary Go code and communicate with Python. Definitely possible but requires some familiarity with both.
So far we've just been adding languages we know and use. Hoping to learn Rust soon so that might be the next language!
One of the big reasons Project Euler is so brilliant is that every problem is formulated so that you can do it in any language. (I've introduced myself to about six languages this way, none of which you support yet.) What drove the decision to have an allowlist of languages? If you're concerned about people publishing the one true number that is the answer to problem 11, you could always generate random test cases and ask the user to supply the answers to those test cases.
I think the reason we did it is so that we could visualize user output/solutions. You can learn new things by looking at your solutions to various test cases!
One example would be if you submit a solution (or just the code stub) to the Exponential growth problem, then your solution gets plotted and compared with the analytic solution and the correct solution: https://projectlovelace.net/problems/exponential-growth/
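As a rough sketch of that kind of comparison (the actual problem spec is on the site; this just assumes simple exponential growth dP/dt = k*P integrated with a forward-Euler step and compared against the analytic solution):

```python
import math

# Forward-Euler integration of dP/dt = k*P, compared against
# the analytic solution P(t) = P0 * exp(k*t).
def euler_growth(p0, k, t_end, n_steps):
    dt = t_end / n_steps
    p = p0
    for _ in range(n_steps):
        p += dt * k * p  # one Euler step
    return p

numeric = euler_growth(p0=1.0, k=1.0, t_end=1.0, n_steps=10_000)
analytic = math.exp(1.0)
print(numeric, analytic)  # numeric approaches e as the step count grows
```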
Also, unlike Project Euler where every solution is a number, a lot of scientific problems have solutions that are multi-dimensional arrays or multiple objects. In this case, copy-pasting your output becomes quite messy.
It's not trivial to add support for new languages, since the engine app that runs user-submitted code needs to be able to run arbitrary code in your chosen language and needs a way to communicate with Python (either directly or through JSON passing).
So far we've just been adding support for languages we know and use.
But indeed there are hundreds of languages out there that would be nice to support in some way. It might be neat to add a new submission mode where you're given a couple of test cases that you run yourself, then submit your solution manually.
Oh that's a neat idea. I should check out Advent of Code. Summing would probably work for a lot of problems, but some problems mix strings with numbers in the output.
The downside with Project Euler, though, is that you need to run your code elsewhere. That's why something like LeetCode is so nice (I think, at least): you can run the code right there in the browser. If you take this approach, you're forced to whitelist languages.
Could you add Fortran as a programming language? It is one of the main languages for "science and math flavored programming problems". Gfortran is a free compiler that is part of gcc.
That is true. It's not super easy to add new languages and I'm not familiar with Fortran (thankfully our group switched from Fortran to Julia recently!) but we already support C since it's easy to call C from Python. Perhaps Fortran support won't be super hard either.
Really nice! I noticed however that the verification email that gets sent to you after you register has the name/address of 'webmaster@local' rather than <username>@projectlovelace.net.
Speaking of registration, it would be nice if we could log in using our Google accounts (i.e. Sign in with <service>), rather than making another account on your site.
Also, it would be nice if the site picked up your preferred language of choice after you've done at least one of the problems. I like coding in JavaScript, but I have to select it from the dropdown list each time I open a problem, and sometimes I forget to do so (defaulting to Python instead).
Suggestion: allow entering the registered email ID in the place of username, when logging in. I was repeatedly trying to login and even reset the password, assuming the Username field was "Username or Email ID" like it is in many places now. This is certainly a bit of PEBKAC, but it would be a nice usability improvement to allow what's now become a common pattern and make either username or email ID work in that part of the login form.
We thought it would be tedious to write tons of code to make sure users don't take the easy way out for each problem and language, so we figured we might as well allow it. We can't force users to solve the problem our way.
This is fantastic, I actually independently gave my CS101 (for engineers) students some of the same questions last semester (temperature, definite integrals, game of life). Can't wait to try some of these out!
Love this! Always been a fan of kata-type websites like [0], but they all become uninteresting after a while (new katas get bland). This submission has good fresh energy!
I got completely obsessed with the code golf part at codewars, at some point I just decided enough is enough and stopped playing with. A lot of fun though.
It is neat. The inline math formulas are not displaying right (unless raw LaTeX is right...). Compared to Euler, I think perhaps the early problems are a little too easy...
These problems look like a lot of fun! Unfortunately, I'm currently learning Rust and would prefer to use it to solve these problems.
It would be nice to support uploading a binary or a solution (like Project Euler does), or a CSV of test cases next to solutions. Maybe I'll try compiling Rust to C and uploading the C file.
I'm excited to learn Rust actually so we might support it soon!
It's not trivial to add support for new languages, since the engine app that runs user-submitted code needs to be able to run arbitrary code in your chosen language and needs a way to communicate with Python (either directly or through JSON passing).
So far we've just been adding support for languages we know and use.
But indeed there are hundreds of languages out there that would be nice to support in some way. I'm gonna think about how we can add a new "submission mode" where you're given a couple of test cases that you run manually with any language then submit your solution.
That's true. I am most interested in testing if my code works though. I don't see a 'cheat' option that gives you the correct answer I could check against.
Ah, so we run all user-submitted code in Docker containers. The "engine" that runs the code is written in Python and we do different things for different languages.
For running Javascript and Julia, it goes something like Python objects -> JSON -> read JSON in Javascript/Julia -> run code -> output JSON -> read user output from JSON in Python.
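A minimal sketch of that round trip, using a Python subprocess as a stand-in for the Julia/JavaScript worker (the real engine presumably differs in the details):

```python
import json
import subprocess
import sys

# Hand test-case inputs to a worker over stdin as JSON and read
# its JSON result back from stdout. Here the "worker" is a Python
# one-liner standing in for a Julia or JavaScript process.
worker = (
    "import json, sys; "
    "data = json.load(sys.stdin); "
    "print(json.dumps({'answer': sum(data['inputs'])}))"
)
proc = subprocess.run(
    [sys.executable, "-c", worker],
    input=json.dumps({"inputs": [1, 2, 3]}),
    capture_output=True,
    text=True,
)
result = json.loads(proc.stdout)
print(result)  # {'answer': 6}
```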
For C, we can call C functions directly from Python with some code for dealing with different types.
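For the C side, something along the lines of ctypes handles the type bookkeeping; this loads libm rather than the site's actual solution code, purely to illustrate the pattern:

```python
import ctypes
import ctypes.util

# Call a C function directly from Python. ctypes needs the
# argument and return types declared explicitly, since the shared
# library carries no type information at runtime.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double
print(libm.sqrt(2.0))  # 1.4142135623730951
```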
Not sure if this is the best approach (it's not super fast) but we've been learning as we go. We might be due for a refactor in case the next language we want to support doesn't fit into this pattern. I'm personally excited to learn Rust and maybe add support for it.
From a quick glance it looks quite worrying, with many red flags. I didn't look too carefully, so some of this might be wrong, or I may have missed where it's done.
* Results from the untrusted part inside the container are returned using pickle, which can be used to achieve arbitrary code execution outside the container.
* no time limiting
* no memory limiting
* Untrusted code is run as root in the container, which by default is the same user as root outside the container. From what I understand it isn't as bad as it was in earlier Docker versions, but still not great.
* untrusted code is run in the same process as the semi-trusted run_lang code, which means the untrusted code can, with a little bit of effort, manipulate the reported execution time and memory usage
* for some languages correct_output is copied into the untrusted execution environment, which means a solution could potentially just read the correct answers instead of calculating them
* none of the default capabilities are dropped, which is probably more than what a solution needs
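To make the pickle point concrete: unpickling can invoke an arbitrary callable via `__reduce__`, which is why loading untrusted pickles is equivalent to running untrusted code (a benign `eval` stands in for something like `os.system` here):

```python
import pickle

class Evil:
    # pickle calls __reduce__ to learn how to rebuild the object;
    # returning (callable, args) means that callable runs on load.
    def __reduce__(self):
        return (eval, ("6 * 7",))

payload = pickle.dumps(Evil())   # what a malicious container could return
result = pickle.loads(payload)   # the callable runs here, outside the sandbox
print(result)  # 42
```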
That's a good attitude to have but people are probably going to take over your host[s], vandalize your site and run up your bills long before you get a chance to do all the necessary learning. I don't think your current approach is actually all that easy to secure.
I don't have anything particularly concrete but I'd say find an open source implementation of something similar that has had a track record of running without too much incident and carefully copy its implementation, design and configuration.
Disable the ability to make submissions until you have a more solid plan.
Decide what your goal is. Do you want to make a judge system, do you want to create tasks, or do you want a platform with a specific kind of tasks?
Get in touch with people involved in ICPC and IOI contests in your country. Even if you are not interested in those kinds of algorithm tasks, there will be some people who are familiar with similar existing systems and could point you in the right direction.
Assuming your primary goal isn't to make a judge system itself, some other options are:
* Evaluate the existing online judge systems. There are some open source ones like DOMjudge (https://www.domjudge.org/), CMS (https://cms-dev.github.io/index.html), and others. Consider whether you can reuse or extend them to suit your desired format. In the worst case maybe just the execution part can be reused. At least learn from their experience and mistakes in creating and maintaining such systems.
* Many programming languages now have online REPL environments, some of them open source. This is one more source of projects that provide sandboxed execution.
* If you have some budget, there are platforms that provide sandboxed execution as a service, oriented at your exact use case. Some examples are Sphere Engine (https://sphere-engine.com/enterprise), used by Sphere Online Judge, and Kattis.
There are a lot more platforms with different styles of programming tasks than what you listed in your FAQ. Some of them are looking for problem setters. Maybe one of them fits your type of tasks better. Or it could be a one-off contest with a slightly unusual problem set. Or maybe it could be a separate category on their system, and you could advertise that category on your website.
Thanks for the suggestion. The thread is too deep to reply to your actual message, but I will look around to see how other "online judge" systems run arbitrary code securely.
There's probably some low hanging fruit in configuring Docker properly.
There should be a limit on how long the Docker container can run code for, but it might be unnecessarily long right now.
I'm not a web developer by trade or anything so I'll have to learn how to secure the Docker container from malicious code. Hopefully Docker provides some amount of protection for now...
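Some of that low-hanging fruit might look like the following (the flag names come from the Docker CLI; the limits, UID, image name, and entry point are made-up values to adapt):

```shell
# Run untrusted code with resource limits, no network, a read-only
# filesystem, an unprivileged user, and all capabilities dropped.
# `timeout` kills the run if the container outlives the limit.
timeout 15s docker run --rm \
  --network none \
  --memory 256m --memory-swap 256m \
  --cpus 1 \
  --pids-limit 64 \
  --read-only \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --user 1000:1000 \
  my-engine-image python /sandbox/run_submission.py
```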
For today it's going to be old school htop + F9 haha.
In the Python templates, what’s with the variable “initialisations”? E.g. t = 0 in the light-speed one. I’ve seen similar stuff in my eldest’s computing homework: is it an accepted Python idiom that I’ve missed somehow?
It looks like they've made the minimum amount of code that will execute (but not pass the tests). By using a variable instead of just hardcoding 0 into the return they give you a specific thing to assign to and as long as you leave that return t bit alone at the end your code will work.
Though, in that case, the solution is so trivial that t is completely unnecessary.
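For illustration, a stub for a hypothetical light-travel-time problem might look like this (the function names are invented; the real stub on the site will differ):

```python
SPEED_OF_LIGHT = 299_792_458  # m/s, exact by definition

# The stub as handed out: it runs and returns something,
# but fails every test case until t is actually computed.
def light_travel_time_stub(distance_m):
    t = 0
    return t

# Filling in the assignment (and leaving the return alone) fixes it.
def light_travel_time(distance_m):
    t = distance_m / SPEED_OF_LIGHT
    return t

print(light_travel_time(299_792_458))  # 1.0 second
```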
Yeah the other two replies explain why we used code stubs that would at least run but not pass.
Don't think it's an idiom (at least not one that I've seen).
It's definitely not the nicest code, especially for simpler problems that are really one-liners but we figured lots of people would just click "submit" on just the code stubs.
And I would like this to exist for a long time; I want it to be around when I teach programming to my little cousins, nephews, and nieces, and to future generations.
Off-topic: how do you English speakers pronounce 'Ada'? Like [a]ffirmative or {a}pex?
'[A]da Lovelace' sounds so nice and soothing to me. It's a really beautiful name. However, just 'Ada' doesn't do anything for me, but 'Julia' does the trick there. A programmer named Ada Lovelace and a programming language named Julia... those things make for really comforting echos in my mind, I don't know why. Like coming home chilled from winter's rainfall and bathing the brain in warm vanilla sauce.
I've always heard Ada pronounced like in Apex. I like the language and find many of the complaints about it to be misplaced. I haven't tried Julia but I think its application area is completely different. I would say Ada isn't a great choice for this Lovelace project. Maybe the project should have been called Robinson instead of Lovelace, after the mathematician Julia Robinson (the R in the MRDP theorem, among other things). Then the examples could have been programmed in Julia. Or maybe even in R ;-).
We never expected this much traffic and everything is hosted on a tiny DigitalOcean server including the code runner haha. If you visit a bit later it should be much faster.
I can think of some ways to speed things up which I will try.
It looks cool, thanks! I've started solving problems in Julia. Are there 27 problems in total?
Also, I cannot seem to submit a solution to the 'Compound interest' problem. I keep getting the following error: '...docker container is nonzero. Returning falcon HTTP 400.' Sent you an email with the details.
Nice to see another Julia user! Yup right now we only have 27 problems but there will hopefully be many more in the future. Some people might even contribute some new problems.
The code gets uploaded to the server, which sends it to an "engine" sitting in a Docker container. The engine generates test cases and sends them with the code to another Docker container where the code is run. Then the output from your code is sent back to the engine, which checks how many test cases you got correct before sending all the information back to your browser.