
SlopPy: An error-tolerant Python interpreter that facilitates sloppy programming - minthd
http://www.pgbovine.net/SlopPy.html
======
pgbovine
wow, this is a really old project that was a two-month-long hack culminating
in a workshop paper: [http://www.pgbovine.net/publications/SlopPy-error-
tolerant-P...](http://www.pgbovine.net/publications/SlopPy-error-tolerant-
Python-interpreter_WODA-2011.pdf)

i haven't maintained the code for ~5 years, so ymmv

------
naftaliharris
Interesting! The approach I take to the "script fails after running for an
hour" problem is to drop into a debugger on an exception. I use
[https://gist.github.com/naftaliharris/8284193](https://gist.github.com/naftaliharris/8284193)
for this but you can also use "python -m pdb myscript.py" or "python -i
myscript.py" and then "import pdb; pdb.pm()" when your script fails.
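
That hook-based approach can be sketched in a few lines (my own sketch of the idea, not the gist's exact code):

```python
import pdb
import sys
import traceback

def debug_on_exception(exc_type, exc_value, exc_tb):
    # Print the traceback as usual, then drop into a post-mortem
    # debugger at the frame where the exception was raised
    traceback.print_exception(exc_type, exc_value, exc_tb)
    pdb.post_mortem(exc_tb)

# Installed once; every uncaught exception now opens pdb instead of
# killing the process outright
sys.excepthook = debug_on_exception
```

From there you can inspect locals at the crash site with the usual pdb commands.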

~~~
sdenton4
My approach has always been to do checkpointing on long computations, so that
if something fails in step n, it can pick back up again from step n-1.

Of course, if your script has REAL problems, your data is going to be suspect
at step n-1 as well....
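
A minimal sketch of that checkpointing pattern, using pickle and an illustrative file name:

```python
import os
import pickle

CHECKPOINT = "state.pkl"  # hypothetical checkpoint path

def run(steps):
    # Resume from the last checkpoint if one exists
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            start, results = pickle.load(f)
    else:
        start, results = 0, []
    for n in range(start, len(steps)):
        results.append(steps[n]())
        # Persist progress after each step, so a crash at step n
        # restarts from step n rather than from scratch
        with open(CHECKPOINT, "wb") as f:
            pickle.dump((n + 1, results), f)
    return results
```

Here `steps` is a list of callables; a real pipeline would checkpoint whatever intermediate state each stage produces.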

------
minthd
The author has another few interesting tools:

IncPy: Automatic memoization for Python:
[http://www.pgbovine.net/incpy.html](http://www.pgbovine.net/incpy.html)

CDE: Automatically create portable Linux applications :
[http://www.pgbovine.net/cde.html](http://www.pgbovine.net/cde.html)

Burrito: Rethinking the Electronic Lab Notebook :
[http://www.pgbovine.net/burrito.html](http://www.pgbovine.net/burrito.html)
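
IncPy does its memoization inside the interpreter itself; as a rough hand-rolled approximation of the idea, a decorator can persist results to disk between runs (the cache path and JSON encoding here are my own assumptions, not IncPy's mechanism):

```python
import functools
import json
import os

def disk_memoize(path):
    """Cache a function's results across runs, roughly in the spirit
    of IncPy (which does this transparently in the interpreter)."""
    def decorator(fn):
        cache = {}
        if os.path.exists(path):
            with open(path) as f:
                cache = json.load(f)

        @functools.wraps(fn)
        def wrapper(*args):
            key = repr(args)
            if key not in cache:
                cache[key] = fn(*args)
                # Write-through so a later run can reuse the result
                with open(path, "w") as f:
                    json.dump(cache, f)
            return cache[key]
        return wrapper
    return decorator
```

This only handles JSON-serializable arguments and results; IncPy's whole point is that it also tracks code and data dependencies to know when a cached result is stale.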

~~~
sitkack
I'd like to see IncPy and SlopPy combined, so one could iteratively fix the
causes of the exceptions skipped over by SlopPy. This would also require some
incremental computation; see Acar et al.

~~~
pgbovine
yep great idea. one of my pipe dreams was to combine all of those systems into
one giant uber pipeline for data science ... but then i finished my Ph.D. and
moved on to other things :)

~~~
sitkack
You already planted the seed and provided the field.

------
rch
> If you don't want to go through this hassle (compiling from source), please
> email me at ... and I will try my best to compile a custom version for your
> computer and to guide you through the setup process.

That's wild. I wonder how frequently someone tries to take him up on the
offer.

~~~
pgbovine
oh wow i need to update that webpage ... haven't maintained this project in
years :)

------
readams
I'm not really sure that I see the advantage here. Rather than understand the
semantics of the N/A values and what effect this would have on your program,
why not just add a try block around the inner part of your loop? It's an extra
2 lines of code, and you'll get the same sort of yolo behavior you get with
the interpreter.

Perhaps this would be better represented as a simple library you can use to
catch the error, log it, and then continue with your loop.

And of course, this technique totally fails if you're trying to accumulate
anything as you process the data that might end up depending on these N/A
values: you'll run your whole program for many hours and at the end it will
print the final result: N/A
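
The two-line try block the parent describes might look like this (the `int()` call stands in for whatever per-record work the real script does):

```python
import logging

def process_lines(lines):
    results, bad = [], []
    for i, line in enumerate(lines):
        try:
            results.append(int(line))  # stand-in for real per-record work
        except ValueError:
            # Log and skip the bad record instead of aborting the run
            logging.warning("line %d is malformed: %r", i, line)
            bad.append(i)
    return results, bad
```

You keep the partial results plus a list of the malformed records, which is essentially what SlopPy gives you, but opt-in and per-loop.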

~~~
minthd
Let's say you run some application over a large set of data, a process that
takes hours. After a few hours of runtime, some data in the set turns out to be
malformed, so you have to start from scratch.

With this, you'll get your partial results, plus a list of all the places with
malformed data, which you can then fix in one pass.

The same goes for other cases where your application partially works and
there's value in that partial work.

~~~
readams
Not sure if you read my comment. You get the same behavior with a try block.
And that's even assuming you don't just get back gibberish, which would be the
common case. Note that in the example, if he hadn't pointlessly parsed apart
and then reassembled the IP address in his "simplest program", he would have
been outputting those #number# values instead of the IP addresses, without
ever catching the error.

~~~
emj
I have forgotten my fair share of try blocks on multi-hour parsing runs.

------
reinhardt
Related:
[https://github.com/ajalt/fuckitpy](https://github.com/ajalt/fuckitpy)

------
jlarocco
Visual Basic had this "feature" with the "On Error Resume Next" command. It
was universally hated among people who had to maintain VB code, and was a
surefire way to spot crappy code.

I don't even understand the point of this type of coding. If a program throws
an exception and crashes, it means my assumptions were wrong, and the code is
handling data it wasn't meant to handle, and needs to be fixed. If I got that
part wrong, it's very likely the code around that section is also wrong in
some way. On the other hand, if I don't care that the computation fails, why
did I have it there at all?

~~~
asveikau
I went into these comments looking for the reference to "on error resume
next", wondering why it took a while. I guess a bunch of the younger crowd
here won't remember it.

It's worth mentioning that shell scripts also tend to keep chugging along when
they encounter errors, and writing proper error checks in a shell script can
be tedious and troublesome. I know I have been bitten by this.

~~~
bashinator
`set -e` is your friend, as is `set -o pipefail`.
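
For the record, a bash sketch of those fail-fast settings (the file names here are throwaway):

```shell
#!/usr/bin/env bash
# Abort on the first failing command instead of silently chugging along
set -e
# In bash, also fail a pipeline when any stage fails, not just the last
set -o pipefail

printf 'beta\nalpha\n' > input.txt   # throwaway input for the demo
grep alpha input.txt | sort > out.txt
echo "only reached if every command above succeeded"
```

Without `pipefail`, a pipeline's exit status is that of its last command, so an earlier failing stage goes unnoticed even under `set -e`.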

------
ceronman
This is a very interesting project. It makes Python behave a bit more like
Perl. By default, Perl is very forgiving with errors: instead of stopping
everything when a non-fatal error occurs, Perl will try to continue and issue
a warning instead.

This behavior is very practical when writing quick scripts to automate things
or process data, and it is the reason I often prefer Perl over Python for
these kinds of tasks.

Of course, this has a downside: when you just continue past errors, it's often
hard to find their real source, or even to notice them at all. In bigger
applications this can end up as weird production errors that are hard to
debug. In those cases I very much prefer the behavior of vanilla Python.

~~~
raiph
Continuing after arbitrary computation events that are meant to be fatal seems
to me to be asking for trouble. It's why folk reviled Basic for its "On Error
Resume Next" command. It has nothing to do with what Perl is about.

Optionally resuming, in a principled manner, after a computation event that is
not meant to always be fatal is an excellent, well-known, decades-old power
programming technique available in a few sufficiently powerful languages. See,
for example, Common Lisp's condition system [1] or Perl 6's similar system.[2]

[1]
[https://en.wikipedia.org/wiki/Common_Lisp#Condition_system](https://en.wikipedia.org/wiki/Common_Lisp#Condition_system)

[2]
[http://design.perl6.org/S04.html#Exceptions](http://design.perl6.org/S04.html#Exceptions)

------
rpcope1
I don't think we should encourage this sort of behavior; Python already has
excellent exception handling, and this only encourages the detrimental
sloppiness that one sees in JavaScript. If your program hits an unhandled
exception, I'd wager 9 out of 10 times it's unexpected, and thus the only
appropriate behavior is to stop, rather than to continue operating in a
potentially unexpected and silently buggy (and dangerous) state. Fail hard and
fail fast, while not always convenient, is far easier to debug and handle than
the proposed alternative.

~~~
sitkack
Please read the article and watch the demo.

------
PhantomGremlin
Once upon a time in the distant past, people entered programs on punched
cards. They read these in to the mainframe and came back a few hours later or
even the next day to see the results. Often there was no useful output, as a
result of a simple syntax error.

Students taking introductory Computer Science courses made a lot of simple
syntax errors. Some people at Cornell University wanted to help students out.
So they wrote CUPL.[1] Cornell University Programming Language.

CUPL attempted to discern the programmer's true intention when encountering a
syntax error in PL/I. The error message was of the form:

       blah blah error
       STANDARD FIXUP TAKEN
       EXECUTION CONTINUING

This was useful to people who actually wanted to learn PL/I, but for us early
hackers it was more amusing to just feed in random data and see CUPL attempt
to turn it into legal PL/I. Unfortunately it didn't usually do a very good
job.

Moral: Some ideas in CS have been around for many many decades.

[1]
[https://en.wikipedia.org/wiki/Cornell_University_Programming...](https://en.wikipedia.org/wiki/Cornell_University_Programming_Language)

------
latenightcoding
It seems excessive to replace the interpreter just to add some fault tolerance.

------
drvortex
Right. That's all we need. More encouragement for sloppy code.

------
Old_Thrashbarg
Aren't Python programmers already sloppy enough?

------
kijiki
"Failure Oblivious Computing"

[https://www.doc.ic.ac.uk/~cristic/papers/fo-osdi-04.pdf](https://www.doc.ic.ac.uk/~cristic/papers/fo-osdi-04.pdf)

------
s_kilk
This is a fun hack. While I can see how it would be useful to some, reading
the rationale brought me out in a rash.

Still, well done to the author!

~~~
logicallee
it's not as bad as I thought - I thought it would make random changes to your
code to try to get it to run, which is what actual sloppy programming
languages feel like.

------
frugalmail
No thanks, I feel this (sloppy programming) is already a problem in Python,
both mine and others'.

~~~
joshuapants
I don't disagree with you, but I think the name is supposed to be humorous.
It's not supposed to let you get away with sloppy programming, it's supposed
to help you fix things caused by it (at least in the example on the page).

