
PySnooper: Never use print for debugging again - cool-RR
https://github.com/cool-RR/pysnooper
======
yason
Happy to see new tools for debugging. Yet it's the good old print that I most
often end up using. Debugging is very context-sensitive: adding prints makes
sense because I know exactly what I'm interested in, and I can drill down on
that without being disturbed by anything else. I can print low-level stuff or
high-level indicators, depending where my debugging takes me. There's no
ready-made recipe for that.

Some codebases have built-in logging or tracing functionality: you just flick
a switch in a module and begin to get ready-made prints from all the
interesting parts. But I've found myself never using those: they are not my
prints, they do not come from the context I'm in and they don't have a
meaning.

Use what you want but please don't underestimate prints.

~~~
anitil
I've definitely had the experience of using custom_logger.log("something"),
then checking the system log. Nope, not there, must be in /var/log/app ...
nope. Hmm, oh I know! I'll turn up the logging in the config file, wherever
that is. Still nothing. Did I need to compile in some flag?

Screw it. print, run directly from shell, done.

~~~
thatoneuser
Fuck var/log/app.

~~~
lsh
honestly, fuck the whole standard logging library in Python. It is the _most_
infuriating thing to use. log4j has a lot to answer for

~~~
oblio
Why?

~~~
scbrg

        >>> import logging
        >>> logging.info('Hello, world!')
        >>> # right, my log message went nowhere
    

Because by default, logging goes nowhere. And if you configure logging - using
a most unintuitive config format (it's so weird that even the documentation
_about it_ can't be bothered to use it, but reverts to yaml to explain what it
means!) - there's a good chance that loggers created before you got around to
configuring it (for instance if you, God forbid, made the mistake of adhering
to PEP-8 and sticking your import statements at the top) won't use your
configuration - and thus send their log messages, again, nowhere.

Also, it's slow as hell.
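
The "goes nowhere" default is easy to reproduce. A minimal sketch with an explicit logger (the `demo` name and the StringIO stream are only for demonstration; in a real program you'd call logging.basicConfig() once in the entrypoint):

```python
import io
import logging

stream = io.StringIO()
log = logging.getLogger("demo")

log.info("lost")  # no handler, effective level WARNING: silently dropped

# Roughly what basicConfig() does for the root logger, done here by hand:
log.setLevel(logging.INFO)
log.addHandler(logging.StreamHandler(stream))

log.info("found")  # now the handler receives it
```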

~~~
Zopieux
You know, you could start by reading the documentation and stick one
logging.basicConfig() call in your entrypoint, instead of spreading that kind
of misinformation.

Python's logging infrastructure is pretty bad but you fail to give any good,
factual reason why it is. Instead you just vent your frustration on HN, making
that platform all the more depressing to read.

~~~
yebyen
I for one am glad to hear this aired in a public forum, even though it took
several pointless replies to get to the "hey, check out this bad default
setting"... I'm learning Python as a distant third priority, I had only heard
of pdb once, and I would probably have tripped over this logger that logs to
nowhere by default at least once before I resorted to giving up and reading the
documentation in anger.

Why is it this way, do you think? (Is it a reasoned stance? I would have
expected the logger to send messages to stdout by default, so at the risk of
getting a "Read the docs!" am I going to be equally surprised at the behavior
of basicConfig?)

------
scwoodal
For my Django projects the development server is werkzeug [1] and anywhere a
break point is needed I'll add 1/0 in the code then hit refresh in the browser
which pulls up an interactive debugger [2].

[1] [https://django-extensions.readthedocs.io/en/latest/runserver_plus.html](https://django-extensions.readthedocs.io/en/latest/runserver_plus.html)

[2] [https://werkzeug.palletsprojects.com/en/0.15.x/debug/#using-the-debugger](https://werkzeug.palletsprojects.com/en/0.15.x/debug/#using-the-debugger)

~~~
marcell
FYI you can do

    
    
        import pdb; pdb.set_trace()
    

It pulls up console debugger and is a standard python package.

~~~
AtHeartEngineer
With Python 3.7 you can just do 'breakpoint()' now, and it'll import pdb and
call pdb.set_trace(). It also lets you use other debuggers, pretty cool.
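
For the curious, breakpoint() just dispatches to sys.breakpointhook, which is the seam that PYTHONBREAKPOINT and third-party debuggers use; a sketch with a purely illustrative replacement hook:

```python
import sys

calls = []

# breakpoint() calls sys.breakpointhook (pdb.set_trace by default);
# replacing the hook is how alternative debuggers plug in.
sys.breakpointhook = lambda *args, **kwargs: calls.append("hit")

def buggy():
    value = 41
    breakpoint()  # would normally drop into pdb right here
    return value + 1

result = buggy()
```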

------
butuzov
Again, Ram - it's a really nice project. Good luck with it.

But just in case you're from the print cult, you'll like
[https://github.com/gruns/icecream](https://github.com/gruns/icecream)

~~~
lsh
"Ram"? I went looking for a project called Ram but couldn't find one. Link?

~~~
draven
It's the author of the package, the readme says: Copyright (c) 2019 Ram
Rachum, released under the MIT license.

~~~
lsh
ah, thank you. I missed the forest for the trees.

------
badatshipping
This is a thoughtful hack but the real solution here is a) make debuggers
easier to set up and b) make your project easily debuggable with a debugger
from the beginning.

Like, debugging should be considered part of programming, and a local dev
environment that can’t be debugged should be viewed to be as broken as a
codebase without e.g. a way to run the server in watch mode.

Also someone should make a debugger that supports the equivalent of print
statements, e.g. set a print breakpoint on a variable to print its value every
time it changes, instead of typing print everywhere.
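
A "print breakpoint" along those lines can be approximated in CPython with sys.settrace; this is a rough sketch (the `watched` decorator is invented for illustration, and a real tool would be far more careful):

```python
import sys

def watched(var_name):
    """Sketch: print var_name each time its value changes in the function."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            seen = {}  # last value observed, keyed by frame id

            def local_trace(frame, event, arg):
                if event == "line" and var_name in frame.f_locals:
                    val = frame.f_locals[var_name]
                    if seen.get(id(frame), object()) != val:
                        print(f"{var_name} = {val!r}")
                        seen[id(frame)] = val
                return local_trace

            def global_trace(frame, event, arg):
                # Only trace frames belonging to the decorated function.
                return local_trace if frame.f_code is fn.__code__ else None

            sys.settrace(global_trace)
            try:
                return fn(*args, **kwargs)
            finally:
                sys.settrace(None)
        return wrapper
    return decorator

@watched("total")
def add_up(nums):
    total = 0
    for n in nums:
        total += n
    return total

result = add_up([1, 2, 3])
```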

~~~
cknd
One situation where I would have wanted a real debugger and couldn't use one
was in crash reporting on remote systems (like, machine learning code running
in some container somewhere, or on a headless box far from the network).

Since Python's built-in tracebacks are pretty minimal, the default crash logs
don't offer much help other than a line number. I ended up writing a tool that
prints tracebacks along with code context and local variables [i], sort of a
souped up version of the built-in crash message. It's surprising how much that
already helped in a few situations, makes me wonder why it isn't the default.

So, yes, more debuggers, but also abundant logging everywhere!

[i]
[https://github.com/cknd/stackprinter](https://github.com/cknd/stackprinter)
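
For what it's worth, part of this is now reachable from the stdlib alone: traceback.TracebackException takes a capture_locals flag that records each frame's local variables in the report (a sketch, not the linked tool; the `handler` function is made up):

```python
import traceback

def handler(event):
    user = {"id": 42, "name": "ada"}
    return user[event]  # KeyError for an unknown key

try:
    handler("missing")
except Exception as exc:
    # capture_locals=True adds the repr of each frame's locals to the report,
    # which is much of what a crash log on a headless box needs.
    report = traceback.TracebackException.from_exception(exc, capture_locals=True)
    text = "".join(report.format())

print(text)
```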

~~~
Gibbon1
Old school neckbeard way

a) Crashes dump the program state to a file and then you load that in the
debugger.

b) If the process is still running but broken, attach the debugger.

c) In my firmware I log bus fault addresses and restart. That allows me to see
what code or memory access caused the error. 75% of the time it's fairly
obvious what happened.

~~~
int_19h
Crash dump debugging is tricky to implement in Python, because its normal
debugging APIs - sys.settrace, frame objects, etc. - require being inside the
process with a running Python interpreter.

You can still do it, but you basically have to redo the whole thing from
scratch. For example, instead of asking for a repr() of an object to display
its value, you have to access the internal representation of it directly - and
then accommodate all the differences in that between various Python versions.
Something like this (note several different dicts):
[https://github.com/Microsoft/PTVS/tree/bcdfec4f211488e373fa2...](https://github.com/Microsoft/PTVS/tree/bcdfec4f211488e373fa253583fefe3dbb320c87/Python/Product/Debugger.Concord/Proxies/Structs)

------
chaitanya
Is there an equivalent of Common Lisp's TRACE in the Python world?

IMO it is the most value for money (i.e. time and convenience) debugging tool
I've used till now. Simply write (TRACE function1 function2 ...) in the REPL
and you will get a nicely formatted output of arguments passed and value(s)
returned for each invocation of the given functions. Another nice feature is
that the deeper an invocation is in the stack, the more it is indented -- so
recursive functions are fairly easy to debug too.

You can't use it for everything but it's sufficient most of the time.
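
For reference, a very rough TRACE-like decorator can be sketched in plain Python (names invented for illustration; real tools are more robust):

```python
import functools

_depth = 0  # shared indentation level; nested invocations print deeper

def trace(fn):
    """Sketch of Common Lisp's TRACE: print arguments and return values,
    indenting by call depth so recursion is readable."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        global _depth
        pad = "  " * _depth
        print(f"{pad}> {fn.__name__}{args}")
        _depth += 1
        try:
            result = fn(*args, **kwargs)
        finally:
            _depth -= 1
        print(f"{pad}< {fn.__name__} returned {result!r}")
        return result
    return wrapper

@trace
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

answer = fib(3)
```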

PySnooper looks good, but it is inconvenient in a couple of ways:

1\. It prints every line of the traced function -- most of the time this is
overkill and not what one needs.

2\. To snoop on a function you need to modify the source file. Not a deal
breaker but you still have to remember to revert this.

~~~
yingw787
I use this snippet:

    import ipdb; ipdb.set_trace()

It drops me into IPython, an interactive Python shell, with the interactive
Python debugger. You can type variables and use `pdb` primitives like
(c)ontinue, (u)p call stack, (n)ext line, etc. I really like it, but ofc YMMV.

~~~
jmuhlich
Try pdb++ (pdbpp) instead — it's like a much improved ipdb. It monkeypatches
itself into pdb rather than defining a new package, so it works from any
context that drops into the debugger. Its “sticky mode” alone is worth the
switch.

------
scaryclam
Interesting project, though I'd suggest losing the line about "can't be
bothered to set one up right now" regarding a full debugger. (i)pdb is built
in and is simple to use. Perhaps focus on what this can add rather than
framing the project as something like a lazy alternative (especially when this
may actually be harder to set up than throwing in "import pdb;
pdb.set_trace()")?

edit: spelling

~~~
whitehouse3
I’d also lose the profanity.

> You can use it in your shitty, sprawling enterprise codebase without having
> to do any setup.

Debugging is a sensitive subject, particularly given how frustrating it can
be. There’s a place for vulgarity somewhere, but I’d rather see your README
provide authoritative info than crack jokes.

~~~
eganist
In practical terms, what would that achieve?

~~~
wang_li
Professionalism.

~~~
packetslave
"Any time somebody tells you that you shouldn’t do something because it’s
“unprofessional,” you know that they’ve run out of real arguments." \-- Joel
Spolsky

~~~
wang_li
Or there is a whole suite of behavior and conduct with well thought out
reasons that have been comprehensively debated over the years that have been
put under the rubric of professionalism and there is zero reason to rehash the
same arguments over and over and over ad infinitum.

------
tgbugs
I have to say sometimes I do find it much easier to read logs than to muck
about in an interactive interpreter. That said I did just add `export
PYTHONBREAKPOINT=pudb.set_trace` to my bashrc and am slowly going through and
removing all my old `from IPython import embed` lines. A much simpler workflow
that doesn't incur major runtime costs.

~~~
StavrosK
For those, like me, who don't know about the new breakpoint():

[https://hackernoon.com/python-3-7s-new-builtin-breakpoint-a-quick-tour-4f1aebc444c](https://hackernoon.com/python-3-7s-new-builtin-breakpoint-a-quick-tour-4f1aebc444c)

------
lee
Using print is sometimes superior to pdb. If you want to quickly see how the
program runs without stepping through each line, print is justified. This looks
like an evolved print. Nice tool!

~~~
collyw
Sometimes but not always, especially on large unfamiliar code bases.

------
rossdavidh
While I sometimes use one debug tool or another, I have never understood the
aversion to print statements reflected in the title. Sometimes, often even, a
print statement is just fine, and anything else is overabstracting it. Not
saying other options aren't nice to have available, just that there is nothing
wrong with using a simple print statement in many situations.

~~~
Gibbon1
Because the observability you get with a print statement is limited. You can't
do further investigation without modifying the code and running it again. With
a debugger you can explore the program state interactively.

My experience with python is limited but in my day job it's not uncommon to be
tracking down stuff that happens infrequently. Debug cycles get brutally long.

~~~
kurtisc
You also can't be confident that a debugged program is in a natural state
without restarting it, at which point the re-setting of the breakpoints and
scripting you did in the last invocation - or saving and loading what you've
already done - is a pain point. If all of this is quicker than
recompiling/relaunching, then IMO there is a second bug in a lack of effective
logging.

Most bugs I create are for simple reasons and can be found by scanning the
first error logs. If I add print statement debugging because I couldn't then
they'll often be adapted into additional logging. If I use a debugger for this
as my first tool and don't add logs, I'll have to do it again next time,
too[1].

If the bug is not a simple one and is not a structural bug, there's a decent
chance it's something debuggers deal with poorly: data races, program
boundaries, non-determinism, memory errors. If it's something that can be
found by calling a function with certain parameters, it's a missing test case.

So the times I find debuggers to be worth it are after I've already decided
it's a difficult yet uncommon bug. So I use them with despair.

[1] If I fix it with a debugger and then add the logs, I still have to prove
it gives the right output when it fails.

------
digitalsushi
As a very mediocre ruby programmer, is there an equivalent in ruby to this? If
you have a little pattern that is more clever than 'puts' everywhere, please
share, it would be well received by at least one person out there. Thanks!

~~~
johnernaut
Yes, check out Pry: [https://pryrepl.org/](https://pryrepl.org/)

~~~
alexisnorman
Second that, pry rules!

------
euske
Never use print for debugging again... If you're creating a reasonably complex
project, spend some time setting up a nice, robust logging facility and always
use it instead of print. That's the first thing you should do. You will not
regret that decision.

I sometimes use a debugger to tackle unfamiliar code, but I always prefer
trace/logging whenever possible, because 1) you can see the context and whole
process that reached that point, and 2) the history of debugging can be checked
into a VCS. I'd rather write a one-liner to scan the log file than set up a
conditional breakpoint. I particularly like doing this for GUI applications.
Regression testing can be done by comparing logs.

------
tylerwince
Super neat project. I am a print debugger myself and will definitely use this
at some point in the future. For scenarios where PySnooper might be overkill
and you just want to see the value of specific variables, I wrote a port of
the Rust `dbg` macro for both Python and Go that are pretty nifty when looking
at values really quickly:

[https://github.com/tylerwince/pydbg](https://github.com/tylerwince/pydbg)

[https://github.com/tylerwince/godbg](https://github.com/tylerwince/godbg)

------
scriptkiddy
This is great! I'm currently working on a large Django project that has itself
and all of its services running in Docker containers via docker-compose. In
order to use a traditional debugger, I would need to set up remote debugging
and the integration with VS code for that is really not great. Not to mention
that getting a remote debugger to work with Django's monkey-patched imports is
a little wonky as well.

With this package, it seems like I can just get my debugging via stderr.

------
jasonhansel
IMO the real advantage of print() over a full-fledged debugger comes when
you're testing. Just replace print() calls with assert(). (Also debuggers,
especially on the front end, always seem incredibly laggy and slow.)

Using debuggers tends to encourage people to fix problems without writing
regression tests.

Honestly, I'd prefer "better support for print-line debugging" to "better
debugger that you can set up" in most cases.

------
operatorequals
This looks pretty useful. I love these packages that seem like forgotten
stdlib features.

------
AtHeartEngineer
I missed this on HN last week but I just found it on Google. THANK YOU! I've
been debugging weird serial errors for the last day, and this is helping a
ton!

------
StavrosK
This looks great, good job! It strikes a great balance between PuDB (my
favorite, but can't easily run in Docker/remotely) and q (very simple to use
but you need too many print() calls everywhere). PySnooper seems great to just
drop in and get the full context of what I want to debug.

Can it be used as a context manager (`with pysnooper.snoop():`) for even finer
targeting?

------
a_c
Before clicking I was like: why would one not use pdb. With vim, python-mode,
and 'find . -name "*.py" | entr python testcases.py', setting breakpoints and
re-running is painless.

I was wrong. Upon skimming, it seems a huge plus PySnooper has over pdb is
auto-inspecting state, sparing a whole lot of manual typing.

------
nojvek
Really love this. Seems like it captures a whole bunch of interesting
information when a function is invoked.

I love auto-loggers like this where you can selectively capture interesting
bits.

This is the basis of reverse debugging, i.e. capturing chronological
snapshots. Would love to see a VS Code extension that lets you step
forward/backward through time when an interesting thing happens.

------
_verandaguy
This is super neat, and definitely a great tool for early debugging -- but for
anything more in-depth, there's built-in pdb and third-party ipdb, which gives
PDB an IPython frontend.

Both use gdb semantics which is great if that's what you're used to.

------
zephyrfalcon
Another (lightweight) tool for debugging is the q library:
[https://pypi.org/project/q/](https://pypi.org/project/q/)

------
hguhghuff
I love this concept and will definitely be using it.

Presumably it works with Django? If yes then thrice thanks.

This project would justify an additional dedicated screen just for dumping
function debug logs.

~~~
cool-RR
I confirmed it worked with Django before releasing :)

------
jcroll
I come from the PHP world, now coding in Python. There used to be a very nice
library from Symfony called VarDumper:
[https://symfony.com/doc/current/components/var_dumper.html](https://symfony.com/doc/current/components/var_dumper.html)
that would just pretty print variables you dumped to the browser in an easy to
consume form. Is there anything like this in python?

~~~
guitarbill
There is pretty-print [0], which you could dump inside pre tags, or print to
stdout. But honestly, depending on the framework, quicker options exist than
(basically) printf debugging.

[0]
[https://docs.python.org/3/library/pprint.html#example](https://docs.python.org/3/library/pprint.html#example)
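
As a quick illustration of the pprint route (the payload here is made up):

```python
from pprint import pformat

payload = {"users": [{"name": "ada", "roles": ["admin", "dev"]},
                     {"name": "bob", "roles": ["dev"]}]}

# pformat wraps nested structures at a chosen width; dump the result
# to stdout, or inside pre tags for the browser.
dump = pformat(payload, width=40)
print(dump)
```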

~~~
jcroll
For Flask what would be the better option?

~~~
guitarbill
Pretty much the stuff already mentioned elsewhere on HN for this article. I
often find Werkzeug's excellent debugger is enough:
[https://news.ycombinator.com/item?id=19718869](https://news.ycombinator.com/item?id=19718869)
. pdb/ipdb/pudb et al (pick your fav) can help for really tricky stuff. And
sufficient logging, so you know what's going on at all times even without a
debugger attached.

(occasionally, the low effort of print-debugging works, but if you keep having
to print in more/different locations... it's a blunt tool IMO)

------
dhbradshaw
We have some fairly intense functions that need to be profiled with realistic
data and IO conditions. I've been doing that by hand, line by line in the
shell.

I just tried wrapping this around one of these functions in the django shell
(actually shell_plus, but same thing). Just imported pysnooper, created a new
function using `new_f = pysnooper.snoop()(original_f)` and called new_f on the
realistic data and got a nice printout that included values and times.

Very useful.

------
maximleon
Don't think it works, tried a few examples, it throws NotImplementedError at
me all the time. Any hint?

    ~/anaconda3/lib/python3.7/site-packages/pysnooper/tracer.py in get_source_from_frame(frame)
         75             pass
         76         if source is None:
    ---> 77             raise NotImplementedError
         78
         79     # If we just read the source from a file, or if the loader did not

    NotImplementedError:

------
bluejay2387
Does this work in Jupyter? That would be a good use case for it. In pretty
much everything else you really should be setting up a debugger...

------
zerkten
This might be useful as a teaching aid. You can run some code and then get the
line-by-line breakdown for newbie programmers.

------
vaylian
That looks really nice! I normally use pudb which is a curses-based debugger
but pudb falls short when the debugged program runs with several threads or if
it uses an asynchronous event loop. Having a non-interactive debugger like
PySnooper could definitely help in such situations!

------
Quiark
Similar principle to my project
[https://github.com/Quiark/overlog](https://github.com/Quiark/overlog) which
has some more interactive data exploration.

------
kapitanjack
Can someone explain to me why to use this instead of `import pdb;
pdb.set_trace()`? I am new to Python and am confused - someone told me not to
use print and to use pdb instead, so how is this different?

------
fourier_mode
Would this run in Cython? I am trying to understand some Cython code and
something like this would be helpful.

------
v3ss0n
That's gonna be a huge chunk of log to search.

~~~
cool-RR
No, because by default it only logs what's happening directly in your
function, not what happens deeper in the stack (i.e. functions that your
function calls).

You can set depth=2 or depth=3 and then you'll get a huge chunk of log. The
deeper levels will be indented.

~~~
sametmax
Does it have an option to print the caller? "u" is my fav cmd in pdb.

~~~
cool-RR
No, feel free to add it as a feature request on GitHub.

------
tezka
What’s so hard about putting this at the beginning of the function? import
pdb; pdb.set_trace()

------
amelius
Does it work in multithreaded code? And is debug output nicely separated based
on the thread?

------
fifnir
Will it work with multiprocessing?

~~~
cool-RR
Good question.

If the function is launched in a spawned process, it'll work, though you might
have trouble getting the stderr, so you better include a log file as the first
argument, like this `@snoop('/var/log/snoop.log')`

If the function launches new processes internally... I'm not sure.

If you try it and it doesn't work, open an issue:
[https://github.com/cool-RR/PySnooper/issues](https://github.com/cool-RR/PySnooper/issues)

~~~
sametmax
There is a race condition when writing to the same log file from several
processes at the same time, which is a typical use case for WSGI frameworks
such as Django or Flask.

~~~
detaro
Seems like a "log to this unix domain socket" option could help for those
cases? Or one to open a new file for new PIDs?

~~~
cheez
Could probably do:

    
    
        @pysnooper.snoop(f"/path/to/file.{os.getpid()}.log")
    

Something like that

------
makmanalp
I think this is a very neat tool for educational purposes - sort of like
python tutor!

------
StopHammoTime
This is great, I'm definitely going to be using this in my next project.

------
vemv
Github star count looks fake for a project so recent.

~~~
jdormit
I mean, it's been on the front page of HN for 8 hours...

------
gigatexal
oh man, that example code is way too verbose! I like the idea though as I have
never really liked the logging in python

------
MyBrew
Is there anything like this for java?

~~~
nexuist
To add on: Is there anything like this for Node.js? Would save lots of
headaches.

------
bkyan
Have you considered providing an option to pipe the output to a Slack channel
or something similar to that?

~~~
cool-RR
Haha. I don't know if this is satire or not, but in case it's not: You can
pass any writable stream as the first argument and PySnooper will use it. So
it should be easy to integrate with Slack or anything else.

~~~
bkyan
It's not :) This way, I get to see the debug stream in real time, but separate
from stdout.

~~~
bemmu
You can output to file and use tail -f

~~~
bkyan
Oh, yeah! Thanks for the suggestion!

------
ToBeBannedSoon
This is not a substitute for debug-level logging when you need it.

------
josteink
> You'd love to use a full-fledged debugger with breakpoints and watches, but
> you can't be bothered to set one up right now.

In VSCode: built in. Press the play icon.

In Emacs: M-x realgud:pdb

Is that really more effort than this?

Why invent inferior solutions to solved problems?

~~~
ben509
Installation for a module like this:

    
    
        cd project
        pip install pysnooper  # or pipenv or poetry
    

And it works.

That's far superior to poking around in the dark trying to make an IDE see my
project correctly.

If IDE authors ever figure out how to implement a test button that invokes
`python -c "some stuff"` and shows me the results, I'd consider using them.

~~~
josteink
> If IDE authors ever figure out how to implement a test button that invokes
> `python -c "some stuff"`

You mean like both Emacs and VSCode already do?

