Hacker News
Icecream: Never use print() to debug again in Python (github.com/gruns)
485 points by polm23 9 months ago | 262 comments



You can do the same thing with Python 3.8+ by using f-strings and just appending "=" to the variable name:

    >>> print(f"{d['key'][1]=}")
    d['key'][1]='one'
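Worth noting that `=` also composes with ordinary format specs, and defaults to `repr()` for the value (standard Python 3.8+ behaviour, small illustrative sketch):

```python
x = 0.5
s = "one"
print(f"{x = :.3f}")   # text before the spec is echoed verbatim -> x = 0.500
print(f"{s = }")       # with no format spec, repr() is used     -> s = 'one'
```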


Hey! I'm Ansgar. I wrote Icecream.

f-strings' `=` is awesome. I'm overjoyed it was added to Python. I use it all the time.

That said, IceCream does bring more to the table, like:

  - Syntax highlighting.
  - Pretty-printing of data structures.
  - Returns its argument, so calls can be nested.
  - Integration with logging via `ic.configureOutput()`.
  - Etc.

I hope that helps!
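For anyone who hasn't tried it, the "returns its value" point is what lets `ic()` be dropped into the middle of an expression. A rough sketch of that idea in plain Python (not IceCream's actual implementation):

```python
def ic_sketch(value):
    """Print a debug line, then hand the value straight back."""
    print(f"ic| {value!r}")
    return value

# Because the value comes back unchanged, the helper can be wrapped
# around an expression in place, without restructuring the code:
total = sum(ic_sketch([1, 2, 3]))
print(total)   # -> 6
```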


I just got my first serving of icecream, thanks to the post here. And I'm a convert.

One question: in most cases, I get an output like `ic| tmp.py:9 in nonesuch() at 03:23:52.479` What is the "at 03:23:52.479", and how do I turn it off? Looks like it's some sort of timer function, but I don't see it in the readme. Python3.7 on Linux.


Thank you for writing IceCream! I've used it off and on over the years and it's been a great help.


You can also use the breakpoint() built-in introduced in Python 3.7 via PEP 553:

https://www.python.org/dev/peps/pep-0553/
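One detail from the PEP worth knowing: the `PYTHONBREAKPOINT` environment variable controls the hook, and setting it to `0` turns every `breakpoint()` call into a no-op. A small sketch (driving it from Python only for illustration; normally you'd just set the variable in your shell):

```python
import os
import subprocess
import sys

# Run a child interpreter with breakpoint() disabled via PYTHONBREAKPOINT=0
# (PEP 553): the call becomes a no-op and execution simply continues.
result = subprocess.run(
    [sys.executable, "-c", "breakpoint(); print('not interrupted')"],
    env={**os.environ, "PYTHONBREAKPOINT": "0"},
    capture_output=True,
    text=True,
)
print(result.stdout.strip())
```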


Pre 'breakpoint()' you could always use import pdb; pdb.set_trace()

That said, sometimes you don't want to interrupt execution; you want to see your results in real time.


I like `from IPython import embed; embed()` because you get an entire ipython instance with access to all local variables :)


I like to use pdb++, which is a drop-in replacement for pdb:

https://github.com/pdbpp/pdbpp


Bump. This is the most correct way to debug Python programs, vs. using print statements.


Debugging virtually anything real-time that interacts with other systems will require some kind of logging (of which printf() is the trivial case). Step-through debugging is great but far from universally applicable.


Setting a breakpoint that prints and continues is appropriate if performance doesn’t matter. It gets trickier if it does. In those cases RR seems like the best idea rather than prints. At least that’s the theory.

In my experience the conditions on the ground are different, though. Tooling availability is inconsistent, which is why such approaches don't always work. Reverse debugging isn't available on all platforms/languages. Debuggers (or at least the C/C++ ones) are slow in how they inject instrumentation code to evaluate conditions: rather than compiling expressions into native code to conditionally trap, they seem to trap unconditionally and evaluate the conditional using their introspection language (there are valid reasons why it's done this way, but it has serious performance impacts). In compiled languages, the expression itself can be difficult or impossible to write because of this (not enough type information available, validation is late-binding so mistakes are extra expensive, etc.).

At the end of the day, when your tools fail you, print debugging is the easier way to accomplish the task. Getting tooling to work effectively is more time consuming and frequently not possible.


It's a way; it's hardly "the most correct way". Jeesh, sometimes, hackernews people. There are all kinds of ways to debug, and sometimes code has to run in different versions of Python. I find logging with various levels of "debug info" works best for me, but I hardly think that is "the most correct way".


Correct in what way? I can think of many real time systems that cannot be debugged with a pdb breakpoint.


Most correct you say? Based on what? Mind you we're talking about code that will never even ship in the vast majority of cases. Ergo, do whatever works best for you.


You can also make it a bit prettier by adding spaces:

  >>> print(f"{d['key'][1] = }")
  d['key'][1] = 'one'
It works with any expression you like, not just variables:

  >>> print(f'{np.sin(np.pi/4.) = }')
  np.sin(np.pi/4.) = 0.7071067811865475


> It works with any expression you like

Also assignment expressions? :)


    >>> print(f"{(yes:='yes')=}")
    (yes:='yes')='yes'


Yep.

  >>> print(f"{(yes:='yes') = } and now {yes = }")
  (yes:='yes') = 'yes' and now yes = 'yes'


I believe the walrus operator `:=` was introduced in Python 3.8, so keep that in mind.


As was this feature.


Indeed, but the operator is useful beyond f-string printing.


Not sure if I would consider that an improvement.


Thank you, I was unaware of this. While I have lost a talking point in favour of Julia – the `@show` macro – I am happy that my old friends over in Python land have access to some better ergonomics, and I can stop rubbing this in their faces. ;)

    julia> x = 4711
    4711
    
    julia> @show x
    x = 4711
    4711


I don't know what `@show` does exactly, but Python's `=` is only a limited convenience for print. It's still not as good as Rust's dbg![0] or TFA's ic, because it does not (and cannot) return its parameter unmodified, so you still need to inject an explicit print and possibly execute the expression multiple times.

It's convenient, don't get me wrong, but it's not exactly groundbreaking. Unlike breakpoint[1].

[0] https://doc.rust-lang.org/std/macro.dbg.html

[1] https://docs.python.org/3/library/functions.html#breakpoint. It's not groundbreaking on its own, but the ability to very easily hook into it and replace the breakpoint hook with an arbitrary callable? Super useful.


Do you have any concrete usage examples of that? I’ve only used breakpoint() to stop execution while debugging.


You can look up breakpoint in the docs; it has a lot of features. Among other things, you can register a custom handler. I use it in Selenium tests so I can either debug on error or just print the error message and continue.


Could you provide a concrete example? It's still unclear to me why one wants a custom breakpoint() handler.


This is how we use it in my testing repo:

__init__.py

    import os

    os.environ.setdefault('PYTHONBREAKPOINT', 'tests.utils.raise_or_debug')

tests.utils.raise_or_debug

    import pdb
    import sys

    # `settings` here is the project's own config object.
    def raise_or_debug(msg=''):
        if settings.BREAKPOINT_ON_ERROR:
            extype, value, tb = sys.exc_info()
            if getattr(sys, 'last_traceback', None):
                pdb.pm()
            elif tb:
                pdb.post_mortem(tb)
            else:
                pdb.set_trace()
        elif msg:
            raise AssertionError(msg)
        else:
            pdb.set_trace()
The reason is so we can leave breakpoints in the tests, and then depending on context run the tests in a context where debug is possible, or simply raise the failure again.

Selenium can be very flaky, so this has really sped up iteration and improvements. Standard functions with retries, and smart assertions and timeouts can then be fixed and resumed in dev - and give a good message on fail (for example headless chrome runner on CI).


And from our own docs:

    We have manually overwritten the breakpoint handler to a custom handler that in default mode does this:
    
      - `breakpoint()` calls trigger `pdb.set_trace()` as per usual
      - `breakpoint(<str>)` calls trigger AssertionError to fail tests
    
    When `BREAKPOINT_ON_ERROR=TRUE` is turned on:
    
      - `breakpoint()` calls trigger `pdb.set_trace()` as per usual
      - `breakpoint(<str>)` call `pdb.set_trace()` so you can try to manually work out how the code is functioning
      - if you call `breakpoint(<str>)` from an `except` block it will open the debugger at the position of the exception

Usage example:

        try:
            WebDriverWait(self.driver, 10).until(
                EC.element_to_be_clickable((By.ID, 'desktop-apply')))
        except TimeoutException:
            breakpoint('Was not able to click apply button.')


You can run Python with

    PYTHONBREAKPOINT=print python
and all your breakpoint() calls will be prints.

Which is not super useful, however you can use

    PYTHONBREAKPOINT=icecream.ic python
and now you get the advantages of TFA without having to import anything anywhere.

The only annoyance is that the default hook takes no arguments, so you can't trivially switch between the default and custom implementations; you may need to modify the source depending on the breakpoint hook you're using.
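The same switching can also be done in-process by reassigning `sys.breakpointhook` (a minimal sketch; `sys.__breakpointhook__` keeps a reference to the original so you can restore it):

```python
import sys

captured = []

def logging_hook(*args, **kwargs):
    # Capture the breakpoint() arguments instead of starting a debugger.
    captured.append(args)

sys.breakpointhook = logging_hook
breakpoint("checkpoint A")                    # routed to logging_hook; execution continues
sys.breakpointhook = sys.__breakpointhook__   # restore the default hook
print(captured)
```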


What is TFA?

icecream, the subject of this post, returns its parameter unmodified without multiple evaluations.


> What is TFA?

The fine article

> icecream, the subject of this post, returns its parameter unmodified without multiple evaluations.

That would be my point, yes.


WHAT... Why ain't anybody talking about this? Brilliant!


It was “recently” introduced in Python 3.8. See sibling for link to issue tracker


In this case, it would seem it took a little reinvention in order to let the wheel's discovery be known.


Not used it, but ic seems to be compatible back to Python 2.7, so not really comparable. I can't use the built-in thing in my Python code because the host software for my plugins runs an embedded Python 3.6, and only recently upgraded from 3.3 in the latest release, so if I want to maintain backwards compatibility I can never use the new Python built-in. I expect lots of other people are in the same position.


Yeah you're right. I hadn't realized it was so new till a little later... Well, it's good that IC has a stronger use case surface area for those who want it.


Nice feature. Is there a way to globally enable/disable it? E.g.,

  python3 myscript.py --enableFstringDebugging=true
So that I could get rid of lots of conditional statements, e.g.:

  if debug:
      print (f"some useful debugging info")


Use a logger, not print, and set the log level?


No. You should use the logging module[0] instead.

[0]: https://docs.python.org/3/library/logging.html
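For example, the on/off switch the grandparent asks for falls out of log levels; a minimal sketch (logger name and format are arbitrary):

```python
import logging

log = logging.getLogger("myscript")
log.setLevel(logging.DEBUG)   # flip to logging.INFO to silence the debug line
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s:%(name)s: %(message)s"))
log.addHandler(handler)

log.debug("some useful debugging info")   # emitted only when level <= DEBUG
log.info("normal progress message")
```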


Wait. How does this work?



Neat. Thanks. I rarely read these lists, but am fond of how friendly this proposal was!


Thank you for posting this. I will certainly use this gem extensively.


Thank you for sharing this.


Yes. Please use more of the builtins instead of adding more libraries


I am always going to use print to debug in every programming language I can until the day I die.


Absolutely. I don't understand why using print() or its equivalent in other languages is looked down upon. It's a quick way to narrow down the "area of search" before bringing in the big guns.


> It's quick way to narrow down the "area of search" before bringing in the big guns.

What are the big guns? With a debugger, I can stick in a breakpoint and look at the entire state of everything. Given we're talking about Python, in PyCharm [0] you can even execute your print statements in the debugger if you so wish. If you get the location wrong, or want to see what's going on elsewhere, you can just continue execution and use another breakpoint.

This is even more important if you have a long compile/deploy cycle (I work in games, and rebuilding and deploying to a console can be a >10 minute iteration time)

[0] https://www.jetbrains.com/help/pycharm/part-1-debugging-pyth...


Sometimes sticking the debugger into the wheel makes stuff come flying over the handle bars in spectacular ways that have nothing to do with what you wish to observe. You might not even know which wheel to jam the debugger stick into, if the behaviour is complex.

In these cases prints work well as a less intrusive way to get a rough idea of what is going on.


I don't understand your wheel analogy, sorry.

> You might not even know which wheel to jam the debugger stick into, if the behaviour is complex.

If you don't know where to put a breakpoint, how do you know where to put a print statement?


Imagine putting breakpoints in multiple tight loops at the stage of narrowing the search space. Imagine how many times you need to click next. A conditional breakpoint will only help if you know the condition you're looking for, but there's a stage before that of "Well, what looks strange during execution?"

Also for multithreaded code, stopping one thread dead for long enough for a human to investigate it can inadvertently resolve all sorts of race conditions.


What I imagine Macha is arguing for is that the cost of using print is extremely small, smaller at least than breakpoints.

No one is saying breakpoints are useless; sometimes printing is 'cheaper' in time and effort in order to locate the region of code in which using breakpoints is cheaper.


Yes, print() and breakpoints are different tools with different uses, and there are cases where one is superior to the other. This is why some tools now offer logpoints, which are basically print() inserted via a breakpoint UI rather than in your code, where you can forget to remove them.


Oh please tell me about those tools.



VS Code and Firefox Developer Tools are the two I'm aware of with actual support. Also some tools you can adhoc it as a conditional breakpoint by basically putting "print(whatever); return false" as the condition


> the cost of using print is extremely small, smaller at least than breakpoints.

I don't think it is, at all. The cost of using print is re-running your application with a code change, whereas the cost of a breakpoint is re-running your application with a breakpoint. Clicking in a gutter in an editor, pressing a keyboard shortcut, or typing "b <line number>" into your debugger is no more time or effort than adding a print statement and re-running your program.


> Imagine putting breakpoints in multiple tight loops in the stage of narrowing the search space.

If you have enough loops to make breakpoints impossible to use, you've likely got more log output than you're going to be able to parse. You're almost certainly going to look for other ways of narrowing the search space.

> stopping one thread dead for long enough for a human to investigate it can inadvertently resolve all sorts of race conditions.

Stopping one thread for long enough to do console IO has the same effect. Especially if you're using python, you'll need a lock to synchronise the print statement across your threads!
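A small sketch of that last point: `print` writes the message and the trailing newline separately, so lines from different threads can interleave unless you serialise them (the `debug_print` helper and in-memory buffer here are illustrative, not from any library):

```python
import io
import threading

buf = io.StringIO()
print_lock = threading.Lock()

def debug_print(msg):
    # Serialise writes so lines from different threads can't interleave.
    with print_lock:
        print(msg, file=buf)

threads = [threading.Thread(target=debug_print, args=(f"worker {i} done",))
           for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```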


Today I was trying to solve the exact scenario in the second example. A multi threaded program had a race condition that would sometimes occur. printing numbers helped a great deal. Might also be that I'm not that proficient with my debugger even though I use that more than anything.


> Imagine how many times you need to click next

Once or twice? Any sane debugger has a way to disable the breakpoint.


Then you overshoot the iteration that has a problem


As someone who helps teach an intro-level CS class: debuggers can be too OP.


> in pycharm [0] you can even execute your print statements in the debugger if you so wish

In my experience, debuggers are really good to expose hidden control flow. But usually, I know the flow, and using a debugger for human-in-the-middle print statements is just going to slow me down. Worse, those print statements are ephemeral, so I'm disinclined to write a nice formatter.

Print debugging leverages the language -- want to print in a certain condition? Easy. Have a recursive structure, an array, or a graph that you need to investigate? A naked print sucks, a custom formatter is great. Need to check some preconditions/postconditions? Do that in code. Don't try to check that stuff by hand in the debugger.

Speaking personally... the only thing I like about icecream is that ic(foo) both prints and returns foo, because you can inject it into code basically for free. But I already have a solution to that:

  def ic(x): print(x); return x
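To make the "leverage the language" point above concrete, here's a throwaway sketch (not from IceCream or any library) of a guarded, pretty-printing helper for nested structures:

```python
from pprint import pformat

DEBUG = True

def dump(label, value, when=True):
    # Print only when both the global flag and a call-site condition hold,
    # and hand the value back so the call can be nested inline.
    if DEBUG and when:
        print(f"{label} =\n{pformat(value, width=40)}")
    return value

graph = {"a": ["b", "c"], "b": ["c"], "c": []}
dump("graph", graph, when=len(graph) > 2)
```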


The same thing happens all over software, really. Just because a tool is powerful, it is looked upon as superior, or better.

The main argument that I have seen is that in print debugging you are relying on the program being executed in a non-descriptive/non-declarative fashion.

I legitimately believe print debugging is incredibly powerful. (With a simple print I can check if a function is being called, how many times, and whether a value is what I expected, and the only requirement is being able to see the stdout of the process.) I say that is fantastic!

The real world is all about cost analysis. How much value can I get from a tool vs setup and running cost. The cost of print debugging is incredibly small.


Print debugging has all of those features and is natively built into just about every programming language in existence and doesn’t require any additional libraries or tools.


> The main argument that I have seen is that in print debugging you are relying on the program being executed in a non-descriptive/non-declarative fashion.

Breakpoints are way worse on this dimension.


It definitely has its place. The problem is mostly that you have to actually change your code to debug it, and then remember to change it back.


How is changing code simpler than literally clicking on the line number to set a breakpoint?


You said "click", I need to leave my keyboard.

Generally when I am coding I auto-run the tests on save. This means that to printf-debug I just add a message or two (and if I am coding I might already have a couple of useful ones lying around) and save. Then in less than a second I have a trace through my program in the terminal. If I want to inspect a different variable I just add another print and run again.

With a debugger I need to kill my auto-run command, run the program, set breakpoints, type to see what variables I want to inspect, maybe switch stack frames, maybe step through a bit.

In my mind printf is like an automated debugger. I just put the info I want to see into the program and it does all of the printing for me. And when I find the problem I can just fix it and I am back to my edit-test cycle.

I'm not saying that there are no use cases for a debugger. For example I find variable-modification breakpoints very useful. As you mentioned if your edit-run cycle is slow then it may be faster to inspect a new variable in the debugger than adding another print statement. But when I just want to inspect values in my program I find printf sufficient and lower overhead. I'm sure part of my problem is that because I rarely use a debugger I am not as efficient, but I also think that printf-debugging is a very effective workflow for a wide variety of issues.


> You said "click", I need to leave my keyboard.

Every proper IDE has a keyboard shortcut for that though.

> With a debugger I need to kill my auto-run command, run the program, set breakpoints, type to see what variables I want to inspect

This indeed falls under your 'part of my problem is that because I rarely use a debugger' statement. E.g. you could set breakpoints before you save and use auto-debug instead (i.e. launch the program under the debugger on save instead of just running it; without breakpoints there shouldn't be much of a difference unless it's one of those nasty multithreading bugs), and add variables you want to see to the watch window. Or type them anyway if it's a one-time thing. Or use tracepoints. Etc.

I personally keep bouncing back and forth between debugger and printing. All depends on context, but it's definitely worth it getting to know both really well.


So basically tracepoints, without touching the program code.


But I'm already mucking with the program code most of the time so I'm not worried about touching it.


When I use a debugger I often feel like I'm looking through a soda straw. I can only see the state at that one instant in time. Just because I know the line of code where the exception occurred doesn't tell me which data caused it, and breaking on exceptions is often too late. Instead I'm stuck hitting continue over and over until I finally see something out of place, realize I went too far, and have to start over again. With logging, I have the entire history at my fingertips, which I can skim or grep to quickly pinpoint the data that caused things to go wrong.

More fairly, it is a trade-off with the debugger giving wide visibility into state, but narrow visibility temporally, and logging giving narrow visibility into state (just what you logged), but broad temporal visibility. They both have their place, but I find that logging narrows things down more quickly, while the debugger helps me understand the problem by walking through step-by-step, assuming the problem isn't obvious once narrowed down.


It would be nice if, instead of breakpoints, debuggers had log points which stored the values of variables at that point in time. This data could be displayed as a table later.
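That can be approximated in plain Python today; a sketch with a hypothetical `logpoint` helper (not an existing debugger feature):

```python
rows = []

def logpoint(**variables):
    # Record a snapshot of named values; inspect `rows` later as a table.
    rows.append(dict(variables))

for i in range(3):
    x = i * i
    logpoint(i=i, x=x)

for row in rows:
    print(row)   # -> {'i': 0, 'x': 0} then {'i': 1, 'x': 1} then {'i': 2, 'x': 4}
```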


You might be interested in Pernosco: see https://pernos.co/about/overview/ and its related content.

We agree with your critique of traditional debuggers and Pernosco tackles that "temporal visibility" problem head on.


I more or less agree; but I find myself wondering why I so often use Matlab's debugger, but almost never use pdb for python. It is not like I'm not used to using command line tools (I use bash, git, emacs, etc. everyday). It could just be an accident of habit, I don't know.


Depending on how capable your debugger is, printing allows you to output values that might not be inspectable through the debugger. Especially computed values.

Debug printing also allows you to debug programs running in environments where you can't attach a debugger. For example, maybe halting the program causes the bug not to trigger. Or it's a remote system where you cannot attach a debugger for various reasons. Or the bug only happens in the optimized build, which in say C/C++ can make it quite tedious to walk through with a debugger.

Most of the time though I use print as "proactive debugging". Having detailed logs available is gold when customer calls with a blocking issue.


> Especially computed values

Showing function return values automatically was really an eye-opener when I first encountered it.


Most OSes offer that with process tracing like ETW and DTrace.


You first have to configure your IDE/editor to allow debugging, and this is different for every programming language/environment. Print works in any language without prior configuration.


In the time it takes me to figure out how to connect a debugger to the process I've had a good half-dozen full loops of 1) add print statements 2) compile 3) run already done.


Because you shouldn’t have to change code, just to debug it.

It’s okay though to add verbose logging as a feature.

But just adding some print statements to debug code and removing them afterwards is dangerous (you release something different than you debugged).


As opposed to software breakpoints which change your compiled binary at runtime in order to debug it. Even if you're using hardware breakpoints you're still changing what the CPU is doing and can easily make multi-threading bugs disappear.


I do it in a different branch and discard it afterwards. Having said that, I never meant print() should be used in place of a proper debugger. All I am saying is they can complement each other, and each one has its place and value. As for me, I find it quicker to add a few print statements and get a rough idea before firing up a debugger (if required). Maybe others are more proficient with debuggers, but print() works for me.


############################# What? #########################


Depending on the use case it can be a sign that they don't understand how to debug efficiently. Not that this is something you should judge someone for.


I have found that print-style debugging gives a good birds-eye view of the situation.

With a stepping debugger, I tend to get lost in the weeds. My suspicion is that the bug fixes I come up with are less holistic when using a debugger.


“The most effective debugging tool is still careful thought, coupled with judiciously placed print statements.”

— Brian Kernighan, “Unix for Beginners” (1979)


The old style printf from C is still the best formatting tool for output/debugging. The C++ style was just a distraction without introducing anything of real value. log4xyz has some nice features in terms of enabling/disabling at runtime, through a config, but ultimately, printf rules.


The value introduced by C++ was type safety. In C, it's way too easy for the format string to get out of sync with the type of the arguments, e.g.:

    printf("%d", my_long_var);
Might seem correct and work correctly on one platform, but fail on another. scanf() is arguably even worse since it can cause memory corruption.

These days compilers have diagnostics to catch those errors, but if you rely on those you can't use dynamic format strings, which means you're effectively using a subset of C with a stronger type checker than C. That's a pretty good state but it's definitely not "old style printf()"; old style printf() was insecure.

And don't get me started on the convoluted macro invocations necessary to correctly convert int32_t, int64_t, size_t and ptrdiff_t. And that's with the newest standard: IIRC there was no standard way to print long long in C, at the time when C++ already supported it.


Maybe C++ fixed type safety, but it introduced a lot of complexity and bugs for no actual added value. For instance, because of stateful ios objects, it's close to impossible to write correct code outputting hex on the first attempt. I'm sure that a lot of C++ code outputting hex is just plain wrong.

Given that C++ keeps getting more and more complex features, it is just amazing that C++ I/O is still so inconvenient, opaque and ultra-verbose.


I mean it's not particularly pretty but what's so bad about this?

    std::cout << std::hex << my_int << std::dec << std::endl;


That construction is OK. But usually you want formatted output, let's say aligned to a byte and padded with zeroes. I've often seen (and done myself):

    std::cout << std::setfill('0') << std::setw(2) << std::hex << my_int << std::dec << std::endl;
This appears to work until someone changes the alignment to left somewhere in the code. Hence the correct code is:

    // C++ type-safe equiv to printf("%02x", my_int) - it's called progress
    std::cout << std::right << std::setfill('0') << std::setw(2) << std::hex << my_int << std::dec << std::endl;
Also, is it relevant to keep the final 'dec' when we assume we can't rely on the ios left/right state? Why then could we rely on the hex/dec state? Or maybe it was a bug to change the alignment to left and not restore it to right afterwards? Or maybe you should restore ios state in some central place, and never pass your iostream to external libs? Discussions and wars ahead. Note that the bug above is very nasty because it will change, say, "02" into "20", which looks perfectly valid.

Note: I just noticed that in C++20 there is a new formatted string proposal. You can't stop progress, but neither can you speed it up, it seems.

Note 2: the 'std::' spam is yet another indication that C++ lacks common sense and is utterly broken.


> The old style printf from C is still the best formatting tool for output/debugging.

The idea that you can just drop this anywhere, sure, that is good. But once you've used string interpolation printf isn't so attractive anymore. No more forgetting arguments, wrong argument order, wrong specifier, ...


Actually, Rust has the functionality of Icecream as a built-in macro, dbg!, and you quickly find yourself using it over print.


And Julia with @show.


No doubt blatantly ripped off the venerable print_r :P


Nope, because unlike print_r dbg! returns the input as-is, so you should be able to wrap any expression in dbg! in-place, and just get debug output.

According to the RFC, the direct inspiration for dbg! is Haskell's traceShowId (http://hackage.haskell.org/package/base-4.10.1.0/docs/Debug-...).


Agreed, it's the simplest way to test and validate specific assumptions. Debuggers are useful tools, but it takes you just as long, and usually longer, to get to the same answer: is what I think is happening here actually happening here?


At least debuggers give you new powers; the library posted above makes you install it and import it everywhere, all just so you can have slightly cleaner print statements... I'm not going to go out of my way to import something, and have to clean it up after, just so my printed statement is better formatted. And if I needed more powerful debugging, I'd use a debugger, not this library.


I had this exact thought, but the title gives the wrong impression (at least to me). The package just defines a fancy print.


Sure, but typing ic(someVariable) is a lot faster for me than typing print(f"{someVariable=}") in Python 3.8+, and still faster than typing print(f"someVariable={someVariable}") in Python < 3.8 (which I still need to use in some cases).

It's especially faster when I think about how often I fat-finger the '{' (and '}', when my editor doesn't insert the matching brace automagically). Of course YMMV.


I (and the person I replied to, I suspect) interpreted the title to mean that you shouldn't debug by printing stuff to the console at all, but instead do some other thing.


When in doubt, cout :)


Same. I do this for elixir. Luckily, we have IO.inspect:

https://youtu.be/JXQZhyPK3Zw?t=23m30s


I am the same; it's the easiest. Very interesting: in the Laravel PHP world an interesting product is currently gaining momentum: Ray (https://myray.app/)

So basically it's dump with a lot of neat extras, and instead of looking at the console of the script or the website you are printing on, you push this to a little desktop application, from any of the languages you are using. Something like log collection for everything, on your desktop.


Hmmm I would have thought that too, but recently I’ve been using byebug, which has changed my mind. Being able to throw in ‘byebug’ on a line, then catch execution at that point in another terminal (using byebug remote) and then check variables at that point, is a game changer. Saves so much time compared to looking for printed statements in the output, and then trying again.


You didn't even click on the link, did you? ;)


sometimes it's printf, sometimes it's printk, sometimes it's echo, but yes I agree.

I had an emacs macro that would help. Simplified it was:

  (require 'thingatpt)

  (defun add-printf ()
    (interactive)
    ;; the original used `word-near-point' from a helper library;
    ;; `thing-at-point' is the built-in equivalent
    (let ((s (thing-at-point 'word)))
      (when s
        (beginning-of-line)
        (insert "printf(\"@@@ %s:%d "
                s
                ": 0x%x\\n\", __FUNCTION__, __LINE__, "
                s
                ");\n")
        (forward-line -1)
        (indent-for-tab-command)
        (end-of-line)
        (search-backward " \\n" nil t))))

  (global-set-key [f8] 'add-printf)
I had lots of variants (crafted while recompiling) with prompts, or marked regions or lots more throwaway printf silliness


It's a failure of debuggers that they haven't cleared the very low bar of obviousness and ease of use of print(). I'm very much a novice, but RStudio was the first environment that made debugging so easy I didn't feel the need to use print().


It's the first debugging method one learns and always remains the last resort.


My objection here is that code should be self-explanatory, and icecream or ic() doesn't explain itself, so at least I'd prefer a name like icecream_debugger and replace ic() with pr(), perhaps.


One of the nice features of print debugging is that I often end up leaving some of the print statements in for logging purposes.


Give PyCharm (or debugpy) a try, you might get a few years of your life back :)


Why not take the (short) time to learn to use a debugger? It is a far more efficient way to work in complex situations.


After having developed in an environment where I couldn't use a debugger (kernel drivers) I actually think that debugging with prints is better than a debugger most of the time, as it forces you to think about the code and where the failure might be. Right now I only use a debugger when I'm in C++ and I want to get a stack trace for a segmentation fault. In almost all of the other cases I get a broad location with the logging statements (always there), think about what could be happening and then put some prints to test it. Even for memory corruptions I don't use debuggers now, the address sanitizers in clang plus valgrind do the job far better.


I have the debugger set up in pycharm and use it all the time. I also use print all the time, often along with the debug tool. They are very complementary and neither tool can do everything the other can.


TBF if you always run your programs in pycharm and use its debugger, you can trivially use non-suspending "evaluate and log" breakpoints instead of print.


But prints work equally well in any environment.

I can remove prints by just checking out the latest version of the file.


> But prints work equally well in any environment.

GP say they "have debugger set in pycharm and use it all the time". So under the assumption (explicitly made in my comment) that they're always using PyCharm to run their program, that's not a concern.

> I can remove prints by just checking out the latest version of the file.

Thereby losing all the changes you've made while observing program behaviour, which may be less than desirable.

Meanwhile it's just as easy if not easier to disable or delete breakpoints from the View Breakpoints pane / window: https://i1.wp.com/cdn-images-1.medium.com/max/800/1*0wAP-w-a... you can just uncheck the "Python Line Breakpoints" box, or select all breakpoints and [Delete] them.


> This is way more efficient way to work for complex situations.

It's also way more inconvenient for simple situations or when trying to sift through in order to zero-in on the issue's rough location, spatial or temporal: unless the debugger is well integrated into the editor it requires syncing multiple sources of information (the debugger's breakpoints configuration and the actual source) — and resynchronising on every edit; and if the debugger is well integrated into the editor… now I'm locked into a specific editor.


I use PySnooper[1] when code behavior deviates inscrutably from my mental model. Like Icecream, PySnooper exhorts, "Never use print for debugging again." A simple example:

    import pysnooper

    @pysnooper.snoop()
    def add_up(numbers):
        total = 0
        for number in numbers:
            total += number
        return total

    add_up([123, 456])
When run, PySnooper prints the activity of the decorated function or method:

        $ python example.py
        Source path:... /home/joel/example.py
        Starting var:.. numbers = [123, 456]
        08:46:58.742543 call         4 def add_up(numbers):
        08:46:58.742692 line         5     total = 0
        New var:....... total = 0
        08:46:58.742722 line         6     for number in numbers:
        New var:....... number = 123
        08:46:58.742755 line         7         total += number
        Modified var:.. total = 123
        08:46:58.742787 line         6     for number in numbers:
        Modified var:.. number = 456
        08:46:58.742818 line         7         total += number
        Modified var:.. total = 579
        08:46:58.742847 line         6     for number in numbers:
        08:46:58.742876 line         8     return total
        08:46:58.742898 return       8     return total
        Return value:.. 579
        Elapsed time: 00:00:00.000405
[1] https://github.com/cool-RR/PySnooper


There's also https://github.com/alexmojaki/snoop

For comparison:

    19:32:30.66 >>> Call to add_up in File "/home/joel/example.py", line 5
    19:32:30.66 ...... numbers = [123, 456]
    19:32:30.66 ...... len(numbers) = 2
    19:32:30.66    5 | def add_up(numbers):
    19:32:30.66    6 |     total = 0
    19:32:30.66    7 |     for number in numbers:
    19:32:30.66 .......... number = 123
    19:32:30.66    8 |         total += number
    19:32:30.66 .............. total = 123
    19:32:30.66    7 |     for number in numbers:
    19:32:30.66 .......... number = 456
    19:32:30.66    8 |         total += number
    19:32:30.66 .............. total = 579
    19:32:30.66    7 |     for number in numbers:
    19:32:30.66    9 |     return total
    19:32:30.66 <<< Return value from add_up: 579


From the README, Snoop is "primarily meant to be a more featureful and refined version of PySnooper. It also includes its own version of icecream and some other nifty stuff."

Thanks. I'll give it a try.


Just what I was looking for, thanks


For an even easier-to-remember alternative, there’s q: https://github.com/zestyping/q

All you need is `import q`. q works like a function (q(x)), like a variable (q|x and q/x, so you get different operator precedences) and like a decorator (@q), so it can be used in practically any circumstance for a quick debug print. Plus, the name sounds like you’re interrogating something.
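The operator forms work because the `q` module swaps itself out for an object that overloads call, `|`, and `/`. A minimal sketch of that trick (illustrative only, not q's actual implementation):

```python
# Toy version of q's operator overloading: one callable object usable
# as q(x), q | x (low precedence), and q / x (high precedence).
class Q:
    def __call__(self, value):
        print(f"q: {value!r}")
        return value

    def __or__(self, value):       # q | x -- `|` binds looser than arithmetic
        return self(value)

    def __truediv__(self, value):  # q / x -- `/` binds tighter
        return self(value)

q = Q()
total = q | 1 + 2   # `+` wins over `|`, so the whole sum (3) is logged
half = q / 4 / 2    # left-associative: logs 4, then divides by 2
```

The two operators exist precisely so you can pick whichever precedence lets you wrap a subexpression without adding parentheses.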


I was wondering whether someone would notice that it's a clone of q! Thanks! :)

q has the additional feature that you can decorate any function or method with `@q`, which causes invocations to be logged with arguments, return values, and exceptions. Really handy for tracing what's happening in your program.


A quick look at the intro page was a bit disappointing. You immediately get how ic works because their examples include the code and result.


This is actually really cool. I was going to say that so many people miss the point of print debugging and as a consequence forget to shorten the import line as much as possible.

  from icecream import ic;ic() is 28 characters

  import q;q() is 12 characters

  print() is 7 characters


The article says you can call

    from icecream import install
    install()
in main, and from then on icecream counts as a built-in, so you can just say ic(). Four characters.
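The mechanism behind that is simple to sketch: inject a name into the `builtins` module and every other module can see it without an import. A toy stand-in (illustrative, not icecream's actual implementation):

```python
import builtins

def ic_stub(*args):
    # Toy stand-in for ic(): print the arguments and pass the value through.
    print("ic|", *args)
    return args[0] if len(args) == 1 else args

def install():
    # After this, any module in the process can call ic() with no import.
    builtins.ic = ic_stub

install()
ic(40 + 2)  # prints: ic| 42
```

Because the stub returns its argument, it can also be dropped into the middle of an expression, the same nesting trick the real `ic()` supports.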


Why would the amount of characters possibly matter?


I find it important because it lowers the barrier of entry to print debugging. I have found myself using less prints in Java than in Python both because of static typing and having to write the full `System.out.println`.

This is also important to me because I am very cautious of not doing a file-level import (I don't want to commit a file with the dependency). The fewer characters it takes, the easier it is for me to write it all in a single line and remove it afterward.


That's why you should write `log.info("message")` ;)

Besides, just use a proper debugger instead.


Do you use typing in Python?


I don't. I've tried it, it just doesn't "click in" for me. I love types and (i.e. I love Rust's type system), but not on Python.


bc pgmrs r lazy af.

Seriously though... when I'm in debugging mode, speed and efficiency in completing the task is paramount. So, every character I save typing, the better!


reduce the energy barrier to debugging


I am sorry, but I am not going to pull in an external dependency just for this, especially not with the power of fstrings in newer python versions.


You're just being difficult. It's easy, you just set up these 10 Node.js microservices in Docker, set up your database, the cache. Then my friend, you can really debug with beautiful print messages.

Wait until someone creates a $10/user SaaS service out of it.


Insightful prediction of how development will be done in the future.


It's for debugging. Not something to depend on in released code. Put the import in your site initialization file and you won't even have to import it in the REPL.


I understand, but Python 3.8+ can do this:

  >>> x = 1+1
  >>> print(f"{x = }")
  x = 2
And if it is included why pulling in a dependency? And even if it is a dev dependency: it can be a dev dependency that bites you in two years in the middle of a night when your service fails.


Indeed, in a couple of years this is the new leftpad.


And I am going to use it, because of the pain of typing {, which half the time my fingers miss (whereas I'm pretty good at typing 'i' and 'c').


This library's 'ad copy' focuses on its ability to print the expression passed to the `log` function verbatim, but for those thinking about importing this library or something like it, I find that the most useful long-term benefit of a construction like this is this:

All short-term logging / debugging calls are now properly isolated.

Ordinarily, if you spot a `print` statement in code it's usually a left-over relic of some previous debug session. Or it's the command line printer, who knows?

With a call to ic, you know it's short-term logging and nothing else. You can search for them, you can spot them immediately in commit changelogs, you can write hooks if you want to ban them from ever appearing in certain branches, you can breakpoint them in your IDE, etcetera.

For many apps, _all_ print statements work like that, but I've worked on more than a few that have a 'print to standard out' component to them.
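The "write hooks" idea is cheap to implement; a hypothetical pre-commit script (the name, regex, and structure here are my own invention):

```python
# Hypothetical pre-commit hook: refuse the commit if any staged file
# still contains a leftover ic() call.
import re
import sys
from pathlib import Path

IC_CALL = re.compile(r"\bic\(")  # \b avoids matching e.g. "magic("

def leftover_ic_calls(paths):
    """Return the subset of paths whose contents contain an ic() call."""
    return [p for p in paths if IC_CALL.search(Path(p).read_text())]

if __name__ == "__main__":
    bad = leftover_ic_calls(sys.argv[1:])
    if bad:
        print("leftover ic() calls in:", ", ".join(bad))
        sys.exit(1)
```

Wired into a pre-commit framework with staged filenames as arguments, this makes the "short-term logging only" convention enforceable rather than aspirational.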


I promise the 15 minutes it takes to learn to use the debugger will save years of your life


Not necessarily. I know very well how to use a debugger, but nowadays I just prefer to use prints: it forces to you actually think about what's happening instead of just looking at it. It's also more useful in situations where putting a debugger actually changes behavior (high performance systems, parallel programs).


I would expect print-debugging to change the behavior in such cases too. Usually writing to stdout is behind some synchronization so using print debugging will drop performance and make program less parallel.


You can change where to put the prints if they do change the behavior (which incidentally also helps in understanding what's happening). You can print only if a condition happens, or only before/after critical sections, or print counters that you compute in the fast path. With the debugger you're always taking the overhead in and breakpoints will always change the flow of the program.


Exactly. And prints are cross platform cross language etc.


The issue is that the debugger stops execution at the breakpoint, but most often I want to analyze multiple print statements together. The debugger and repl have their uses, but IMHO I am usually better served by adding a print and having the script automatically re-run itself.


Which debugger would you recommend?


For Python: ipdb. In your code, you can just drop in "import ipdb; ipdb.set_trace()" and the execution will stop there with an interactive prompt. Alternatively start the script through (i)pdb and set breakpoints.

You can also use plain pdb ("import pdb; pdb.set_trace()"), which has the advantage that it comes with the Python stdlib, but the interactive prompt is less fancy (no history, no autocompletion, etc).


Add `export PYTHONBREAKPOINT=ipdb.set_trace` to your .bashrc and "breakpoint()" will invoke ipdb by default.


That is a great trick, thank you!


i use this frequently! but print() is still handy and my go to


I see print debugging and breakpoint debugging as two different tools, both very useful. Print statements are useful when you don’t know where to put a breakpoint, of course, but also when it’s important to see the real-time execution along with the debug logs. Breakpoints are of course insanely helpful when you know where to put them, but also can be peppered about suspect areas and used with steps to cover wider ranges.

Just to give credit to both “sides” in the comments here, you’re all kind of right and it’s okay if people have different workflows than you.


This isn't even breakpoint debugging, though; this library simply makes your print statements "easier" to write, at the cost of having to install and import a library.


In classic HN style, half of the comments have devolved into variations of dEbUgGeR bEtTeR and PrInT bEtTeR...

Newsflash! You can do both, and sometimes one is better than the other.


A while back I made a list of the Python debugging tools: https://stribny.name/blog/2019/06/debugging-python-programs/


You should add IceCream!


This is a really nice tool. But the fundamental reason most go for print is that it's right there, and that wins over other UX improvements or machinery. Python is a language where you can get a reasonably good debugger with a single line almost anywhere, yet people still reach for print().


As a staunch defender of print-based debugging, I guess this would be a lot more useful if it could be connected to a logging library.


If you use VS Code, the [Puke-Debug](https://marketplace.visualstudio.com/items?itemName=Zorvalt....) extension is a nice alternative. It lets you easily insert and remove similar print statements, in multiple languages, without adding a dependency to your project that you need to remove afterwards.


Clojure allows this very brief copypasta `(defmacro dbg [x] `(let [x# ~x] (println "dbg:" '~x "=" x#) x#))` https://blog.jayway.com/2011/03/13/dbg-a-cool-little-clojure...

Wish Python had this by default though.
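Python has no macros, but the same effect can be approximated at runtime if you pass the expression as a string and evaluate it in the caller's frame. A rough sketch (strictly a debugging toy, `eval` on arbitrary strings is not for production code):

```python
import inspect

def dbg(expr: str):
    """Print an expression's source text and its value, then return the value
    so the call can be nested inside a larger expression, like Clojure's dbg."""
    frame = inspect.currentframe().f_back          # the caller's frame
    value = eval(expr, frame.f_globals, frame.f_locals)
    print(f"dbg: {expr} = {value!r}")
    return value

x = 20
y = dbg("x + 1") * 2  # prints: dbg: x + 1 = 21
```

The Lisp macro gets the expression's source for free at compile time; here we have to smuggle it in as a string, which is exactly the gap libraries like icecream close by re-reading the call site's source via `inspect`.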


lots of naysaying here...

i for one like this. sure there are some ways in the new python code to handle some of the features of ic, but the overall feature set is rather nice.

sure it isn't a debugger, but definitely better than prints.


This looks cool and all, but why is it called Icecream? I know naming abstract stuff is hard but it feels like this lends itself to a more descriptive name. "ic()" tells me nothing about what the function does.


I think it's a (confusing) two-layer pun.

When read phonetically as letter names, the name "ic" sounds like "I see".

"ic" also is an initialism for "ice cream".

Everyone loves ice cream, and so another low-meaning cutesy-poo pun-ishment of a name was born.

I think. Pure speculation here.


Bingo.

All the one letter PyPi project names were taken.


My first thought was "I scream" ...


At its core, it has always called inspect.currentframe(). I suspect the first iteration was a wrapper around inspect.currentframe(), abbreviated as ic(), which was then backronym'd into ice cream.


Nice. Right now I’m using the following:

print(f'{foo(123)=}')

Which prints: foo(123)='result'


I started writing a very-alpha related tool, https://github.com/czinck/pyset_x. As the name implies, it's like `set -x` in bash, in that it prints every line as it executes. It's more useful for situations where you have some complicated control flow and it's not working exactly how you expect.


Funny, a few months back I added this sentence to PySnooper's readme:

"PySnooper is a poor man's debugger. If you've used Bash, it's like set -x for Python, except it's fancier."

https://github.com/cool-RR/PySnooper


I saw someone mention PySnooper elsewhere in this thread, I think if I had known it existed before I probably wouldn't have bothered with `pyset_x`.


Party on! I applaud this effort. As a UI/UX engineer I can say that there is no one debugging UI/UX experience to fit all. Of course there is the active/passive division between active debugging and logging, and there are further usability divisions within both. As this is a passive/logging debugging scheme, I'm sure there are those in the passive debugging community who will find this helpful, if not vocal.

There is no one-size-fits-all when it comes to usability, although 99.9999% of software built only presents one usability experience and only begrudgingly provides the disabled community access as an afterthought. A fairly significant percentage of the population is color blind and yet web sites still overly rely on red. Yby. You be you. The bane of all usability is one-size-fits-all. Well met! We need more, not fewer, usability options.


The split doesn't even run as deep as you describe it. Sometimes I use active debugging, sometimes I use passive debugging, depending on the situation. Sometimes I just want to see the output, so passive is fine, sometimes I want to control the flow and be able to inspect things more deeply. I don't think there are people who only use one or the other.


Humm, I built a similar but more powerful tool:

https://github.com/samuelcolvin/python-devtools

(install with `pip install devtools`)

It has similar functionality for debugging but has prettier formatting and code highlighting using pygments.


One of the most useful trick I found was installing the tool via sitecustomize.py so you don't need the import:

    # add devtools debug to builtins
    try:
        from devtools import debug
    except ImportError:
        pass
    else:
        __builtins__['debug'] = debug
(see https://github.com/samuelcolvin/python-devtools#usage-withou...)

This would work with icecream too.

The second advantage of not needing the import is that CI fails if you forget to remove all debug() commands.


Why don't people just use a debugger? It boggles my mind that people live with print-style debugging. Depending on the project, it can be such a time waster and just plainly worse in all aspects.

Especially in Pycharm for me, it is incredibly easy. Even when running over ssh in remote computers I can use my debugger.


It's strange, when developing C# and the like I'd go crazy without a debugger... and yet, in Python I've hardly ever used one. That might be because it's hard to develop C# without already being in an environment that puts an emphasis on easy debugging (Visual Studio), whereas I'm developing Python in whatever text editor I happen to be using at the moment and never really bothered to try out anything other than the IDLE debugger -- which really isn't that great to use.

Might be worth it to set up a better environment, then. Somehow I don't feel like print debugging is all that inferior though; in a sense it might actually make your debugging more efficient by forcing you to formulate concrete 'questions' about your code before even diving into the execution.


Writing Python (and most other languages) with a proper IDE with integrated debugger and all is like eating right and exercising: it's much better when you do it, you feel better while doing it, each time you start again you think "why did I ever give this up?", and yet it takes conscious effort to keep it up, circumstances just make it so tempting to fall off the train. 'Oh just checking the value of that constant, I'll just use Vim.'. 'It's just one trace line, I'll just use print.'. And before you know it you're back in the stone ages, and the circle repeats.


I really agree with this. I would love to be in a clean self-built environment in vim or sublime or the likes, but IDE's just work and boost productivity.


For me the debugger really shines when I need to trace the behavior of some library. It’s worth the effort to instrument the codebase I’m working on with good logging, but doing that to every library I call would be insane.


> Why don't people just use a debugger?

I only speak for myself, but I find that time spent inside a debugger is "lost", and bound to be re-spent again and again if you ever find new problems with the same code. I pretty much prefer to add assertions, logging and comments to my program, that will stay there and make further re-reads easier and more useful. As a matter of principle, I never use a debugger (except for assembly code, but that's rare nowadays).


Then you are either using a bad debugger, or you're not using one properly. A good, fully-featured debugger allows you to store sessions and configurations for various efforts and problems you're trying to diagnose, along with your notes and links to relevant issues.


> Then you are either using a bad debugger, or you're not using one properly.

You are certainly right, as I barely ever use a debugger and it is always an annoying experience. Can you suggest such good debuggers for C, Julia and Python?

Sounds like these "sessions and configurations" are important, and thus they should be formally part of the program, committed to its public repository for all to see and use. I call these things "tests" and write them using the same language as the rest of the program, and run them frequently to be sure that they don't get out of sync with the program itself.


+1 Agreed. Unless you can immediately see the problem from back-trace and local variables, stepping through with the debugger is often a waste of time compared to adding some logging.


Asserts are great, but when they fail, you still need to figure out why it happened. Nice environments allow you to debug the state at the point of assertion, python certainly doesn't make that convenient.


Agreed, and tests too.


I find you need both. Print gives you a nice high level view of your code, whilst stepping let's you see what's happening.

I would say I get more use out of printing these days.


Debuggers are great. But to unboggle your mind, here’s why I don’t use a Python debugger.

I don’t write enough Python to justify the time to learn the debugger. I write just enough to help my staff/teammate fix their problem or debug some tool I’m using and move on.

I’ve been doing this long enough to lose interest in digging deeply into each language. I learned gdb for C. After that I learned jdb and Eclipse for Java. There was probably something I used for the few years I wrote Perl and later PHP. After a while I stopped getting excited about deeply learning new languages and their tools.

Debuggers and IDEs are yet another dependency. Frequently I use systems I don’t control, like client systems. They have really stringent rules about what can be installed. The lowest common denominator is vim and print.

Certainly there are lots of reasons to learn debuggers. But there are also lots of valid reasons to learn a tool like this which is easy to learn, useful in across many languages, and gets the job done in many situations.


I just use IntelliJ IDEs for all programming languages I write in and the debugger experience is the same everywhere.


It produces an ordered transcript of the whole run that you can visually scan quickly for the unexpected, and then paste bits of into your short-term notes file.

It's easier to go back in time with print debugging: just scroll up.

It also works in situations where debugging is infeasible or would slow down execution too much.


Right?? I find it really sad that python being the go-to "beginner/mess-around" language has such poor debugging in most environments. I've tried VSCode, Atom, Juno, Spyder, Jupyter(lab) and Thonny. All of them were either broken, buggy or cumbersome to step-debug in while also using the REPL. I will give PyCharm a go though.

The worst part is, I think a lot of devs resort to print-debugging which in turn leads to writing sloppy code - e.g. avoiding function calls to be able to access state, etc.


There is a place for both. Sometimes I really do want to see a sequence of events, or it may not be possible to run a debugger.

I work on an automated voice agent for restaurant drive-throughs, specifically the code that connects the voice agent to the existing point of sale system. When something goes wrong in a store, having a log of the API interactions between the voice agent and the POS is essential for troubleshooting.

Recently I updated the logging with a code generator option. I can place an order by voice on my local test setup, and it writes out a Python script that replays the same API calls, ready to run in a standalone API tester. This is super handy for testing.

But none of this is a substitute for a good debugger. For me it is a powerful thing to be able to stop the code at a breakpoint and see the actual values of all the variables at that point in time.

And it's not just for debugging. I jumped into this project after it had already been developed for a couple of years, and needless to say there is a lot that I did not understand about the code.

Of course I can read the code and use Ctrl+Click in PyCharm to see the definition of a function. But to really understand it, there is nothing like setting a breakpoint and looking at live data.


For me it's just lack of practice. This is rare enough for me that I would have to look up how to use the debugger each time. I never forget how to type print.


It depends on the situation.

In general, debuggers give you much more contextual information, but using a debugger is extremely slow. To see what's going on, you have to step through your program, one line at a time, until the interesting thing happens. That can take ages.

If you want to see how a specific function is behaving and it's called 100 times, it's a lot easier to look at a printed log of the 100 calls and scan or search it for interesting behaviour than to pause the program in a debugger and step through it over and over, hoping to catch the interesting moment.
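One cheap way to get that scannable log of all 100 calls is a tracing decorator, sketched here (libraries like q's `@q` decorator and snoop offer polished versions of the same idea):

```python
import functools

def log_calls(fn):
    """Print every call's arguments and return value, so a whole run can be
    scanned or grepped instead of stepped through one breakpoint at a time."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        print(f"{fn.__name__}{args} -> {result!r}")
        return result
    return wrapper

@log_calls
def scale(x, factor=2):
    return x * factor

for i in range(3):
    scale(i)  # prints: scale(0,) -> 0, scale(1,) -> 2, scale(2,) -> 4
```

Redirecting that output to a file turns a hundred invocations into something you can search in seconds.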


> you have to step through your program, one line at a time

You can set more than one breakpoint at a time. Hitting 'c' will get you to the next breakpoint, no matter where it is.

> and step through it over and over

No, you don't (shouldn't) do that. You write a conditional breakpoint which activates when something interesting happens.


OTOH if you're able to write `if bug_will_happen_on_next_line(): breakpoint()` you're pretty much done debugging.


No, when starting debugging you most often know the results of a bug. You can break conditionally on when the results appear, then work your way up the stack or jump to the beginning of a block to re-examine it. Or you can rerun the program, this time with a breakpoint set based on the inspection of the environment when the bug happened.

In pdb not only can you set breakpoints with conditions[1], you can also assign commands to be executed when the breakpoint is reached[2]. You can also use `display` command to - effectively - insert temporary `print`s in any stack frame[3]. Coupled with `up`, `down`, and `jump` commands, and the ability to evaluate any code in the scope of a break, it really gives you a lot of options on how to find the problem.

Though, logging is still a good thing. When I see a commented out `print` in code review, I usually suggest to replace it with a DEBUG level logger call. Extensive logging does help to trace the execution, it can be disabled when not needed, and can improve the readability of the code. Although raw print is an antipattern (can't easily enable some and disable other, need to clean up after debugging, doesn't display the stack trace, etc.), logging is a valid technique which complements the usage of a debugger nicely.

[1] https://docs.python.org/3/library/pdb.html#pdbcommand-break

[2] https://docs.python.org/3/library/pdb.html#pdbcommand-comman...

[3] https://docs.python.org/3/library/pdb.html#pdbcommand-displa...


Yes, and sometimes I hit the wrong key and have to start again.


For what, exactly? Just saying "I can use my debugger" doesn't really help.

Debuggers give you a lot of information at a particular point (stack frame) in execution time, whereby print-ing can give you a filtered view of what happen(ed) during a full run.

When I can mark breakpoints and store the stack at that time (for each pass through that bp) in a inspectable gui tree (a list for linear execution path, a tree for threads etc) then I won't need print statements any more.


They're not perfect substitutes for each other. And not every Python script that you need to quickly debug is worth firing up a whole IDE for. Especially if your IDE requires creating a project and such. I say this as someone who very much loves Visual Studio.


What do you use for remote debugging? And do you run all production instances with a debugging server attached? What's the overhead?


I use PyCharm's native SSH interpreter tool: https://www.jetbrains.com/help/pycharm/configuring-remote-in...

It automatically syncs your local project with the remote machine and runs seamlessly. They have a similar offering for docker, but I have yet to try it when running over ssh.


print is built in and runs everywhere my code runs. It never breaks and i never have to remember to install it. I never have to configure it or update it.


uncle bob told me you don't wanna be good at using a debugger

print("here") till the day i die!!!


We have this in Scala via the PPrint library, which provides 'pprint.log'. pprint.log also shows the filename/line number (so you can find your prints later) and has colored output and intelligent indent-formatting for things than wrap to multiple lines. It's super covenient!


Pretty printing is not important for me when debugging or logging; it would only make sense if the neat logger/debugger could accept a lazily evaluated expression. That saves some time if the logger is disabled at runtime. Otherwise the ugly print-debug-fix-and-remove pipeline is still there.



LOL, first time I write a slogan catchy enough that other people copy it :)

https://news.ycombinator.com/item?id=19717786 PySnooper: Never use print for debugging again


It seems like it was a coincidence. See comment thread:

https://github.com/gruns/icecream/commit/ee849b840eb34242aa2...


Has nobody heard about the logging module in Python? It can essentially do all of this. It's also: 1. built in; 2. configurable by others, so you can add it to a library; 3. able to give different log levels per line (debug, info, etc).


Friendly reminder: the stdlib includes the pprint module. pprint.pprint() will give nicely formatted output of data structures and lists.

(Not suggesting it as a replacement for this tool - but if you’re a pyhacker and don’t know about this it’s handy)


> prints both its own arguments and the values of those arguments.

That's not enough.

Almost always I also want the argument type.

Occasionally a unique (i.e. as precise as possible), sortable (i.e. with leading zeros) timestamp also comes in useful.


> sortable (i.e. with leading zeros) time stamp

Not just with leading zeros but also written in the YYYY-MM-DD-HH-... format.
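Both requests are easy to bolt onto a small helper; a sketch (the name and format are my own):

```python
from datetime import datetime

def dbg(value):
    """Print a sortable, zero-padded timestamp, the value's type name,
    and its repr; return the value so the call can be nested."""
    ts = datetime.now().strftime("%Y-%m-%d-%H-%M-%S.%f")
    print(f"[{ts}] {type(value).__name__}: {value!r}")
    return value

dbg({"a": 1})  # e.g. [2024-05-01-03-23-52.479123] dict: {'a': 1}
```

Because every field is zero-padded and ordered from year down to microsecond, plain lexicographic sorting of the log lines is also chronological sorting.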


This would be great to have.

I don't use types in my Python much. Yet at least.

Can you submit a PR to add this? That'd be awesome.


I was wondering if there was any equivalent library for Ruby. Turns out there is: https://github.com/AndyObtiva/puts_debuggerer. With that library,

  pd bug_or_band
Prints this:

  [PD] /Users/User/trivia_app.rb:4
     > pd bug_or_band
    => "beattle"


This is similar to an earlier package called "q"[0]

[0] https://github.com/zestyping/q


Pycharm is a neat IDE. I know some people do not like IDEs but all the debug tools Pycharm comes with are killer and helps greatly when debugging on a local environment.


This here is excellent:

    print(f"{d['key'][1]=}")
        d['key'][1]='one'
My immediate reaction was: how can I make this `form` easier to type? And that's what `Icecream` does. I reviewed their code and realized they work with the AST and inspection.

I recently started to learn (Common) Lisp. I realize how easy it would be to write a macro in Lisp that would implement the functionality of `Icecream` in a few lines of code.


I've built https://github.com/kunalb/panopticon to easily trace function execution to handle a similar use case, except that it generates output for chrome://tracing instead of printing out lines.

cPython's settrace/setprofile functionality enables so many cool tools.


I am looking at the examples (and the comments here about f-strings), but neither fit my own usage of debugging with print. When I use print, it is mostly for two reasons. To see the structure of a dict (often with multiple layers of dicts inside), or to get dir(x) to see what methods and attributes of a library object, since the documentation is not always so forthcoming.
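Both of those uses are already covered by stdlib tools; for instance:

```python
import pprint

# Nested dicts: pprint breaks the structure across indented lines,
# where plain print() gives one unreadable row.
nested = {"user": {"name": "Ada", "roles": ["admin", {"scope": "global"}]}}
pprint.pprint(nested, width=40)

# Surveying a library object's API: filter the dunder noise out of dir().
public = [name for name in dir([]) if not name.startswith("_")]
print(public)  # append, clear, copy, count, extend, index, ...
```

For very deep structures, `pprint.pprint(obj, depth=2)` truncates the lower levels so the top-level shape stays visible.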


Instead of introducing a new library, why not just instruct your editor to do the heavy lifting for you? For instance, I've written a small Emacs function that asks for the argument to print in the Minibuffer and then inserts it once quoted, once unquoted.

It's quite nice to have such a function for any programming language you use and bind it to the same keyboard short-cut.


For one, people not using emacs, or whatever editor it would be, can use it.


Then instruct whatever is your favorite editor likewise.


Sure, but why would anyone not use Emacs?!


sometimes, all you need is a print() statement


If you like this, you might also like my small debugging utility, a better_exchook replacement: https://github.com/albertz/py_better_exchook

Simple example:

    assert x == 4
When this fails, it will print the value of `x`.
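Not the library's actual implementation, but a toy sketch of the underlying idea: a custom `sys.excepthook` can walk the traceback to the innermost frame and dump its locals, which is how the value of `x` can be recovered after the assert fails.

```python
import sys
import traceback

def locals_excepthook(exc_type, exc, tb):
    # Print the normal traceback first, then the locals of the
    # innermost frame (where the exception was actually raised).
    traceback.print_exception(exc_type, exc, tb)
    while tb.tb_next is not None:
        tb = tb.tb_next
    for name, value in tb.tb_frame.f_locals.items():
        print(f"  {name} = {value!r}", file=sys.stderr)

sys.excepthook = locals_excepthook
```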


Just a point of curiosity about reassigning `sys.excepthook`. Is there a reason you simply reassign it and lose information about the old excepthook:

    sys.excepthook = better_exchook
instead of something like:

    def generate_better_exchook(..., current_excepthook=None):
        previous_excepthook = current_excepthook
        def better_exchook(exception_type, exception_instance, exception_traceback):
            ...
            if previous_excepthook is not None:
                previous_excepthook(exception_type, exception_instance, exception_traceback)
        return better_exchook
and then

    sys.excepthook = generate_better_exchook(..., sys.excepthook)
Do you prefer not to do this because it would keep this closure around in memory until the Python process exits?


Hey, sorry, just saw this question right now.

Actually, I don't know. I assume this is not standard, because many excepthook handlers would probably print some variant of the stack trace on stdout or stderr, and then you would end up having printed the stack trace multiple times. Or maybe it depends. If your excepthook instead prints it somewhere else (e.g. some log file), it makes sense to also call the default handler afterwards.

But no, this is not about the closure.



The problem with printing variables is when you want to find them there's nothing to search for.


if you name them well, sure there is.
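For example, a throwaway grep-able tag (the `DBG-` prefix and `dbg` helper here are just a made-up convention) makes the statements easy to find and delete before committing:

```python
def dbg(label, value):
    # DBG- is an arbitrary marker: grep for it before committing.
    print(f"DBG-{label}: {value!r}")
    return value  # pass-through, so it can wrap expressions in place

total = dbg("cart-total", sum([10, 4]))
```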


Wait, isn't that just enforcing bad practice? Doesn't Python have proper debuggers with breakpoints etc.? Why in hell should I use a lib to print stuff just for debugging? I mean, why not have a proper logging lib and pipe some statements to [debug] or whatever? I don't get it.


Requirements for logging and debugging are quite different. For logging I probably don't want to print the source expression, I probably want to include a semantic description. For logging it is also intended to live in the code for longer, so I probably want something a touch more readable.

The debugging and logging spaces overlap but there are definitely differences if you really start optimizing for debugging experience. I don't think encouraging bad practices is a problem if the code will be deleted before being submitted.


You are right on some points, but I still see it a bit differently. I disagree with the statement "I don't think encouraging bad practices is a problem if the code will be deleted before being submitted." The thing is, the deadline just needs to shift a bit and suddenly you have to commit code in a hurry. There will always be a chance that you forget to delete something. A great example is this one: https://arstechnica.com/gadgets/2021/03/buffer-overruns-lice... where low-level code with debugging printfs nearly made it into the FreeBSD kernel.

I mean, everyone prints something for debugging purposes from time to time; that's fine. But having an extra lib just for that, I don't think that's a good idea. Logging frameworks also let you toggle whether the source expression is printed, so you can show it only when a --debug flag is given to your tool, for example.


I used to teach people to use the amazing debuggers we have now so that they'd never have to use print statements to debug their code.

Since I became a front-end developer, all I've done is use print statements to debug JS. Feels like I've gone backwards.


Why don't you use the devtools debugger? I generally find people in webdev tend to use debuggers more than other fields because it's so readily available in devtools, or with `debugger` in JavaScript.


It's nowhere near e.g. Pycharm as a debugging tool, is why. It's awful.


I'd be lying if I said I never use print to debug, but honestly I mostly use logging.



The only thing I will say is that this would not meet my personal code review standards for logging. Rule number one is to never put function calls in a logging statement; only log static data. As optimistic as one might be that `ic` has no side effects, the most conservative approach is to never put anything that interacts with run-time behavior in a log statement. This is especially good practice when working across many code bases and languages: if one gets comfortable wrapping function calls in something like `ic` that has no side effects, one may assume the same out of habit with other such utilities or in other languages. In my experience the conservative approach is best: never insert run-time code in log statements, and only log static data.
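A small illustration of why (the `next_id` counter here is hypothetical): if the logged expression has side effects, it runs even when the log level would discard the message, so behavior silently depends on logging configuration.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

counter = {"n": 0}

def next_id():
    # Side effect: every call advances the counter.
    counter["n"] += 1
    return counter["n"]

# Risky: the f-string is evaluated even though DEBUG is disabled,
# so next_id() runs anyway and silently consumes an id.
log.debug(f"allocated id {next_id()}")

# Conservative: compute first, then log the already-computed value.
current = next_id()
log.info("allocated id %s", current)
```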


It says on the tin that it's for print debugging. That's not the same as logging.


Interesting, similar to Julia's `@show` macro

  julia> @show sqrt(121)
  sqrt(121) = 11.0
  11.0
The last line being the returned value.


Note: VS code made it a lot easier to use the debugger, it’s basically set up and ready to go, but this is a great project too!


Is something like that available in .NET? Seems possible considering expression support, maybe with bad performance.


This has the added benefit that you can turn it off in CI/CD and prevent people committing debug statements.
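A sketch of that gate. IceCream itself documents `ic.disable()` for this; the stand-alone `dbg` helper below is hypothetical, used so the sketch runs without the package installed:

```python
import os

# Debug output is silenced when a CI environment variable is set,
# mirroring what calling ic.disable() does for IceCream.
DEBUG_ENABLED = os.environ.get("CI") is None

def dbg(value):
    if DEBUG_ENABLED:
        print(f"dbg: {value!r}")
    return value  # still returns its argument, so call sites keep working

result = dbg(sum(range(5)))
```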

