Programming in the Debugger (willcrichton.net)
117 points by wcrichton 87 days ago | 86 comments

In VB the feature requested here was called Edit and Continue. VB users loved it.

The original release of VB.NET did not have it, and it was the highest-priority feature to add back. When we added it back into Visual Studio, C# got it as well because its users demanded it. Anders was trying to ensure generics made it into .NET 2.0 and did not want the feature. I know he wrote some critiques arguing that it encourages producing bad code.

I added it in the debugger, each language team had work of its own, and there was massive work for the CLR.

Changing the current function is actually pretty easy; the hard part is when you need to remap the instruction pointer in functions that have been edited multiple times and have frames trapped on call stacks across multiple threads.

Edit and Continue also works in VS C++ and has been a godsend for me when tweaking DSP code. This particular feature has remained unmatched by other IDEs/toolchains for what, 15 years? That's pretty impressive stuff you've worked on.

It's certainly not adapted to all domains, but in DSP applications where you're looking at immediate audio/video feedback (in my case, audio synths and effects) it makes a lot of sense to regularly switch into "tweaking mode" - not only in order to refine the code, but also to get a better understanding of how small variations in each part of the code can affect the live output, which can be a large portion of the work you'll do in DSP applications. It's just awesome, you're working on a full-fledged application and you can turn it into a live editor at any moment.

"Edit and Continue also works in VS C++ and has been a godsend for me when tweaking DSP code. This particular feature has remained unmatched by other IDEs/toolchains for what, 15 years?"

Edit and continue has worked in C++ toolchains and editors since before VS, and continues to work now :)

Fix and continue support, for example, existed in GDB way back in 2003 (see, e.g., http://www.sourceware.org/ml/gdb/2003-06/msg00500.html) and in forks of gdb well before, which others used for C++ (hence the "implemented again" headline).

The message covers some of the other places this was implemented.

IIRC (it's been a while), HP WDB support for this goes back to the very late 90's.

Now, I'm certainly not going to claim these IDEs were as good as VS, but they definitely had edit and continue for C++. It's definitely a technical accomplishment, but it's not unique.

Not a very unique style as claimed, though. Common Lisp implementations often have a very full-featured debugger coupled with condition restarts; the experience is awesome and really should be a prerequisite for any dynamic language to be called fit for serious application. Unfortunately it is often absent.

Smalltalk has had this too since 1980 or earlier. The Smalltalk approach generally involves changing the code for a named method and then restarting either the method or a code block inside the method. Coupled with Smalltalk's flexible doesNotUnderstand error handling for when methods are not implemented, coding in the Smalltalk debugger is phenomenal for top-down programming where essentially you keep pushing deeper into a code base you are creating on the fly.

In CL you typically have use-value and bind-value restarts for various conditions, which do not imply modification of the running program, only of its state. Whether it is more or less powerful than "edit and continue" model (which you in fact can do in CL, but doing that comfortably requires IDE support) is for me a question without definitive answer.

Not related to your point, but I do think it's entertaining that every time I post something about programming languages, there is always at least one person who says "Lisp already solved it!" [1] [2]

[1] https://news.ycombinator.com/item?id=16726379

[2] https://news.ycombinator.com/item?id=12221818

Maybe it's because you sound as if you had discovered new and unique (!) stuff, or because it actually is as you describe. For example:

> Jupyter presents a unique programming style


> But the difference between Jupyter and a REPL is that Jupyter is persistent. Code which I write in a REPL disappears along with my data when the REPL exits, but Jupyter notebooks hang around.

Sounds like a typical Mathematica notebook to me.

Macsyma (a computer algebra system written in Lisp) also had notebooks like that: https://people.eecs.berkeley.edu/~fateman/macsyma/docs/intro...

REPLs on Lisp systems have been persistent in Lisp images (not in notebook files, and not transferable) for a very long time, and some display text/graphics inline.

I'll grant "unique" is not an ideal word choice. I meant it not so much as "you literally can't find this anywhere else" as "it's something you don't find much these days."

And I think it's a little uncharitable to think I was claiming that wholesale, since I explicitly reference other implementations of the same idea at the end.

> it's something you don't find much these days

This approach is available with all of the JVM languages too. Putting that together with what others have noted about the CLR, C, and Lisp implementations, this may be more common than rare.



It may not be so much the case that people are being uncharitable as that there really isn't much that is new or unique in software.

For my own part, I have hit that wall so many times that I now begin with the assumption "if I can imagine how it might work, someone has probably already done it".

That is actually nice in the sense that I nearly always have a starting foundation when solving problems these days, but also frustrating to the point of cynicism when discoveries or inventions I put a lot of time into turn out to be minor variations on something 1k+ people already knew about.

It seems more embarrassing than entertaining.

I find this comment frustrating, just like many of the Lisp comments: asserting a condescending statement with zero explanation. These Lisp comments aren't taking the form "oh, Lisp has something like this too, we can learn from it"; instead it's, "something like this feature exists in Lisp, therefore these ideas clearly aren't new." For example, on the thread about gradual typing, Common Lisp has definitely not had gradual typing, since the concept didn't exist until the last 10 years (distinct from optional static typing). Yet that does not stop the commenter from asserting "oh, Lisp has had this forever" without any further discussion or nuance.

I think the point isn't to assert that it isn't new; rather, it's a sad state of affairs that many tools are still playing catch-up with the developer tools of the 80s.

For example, every time we discuss Lisp on HN, many talk about SLIME, SBCL and such.

Yet the actual modern experience of what it meant to use Lisp in its glory days lives on in Allegro Common Lisp and LispWorks.

Reading papers from Xerox PARC research always makes me sad that many think replicating a PDP-11 is the ultimate developer experience.

Siek/Taha in http://scheme2006.cs.uchicago.edu/13-siek.pdf say

> Common LISP [23] and Dylan [12, 37] include optional type annotations, but the annotations are not used for type checking, they are used to improve performance.

Actually, type annotations in Common Lisp have been mostly used for three different purposes:

  * improving performance
    (by choosing specialized operations
     and/or removing runtime type checks/dispatching)
  * improving runtime safety, plus better error messages
    (by adding more and more specific runtime type checks)
  * improving compile-time safety
    (by compile-time warnings of type errors)
The CMUCL compiler has used optional type declarations and type annotations at compile time for some static type checking, combined with some type inference.


There are at least two forks of CMUCL (SBCL and Scieneer CL), which use this, too.

Thus CMU Common Lisp has optional static typing + a limited form of compile-time type inference/propagation/checks.

Not sure when CMUCL added this, probably in the late 80s/early 90s. It was for example published in 1992 in the paper: 'The Python compiler for CMU Common Lisp', https://www.researchgate.net/publication/221252239_Python_co...
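For comparison, Python sits in a similar spot today: annotations are optional and ignored at runtime, and a separate static checker such as mypy consumes them. A minimal sketch (the `add` function is made up for illustration):

```python
def add(x: int, y: int) -> int:
    # CPython stores these annotations but never enforces them at
    # runtime, much like CL declarations that only guided the compiler.
    return x + y

print(add(1, 2))      # 3
print(add("a", "b"))  # ab: runs fine at runtime; a static checker flags it
```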

SBCL is quite good at the static checking, imo. Not sure what level of maturity it was at 10 years ago though.

Same. This came from the Python compiler in CMUCL, in the 80s.

I know that feel :)

It's just as bad in any discussion of graphics techniques, operating system features, and again in security approaches/features. Migrating to devops and security spaces has been like a conceptual "Groundhog Day".

I'm starting to think our industry is actually not so innovative and fast-moving as we all like to say it is. It may be that software is merely a new engineering field, and new fields begin with a wave of invention followed by waves of reiteration as boundaries are found and concepts get refined.

It's mostly frustrating for me that the state of the art for dynamic programming has not really advanced, and in many cases is instead an objective regression.

Sometimes I look at the bug database, pick up the next highest-priority bug, and have zero idea what the problem may be; sometimes there aren't any screenshots and the description is worded so poorly it is hard to make sense of. So I follow the repro steps, which perhaps take 15 minutes, and trigger the issue.

After that I may pause execution and examine some variables and work out what is wrong. From there I need to work out what has caused this to go wrong, perhaps a hardware breakpoint and reloading part of the scene will help.

From doing that, maybe I see what triggers the hardware breakpoint: a for loop not ending correctly or taking too long to end.

With edit and continue it may be possible to make the fix, trigger the function again and close the bug. Without it, that may require a compile, a link and the 15 minutes of repro steps.

So edit and continue can massively cut down the amount of time required to fix a bug.

It seems really weird it would be described as amateur or not professional. As usual I suspect it is part of the trend of making everything black and white rather than looking at things on a case by case basis.

Sure, if some maths needs to be solved to fix a collision bug when the user is drawing a line on the screen, I would work out the maths and make sure the function is right. But if the maths is right or I don't know what is causing the issue, edit and continue is just another tool for fast and efficient debugging.

In my opinion...

But would you be sure you've actually fixed the bug? Perhaps your edit changes the state that would be reached by the continue (e.g. the bug is on the first iteration of a loop, and your edit inadvertently changes the initialisation).

I think it's useful for fleshing out a bug and could save time if there's another issue downstream, but I wouldn't count it as solved until it has been tested on a fresh run.

Again, there is no need for black and white; take each case as it comes. Often edit and continue is enough; sometimes you do need to stop and restart to be sure. Sometimes you are 99% sure and that is enough when you have 4 weeks of bugs and 1 week to ship a product. Heck, sometimes you are trying to repro a complex bug and halfway through hit an unrelated issue; I make a sound of pain, someone else asks what's up, I tell them, they say it is fixed in latest, but I can edit-and-continue a fix now to save having to spend ages reproing. There are loads of times the feature is useful. I would suggest the people finding it amateur and not useful aren't using it correctly or at sensible times.

That's easy to do. After having the fix, go from the beginning and see if it still happens. Of course, this flow does not apply to all kinds of bugs, but I'm willing to bet that at least 90% of them can be fixed like this.

I use this kind of workflow all the time in Emacs. I write code in one window and have a Jupyter REPL in the other and send code to the REPL to test things live as I'm coding.

I'll have "pdb" set up in Jupyter REPL so it drops into a debugger when I have an error with a function.

In the case of a long running function I want to debug (like the video example at the end) I'll set local variables with the same names as the variables of the function and then execute the function line-by-line/chunk-by-chunk.
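That parameter-shadowing trick can be sketched like this (the `normalize` function is a made-up example, not from the thread):

```python
def normalize(xs, scale=1.0):
    total = sum(xs)
    return [scale * x / total for x in xs]

# To debug interactively, bind the parameters at top level under the
# same names, then evaluate the body one statement at a time:
xs = [1.0, 3.0]
scale = 2.0
total = sum(xs)                           # inspect: 4.0
result = [scale * x / total for x in xs]  # inspect each piece as needed
print(result)                             # [0.5, 1.5]
```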

The only problem is losing track of global state, but the same thing can happen in Jupyter notebooks.

This type of workflow is really common in Lisp and R.

If I want to integrate documentation and code, I'll use org-babel or R Markdown (which does work with Python).

What do you use for running the jupyter REPL in emacs? I haven't found a great way of doing so yet. Just running it in a comint buffer?

Yeah, just comint with elpy. There are some rough edges; I can't input multiline code in the comint buffer (but can send it), and it gives an error on the very first command, but otherwise it works well.

If you connect your code buffer to a Jupyter notebook via Emacs IPython Notebook [1] then you can hit C-c C-o (ein:console-open), which will launch a Jupyter REPL.

- [1] https://github.com/millejoh/emacs-ipython-notebook

Looks like that just calls make-comint-in-buffer eventually[1] with some fancy stuff to make sure it connects to the same jupyter server as being used for ein. Not sure I see a lot of utility there if I'm just wanting to run a jupyter REPL next to a regular code buffer or e.g. org-mode.

[1] https://github.com/millejoh/emacs-ipython-notebook/blob/cfd0...

ob-ipython works really well.

The REPL opened with org-babel-switch-to-session has been really buggy for me. Has it worked for you? Could just be because I'm using it with non-python kernels, which isn't well supported yet in general as far as I can tell.

I've only used it with a Python kernel so far, and doing eval on each or all blocks in the buffer (org-ctrl-c-ctrl-c, and org-babel-execute-buffer). Haven't had problems so far.

I hate actually writing code in Jupyter. Part of it is just that my muscle memory is all Emacs, but it's not just the muscle memory; it's all the features missing that I usually use but that are common in an IDE or good editor. Does anyone know whether Emacs IPython Notebook is still viable?

Yes Emacs IPython Notebook is alive and kicking [1]. I recently wrote a tool on top of it and found most everything to work, even as I was banging on it pretty hard at times.

- [1] https://github.com/millejoh/emacs-ipython-notebook

I've seen others do similar things (including beginners I was trying to teach), although in languages like C/C++, and I do not think it is a good idea at all. It leads to "tunnel vision" and concentrating on small pieces at a time while ignoring the "big picture" as well as mindless fiddling with code in an attempt to "get it to work", which pretty much leads to worse code quality overall and a decrease in productivity. I suppose it may feel productive to be making lots of small changes and seeing the results immediately, and somewhat addictive too, but my experience says that it's not. I recall helping a coworker who had spent several hours messing around with a small fragment of code that he was sure was causing a bug; he had made lots of little changes and yet the problem remained, but was reluctant to look somewhere else because he was almost addicted to that allure of "just one more little change... hope this fixes it...!" I told him to stop doing that and think more about the data coming into that piece of code and going out, which it turns out in its original unmodified form was perfectly fine --- the problem was actually somewhere else, and later was fixed in a few minutes after he made that realisation. If I hadn't intervened I would not be surprised if he spent another few days of fruitless labour.

The other thing that this sort of interactivity/short-edit-continue-cycling is heavily promoted for is learning, which IMHO is probably the worst way to do it; it's essentially encouraging programmers to write code without actually understanding what they're writing, and when it doesn't work, to try random things until it does. That's no way to write good software.

It's better to spend more time thinking about the problem, writing, and then proofreading code before even trying to compile and run it, than try to write something you vaguely think may work and then spend much more time debugging and "fixing" it. Those from the latter school of thought are often surprised when they see me write several hundred lines of code and it works perfectly the first time. To me, it's the other way around... seeing those who can't write more than a few words without making blatant syntax or logic errors.

In other words, "reducing the cost of mistakes" may only encourage making them; and in any case, does not discourage it.

While it's true that shorter write/run cycles can encourage a throw-spaghetti-at-a-wall approach to programming, the other side is the Bret Victor point of view [1], i.e. that a programming environment that clearly exposes the behavior of the computer and allows the programmer to react to it can help her more easily form a mental model of the program's behavior. People can only keep but so much program state inside their heads, so careful thinking will only take you but so far. I'm optimistic that in the long-term, developing these kinds of interactive programming tools will be a net positive for programmer productivity even in the face of additional bugs that may be introduced.

[1] http://worrydream.com/LearnableProgramming/

I think there is a lot of merit to what you say here. For example, one of the strongest realizations I had while using an interactive programming tool was while hacking on a 30-50 line function I wrote. In the process I thought to myself "gee, I'm glad I have this tool otherwise I would have no idea what is going on!".

However, before long, I realized I needed the tool because the function was too long and needed to be split into smaller pieces. In that case having the crutch of an interactive programming tool led to worse code. Once I refactored the function I realized there wasn't much need for the tool anymore.

That being said, I have taught many a student to code and can't tell you how many conversations have ended with me saying "Maybe it will work. I'm not sure. Just try it!". Students (not all) in an unfamiliar environment will sometimes overthink things rather than trying, failing quickly and correcting their mistakes. In this case, failing quickly is an indispensable tool to help students "hone in" on the solution and make progress towards it.

Of course I have seen a fair number of students engage in "shotgun programming", switching a < operator to a > operator and hoping that it will work, so obviously there needs to be a balance.

It’s useful to realize Edit and Continue is not for professional programmers, and the qualities we’d most want are not appropriate to every problem.

The horrible code you, the programmer, were brought on to clean up exists because some non-professional domain expert made a 'hairball of extreme utility' that unlocked product-market fit on the back of a mess of experiments.

VB’s persona was ‘Mort’: literally a scientist trying to get an answer, or a business person trying to solve a problem, not a programmer trying to build a system. It’s 99 hairballs of no utility and 1 that wouldn’t have been built otherwise, because no one knew it was useful until it existed.

Some categories of UI experience problems lend themselves to simply making the trial loop as quick as possible.

It is also true that understanding a dataset has an initial experimental component. No amount of thinking about code is going to tell you the dataset has a fatal flaw, until you discover it.

In the old adage "there are two difficult problems in programming: caching, naming things, and off by one errors" edit and continue drastically helps with the last one (even for "professional programmers"). I loved that feature in C# while I was working at MS, it saved me so much time from checking that all my indexing was correct.

There are lots of interesting problems with interacting with external underdocumented systems.

I find edit and continue really useful for making something work with these.

Are these out of the reach of "professional programmers"? Can they only work on systems with fully specified models, with exacting requirements?

I agree with you and poorly worded this for a larger audience that doesn’t have the full context of design decisions on the feature.

As an example ... for a ‘professional programmer’ there is an argument that allowing E&C to work if you edit in vim or emails is a feature. For what we needed to accomplish that is an anti-feature.

I should probably not care, but i do: Emacs, not emails. I should also not write these on an iphone.

> Edit and Continue is not for professional programmers

Then I prefer to be called an amateur, as I really like using that feature, instead of losing my debugging context and starting everything from scratch, sometimes spending several minutes trying to replicate the issue that landed me there.

Not sure if I'm reading incorrectly into the tone of your response, but, for context, 'SteveJS was a developer on that feature, as he says elsewhere in the comment tree.

I sure noticed that. I just think that those of us who work as professional programmers are worthy of having productivity features like "Edit and Continue".

I don't have to "enjoy" doing everything the hard way, just for the sake of being professional.

UI/UX and language features designed for productivity, help everyone, not only newbies.

What I understood from SteveJS's remark is that although he took part in designing that feature, he doesn't share this opinion.

If not, I would be gladly corrected.

I think it is more than fine to use Edit and Continue as a professional. I’m acknowledging the trade offs and making an argument meant to be heard by a resistant audience that is likely to be the ones making the tool.

E&C is a useful tool, just like a debugger in general is a useful tool, despite the early enthusiasm for unit testing that had people claiming you shouldn’t use debuggers anymore but instead write unit tests to debug every issue. E&C can be misused. If I were using C# I would take generics over edit and continue if I could only have one, but I definitely prefer to have both.

The theory behind using personas is that you get a better tool or solution by focusing intensely on a concrete user rather than spreading a bunch of features across every possible customer. What I expressed is what is necessary to ‘get through’ to someone who hasn’t read a book like ‘The Inmates Are Running the Asylum’. That was true of many in DevDiv back at that time. UX is now recognized more firmly as a separable, highly valued skill. If you read and understand the persona and still think ‘I’m a programmer, this is a dev tool, my opinion is more important than what the persona would want’ ... that is the audience for the above. I think that is what is present in the comment to which I was replying, so I was attempting to put it in those terms.

I do believe it is helpful to understand who a tool is designed for to see why it works as it does. Edit and continue when done well is incredibly ‘safe’. It does what you expect every single time, even if you don’t know how it is doing it.

A good example where there is a hard choice is changing a LINQ statement. To make E&C in a dev tool hold true to ‘what is on the page is what runs’, you need to violate aspects of how deferred execution works. If the closure was already captured, do all existing instances of it go back to the source as it is on the page now, or the source as it was when captured? I have a strong opinion that it should be the source on the page now to build a good E&C, but that choice is detrimental to learning what is happening with any type of deferred execution. The cost of implementation for my opinion is also probably two or more orders of magnitude in effort.
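The deferred-execution problem has a rough analogue in Python generators (an analogy only; C# LINQ semantics differ in the details):

```python
def squares(xs):
    # Nothing runs until the generator is consumed; like a LINQ query,
    # the captured code executes later, at iteration time.
    for x in xs:
        yield x * x

data = [1, 2, 3]
q = squares(data)   # "captured" now, executed later
data.append(4)      # state changes before execution
result = list(q)    # iteration sees the mutated list
print(result)       # [1, 4, 9, 16]
```

Editing `squares` between capture and iteration poses the same question: should `q` run the old body or the new one?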

I also think ‘non-professionals’ should be able to program and get to something that works. Particularly in people who write developer tools the bigger chore is gaining empathy for people who deserve to program, but simply aren’t making infrastructure. My reply is counterproductive in trying to model proper empathy so thank you for calling that out.

Thank you very much as well for clarifying your point of view.

I fully agree with your point of view now.

I really, really, really don't understand what it is that you're saying here. Slow compile times make better programmers? And you base that on one anecdote? And somehow your example coworker would transform into a sensible programmer if you add a Sleep(30000) in the compiler? According to you, it is better if the compiler always makes everyone wait for long periods of time than letting the programmer decide when to pause and think? What??? Are you serious?

When I build C++ stuff at home I go to great lengths to shave milliseconds off my compile time, because it is just so much better to have your program built in under 2 s than in 2 minutes.

> Slow compile times make better programmers?

To put it bluntly, yes.

> And you base that on one anecdote?

I've seen it many times; I was just giving one example.

> And somehow your example coworker would transform into a sensible programmer if you add a Sleep(30000) in the compiler?

If every compile-and-test cycle took minutes instead of seconds, he wouldn't be so inclined to keep trying random changes.

Fast compile-and-test cycles can be good, but I've seen it abused more often than not.

My biggest issue with notebooks is that we're throwing away years of best practices. Notebooks often lead to untested code with poor structure, frequent use of global variables and readability issues. Even using one as a debugger is limited, since there's no way to step into functions.

But who builds whole applications in a notebook anyway? Most software dev best practices are really out the window when what you are doing is data analysis, or explication, or putting together reports.

I have built several dashboards with nothing but jupyter notebooks + jupyter widgets [1] with the help of jupyter-dashboards [2]. Granted they were little more than api calls followed by visualizations.

- [1] http://jupyter.org/widgets

- [2] https://github.com/jupyter/dashboards

> Interactive editing and debugging is limited to top-level code.

Very true. When programming in jupyter notebooks I often put off wrapping my code in a function because it becomes harder to interact with and debug. But this obviously becomes problematic when you want to leverage code reuse and abstraction.

In order to tackle this (somewhat) I made a tool [1] which allows you to take code anywhere, get it back into a Jupyter notebook and make it top-level (e.g. function/method and loop bodies).

- [1] https://github.com/ebanner/pynt

This has been one of my major gripes with ipython/jupyter notebook workflow.

And I realized that, past an intermediate level of expertise in this ecosystem, you could really benefit from dropping down to JSON of your notebook and doing a bit of "metaprogramming" (the way you did). Yet the jupyter development team doesn't appear to be taking that into account, i.e., not making it easy/consistent for users to programmatically manipulate their notebooks.

I would love to see "Jupyter for power users." I think the programming language tooling community needs to take a serious look at how interactive programming interfaces could work in the development cycle for all expert programmers, not just novices or data scientists.

Agreed. Indeed, the original punchline I had for pynt (the tool I linked above) was "Jupyter notebooks for software engineers". The idea being that software engineers might look at jupyter notebooks and think "they look cool, but my code isn't in a jupyter notebook, so it looks like I can't use them".

I'm curious for what ideas you have that you think jupyter notebooks (or other interactive programming tools) would have the most promise for gaining adoption amongst software engineers.

One pynt feature I had going facilitated a "debugger-driven development" workflow where when your code hits an exception it would save the state and generate a jupyter notebook with your code in it, hence allowing you to fix the bug faster.

In MATLAB I run in debugger mode very frequently, and it allows you to step inside functions, and query variables. When encapsulated within functions, it also allows variable assignment at the prompt. This is pretty useful when writing data munging or analysis scripts; in that case, the code in the editor is the running canonical copy, while the current running environment is exploratory in nature.

Programming in the debugger is a massively overlooked paradigm. We use gdb for interactive usage. You could become a super-productive programmer if you can use gdb programmatically. The possibilities are endless. The problem: gdb developers don't think of that kind of usage as a core part of the software (let alone making programmability more important than interactivity, something I believe in). And that's why it's one of my goals to develop a programmable debugger in the future if I get some time.

> Programming in the debugger is a massively overlooked paradigm.

I wonder if the tool I mentioned in [1] facilitates the workflow you have in mind? It promotes a workflow of setting a breakpoint at the beginning of a function, attach a jupyter notebook to it, dump your code into a jupyter notebook, edit the code in there, and then paste the result back into the code buffer (once you've verified everything checks out locally).

One reason I suspect the debugger is not seen as part of the software engineering process (i.e. to extend or modify code) is because of the editing capabilities within a debugger CLI are usually very primitive. At least with pynt, the "debugger" (i.e. the jupyter notebook) is still in emacs so you can leverage all the strengths of your editor while doing interactive programming.

> let alone making programmability more important than interactivity, something I believe in

Can you elaborate on what you mean here? Maybe give an example of a programmable gdb feature that you think would be useful?

- [1] https://news.ycombinator.com/item?id=16899023

I thought about this idea at a deeper level long time ago and don't remember all the details (and don't have the notes in front of me) but here are two points related to that:

- A programmable debugger would be a massive benefit to compiled languages like C, and not as much of a benefit for interpreted systems like Python. The REPL mode of programming in Python (you run something, don't like the results, try something else, then move to the next step) is already a pretty effective way to do exploratory prototyping. A programmable gdb would allow a REPL-like workflow for C.

- As an example, let's say a C program seg-faults at a certain point in code. If you run your C program inside gdb, gdb will stop there and let you do some exploration. That exploration is very tedious if you can't programmatically manipulate your registers and memory areas. In principle, you should be able to write an arbitrarily complex program to bring the seg-faulted state back into a functioning state, and then when the code continues, you don't get the seg-fault again, and you didn't even have to rerun the code.
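A rough Python analogue of that kind of scripted post-mortem exploration, inspecting the crashed frame's state in code rather than by hand (the `buggy` function is invented for illustration):

```python
import sys

def buggy(xs):
    total = 0
    for i in range(len(xs) + 1):  # off-by-one: walks past the end
        total += xs[i]
    return total

try:
    buggy([1, 2, 3])
except IndexError:
    # Walk the traceback to the innermost (crashed) frame and read its
    # locals in code, the moral equivalent of poking registers and
    # memory from a scripted gdb session.
    tb = sys.exc_info()[2]
    while tb.tb_next is not None:
        tb = tb.tb_next
    frame = tb.tb_frame
    print(frame.f_locals["i"], frame.f_locals["total"])  # 3 6
```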

One use case in which this is massively useful is long running scientific programs. You start a complex scientific simulation based on a C program and it is expected to run for, let's say, 48 hours, but it seg-faults in the 46th hour! If you run it in a programmable gdb you could fix the seg-fault right then and there, and continue the program instead of trial and error of multiple runs, each run taking 46 hours before you know if your fix worked or not.

I think this scenario might require more than just a programmable gdb, e.g., it might require a way for the program to be recompiled and put in memory replacing the older buggy program, before the program has actually finished. But a programmable gdb would be a big part of that system.
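The recover-and-continue idea can be sketched in Python, where the "debugger" is just an exception handler. To be clear, this is an illustrative analogy, not the gdb workflow itself: step(), the state dict, and the fault at hour 46 are all hypothetical stand-ins for a long-running C simulation whose memory you would patch from inside gdb.

```python
# Hedged sketch of "fix the state and continue" without a rerun.
# step() and the fault at hour 46 are made-up stand-ins for a
# simulation kernel; in gdb you'd repair registers/memory instead.
def step(state, hour):
    if hour == 46 and not state.get("patched"):
        raise ZeroDivisionError("fault at hour 46")
    state["hours_done"] = hour + 1

state = {}
hour = 0
while hour < 48:
    try:
        step(state, hour)
        hour += 1
    except ZeroDivisionError:
        # With a programmable debugger this repair would be done
        # interactively, at the point of the fault, on live state,
        # and the remaining two hours of work would not be lost.
        state["patched"] = True

print(state["hours_done"])  # 48: the run completed without restarting
```

The point is that the repair happens in the same process, at the moment of the fault, so the 46 hours of accumulated state survive.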

You should check out ups.

It has a built-in C interpreter that lets you dynamically add and change code while debugging.

Unfortunately the code has somewhat bitrotted, so it's hard to use on a modern Linux system.

Similarly, Cling, from CERN [0]: not the easiest thing to set up, but it gives you interactive C++ programming using a JIT interpreter.

And I can't mention Cling without mentioning what it's trying to replace, CINT [1], which gives you C and C++ programming, but with a rather slow interpreter.

[0] https://root.cern.ch/cling

[1] https://root.cern.ch/cint

Another great example of this is editing CSS in the Chrome inspector. It makes it easy to see where values are derived from, and add/toggle new rules and classes inline. It's a lifesaver when your UI rendering relies on a bunch of JavaScript state that you don't want to recreate each time you tweak a color.

This has been possible for 50 years at least, and the lisp, smalltalk, and functional programmers have been doing this for a long time now.

After programming my whole life I have to say this industry is surprisingly math averse, regressive, and led by cargo cults.

What wheel will we reinvent next week!? Stay tuned...

Yeah the article is a little bizarre. E.g.

> This works exactly as intended! We were able to edit our program while it was running, and then re-run only the part that needed fixing. In some sense, this is an obvious result—a REPL is designed to do exactly this, allow you to create new code while inside a long-running programming environment. But the difference between Jupyter and a REPL is that Jupyter is persistent. Code which I write in a REPL disappears along with my data when the REPL exits, but Jupyter notebooks hang around. Jupyter’s structure of delimited code cells enables a programming style where each can be treated like an atomic unit, where if it completes, then its effects are persisted in memory for other code cells to process.

> More generally, we can view this as a form of programming in the debugger. Rather than separating code creation and code execution as different phases of the programming cycle, they become intertwined. Jupyter performs the many functions of a debugger—inspecting the values of variables, setting breakpoints (ends of code cells), providing rich visualization of program intermediates (e.g. graphs)—except the programmer can react to program’s execution by changing the code while it runs.

I just don't understand what's so amazing about this. This is totally standard debugging. In Python you can do it with the built-in debugging module pdb:

    import pdb; pdb.set_trace()
Or you can run your script with

    python -m pdb script.py
Also the REPL doesn't just exit, it only exits if you allow it. E.g. if you call the script as

    python script.py
it will exit, but you can also call it as e.g.

    python -i script.py
or do any number of things and it will not exit.

I mean it's great that more people start debugging code, but calling this a feature of jupyter is a little ridiculous. The exact same feature exists in the python REPL and that's reflected in jupyter.

I think the article's point is that you don't restart script.py again and again but start it once, then modify it while it's running, and once you are happy with the final debugged version, the script.py program saved on your hard disk is that final debugged version.

You can sort of do something like this with REPLs, by copying code between an editor window and a REPL window until you are satisfied that it's what you want. But you have to keep the REPL and the editor in sync manually. For example, if by experimenting in the REPL or debugger you find a bug that requires changes to three functions, you must either change them in the editor and copy all of the changes to the REPL to test the new program state; or you can redefine them in the REPL's less-than-ideal editor and then make sure to copy the updated versions to the real editor to save them in the source file.

I've worked with a few systems that were not REPL-based but more notebook-based, and it's very cool if the system keeps track of stuff for you. In particular, Coq and Isabelle have such environments.

(I've never worked with Jupyter, but I think I should give it a try.)

just don't do something like:

    q = 'foo and bar'

inside pdb: since q is pdb's quit command, that line exits the debugger instead of assigning the variable (prefix the statement with ! to force pdb to treat it as Python).

I'm interested if anyone has good tools for this kind of workflow, especially for Python.

As others have mentioned in the comments it seems that similar workflows have existed across a number of languages and IDEs for many years, but it seems that they haven't really caught mainstream attention (in the sense of there being common conventions that multiple languages and tools follow).

For my own workflow, I have a debugger-like tool for IPython (https://github.com/nikitakit/xdbg) that lets me set a breakpoint anywhere in my code and resume the current Jupyter session once the breakpoint is reached, making sure that scope is properly adjusted to provide access to all local variables. When combined with text editor integration (such as https://github.com/nteract/hydrogen), this is the best I've managed to come up with in terms of minimizing the "penalty for abstraction" while maintaining interactivity.

Interesting approach. I hadn't thought of using a single notebook to adjust the scope and enter into functions. In my tool I take a one-function, one-notebook approach because that's how I did it by hand for a long time.

I'm curious how you handle the issue of scope in pynt.

From what I see in the readme, a lot of the features are about extracting code snippets from the source file and sending them to the notebook. But when you're trying to interactively edit a function that's deep in the call stack, there's a question of how to pause execution right as the function is being called and set up the execution scope for the REPL/notebook.

I actually think handling scope is perhaps the key issue for interactive programming. If the global scope is easier to interact with than anything nested, you continue to have a "penalty for abstraction". This is the case even if your language is purely functional/reactive/supports reversible debugging -- in some of his talks about Eve, Chris Granger mentioned how scope proved to be a real challenge even in a functional programming setting (IIRC).

This is one of the reasons I prefer Hydrogen to Jupyter notebooks. In a notebook all cells are toplevel, whereas in hydrogen I can select any chunk of text (or chunk of the AST) and execute it. It's not as good at presenting computational narratives, but much better for interactively changing an existing program to do something new. There's still a lot of issues on the UI front though, so I'm definitely interested in any ideas about that.

I thought about this problem - how to have interactivity and a nice editor at the same time, so I created a solution based on IPython.

I am using an inline IPython shell to interact with the local variables from any scope. I just drop a call to "DBG()" (as I called it) where I want to take a peek. To write the code, I use a normal program editor and work remotely over SSH. The only drawback is that I have to exit and reload the program for each change in the source files, but at least I have direct access to all scopes.

When I start a new program the last instruction is a call to DBG(). I compose functions in the REPL and move them to the source file, above the DBG call. I love the interactivity, ability to inspect, compose and test until I get the code right, while at the same time being able to structure larger source files in a nice editor.

Here's the library: https://github.com/horiacristescu/romanian-diacritic-restora...

Pros: lightweight, access all scopes, works remote, you can use your text editor

Cons: no inline graphics
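For anyone who wants the same trick without a dependency, a bare-bones version of this DBG() idea can be built from the stdlib alone (IPython.embed() gives the richer shell the parent describes). The function names here are just illustrative:

```python
import code
import inspect

def DBG():
    """Open an interactive console seeded with the caller's scope:
    a stripped-down sketch of the DBG() described above."""
    frame = inspect.currentframe().f_back
    ns = dict(frame.f_globals)
    ns.update(frame.f_locals)  # locals shadow globals
    code.interact(banner="DBG at line %d" % frame.f_lineno, local=ns)

def crunch(xs):
    total = sum(xs)
    DBG()         # drops you into a shell where 'xs' and 'total'
    return total  # are both in scope for inspection
```

When the console exits (Ctrl-D), the program simply continues from where DBG() was called.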

> Jupyter presents a unique programming style where the programmer can change her code while it's running....

Is the author trying to make some kind of statement by using 'her' as a gender neutral third person pronoun? Surely, if this is something they care about (and reasonably so), 'their' would be the logical choice? It's frustrating that the author chooses to be deliberately obtuse in their use of language - to the (slight) detriment of readability.

I understand "their" as a plural, so for lack of it, there are no good gender-neutral singular third person pronouns. In my writing I've quite often used "he" in the past, so I try to keep it balanced. I don't consider this deliberately obtuse, since I find it just as disorienting to see "their" used in a singular context.

> I understand "their" as a plural

Singular "they" has been in use in English for centuries. See https://en.wikipedia.org/wiki/Singular_they or https://www.merriam-webster.com/words-at-play/singular-nonbi... for example.

That's interesting, since 'her' in this context causes something equivalent to a Parse Error in my brain, but 'their' flows completely naturally for me.

Perhaps that's something you should evaluate about yourself then, because I don't think you'd raise the same ParseError if the profession were "nurse". Unconscious biases don't have to be malicious to exist, and I catch myself thinking the way you describe sometimes before realising "wait, this should be fine though".

It's nothing to do with the fact that it's about software development, I don't think (or at least hope). I just find it jarring to see 'her' used as an 'abstract' gender neutral pronoun. In my mind, 'his' has two semantically distinct definitions, and 'her' has just one - and that's just how the English language works.

Agree to disagree on this being a universal 'abstract' gender neutral pronoun problem, and not being profession related.

their is an acceptable third person pronoun. it's been in use for centuries in English. some 19th century grammarians may have tried to intervene. it is my preferred usage.

"her" doesn't bother me at all, or at least no more than "his". but, i guess, some people see the former as political or agenda driven, and the latter as neither. but i suppose that a white person's policy of always addressing a black man with the same honorifics as they would have addressed a white person, might have been considered agenda-driven and ideological by their peers.

I do it all the time since I started using Borland tools in MS-DOS.

Already in Turbo Pascal 5.5 one would lose the context, but the compilation times were so fast that it hardly mattered.

I used to do it all the time too and hated having to write javascript precisely because of the toolchain limitations.

Eventually I settled on the Chrome debugger once they added the ability to map the code to a workspace folder, so I mapped the WebStorm project the files came from, closing the loop. However, WebStorm doesn't republish until I alt-tab to it, and its local history feature isn't as nice as Eclipse's, though it's close. Also, any query parameter on the scripts breaks the mapping to the files: I append a build version tag for caching, and had to have that removed in local tests.

I skimmed a bit because this sounds just like how Eclipse debugger will jump back to the start of a function if you change the code in it, mixed with the Display tab that lets you write code using all the variables in the current scope. I don't use it while writing my own code, but it's invaluable for figuring out surprises in other people's.

The shortcoming presented by the author is exactly why I don't like Jupyter. It gives you a lot of power for simple things, but when you start to create more complex algorithms it gets in the way of a true debugger, which you can access when using a standard Python environment (or any other language, for that matter).

I keep thinking I should be more interested in Jupyter than I am.

Maybe someone who knows things can tell me this: Can I use it to develop and debug larger existing programs? That is, can I take 10k lines (say) of Python, that are spread across some modules, load them into Jupyter, not get a horrifying mess, and work on my program?
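One thing that helps with larger codebases is IPython's autoreload extension: keep your 10k lines in normal modules, use the notebook only as the interactive surface, and let edited files be re-imported automatically. A typical first cell (the package name here is a placeholder):

```
%load_ext autoreload
%autoreload 2

from mypackage import analysis
analysis.run()   # re-runs whatever is currently saved on disk
```

It's not perfect — autoreload can get confused by things like changed class hierarchies — but for iterating on functions across modules it removes most of the restart pain.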

I think so and that's the goal of my tool (do a Ctrl-f for pynt on this page - I don't want to spam the link too many times).

This functionality is also present (somewhat) in pycharm where you can attach a jupyter console to any region of code. Though, that's just a console and not a notebook.

Probably not, but it depends on your program. I've been surprised how well it worked in some cases, just loading parts of a program and shimming between them, but as stated elsewhere it's not really about debugging so much as code exploration.

I think it’s good to have all these different approaches at your disposal. I like having the powerful features of an IDE like PyCharm (including the debugger) but sometimes hop over to Jupyter to try stuff out in a different space.

this was how i wrote matlab - stub out a function, set a breakpoint, run the app, manipulate the variables in the debugger till the algorithm is working, copy and paste back into the editor, save and run again

all without touching the mouse
