The original release of VB.NET did not have it, and it was the highest-priority feature to add back. When we added it back into Visual Studio, C# got it as well because its users demanded it.
Anders was trying to ensure generics made it into .NET 2.0 and did not want the feature. I know he wrote some critiques that it encourages producing bad code.
I added it in the debugger, while each language team had work of its own, and there was a massive amount of work for the CLR.
Changing the current function is actually pretty easy... the hard part is when you need to remap the instruction pointer in functions that have been edited multiple times and have frames trapped on a call stack across multiple threads.
It's certainly not adapted to all domains, but in DSP applications where you're looking at immediate audio/video feedback (in my case, audio synths and effects) it makes a lot of sense to regularly switch into "tweaking mode" - not only in order to refine the code, but also to get a better understanding of how small variations in each part of the code can affect the live output, which can be a large portion of the work you'll do in DSP applications. It's just awesome, you're working on a full-fledged application and you can turn it into a live editor at any moment.
Edit and continue has worked in C++ toolchains and editors before VS, and continues to work now :)
Fix and continue support, for example, existed in GDB way back in 2003 (see, e.g., http://www.sourceware.org/ml/gdb/2003-06/msg00500.html) and in forks of gdb well before, which others used for C++ (hence the "implemented again" headline).
The message covers some of the other places this was implemented.
IIRC (it's been a while), HP WDB support for this goes back to the very late 90's.
Now, I'm certainly not going to claim these IDEs were as good as VS, but they definitely had edit and continue for C++.
It's definitely a technical accomplishment, but it's not unique.
> Jupyter presents a unique programming style
> But the difference between Jupyter and a REPL is that Jupyter is persistent. Code which I write in a REPL disappears along with my data when the REPL exits, but Jupyter notebooks hang around.
Sounds like a typical Mathematica notebook to me.
Macsyma (a computer algebra system written in Lisp) has also had notebooks like that: https://people.eecs.berkeley.edu/~fateman/macsyma/docs/intro...
REPLs on Lisp systems have been persistent in Lisp images (not in notebook files and not transferable) for a very long time, and some display text/graphics inline.
And I think it's a little uncharitable to think I was claiming that wholesale, since I explicitly reference other implementations of the same idea at the end.
This approach is available with all of the JVM languages too.
Putting that with what others have noted about the CLR, C, and LISP implementations, this may be more common than rare.
It may not be so much the case that people are being uncharitable as that there really isn't much that is new or unique in software.
For my own part, I have hit that wall so many times that I now begin with the assumption "if I can imagine how it might work, someone has probably already done it".
That is actually nice in the sense that I nearly always have a starting foundation when solving problems these days, but also frustrating to the point of cynicism when discoveries or inventions I put a lot of time into turn out to be minor variations on something 1k+ people already knew about.
For example, every time we discuss Lisp on HN, many talk about SLIME, SBCL and such.
Yet the actual modern experience of what it meant to use Lisp in its glory days lives on in Allegro Common Lisp and LispWorks.
Reading papers from Xerox PARC research always makes me sad that many think replicating a PDP-11 is the ultimate developer experience.
> Common LISP  and Dylan [12, 37] include optional type annotations, but the annotations are not used for type checking, they are used to improve performance.
Actually, type annotations in Common Lisp have been mostly used for three different purposes:
* improving performance
(by choosing specialized operations
and/or removing runtime type checks/dispatching)
* improving runtime safety, plus better error messages
(by adding more and more specific runtime type checks)
* improving compile-time safety
(by compile-time warnings of type errors)
Thus CMU Common Lisp has optional static typing + a limited form of compile-time type inference/propagation/checks. There are at least two forks of CMUCL (SBCL and Scieneer CL) which use this, too.
Not sure when CMUCL added this, probably in the late 80s/early 90s. It was for example published in 1992 in the paper: 'The Python compiler for CMU Common Lisp', https://www.researchgate.net/publication/221252239_Python_co...
It's just as bad in any discussion of graphics techniques, operating system features, and again in security approaches/features. Migrating to devops and security spaces has been like a conceptual "Groundhog Day".
I'm starting to think our industry is actually not so innovative and fast moving as we all like to say it is. It may be that software is merely a new engineering field and new fields begin with a wave of invention followed by reflections of reiteration as boundaries are found and concepts get refined.
After that I may pause execution and examine some variables and work out what is wrong. From there I need to work out what has caused this to go wrong, perhaps a hardware breakpoint and reloading part of the scene will help.
From doing that, maybe I see what triggers the hardware breakpoint: a for loop not ending correctly or taking too long to end.
With edit and continue it may be possible to make the fix, trigger the function again, and close the bug. Without it, it may require a compile, a link, and then 15 minutes of repro steps.
So edit and continue can massively cut down the amount of time required to fix a bug.
It seems really weird it would be described as amateur or not professional. As usual I suspect it is part of the trend of making everything black and white rather than looking at things on a case by case basis.
Sure, if some maths needs to be solved to fix a collision bug when the user is drawing a line on the screen, I would work out the maths and make sure the function is right. But if the maths is right, or I don't know what is causing the issue, edit and continue is just another tool for fast and efficient debugging.
In my opinion...
I think it's useful for fleshing out a bug and could save time if there's another issue downstream, but I wouldn't count it as solved until it has been tested on a fresh run.
I'll have "pdb" set up in Jupyter REPL so it drops into a debugger when I have an error with a function.
In the case of a long running function I want to debug (like the video example at the end) I'll set local variables with the same names as the variables of the function and then execute the function line-by-line/chunk-by-chunk.
The only problem is losing track of global state, but the same thing can happen in Jupyter notebooks.
This type of workflow is really common in Lisp and R.
If I want to integrate documentation and code, I'll use org-babel or R Markdown (which does work with Python).
-  https://github.com/millejoh/emacs-ipython-notebook
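As a concrete illustration of the post-mortem setup described above (the function and values here are made up for the example): in IPython/Jupyter the `%pdb` magic drops you into the debugger automatically on an error, and in plain Python you can do the same by hand:

```python
import pdb


def scale(values, divisor):
    return [v / divisor for v in values]


try:
    scale([1, 2, 3], 0)  # raises ZeroDivisionError
except ZeroDivisionError:
    # In an interactive session you would call pdb.post_mortem() here
    # to inspect `values` and `divisor` at the point of failure.
    pass
```

From the debugger prompt you can then read off the offending locals and try a corrected call before touching the source file.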
The other thing that this sort of interactivity/short-edit-continue-cycling is heavily promoted for is learning, which IMHO is probably the worst way to do it; it's essentially encouraging programmers to write code without actually understanding what they're writing, and when it doesn't work, to try random things until it does. That's no way to write good software.
It's better to spend more time thinking about the problem, writing, and then proofreading code before even trying to compile and run it, than try to write something you vaguely think may work and then spend much more time debugging and "fixing" it. Those from the latter school of thought are often surprised when they see me write several hundred lines of code and it works perfectly the first time. To me, it's the other way around... seeing those who can't write more than a few words without making blatant syntax or logic errors.
In other words, "reducing the cost of mistakes" may only encourage making them; and in any case, does not discourage it.
The horrible code you inherited, as the programmer brought on to clean it up, exists because some non-professional domain expert made a ‘hairball of extreme utility’ that unlocked product-market fit on the back of a mess of experiments.
VB’s persona was ‘Mort’, literally a scientist trying to get an answer or a business person trying to solve a problem, not a programmer trying to build a system. It’s 99 hairballs of no utility and 1 that wouldn’t have been built otherwise, because no one knew it was useful until it existed.
Some categories of UI experience problems lend themselves to simply making the trial loop as quick as possible.
It is also true that understanding a dataset has an initial experimental component. No amount of thinking about code is going to tell you the dataset has a fatal flaw, until you discover it.
I find edit and continue really useful for making something work with these.
Are these out of the reach of "professional programmers"? Can they only work on systems with fully specified models, with exacting requirements?
As an example ... for a ‘professional programmer’ there is an argument that allowing E&C to work if you edit in vim or emacs is a feature. For what we needed to accomplish, that is an anti-feature.
Then I prefer to be called an amateur, as I really like using that feature, instead of losing my debugging context and starting everything from scratch, sometimes spending several minutes trying to replicate the issue that landed me there.
I don't have to "enjoy" doing everything the hard way, just for the sake of being professional.
UI/UX and language features designed for productivity, help everyone, not only newbies.
What I understood from SteveJS's remark is that although he took part in designing that feature, he doesn't share this opinion.
If not, I would be gladly corrected.
E&C is a useful tool, just like a debugger in general is a useful tool, despite early enthusiasm for unit testing having people claim you shouldn’t use debuggers anymore, but instead write unit tests to debug every issue. E&C can be misused. If I were using C#, I would take generics over edit and continue if I could only have one, but I definitely prefer to have both.
The theory behind using personas is that you get a better tool or solution by focusing tightly on a concrete user rather than spreading a bunch of features across every possible customer. What I expressed is what is necessary to ‘get through’ to someone who hasn’t read a book like ‘The Inmates Are Running the Asylum’. That was true of many in DevDiv back at that time. UX is now recognized more firmly as a separable, highly valued skill. If you read and understand the persona and still think ‘I’m a programmer, this is a dev tool, my opinion is more important than what the persona would want’ .. that is the audience for the above. I think that is what is present in the comment to which I was replying, so I was attempting to put it in those terms.
I do believe it is helpful to understand who a tool is designed for to see why it works as it does. Edit and continue when done well is incredibly ‘safe’. It does what you expect every single time, even if you don’t know how it is doing it.
A good example where there is a hard choice is changing a LINQ statement. To make E&C in a dev tool that holds true to ‘what is on the page is what runs’, you need to violate aspects of how deferred execution works. If the closure was already captured, are all existing instances of it going back to the source as it is on the page now, or the source as it was when captured? I have a strong opinion that it should be the source on the page now to build a good E&C, but that choice is detrimental to learning what is happening with any type of deferred execution. The cost of implementing my opinion is also probably two or more orders of magnitude in effort.
I also think ‘non-professionals’ should be able to program and get to something that works. Particularly in people who write developer tools the bigger chore is gaining empathy for people who deserve to program, but simply aren’t making infrastructure. My reply is counterproductive in trying to model proper empathy so thank you for calling that out.
I fully agree with your point of view now.
However, before long, I realized I needed the tool because the function was too long and needed to be split into smaller pieces. In that case having the crutch of an interactive programming tool led to worse code. Once I refactored the function I realized there wasn't much need for the tool anymore.
That being said, I have taught many a student to code and can't tell you how many conversations have ended with me saying "Maybe it will work. I'm not sure. Just try it!". Students (not all) in an unfamiliar environment will sometimes overthink things rather than trying, failing quickly, and correcting their mistake. In this case, failing quickly is an indispensable tool to help students home in on the solution and make progress towards it.
Of course I have seen a fair number of students engage in "shotgun programming", switching a < operator to a > operator and hoping that it will work, so obviously there needs to be a balance.
When I build C++ stuff at home I go to great lengths to shave milliseconds off my compile time, because it is just so much better to have your program built in under 2 s than in 2 minutes.
To put it bluntly, yes.
And you base that on one anecdote?
I've seen it many times; I was just giving one example.
And somehow your example coworker would transform into a sensible programmer if you add a Sleep(30000) in the compiler?
If every compile-and-test cycle took minutes instead of seconds, he wouldn't be so inclined to keep trying random changes.
Fast compile-and-test cycles can be good, but I've seen it abused more often than not.
-  http://jupyter.org/widgets
-  https://github.com/jupyter/dashboards
Very true. When programming in jupyter notebooks I often put off wrapping my code in a function because it becomes harder to interact with and debug. But this obviously becomes problematic when you want to leverage code reuse and abstraction.
In order to tackle this (somewhat) I made a tool which allows you to take code anywhere, get it back into a Jupyter notebook, and make it top-level (e.g. function/method and loop bodies).
-  https://github.com/ebanner/pynt
And I realized that, past an intermediate level of expertise in this ecosystem, you could really benefit from dropping down to JSON of your notebook and doing a bit of "metaprogramming" (the way you did). Yet the jupyter development team doesn't appear to be taking that into account, i.e., not making it easy/consistent for users to programmatically manipulate their notebooks.
I'm curious for what ideas you have that you think jupyter notebooks (or other interactive programming tools) would have the most promise for gaining adoption amongst software engineers.
One pynt feature I had going facilitated a "debugger-driven development" workflow where when your code hits an exception it would save the state and generate a jupyter notebook with your code in it, hence allowing you to fix the bug faster.
I wonder if the tool I mentioned in  facilitates the workflow you have in mind? It promotes a workflow of setting a breakpoint at the beginning of a function, attaching a Jupyter notebook to it, dumping your code into the notebook, editing the code in there, and then pasting the result back into the code buffer (once you've verified everything checks out locally).
One reason I suspect the debugger is not seen as part of the software engineering process (i.e. as a way to extend or modify code) is because the editing capabilities within a debugger CLI are usually very primitive. At least with pynt, the "debugger" (i.e. the Jupyter notebook) is still in Emacs, so you can leverage all the strengths of your editor while doing interactive programming.
> let alone making programmability more important than interactivity, something I believe in
Can you elaborate on what you mean here? Maybe give an example of a programmable gdb feature that you think would be useful?
-  https://news.ycombinator.com/item?id=16899023
- A programmable debugger would be a massive benefit to compiled languages like C, and not as much of a benefit for interpreted systems like Python. The REPL mode of programming in Python (you run something, don't like the results, try something else, then move to the next step) is already a pretty effective way to do exploratory prototyping. A programmable gdb would allow a REPL-like workflow for C.
- As an example, let's say a C program seg-faults at a certain point in code. If you run your C program inside gdb, gdb will stop there and let you do some exploration. That exploration is very tedious if you can't programmatically manipulate your registers and memory areas. In principle, you should be able to write an arbitrarily complex program to bring the seg-faulted state back into a functioning state, and then when the code continues, you don't get the seg-fault again, and you didn't even have to rerun the code.
One use case in which this is massively useful is long running scientific programs. You start a complex scientific simulation based on a C program and it is expected to run for, let's say, 48 hours, but it seg-faults in the 46th hour! If you run it in a programmable gdb you could fix the seg-fault right then and there, and continue the program instead of trial and error of multiple runs, each run taking 46 hours before you know if your fix worked or not.
I think this scenario might require more than just a programmable gdb, e.g., it might require a way for the program to be recompiled and put in memory replacing the older buggy program, before the program has actually finished. But a programmable gdb would be a big part of that system.
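For Python specifically, a small piece of this workflow is available today with stdlib tools alone: install an excepthook that opens a post-mortem debugger at the crash site instead of tearing down the process. This is only a sketch of the idea; it lets you inspect the 46th-hour state in place, though it doesn't by itself let you patch and resume the way the gdb scenario above describes.

```python
import pdb
import sys
import traceback


def debug_on_crash(exc_type, exc_value, tb):
    # Print the traceback, then open pdb at the crash site instead of
    # exiting, so a long-running job's state can be inspected in place.
    traceback.print_exception(exc_type, exc_value, tb)
    pdb.post_mortem(tb)


# Install the hook; it only fires on an unhandled exception.
sys.excepthook = debug_on_crash
```

Running a simulation with this hook installed means a crash near the end drops you into `(Pdb)` with the full program state still in memory, rather than discarding 46 hours of work.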
It has a built-in C interpreter which allows you to dynamically add and change code while debugging. Unfortunately the code has somewhat bitrotted, so it's hard to use on a modern Linux system.
And I can't mention Cling without mentioning what it's trying to replace, CINT, which gives you C and C++ programming, but with a rather slow interpreter.
After programming my whole life I have to say this industry is surprisingly math averse, regressive, and led by cargo cults.
What wheel will we reinvent next week!? Stay tuned...
> This works exactly as intended! We were able to edit our program while it was running, and then re-run only the part that needed fixing. In some sense, this is an obvious result—a REPL is designed to do exactly this, allow you to create new code while inside a long-running programming environment. But the difference between Jupyter and a REPL is that Jupyter is persistent. Code which I write in a REPL disappears along with my data when the REPL exits, but Jupyter notebooks hang around. Jupyter’s structure of delimited code cells enables a programming style where each can be treated like an atomic unit, where if it completes, then its effects are persisted in memory for other code cells to process.
> More generally, we can view this as a form of programming in the debugger. Rather than separating code creation and code execution as different phases of the programming cycle, they become intertwined. Jupyter performs the many functions of a debugger—inspecting the values of variables, setting breakpoints (ends of code cells), providing rich visualization of program intermediates (e.g. graphs)—except the programmer can react to program’s execution by changing the code while it runs.
I just don't understand what's so amazing about this. This is totally standard debugging. In Python you can do it with the built-in debugging module pdb:
import pdb; pdb.set_trace()
python -m pdb script.py
python -i script.py
I mean, it's great that more people are starting to debug code, but calling this a feature of Jupyter is a little ridiculous. The exact same feature exists in the Python REPL, and that's reflected in Jupyter.
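To make the set_trace variant concrete (the function here is a made-up example):

```python
import pdb


def buggy(x):
    y = x * 2
    # Uncommenting the next line pauses execution right here; at the
    # (Pdb) prompt you can inspect y, reassign it, and then `continue`.
    # pdb.set_trace()
    return y + 1
```

The other two invocations do the same thing at different granularities: `python -m pdb script.py` starts the whole script under the debugger, and `python -i script.py` drops you into a REPL with the script's globals after it finishes (or crashes).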
You can sort of do something like this with REPLs, by copying code around between an editor window and a REPL window until you are satisfied that it's what you want. But you have to keep the REPL and the editor in sync manually. For example, if by experimenting in the REPL or debugger you find a bug that requires changes to three functions, you must either change them in the editor and copy all of the changes to the REPL to test the new program state, or redefine them in the REPL's less-than-ideal editor and then make sure to copy the updated versions to the real editor to save them in the source file.
I've worked with a few systems that were not REPL-based but more notebook-based, and it's very cool if the system keeps track of stuff for you. In particular, Coq and Isabelle have such environments.
(I've never worked with Jupyter, but I think I should give it a try.)
q = 'foo and bar'
inside the pdb
As others have mentioned in the comments it seems that similar workflows have existed across a number of languages and IDEs for many years, but it seems that they haven't really caught mainstream attention (in the sense of there being common conventions that multiple languages and tools follow).
For my own workflow, I have a debugger-like tool for IPython (https://github.com/nikitakit/xdbg) that lets me set a breakpoint anywhere in my code and resume the current Jupyter session once the breakpoint is reached, making sure that scope is properly adjusted to provide access to all local variables. When combined with text editor integration (such as https://github.com/nteract/hydrogen), this is the best I've managed to come up with in terms of minimizing the "penalty for abstraction" while maintaining interactivity.
From what I see in the readme, a lot of the features are about extracting code snippets from the source file and sending them to the notebook. But when you're trying to interactively edit a function that's deep in the call stack, there's a question of how to pause execution right as the function is being called and set up the execution scope for the REPL/notebook.
I actually think handling scope is perhaps the key issue for interactive programming. If the global scope is easier to interact with than anything nested, you continue to have a "penalty for abstraction". This is the case even if your language is purely functional/reactive/supports reversible debugging -- in some of his talks about Eve, Chris Granger mentioned how scope proved to be a real challenge even in a functional programming setting (IIRC).
This is one of the reasons I prefer Hydrogen to Jupyter notebooks. In a notebook all cells are toplevel, whereas in hydrogen I can select any chunk of text (or chunk of the AST) and execute it. It's not as good at presenting computational narratives, but much better for interactively changing an existing program to do something new. There's still a lot of issues on the UI front though, so I'm definitely interested in any ideas about that.
I am using an inline IPython shell to interact with the local variables from any scope. I just drop a call to "DBG()" (as I called it) where I want to take a peek. To write the code, I use a normal program editor and work remotely over SSH. The only drawback is that I have to exit and reload the program for each change in the source files, but at least I have direct access to all scopes.
When I start a new program the last instruction is a call to DBG(). I compose functions in the REPL and move them to the source file, above the DBG call. I love the interactivity, ability to inspect, compose and test until I get the code right, while at the same time being able to structure larger source files in a nice editor.
Here's the library: https://github.com/horiacristescu/romanian-diacritic-restora...
Pros: lightweight, access all scopes, works remote, you can use your text editor
Cons: no inline graphics
Is the author trying to make some kind of statement by using 'her' as a gender neutral third person pronoun? Surely, if this is something they care about (and reasonably so), 'their' would be the logical choice? It's frustrating that the author chooses to be deliberately obtuse in their use of language - to the (slight) detriment of readability.
Singular "they" has been in use in English for centuries. See https://en.wikipedia.org/wiki/Singular_they or https://www.merriam-webster.com/words-at-play/singular-nonbi... for example.
"her" doesn't bother me at all, or at least no more than "his". But, I guess, some people see the former as political or agenda-driven, and the latter as neither. Then again, I suppose that a white person's policy of always addressing a black man with the same honorifics as they would have addressed a white person might have been considered agenda-driven and ideological by their peers.
Already in Turbo Pascal 5.5 one would lose the context, but the compilation times were so fast that it hardly mattered.
Eventually I settled on the Chrome debugger once they added the ability to map the code to a workspace folder, so I mapped the WebStorm project where the files came from, closing the loop. However, WebStorm doesn't republish until I alt-tab to it, and it doesn't have a local history feature as nice as Eclipse's, but it's close. Also, any query parameter on the scripts breaks the mapping to the files - I append a build version tag for caching, and had to have that removed in local tests.
Maybe someone who knows things can tell me this: Can I use it to develop and debug larger existing programs? That is, can I take 10k lines (say) of Python, that are spread across some modules, load them into Jupyter, not get a horrifying mess, and work on my program?
This functionality is also present (somewhat) in PyCharm, where you can attach a Jupyter console to any region of code. Though that's just a console and not a notebook.
all without touching the mouse