Undo founder here. I just got a Slack message from one of our marketing team, who is thrilled you appreciate their work. :-)
Our customers are top-tier tech companies and our users are among the smartest engineers on the planet. I'm proud of our marketing, but no one is going to spend a bunch of money with us just because of that.
Undo co-founder here. rr is indeed awesome. If it works for your use-case, you should use it!
Undo is mostly used by companies whose world is complex enough that rr doesn't work for them, and they understand how powerful time travel debugging is.
There has now been a LOT of engineering invested in Undo by a lot of very smart people, so it does also have a lot of polish and nice features.
But honestly, if rr is working for you, that's great. I'm just glad you're not doing printf debugging the whole time :)
Undo founder here. We've been at this for getting on 20 years now. Originally it cost $295 for a perpetual license. Eventually we understood that the majority of developers (actually, employers of developers) will pay $0. But some are happy to pay for tooling, as long as they're confident they'll get a return on investment of many multiples. Hence our pricing. Happily, enough do that we can run a modest but profitable business (40+ people). Customer churn is practically zero.
Why do people pay for Undo when they can get rr -- which is also really good -- for free? Because their code or environment is big and complex enough that rr doesn't work for them, and they understand how powerful time travel debugging is. If rr works for you, you should use it. This includes most independent developers.
If rr can work for you and you're still not using any kind of time travel debugging, you have effectively tied one hand behind your own back! If you're independent (including students and academics) and rr doesn't work for you, get in touch -- we give free licenses for academic and certain other use cases.
There is a wider point here about software companies paying for dev tooling. So many companies over the years made really cool things but couldn't make their business work.
Co-founder of Undo here. This is a common misunderstanding, and it's just not true -- neither for Undo nor for rr. Most races will reproduce at least as easily in Undo, especially if you use our "thread fuzzing" feature (rr has something similar, called chaos mode, enabled with `rr record --chaos`).
Sure, there will always be some races/timing issues that just won't repro under recording (Heisenberg principle and all that), but in fact most races are _more likely_ to occur under recording. Part of this is because you slow down the process being recorded, which is equivalent to speeding up the outside world.
And of course, when you do have your gnarly timing issue captured in a recording, it's usually trivial to root-cause exactly what happened. Our customers tell us that races and timing issues are a major use-case.
I haven't tried UScope yet (I shall), but I don't agree with you about GDB. I don't find it especially buggy unless doing niche things like non-stop debugging -- I guess you may well have a different experience though.
I think the UI is unfairly maligned. It has a few quirks, but have you ever used git? For the most part it's fairly consistent and well thought through.
By terrible API, do you mean the Python? I admit it can be a bit verbose, but it gives me what I find I need.
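For example, teaching gdb to print one of your own types is only a screenful of Python. A rough sketch from memory, untested -- `Point` here is just a stand-in for whatever struct you care about:

```python
import gdb
import gdb.printing

class PointPrinter:
    """Render a 'struct Point { int x, y; }' value as Point(x, y)."""
    def __init__(self, val):
        self.val = val  # the gdb.Value being printed

    def to_string(self):
        return "Point(%d, %d)" % (int(self.val["x"]), int(self.val["y"]))

pp = gdb.printing.RegexpCollectionPrettyPrinter("my_printers")
pp.add_printer("Point", "^Point$", PointPrinter)
gdb.printing.register_pretty_printer(gdb.current_objfile(), pp)
```

Source the file with `source printers.py` (or let gdb auto-load it) and `print` picks it up automatically.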
What features do you most miss? My experience is that most people use like 1% of the features, so I'm not sure adding more will make a big difference to most people.
It's been a while since I bothered trying to use it, because my experience has been so bad, so I don't remember all my specific complaints about bugs and features. I do remember multi-process debugging was a big hole last time I looked. In contrast, I was able to get multi-process debugging working really well in Visual Studio.
By terrible API I mean GDB/MI, which frontends use. I'm sure people will come and try to defend it, but the proof is in the pudding, and I don't think it's a coincidence that every GDB frontend sucks.
I'll +1 GDB/MI being utter garbage. A bespoke format (an ordered multimap as the only keyed data structure -- why?), weird quoting requirements (or sometimes a requirement of the lack thereof) on input, extremely incomplete, and in some cases nearly unusable even for what it does support. It feels more like some of the existing gdb commands carelessly shoehorned into a different syntax (but sometimes not different) than an actual API.
If it's been a long time I recommend taking another look. TBF you can tell it hasn't had the millions of dollars of investment that the Microsoft debuggers have, but still it's come a long way over the last 5-10 years.
e.g. it now has decent multi-process support.
I agree MI is kinda horrid, but there's no need for it these days: you can do everything via the Python API, and the modern equivalent is the Debug Adapter Protocol, which GDB claims to support now (although I haven't tried it).
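e.g. adding a whole new command is a few lines. An untested sketch (gdb already has `thread apply all bt` built in; this is just to show the shape of the API):

```python
import gdb

class BtAll(gdb.Command):
    """bt-all: print a backtrace for every thread of every inferior."""
    def __init__(self):
        super().__init__("bt-all", gdb.COMMAND_USER)

    def invoke(self, arg, from_tty):
        for inferior in gdb.inferiors():
            for thread in inferior.threads():
                thread.switch()              # select this thread
                print("--- thread %d ---" % thread.num)
                gdb.execute("bt")            # reuse the ordinary backtrace command

BtAll()  # instantiating it registers the command
```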
There are a million frontends, including both Visual Studio (via ssh) and VSCode, if you like those.
The perfect developer tool does not exist, but I believe that if you're debugging native code on Linux more than a few times per year then you should really know how to drive GDB. It won't always be the best tool for the job, but it often will be.
One time I wanted to write generic printers, e.g. a printer for any type that supports C++ iterators. But gdb can't call C++ functions from the Python API (except via weird hacks like evaluating `print c.begin()` and capturing its output). This isn't a big loss, though, because most of the types we use change very rarely, so writing per-type printers is just a matter of time.
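The hack looks roughly like this -- an untested sketch, where `c` stands for whatever container variable is in scope:

```python
import gdb

# parse_and_eval() evaluates a C++ expression in the program being debugged,
# which can call one of its functions -- but only if the compiler actually
# emitted code for that function rather than inlining it away everywhere.
begin = gdb.parse_and_eval("c.begin()")    # returns a gdb.Value
first = gdb.parse_and_eval("*c.begin()")   # dereferencing happens in the inferior too
print(first)
```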
Another feature: breakpoints which stay quiet for the next N seconds. We have breakpoints which can skip the next N hits, but a time-based version would be useful to me for debugging mouse events in GUI apps, etc.
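I could probably fake it by overriding a breakpoint's stop() method in the Python API -- an untested sketch, with `on_mouse_move` being a made-up function name -- but built-in support would be nicer:

```python
import time
import gdb

class QuietBreakpoint(gdb.Breakpoint):
    """A breakpoint that ignores every hit during its first N seconds."""
    def __init__(self, spec, quiet_for):
        super().__init__(spec)
        self.wake_at = time.monotonic() + quiet_for

    def stop(self):
        # Returning False tells gdb to keep the program running.
        return time.monotonic() >= self.wake_at

QuietBreakpoint("on_mouse_move", quiet_for=10)
```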
Also, even the newest gdb still has problems with tab-tab completion (and even Ctrl-C doesn't return control immediately).
Also, lately I often hit a "cannot insert breakpoint 0" problem. This is probably a bug, because the answers on Stack Overflow aren't relevant to my case.
> One time I wanted to write generic printers, e.g. a printer for any type that supports C++ iterators.
How would that work for types where the required functions are not instantiated, or not instantiated as standalone functions? Most iterators' operator++ is inlined and probably never instantiated as a distinct function.
I would say yes, your CI should accumulate all of those regression tests. Where I work we now have many, many thousands of regression test cases. There's a subset to be run prior to merge which runs in reasonable time, but the full CI just cycles through.
For this to work all the regression tests must be fast, and 100% reliable. It's worth it though. If the mistake was made once, unless there's a regression test to catch it, it'll be made again at some point.
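The mechanics of the pre-merge subset don't need to be fancy. With pytest, for instance, a marker is enough -- an illustrative sketch, not our actual setup, and `smoke` is just a name:

```python
import pytest

def parse(text):
    """Toy function under test."""
    return text.split()

@pytest.mark.smoke             # fast case: pre-merge runs `pytest -m smoke`
def test_parse_empty():
    assert parse("") == []

def test_parse_large():        # slower case: left to the full CI cycle
    assert len(parse("word " * 100_000)) == 100_000
```

(Register the marker in pytest.ini so pytest doesn't warn about it.)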
> For this to work all the regression tests must be fast,
It doesn't matter how fast they are: if you're continually adding tests for every single line of code introduced, eventually the suite will get so slow that you'll want to prune away old tests.
But if you don't squash, doesn't this render git bisect almost useless?
I think every commit that gets merged to main should be an atomic believed-to-work thing. Not only does this make bisect way more effective, but it's a much better narrative for others to read. You should write code to be as readable by others as possible, and your git history likewise.
Individual atomic working commits don't necessarily make a full feature. Most of the time I build features up in stages and each commit works on its own, even without completing the feature in question.
(Warning: contains me trying to play Doom :)