My assumption is that the revolutionary advance was military aerial camera systems (such as Angel Fire) that continuously record movement over an entire city in high resolution. When something happens, you can play it backwards to see who drove or walked where, who met whom, find all the connections between people, and trace events to the beginning.
Especially modern stuff. I've been talking with a lot of hobbyist drone owners lately and several folks are working on fascinating projects using a fleet of semi-autonomous lightweight fixed wing and quad drones along with modern autonomous aircraft techniques.
These things are small (you could hold one one-handed on a crowded bus), fast (even the quads break 140 mph), can be quiet (especially the fixed wings), and have onboard cellular connectivity.
One of the projects I saw was proving that an automated drone network could maintain full-coverage, high-resolution video of a parking lot even with drones having to leave the site to get fresh batteries.
They're working for a team that has a DARPA grant, so...
Imagine drones that can zoom into an area at 100+ mph and establish a complete surveillance perimeter. They could dock on telephone poles throughout a city; "calling 911" could dispatch these monitors to your location in under 60 seconds on average.
Drones were a lot more prominent in the recent hurricane response; I think it's the tip of the iceberg.
Just like how stealth aircraft were a breakthrough, and also totally useless in this situation.
I've never had much use for debuggers. Of course I've had bugs that could have been found faster with a debugger, but I usually find logs of the information I want quicker to work with. This depends on the program and the environment one is debugging in, of course. I'd rather reason about the program statically than try to follow it dynamically in a debugger.
On the other hand, one of the best kernel programmers I ever worked with carried his own debugger around and would implement it in each new kernel he was working on just so he could have it at hand. (That's you, Dammon, if you're out there!)
Rand's Extendable Debugging and Monitoring System: https://www.rand.org/content/dam/rand/pubs/research_memorand...
Feldman, S.I. and Brown, C.B. (1988). Igor: A system for program debugging via reversible execution. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.85.2238&rep=rep1&type=pdf
If you're using logs, you're using a debugger, just a very limited one.
Reasoning about a program statically is great, but once it gets above a certain size you can't keep it all in your head and you must collect data to narrow down the problem. That's where a debugger comes in. Then there are the cases where you're debugging someone else's code and you don't have it in your head in the first place.
rr sounds fantastic... unfortunately, I can't use it because AMD CPUs aren't supported (they simply don't have adequate performance counters; it's not rr's fault). The exception is Ryzen, but even there the performance counters apparently aren't entirely reliable (meaning the debugger will sometimes go haywire).
As best I can tell, there is no mention in this article of how this feature will be priced, which winds up limiting how often it will be useful simply because of availability. Let me know when this shows up in the free SKUs ("Express") or perhaps even the drug-pusher-like "Community" (first hit is free, unless/until you're making money) edition.
It is perfectly reasonable on their part as a company selling dev tools to do this; I just feel like I'm window-shopping, unable to actually afford the really cool bits. In my dreams I am way off base and this functionality is freely available for all now - I kind of have the feeling something similar has been around for a while in the Ultimate SKU.
(Also, let me know if this ever comes to managed code.)
I mean, yeah, it's not free-for-all, but "first hit is free" is kind of unfair. A company is not allowed to use it if it has "(a) more than 250 PCs or users or (b) one million U.S. dollars... in annual revenues", and below those thresholds its usage is limited to 5 concurrent instances. More accurate would be "the first hits are free, and you won't start getting charged until you're big enough that you can afford it".
Just making the distinction because I still have friends who think they can't use Community at home if they want to have a side project that makes money.
> It is perfectly reasonable on their part as a company selling dev tools to do this
The Community edition licensing terms are, at best, awkward. I personally would have the hardest time with the 250 user limit - can I publish a free mobile app without fear this restriction might kick in?
It is not as blatant as the BizSpark / Imagine (DreamSpark) programs, so I guess that's a plus. The end result looks close enough to be considered the same to me.
> I personally would have the hardest time with the 250 user limit
From the document, the text is "more than 250 PCs or users", so my understanding is that a company is limited if it has more than 250 computer users, not 250 customers. E.g. you could have a million free app users and it would be fine.
The fact that I had to preface that with "my understanding is that" means the document is fuzzy, which is annoying, but at least further documents clarify a bit:
> - Any individual developer can use Visual Studio Community, to create their own free or paid apps.
> - In non-enterprise organizations up to 5 users can use Visual Studio Community. In enterprise organizations (meaning those with >250 PCs or > $1M in annual revenue) no use is permitted for employees as well as contractors beyond... [the exceptions as noted].
So you (as an individual, not as part of a company) could create a paid or mobile app, have as many users as you want, and make as much as you can. If you're a company, then you need to be at or under 250 PCs, under $1M in revenue, and at most 5 Visual Studio users, and you're still good. Exceed those numbers and you have to pay up.
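Since the thresholds keep tripping people up, here is a rough sketch encoding my reading of the terms above as a predicate. This is my interpretation only, not legal advice, and all names are made up:

```cpp
// Hedged sketch: encoding the quoted Community-edition thresholds.
struct Org {
    int pcs;             // PCs/users inside the organization
    double revenue_usd;  // annual revenue in U.S. dollars
    int vs_users;        // concurrent Visual Studio Community users
};

// "Enterprise" per the quoted terms: more than 250 PCs/users,
// or more than $1M in annual revenue (my reading of "more than").
bool is_enterprise(const Org& o) {
    return o.pcs > 250 || o.revenue_usd > 1000000.0;
}

// Non-enterprise organizations may use Community with up to 5 users.
bool may_use_community(const Org& o) {
    return !is_enterprise(o) && o.vs_users <= 5;
}
```

So a 10-person shop with $500K revenue and 3 Visual Studio users is fine; bump it to 8 users, or to 300 PCs, and it no longer qualifies.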
WinDbg can work with managed code through extensions, see https://docs.microsoft.com/en-us/windows-hardware/drivers/de....
For managed code, VS2010 (Ultimate) introduced IntelliTrace for C# .NET code. It let you step "backwards," which is a form of "time travel".
Since 2010, people have wondered if Microsoft was going to introduce a similar debugging capability for C++. It looks like they did -- but instead of reusing the previous "IntelliTrace" brand and calling it "IntelliTrace for C++", they've labeled it "Time Travel Debugging".
I seem to recall geohot doing something similar to TTD in the past?
Skip to around 4:15 for the actual content.
Getting the overhead down to < 2x (for rr) or 10x (for TTD), with reasonable trace sizes, requires much higher tech ... and is absolutely necessary for most users.
HN discussion: https://news.ycombinator.com/item?id=7593032
Maybe .NET Native should produce identical results, but in practice it doesn’t always do that.
- Without TTD, I can inspect values in the call stack where I set a breakpoint
- With TTD, I can inspect values in my whole program, and can jump back/forward in time to see how those values changed
Is that right?
Kernel modules need constant recompilation, but that's a very separate issue from programs.
If a product is deprecated, it may not receive security updates or fixes, and as a user that puts you in a tough situation.
Unfortunately, as a user there's not a lot you can do to influence the governance of those products or keep them around. If the vendor decides to sunset a product, that's it. That's all I'm trying to say.
For your own applications you will most likely use a container anyway, and that gives you fairly reproducible builds.
Many of the .NET APIs became COM-based ones on Vista, with plain C Win32 ones becoming less and less common until we got WinRT, as the sibling comment points out.
Interestingly enough, WinRT shares many ideas with the genesis of .NET, before they decided to go the CLR route.
WCF is still being supported; there was a .NET Conf 2017 session on it.
Highly ironic for a framework that's called Universal Windows Platform :D
Microsoft's incentives still don't quite align with their developer/customers, but Microsoft does an amazing job pushing toward whatever is tossed out for sale as the new & shiny each year. The entire company is an amazing marketing machine.
> Think of the history of data access strategies to come out of Microsoft. ODBC, RDO, DAO, ADO, OLEDB, now ADO.NET – All New! Are these technological imperatives? [...] The competition has no choice but to spend all their time porting and keeping up, time that they can’t spend writing new features.

That's Joel Spolsky in "Fire and Motion".