Yes, I used VSS as a solo developer in the 90s. It was a revelation at the time. I encountered other VCS systems in grad school (RCS, CVS).
I started a job at MSFT in 2004 and I recall someone explaining that VSS was unsafe and prone to corruption. No idea if that was true, or just lore, but it wasn't an option for work anyway.
The integration between SourceSafe and all of the tools was pretty cool back then. Nothing else really had that level of integration at the time. However, VSS was seriously flaky. It would corrupt itself randomly for no apparent reason. Daily backups were constantly being restored in my workplace. Then they picked PVCS. At least it didn't corrupt itself.
I think VSS was fine if you used it on a local machine. If you put it on a network drive, things would just flake out. It also got progressively worse as newer versions came out. Nice GUI, very straightforward to teach someone how to use (check out a file, change it, check it back in, like a library book), random corruptions: that about sums up VSS. That checkout/checkin model seems simpler for people to grasp. The virtual/branch model most of the other systems use is a mental block for many until they grok it.
It's an absurd understatement. The only people that seriously used VSS and didn't see any corruption were the people that didn't look at their code history.
I used VSS for a few years back in the late '90s and early 2000s. It was better than nothing, barely, but it was very slow, very network intensive (think MS Access rather than SQL Server), it had very poor merge primitives (when you checked out a file, nobody else could change it), and yes, it was exceedingly prone to corruption. A couple of times we just had to throw away the history and start over.
SourceSafe had a great visual merge tool. You could enable multiple checkouts. VSS had tons of real issues but not enabling multiple checkouts was a pain that companies inflicted on themselves. I still miss SourceSafe's merge tool sometimes.
Have you used Visual Studio's git integration? (Note that you could just kick off the merge elsewhere and use VS to manage the conflicts, then commit from back outside. Etc.)
As I recall, one problem was you got silent corruption if you ran out of disk space during certain operations, and there were things that took significantly more disk space while in flight than when finished, so you wouldn’t even know.
When I was at Microsoft, Source Depot was the nicer of the two version control systems I had to use. The other, Source Library Manager, was much worse.
My memory is fuzzy on this but I remember VSS trusting the client for its timestamps and everything getting corrupted when someone's clock was out of sync. Which happened regularly because NTP didn't work very well on Windows back in the early 2000s.
Source Depot was based on Perforce. Microsoft bought a license for the Perforce source code and made changes to work at Microsoft scale (Windows, Office).
TFS was developed in the Studio team. It was designed to work at Microsoft scale and some teams moved over to it (SQL Server). It was also available as a fairly decent product (leagues better than SourceSafe).
Microsoft had a research version of the CLR called Rotor (2002) that predated Mono (2004). Rotor built for Windows, FreeBSD, and Mac OS X, albeit with a not-very-open license.
When Mono came along, the internal position at Microsoft was surprisingly positive. There was a dev slide deck that went into Mono in some depth, and a telling slide that said it wasn't a threat because its performance wasn't competitive at the time.
I have various snapshots of the Rotor 1 and 2 sources around and they have the SSCLI license. There is a file that contains BSD licensed code (pal\rotor_pal.h).
Thank you for the follow up. You know after I posted that my thought was am I mistaking their BSD release for a BSD license, and of course I was. The memory isn’t what it used to be.
A decent chunk of AI computation is the ability to do matrix multiplication fast. Part of that is reducing the amount of data transferred to and from the matrix multiplication hardware on the NPU and GPU; memory bandwidth is a significant bottleneck. The article is highlighting 4-bit format use.
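To make that concrete, here's a rough back-of-envelope sketch (my own numbers for a hypothetical 4096x4096 weight matrix, not anything from the article) of how much data you move per pass over the weights at different precisions:

    # Hypothetical 4096x4096 weight matrix; bytes moved per full pass.
    rows, cols = 4096, 4096
    params = rows * cols

    fp32_bytes = params * 4
    fp16_bytes = params * 2
    int4_bytes = params // 2   # two 4-bit weights packed into each byte

    for name, b in [("fp32", fp32_bytes), ("fp16", fp16_bytes), ("int4", int4_bytes)]:
        print(f"{name}: {b / 2**20:.0f} MiB")   # 64, 32 and 8 MiB respectively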
GPUs are an evolving target. New GPUs have tensor cores and support all kinds of interesting numeric formats, while older GPUs don't support any of the formats that AI workloads are using today (e.g. BF16, int4, all the various smaller FP types).
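For anyone unfamiliar with BF16: it's essentially the top 16 bits of an IEEE float32 (same 8-bit exponent, mantissa cut to 7 bits), which is why hardware that supports it can convert to and from fp32 so cheaply. A tiny illustrative sketch (plain truncation, ignoring the round-to-nearest that real hardware does):

    import numpy as np

    def fp32_to_bf16_bits(x):
        # Keep the sign, the 8-bit exponent and the top 7 mantissa bits.
        bits = np.asarray(x, dtype=np.float32).view(np.uint32)
        return (bits >> 16).astype(np.uint16)

    def bf16_bits_to_fp32(b):
        # Widen back by re-attaching 16 zero bits of mantissa.
        return (b.astype(np.uint32) << 16).view(np.float32)

    x = np.array([3.14159, -0.001, 65504.0], dtype=np.float32)
    print(bf16_bits_to_fp32(fp32_to_bf16_bits(x)))   # matches x to ~2-3 significant digits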
An NPU will be more efficient because it is much less general than a GPU and doesn't have any gates for graphics. However, it is also fairly restricted. Cloud hardware is orders of magnitude faster (due to much higher compute resources and I/O bandwidth), e.g. https://cloud.google.com/tpu/docs/v6e.
Agree on NPU vs CPU memory bandwidth, but not sure about characterizing the GPU that way. GDDR is usually faster than DDR of the same generation, and higher-end graphics cards have a wider bus. A few GPUs have HBM, as do pretty much all datacenter ML accelerators (NVIDIA B200 / H100 / A100, Google TPU, etc.). The PCIe bus between host memory and GPU memory is the bottleneck for intensive workloads.
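To put rough numbers on that (ballpark figures I'm assuming for illustration, not vendor specs or measurements), here's how long a single pass over ~14 GB of fp16 weights takes on each link:

    # Ballpark, assumed bandwidths in GB/s; illustrative only.
    weights_gb = 14.0   # e.g. roughly a 7B-parameter model held as fp16

    bandwidth_gb_s = {
        "PCIe 4.0 x16 (host <-> GPU)":    32,
        "dual-channel DDR5 (CPU)":        90,
        "GDDR6X (high-end GPU card)":   1000,
        "HBM (datacenter accelerator)": 3000,
    }

    for link, bw in bandwidth_gb_s.items():
        print(f"{link}: ~{weights_gb / bw * 1000:.0f} ms per pass over the weights")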
To perform a multiplication on a CPU, even with SIMD, the values have to be fetched and converted to a form the CPU has multipliers for. This means smaller numeric types are penalised. For a 128-bit memory bus, an NPU can fetch 32 4-bit values per transfer; the best case for a CPU is 16 8-bit values.
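Rough sketch of what that unpack-and-widen step looks like on the CPU side (assuming the common layout of two unsigned 4-bit values per byte, low nibble first, not any particular library's format):

    import numpy as np

    def unpack_int4(packed):
        # Split each byte into its low and high nibble, then widen so the
        # CPU's integer multipliers can operate on the values.
        lo = packed & 0x0F
        hi = (packed >> 4) & 0x0F
        return np.stack([lo, hi], axis=-1).reshape(-1).astype(np.int16)

    packed = np.frombuffer(bytes([0x21, 0x43]), dtype=np.uint8)
    print(unpack_int4(packed))   # [1 2 3 4]

    # The bus arithmetic from above: what one 128-bit transfer carries.
    print(128 // 4, "4-bit values vs", 128 // 8, "8-bit values")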
Details are scant on Microsoft's NPU, but it probably has many parallel multipliers, either in the form of tensor cores or a systolic array. The effective number of matmuls per second (or per memory operation) is higher.
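As a toy model of the "many parallel multipliers" idea (my own sketch of the general principle, nothing to do with Microsoft's actual hardware): picture one multiply-accumulate unit per output element, all firing on every step of the reduction:

    import numpy as np

    def mac_grid_matmul(A, B):
        m, k = A.shape
        _, n = B.shape
        acc = np.zeros((m, n), dtype=A.dtype)   # one accumulator per MAC unit
        for step in range(k):                   # one "cycle" per reduction step
            # every unit (i, j) fires at once: acc[i, j] += A[i, step] * B[step, j]
            acc += np.outer(A[:, step], B[step, :])
        return acc

    A = np.arange(6, dtype=np.int32).reshape(2, 3)
    B = np.arange(12, dtype=np.int32).reshape(3, 4)
    print(np.array_equal(mac_grid_matmul(A, B), A @ B))   # True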
Yeah standalone GPUs do indeed have more bandwidth, but most of these Copilot PCs that have NPUs just have shared memory for everything I think.
Fetching 16 8-bit values vs 32 4-bit values is the same; that's the form they are stored in memory. Doing some unpacking into more registers and back is more or less free anyway if you are memory-bandwidth bound. On these lower-end machines everything is largely memory bound, not compute bound, although in some systems (e.g. the Macs) the CPU often can't use the full memory bandwidth while the GPU can.
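A quick roofline-style way to see why (round numbers I'm assuming for a hypothetical low-end machine, not measurements): the machine needs roughly peak FLOPs divided by bandwidth worth of work per byte to stay compute bound, and token-by-token decoding does nowhere near that:

    # Assumed round numbers for a hypothetical low-end machine.
    peak_flops = 2e12      # 2 TFLOP/s of usable matmul throughput
    mem_bw     = 100e9     # 100 GB/s of memory bandwidth
    machine_balance = peak_flops / mem_bw   # FLOPs per byte needed to stay compute bound

    # Token-by-token LLM decoding is roughly matrix-vector: each weight byte
    # is read once and used for about one multiply-add.
    flops_per_byte = 2 / 2                  # 2 FLOPs per fp16 weight (2 bytes)

    print(machine_balance, flops_per_byte)  # 20.0 vs 1.0 -> memory bound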
Yes, agree. Probably the main thing is the NPU is just a dedicated unit without the generality / complexity of a CPU and so able to crunch matmuls more efficiently.
Perhaps someone ex-Intel can comment, but there was a story within Microsoft that Intel proposed a different x86-derived 64-bit architecture after AMD64, but Microsoft didn't want to support two x86-64 ISAs and that put paid to Intel's proposal. There was also talk at Microsoft about having input into the AMD64 design, but no idea what the details were there exactly.
Thanks to everyone who works on it. We've used the app relentlessly for a couple of years in the UK and when you show it to people they are amazed. People thank us for it and all we did was share it with them. Great work!
Unfortunate that you found it disappointing; it wasn't my intent for the title to imply there would be data. It was intended to describe my experience. I can see how you were primed to think otherwise.
This is very much a rant. Hope you find your numbers elsewhere :)
I visited the article after the title was edited, so I knew what to expect. And it far exceeded my expectations. I hate the palpable feeling that, as a software engineer, I'm part of an industry that is becoming harmful to society. And we are the ones doing, or at least putting up with, the things the article gives such excellent examples of.
How many "independent" voices you see depends on your bubble. How many users are there of these platforms? What fraction wrote a rant? Not saying those opinions are wrong, but it's not particularly insightful.
See, a company that is user-oriented would be appalled by such resonance regardless of the numbers and would try to alleviate concerns. Microsoft doesn't do any of that. Their escalating behavior cannot really be explained away.
Sure, maybe there is a happy island somewhere with enthusiastic users, but working in the industry, I doubt that very much. Also, many of their forums about their frameworks have turned into ghost towns. So if you know of that happy isle, I would gladly like to see it.
Typically when building a device like this you get a version of Linux or an embedded OS from the SoC vendor and you are stuck with it, because the SoC vendor doesn't provide the docs that would allow drivers to be maintained. This makes ongoing support harder than it might otherwise be.