Yeah, SoftRAM 95 was possibly the first instance of what is now known as app store spam: a heavily-advertised app that does literally nothing, but comes with significant side-effects to the stability of your system.
Despite leading Windows internals experts calling this out as soon as the product came out (e.g. SoftRAM95: "False and Misleading": https://ftp.st.ryukoku.ac.jp/pub/published/oreilly/windows/w...), many hobbyist pundits continued to insist this was actually a valuable product. No idea if they were paid or not, but I'm sort of afraid they really weren't.
Another instance in this category of vaporware is SpinRite. When MFM disk controllers were a thing, this tool could actually fix errors (by physically rewriting sectors) and optimize performance (by changing sector interleave). However, once IDE came along, that was all practically impossible: yet, recommendations to "fix your problem by running SpinRite" persisted pretty much into the next decade.
> many hobbyist pundits continued to insist this was actually a valuable product. No idea if they were paid or not, but I'm sort-of afraid they really weren't.
This pattern persists in hobby circles today. Once an idea takes hold as being true and enough people buy into it, it becomes extremely difficult to convince them it was wrong all along. Nobody likes admitting they were fooled, so they become invested in propagating the myth.
I’ve even encountered this in the business world. I joined a company, discovered they had been doing something with no value (actually negative net value when considering the cost) and explained that it was a myth. The first reaction was that they couldn’t just stop doing it, because that would be tantamount to admitting they were wrong all along. They wanted me to work backward to find a new justification to continue doing it. I left.
Wow, based on the numbers in the linked article, these guys sold ~750k copies of the software which retailed for around $35 in 1995. Even if you assume it sold at wholesale for half that price, it’s still something like $13mm without inflation adjustments, or around $25mm in current dollars. That’s a huge amount of money for an outright fraud. They really should have put people in jail for this. Maybe not the devs, but the management that knowingly shipped a defective fake product.
> Maybe not the devs, but the management that knowingly shipped a defective fake product.
This was the era of software where a product could have a single developer. That developer might have also been one of the key business leaders.
It is funny to see how much modern software development has expanded, to the point that nobody can imagine a single developer making a product (especially a nonfunctional, fraudulent one) alone.
I think this is a bit different from the case of, say, the programmers who made the fake accounting software for Madoff, where it was completely obvious that it was a fraudulent application. Here, if they had actually used a real compression algorithm instead of memcpy, maybe it could have sort of worked. And they split the job up into different parts, so perhaps it wasn’t so clear to the devs that it was fake.
Just to give an opposite anecdote on that, I had a failed (IDE) hard drive several years ago that the usual tools couldn't fix but SpinRite got it going again...
Good for you. `ddrescue` or a vendor-specific tool would have been equally effective, yet a lot cheaper. It's quite well-known that SpinRite had no special access to vendor-proprietary IDE commands or anything, and thus... did nothing.
There were a number of us that looked into this at the time, talking on CompuServe. I think the columnist Robert X. Cringely (InfoWorld maybe, it's fuzzy) hooked me in with Mark.
One of the interesting things about the disassembly was there was a pretty decent sized chunk of valid code that had no entry points, that really looked like a naïve implementation of LZ compression. I always figured that the developer just couldn’t get it working correctly, and they had spent so much money on the ad campaign that they just put no-ops in the handler table and shipped it. It’s even possible that management had no idea it was done.
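For anyone curious what a "naïve implementation of LZ compression" looks like, here's a purely illustrative sketch (not the actual disassembled code, which isn't available): a brute-force LZ77 that emits (offset, length, next-byte) triples from a sliding window.

```python
def lz77_compress(data: bytes, window: int = 4096, max_len: int = 18) -> list:
    """Naive LZ77: brute-force search the sliding window for the
    longest match, emit (offset, length, next_byte) triples."""
    out = []
    i = 0
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):
            length = 0
            while (length < max_len and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        # Always keep one literal byte for the triple's next_byte slot.
        best_len = min(best_len, len(data) - i - 1)
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_decompress(triples) -> bytes:
    out = bytearray()
    for off, length, nxt in triples:
        for _ in range(length):
            out.append(out[-off])   # handles overlapping copies too
        out.append(nxt)
    return bytes(out)
```

A real product would use hash chains instead of the O(n·window) inner search, but even something this naive would have been better than shipping no-ops.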
I once worked at a small software developer that had an interesting new application and was looking for a distributor. Two distributors made pretty good offers.
One was a new distributor and one was Syncronys, the company that had developed and distributed SoftRAM 95. Syncronys would be able to get our product into more stores, but many of us were leery of working with a company that had previously sold a fraudulent product.
We raised these concerns with Syncronys. They told us that the fraud was the work of the guy who had been in charge of engineering, who had also been the person who actually wrote SoftRAM 95, and that the people on the business side had thought they were selling a real working product. That guy was out, and they were looking for a good product to try to rebuild their reputation.
That was a bit hard to believe, because it is almost always the people at the top who commit these kinds of fraud.
Then they told us the name of the engineer. I'll just call him X. It was someone we were familiar with. Before the founder/CEO (who I'll call F) of our company had founded our company he had co-founded an earlier company--and X was his co-founder!
F had taken some time off from college to move to Silicon Valley and earn some money to be used to start his own company. There he met X, worked with X on a few consulting jobs, and together they came up with an idea and started a company.
X stayed in SV to deal with the business side of things, and F came back to college to finish his final year and to recruit students to work part time for their company and then run the engineering side of things. Engineering was run out of his room at college for a few months then out of a rented house near the college. All the employees were friends or acquaintances of F (and of me) from college.
At some point people started getting paid late, and when F tried to get that fixed, he found out that X had not been accurate in the information he'd been giving F. Things fell apart, somehow resulting in X getting whatever money there was, leaving F with nothing and several employees with missing pay.
In light of what we knew of X from that, and from some things we'd heard about his subsequent endeavors in SV, the Syncronys story made a whole lot more sense, and it was quite believable that the business people there had indeed been taken in by the fraud.
We ended up putting the matter of which distributor to pick to a company wide vote and Syncronys won by a small margin. We went with them and as far as we could tell they dealt with us honestly and did a good job.
I was friends with F and most of their employees, but F and the employees were in the Los Angeles area. X was in Silicon Valley so I never had occasion to run into X when I'd visit their workplace. When F and I talked about his work we talked about engineering matters, so X didn't come up.
So to me X was just F's remote business partner who I only heard mentioned a few times.
Bonus reading: I was not the only one to do an analysis of the internals of SoftRAM 95. Some guy named Mark Russinovich also took the product apart and came to the same conclusions. I wonder what happened to that guy. He seems kind of sharp.
> In 1996, Russinovich discovered that altering two values in the Windows Registry of the Workstation edition of Windows NT 4.0 would change the installation so it was recognized as a Windows NT Server and allow the installation of Microsoft BackOffice products which were licensed only for the Server edition.[7]
While all “Ram doubling” software for Windows was always a scam, the original RAMDoubler for Mac wasn’t.
Before OS X, you as the user had to tell each application how much memory it could use by going to “Get Info” and setting the memory amount.
The memory also had to be contiguous. That means as you opened and closed apps, your memory could become fragmented and you had to close apps to free up contiguous memory. RD fixed this.
Also, if you wanted to enable disk swapping ("virtual memory"), you had to allocate enough storage space for the total amount of memory allocated. I had 10MB of RAM in my Mac LCII and an 80MB hard drive. If I wanted 15MB of memory in all, I had to preallocate 15MB of hard drive space. With RAM Doubler, you only needed 5MB.
> While all “Ram doubling” software for Windows was always a scam
I could be wrong, but Windows 10 and 11 have a memory compression feature built in, and I think it basically works the same way SoftRAM was supposed to, except with actual compression.
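The details of Windows's memory compression live inside the memory manager, but the basic idea is easy to illustrate: instead of writing cold pages to disk, compress them into a store that stays in RAM, so a fault costs a decompression rather than an I/O. A toy sketch (zlib standing in for whatever codec Windows actually uses):

```python
import zlib

PAGE_SIZE = 4096

class CompressedStore:
    """Toy compressed page store, in the spirit of zram/Windows memory
    compression: pages 'swapped out' here are compressed in RAM and
    decompressed on fault, avoiding disk I/O entirely."""
    def __init__(self):
        self.store = {}

    def swap_out(self, page_id: int, page: bytes) -> int:
        assert len(page) == PAGE_SIZE
        blob = zlib.compress(page, level=1)  # fast level: latency matters here
        self.store[page_id] = blob
        return len(blob)  # bytes this page now occupies in RAM

    def swap_in(self, page_id: int) -> bytes:
        return zlib.decompress(self.store.pop(page_id))

store = CompressedStore()
page = (b"GET /index.html HTTP/1.1\r\n" * 200)[:PAGE_SIZE]  # text-like, compressible
used = store.swap_out(7, page)
assert store.swap_in(7) == page  # lossless round trip
print(f"{PAGE_SIZE} -> {used} bytes")
```

Text-heavy pages often shrink severalfold, which is exactly the capacity SoftRAM claimed to create and didn't.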
The thing is that the Mac suffered from unique technical issues that Ram Doubler hacked around. The compression feature was probably the least impactful.
I bought the RamDoubler/SpeedDoubler/CopyDoubler combination.
SpeedDoubler was a much better 68K emulator than was built into the first generation PPC Macs and CopyDoubler was a way of copying files in the background.
No mention that it all started with the German IT magazine c't. Everyone else either ignored the product or sang its praises, until the magazine actually ran it and found out that it made computers even slower.
> In late 1995, I was asked to investigate the product because it was causing Windows 95 machines to crash and was generating a lot of support calls, not to mention bad PR
Ah, the classics. "I'm running bog-standard junk hardware with software that claims to enlarge my p... I mean, my RAM, and when it crashes it's Windows' fault! And Gates's, personally!"

Compare that to the other camp, where the default diagnosis is PEBKAC even after the facts are in: "No, it's your problem that you didn't hunt down the specific hardware revision of that notebook to get stable (if any) WiFi out of it."
There was a similar product for the Mac at the time called RAM Doubler by Connectix. It seemed to work better than the Mac virtual memory scheme at the time.
"Developed by Connectix, RAM Doubler was one of the most magical utilities of the early days of the Macintosh. As its name suggested, RAM Doubler promised to double the amount of usable RAM in your Mac, and amazingly, it generally delivered."
Imagine going to all the trouble of creating a custom paging driver, but then just not implementing the compression algo.
Really boggles the mind, as it seems like grabbing an off-the-shelf compression algorithm would be easy compared to implementing a custom paging driver. I mean, I guess this was the mid-90s; were drop-in open-source compression libraries not a thing yet?
>"Whether this is a net win depends on the memory access patterns of the applications you use. If you had an application that had 6 very hot pages, and 14 additional warm pages, then this could very well be an improvement, since the very hot pages could stay in normal memory, and the 14 warm pages could take turns occupying the two remaining normal memory pages. On the other hand, if you had an application that had 10 very hot pages, and 10 additional warm pages, then this could end up a net loss, because there aren't enough normal pages to hold the application's very hot pages, so you are constantly compressing and decompressing memory, and that extra time spent on compression could exceed the time saved by avoiding the I/O to the four pages that didn't fit in RAM before."
This is an excellent point!
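The break-even in the quoted passage can be modelled with back-of-envelope numbers. Everything below is an assumed round figure, not a measurement; the costs are deliberately mid-90s flavoured (compression is slow relative to a disk fault, not hundreds of times faster as it is today):

```python
# Assumed costs per access, arbitrary units: resident RAM hit,
# compress/decompress round trip, disk fault.
RAM, ZPAGE, DISK = 1, 2000, 10_000

def avg_cost(hot, warm, compression, hot_share=0.9):
    """Expected cost per memory access for a (hot + warm)-page app.

    Without compression: 16 physical frames, overflow pages fault to disk.
    With compression: only 8 normal frames remain, but the compressed
    store is assumed big enough that nothing touches disk.
    """
    normal = 8 if compression else 16
    spill = ZPAGE if compression else DISK
    total = 0.0
    for n, share in ((hot, hot_share), (warm, 1 - hot_share)):
        in_ram = min(n, normal)       # hot pages claim normal frames first
        normal -= in_ram
        total += share * (in_ram * RAM + (n - in_ram) * spill) / n
    return total

# 6 hot + 14 warm: all hot pages stay uncompressed -> compression wins.
print(avg_cost(6, 14, True), avg_cost(6, 14, False))
# 10 hot + 10 warm: hot pages spill into the compressed store -> net loss.
print(avg_cost(10, 10, True), avg_cost(10, 10, False))
```

With these assumptions the model reproduces both outcomes in the quote: the 6-hot/14-warm workload is cheaper with compression, while the 10-hot/10-warm workload is cheaper without it, because two hot pages now pay the compression tax on nearly every access.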
Also -- another observation (for future CPU engineers): The interception and on-the-fly compression/decompression of memory pages that were decided to be swapped out to disk -- is currently handled by additional software which implements this functionality. (zstd for swapfile, etc.)
But, maybe in future computers -- this shouldn't be handled by software...
Maybe a future CPU, if well engineered -- would implement page tables with extra information -- not just a bit that says "this page is present/absent in/from memory" -- but also a bit that might say that it has been compressed and cached to a special pre-determined section of memory by the CPU (and/or cached on the CPU itself) -- which also has onboard compression/decompression hardware specifically created for this purpose...
In other words, it might make sense to move all of this functionality to a future CPU -- where the compression/decompression could get done completely in hardware (should be much faster than software if properly engineered) -- and if a page can't be compressed -- then the CPU hardware "knows" that -- in, or in close to real-time...
Such a CPU should allocate extra data in its page tables for storing advanced statistical data about the historical "heat" of a page -- such that it could decide the best policy for it on-the-fly -- swap to disk, compress/decompress on-the-fly, or simply keep in memory and swap or compress out a different less "hot" page...
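A hypothetical policy of that kind could be sketched in software terms first (every name, threshold, and decay constant here is invented for illustration; real hardware would track heat in page-table bits, not floats):

```python
from dataclasses import dataclass

@dataclass
class PageInfo:
    present: bool = True
    compressed: bool = False
    heat: float = 0.0          # exponentially decayed access frequency

DECAY = 0.9                    # per-interval decay of historical heat
HOT, WARM = 5.0, 0.5           # made-up policy thresholds

def touch(p: PageInfo):
    """Called on each access: bump heat, let history decay."""
    p.heat = p.heat * DECAY + 1.0

def tick(p: PageInfo) -> str:
    """Periodic policy decision based on accumulated heat."""
    p.heat *= DECAY
    if p.heat >= HOT:
        return "keep in RAM"
    elif p.heat >= WARM:
        return "compress in RAM"
    else:
        return "swap to disk"
```

Frequently touched pages converge to a high heat and stay resident; pages touched occasionally settle into the compressed tier; untouched pages decay toward zero and get swapped. Moving that loop into the CPU is exactly the open question the comment raises.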
Anyway, there is no doubt some future work to be done in that area...
If the engineering for this was absolutely perfect -- then the net effect is that future computers would have X [MB,GB,TB,PB,?] of physical RAM like they do now -- but they'd appear to have Y [MB,GB,TB,PB,?] of RAM -- where Y is greater (much greater!) than X, and where all of that extra "compressed/virtualized" memory -- runs at the exact same speed that regular memory does now!
Impossible to engineer?
Well, maybe... but that would be the ideal, the pinnacle of that engineering -- if it could be obtained...
And I think it would be interesting to try, no matter what the outcome -- even if the effort failed, something would probably be learned that might help the next future engineering effort...
It's interesting that it was apparently doing something (compressing memory pages) that made sense and was potentially useful even if poor implementation made it unstable. I remember seeing ads for SoftRAM 95, but never tried it because I assumed it was just a scam because the idea of increasing memory by software seemed so improbable.
It was written as if it would be compressing memory pages, but in reality all it did was copy them back and forth shoddily, as if they never got around to implementing a compression algorithm before shipping the software.
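The gap between what it claimed and what it reportedly did is easy to demonstrate (zlib standing in for any real codec; the "fake" version is a caricature of the reported memcpy behaviour):

```python
import zlib

page = bytes(4096)  # a zeroed 4 KB page: maximally compressible

def fake_compress(data: bytes) -> bytes:
    """What SoftRAM reportedly did: a memcpy in disguise."""
    return bytes(data)          # same size out as in: zero RAM gained

def real_compress(data: bytes) -> bytes:
    """What the ads implied it did."""
    return zlib.compress(data)

print(len(fake_compress(page)))  # 4096: only overhead, no extra memory
print(len(real_compress(page)))  # a few dozen bytes for an all-zero page
```

So the paging machinery churned away moving pages around, but the "doubled" memory never existed: the copy occupies exactly as much RAM as the original.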
> because the idea of increasing memory by software seemed so improbable
DoubleSpace/DriveSpace was a thing back then, so no, the idea was fine. What's more, a similar product made by a... competent team actually worked[0]
> It's interesting that it was apparently doing something (compressing memory pages) that made sense and was potentially useful even if poor implementation made it unstable.
Poor implementation didn't just make it unstable, it made it return wrong data. If thread safety had been implemented, it still wouldn't have done anything positive, but at least it wouldn't have crashed, even in the absence of any compression facility. Alas...