I worked as an intern at DEC for two summers. During the 2nd summer between Jr and Sr year, my roommate and I got a small apartment in Marlboro within walking distance of our jobs. That summer, I wrote the spooler for the KL-10 (not yet released), and my roommate wrote the microcode for the KL-10. I am still in touch with him; he is still awesome like that. Anyway, I got intrigued by a hush-hush project in Maynard where a guy had taken some pdp-8 boards and with a small monitor stuffed them into a salesman's suitcase. DEC did, in fact, have a Portable Computer. At this time, the 8008 had been released, and it was 'obvious' to me that microcomputers would 'take over'. This was 1975. I invited Ken Olsen over one Sunday afternoon to chat with us co-op students and a few employees, and he came over! He must have been there a couple of hours, chatting us all up. I told him that he should put the LSI-11 into a nice small box and sell it at a competitive price, to stop all these crappy chips like the 8080, 6502, etc. from gaining traction. Ken didn't seem to understand why anybody would want to have his own computer. Frustrated by not communicating the urgency properly, back in the pdp-10 group, I sent a note to Gordon Bell that DEC should forget about these crappy little 8- and 16-bit chips, and should put a pdp-10 onto a board -- and that could command a larger price, and could establish the pdp-10's instruction set as a standard. Gordon wrote back to me with a short phrase written at the top of my memo: "Do it!" Now of course I was going back to school in September, and could not do it. I have wondered whether I should have just chucked school, and stayed at DEC to "do it". If so, the trajectory of the Computing Industry might well have been substantially altered. I guess this sort of thing is called "Life". Sigh.
Fascinating story. But I doubt anyone could have made a KL-10 on a board in 1975. A TTL implementation would have been difficult to squeeze into a small space, and a VLSI implementation was at least a decade away.
In fact there was a short-lived project called Minnow which was supposed to put a KL-10 on a desk, but it was overtaken by VAX - much like the Jupiter super-KL-10 project, which never got past the design notes and emulation stage.
LSI-11s did end up in small boxes, so that wasn't the problem.
The problem was that Olsen didn't have a vision for commodity personal computing. And to be fair not many people did in the 70s.
He seems to have been a very 1950s corporate kind of manager, and couldn't imagine computing outside of technical and corporate settings. DEC's culture was geared to selling to corporates and academia and trying to compete with IBM, not with smaller startups. So when DEC did eventually try to make hardware that could compete with PCs they tried to sell it to their usual customers instead of finding new markets. And that... didn't work.
Apple meanwhile had worked out that you needed a national dealer network with stores and hands-on demos all over the country. And plenty of approachable PR and ad spend. That was really what made Apple stand out from the competition, and that kind of marketing, to users who didn't work in big offices, factories, and research labs, and didn't have degrees, was unimaginable at DEC.
I'm fascinated by DEC as a company. There was some incredible world-leading engineering and a progressive culture, but the vision seems to have got stuck on a peg in the 70s. The company kept it together in the 80s because VAX was already in place, but couldn't deal realistically with the 90s.
Gordon Bell has said that Olsen never quite understood what VLSI implied culturally and socially - never mind technically and financially.
There's an apocryphal story about someone handing Olsen one of the pre-Alpha VAX-on-a-chip designs and he has a very polite mini-meltdown when he realises that it literally outperforms the top water-cooled mainframe ECL VAX hardware - for just a few hundred dollars, instead of a couple of million.
> There's an apocryphal story about someone handing Olsen one of the pre-Alpha VAX-on-a-chip designs and he has a very polite mini-meltdown when he realises that it literally outperforms the top water-cooled mainframe ECL VAX hardware - for just a few hundred dollars, instead of a couple of million.
There is a fair bit of history [1] written by the author of the SIMH emulator, Bob Supnik. The page on NVAX basically confirms your story.
> the Jupiter super-KL-10 project, which never got past the design notes and emulation stage.
The scuttlebutt in the Boston area at the time was that DEC was frantically trying to debug one or more ECL prototypes, engineers working in 3 shifts, 24 hours a day. Further, that the head designer was poached from the IBM mainframe world, and did not understand what made a PDP-10 fast.
In particular, unlike the KL-10, which had a 72-bit-wide FPU, its FPU was only 36 bits wide, which, on top of the lateness, was the final straw for its viability in the marketplace. And people marveled that DEC was willing to throw away a $100 million a year business ($275 million today); supposedly a lot of dedicated PDP-10 and DECSYSTEM-20 shops declined to move to the VAX, which they could predict would be a bad long-term move.
All this is at the Nth-hand rumor level, but from some people who were very interested in the architecture's fate, some with purchasing influence if not capability. It sounds like you have some better sources. The "Olsen never grokked VLSI" tidbit is fascinating, and I suppose credible.
The KC-10, which you were referring to, had one (long) shift of very dedicated engineers, and the lead engineer came from ITEL which made IBM compatible mainframes. He had (an easy to understand) fundamental misunderstanding about PDP-10 architecture which impacted functionality, not performance, as the KC-10 was going to be the fastest machine DEC ever made. He caught mono, and found his misunderstanding while stuck at home with the hardware manuals.
The thing that actually killed the KC-10 (and the KS-10 desktop successor) was the VAX, which Gordon Bell went all in on.
As far as a pdp-10 on a board: I knew the LSI-11 was an 8-bit microcoded processor (https://en.wikipedia.org/wiki/MCP-1600). In fact, the "EIS/FIS" instruction set option was just another ROM available as an optional chip. And, given that my roommate was doing the KL-10 microcode, I had some idea how you could trade microcode space for actual hardware execution units. So, yes, my original idea was to use the LSI-11 (Western Digital) base chips with different / more microcode. Yes, it would have been slow. But it would have been a pdp-10. Years later, Gordon Bell told me that somebody had built a pdp-10 on a board. I think it may have been an internal project (not the KS10), a hack like the 'pdp-8 in a suitcase' that I saw in Maynard. I need to ask him.
I, too, have difficulty understanding the killing of the pdp-10 line; what often goes unnoticed is the very high level of capability in Marlboro (vs e.g. Maynard). When I started as a co-op student, I wanted to work in Maynard with the pdp-11 folks, which I thought was the cat's meow. But wiser folks told me, "Hey, you should consider yourself special, as they are putting you in the pdp-10 group!" And of course they were right. So when DEC canceled the '20, they also -- effectively -- broke up the best / brightest concentration of computer geeks in the whole company. It's difficult to appreciate the pride the pdp-10 group took in their machines - after all, they were the machines for AI research and were the first machines on the ARPAnet. And, as anybody alive today knows, 32 bits isn't quite enough, but 36? nee 72? Ahhh... ;-)
Ken was a very bright engineer. He used his engineering skills to 'engineer' DEC, I think. I asked him if he thought DEC was competing with IBM, he said "No, we serve different markets." I wish I remembered more about that conversation that day. DEC was only 18 years old at the time.
I've always wondered what the computing landscape would be like if DEC hadn't been so desperately afraid of undercutting their mini/mainframe business, and unwilling to serve the small business and/or home markets.
All of their early micros were hobbled in some way to make them not-quite-compatible; the Rainbow, despite running MS-DOS, was not IBM PC compatible, and the Professional series had incompatibilities with standard PDP-11 systems. The Rainbow didn't even include a FORMAT command for formatting floppy disks; you were expected to buy preformatted ones from DEC, and using standard 5.25" floppies with hub rings was a good way to kill your floppy drive due to the RX50's mechanical design.
Probably the best shot at a home PDP-11 came via Heathkit with the H11 and H11A machines, but that was a small niche.
Even when they finally admitted to themselves that micros were the way of the future, they concentrated on the enterprise market, to the exclusion of all else. Back when the Alpha was the performance king, I would have loved to be able to have one, but the cost was well out of reach of mere mortals.
In my experience, the main reason these companies fail and especially why acquisitions don’t turn out well is the assumption that the way you do business and the people you have are just successful in their own right and will just take in the new sheep and keep fleecing the larger flock. It doesn’t work like that.
>All of their early micros were hobbled in some way to make them not-quite-compatible
All the minicomputer makers suffered from this to greater or lesser degrees. I think their micros were so compatible with the IBM PC relative to the completely vertical silos that their minis lived in--own chips, own systems, own disks, own apps, etc.--that they just couldn't see that mostly-compatible == incompatible. Furthermore, as another comment noted, they were reluctant to even sell in computer stores much less something radical like mail order.
Thank you for sharing such an interesting tale. Life is one of those journeys where you're presented with forks in the road that you can only see a little way down, and you need to choose just one, always left to wonder what the other fork held.
Thank you for sharing your story!
It is really interesting to hear these kinds of anecdotes.
Who knows what kind of ripple effect would have happened had you stayed at DEC, but that thinking goes both ways.
There is a relevant xkcd comic regarding how there are always going to be “what if” moments in your life.
Incredible story! The DEC Alpha that eventually emerged was ultimately responsible for Linux universality in the mid-1990s. As Linux was originally written to take advantage of 68k based home PCs like the Atari and Amiga. DEC basically gifted one to the Helsinki CS dept in hopes they would port a free unix to it. The environment back then for workstations was wonderfully heterogeneous: HP Apollo, SGI, Sun, DEC and even the NeXT Color Cube all tried to differentiate and dominate. But were all subsumed by commodity x86 Linux on PCI architecture and the explosion of web servers running apache ;)
The original Linux was not only extremely PC-centric, it wallowed in features available on PC's and was totally unconcerned with most portability issues other than at a user level. The original Intel 80386 architecture that Linux was written for is perhaps the example of current CISC design, and has high-level support for features other current CPU's would not even dream about implementing (see for example [CG87]). Linux did not even try to avoid using the x86 features available to those early versions — quite the opposite. Linux started out as a project to find out exactly what you could do with the CPU, and as such it used just about every feature of the CPU you could find ranging from using segments for inter-process protection to hardware assisted process switching.
However, the initial unportable approach was a case of an unportable implementation rather than an inherently unportable design. The goal of being compatible with other UNIX's resulted in a system that had portable interfaces despite the implementation details. That portable design essentially made Linux itself reasonably portable.
The first Linux that was based on a architecture different from the Intel 80386 was the port of Linux to the Motorola 680x0 family that actually got started rather early in the development of Linux. However, the original Linux/68k project was not really concerned with portability, but rather with making Linux run on 68k-based Amiga and Atari computers.
...
While the Linux/68k project was in itself a huge step forward, the real portability work began when the author was offered an Alpha system by Digital in the hope of making Linux work on the new Alpha architecture. Very early it became clear that in order to be able to maintain both the stable Intel-based platform and support a new and in some respects radically different platform the kernel really needed some major re-engineering to make it fundamentally more portable. The issues and the end result is what is described in this paper.
Looks like I got my timeline somewhat backwards! The initial Linux build was the 32-bit i386 version, though 68k Amiga portability also started rather early, ultimately resulting in a variant for the 64-bit DEC Alpha after Linus received an AlphaStation 500. My recollection of the AlphaStation circa 2000: it was indeed a powerhouse for CAD type apps, though Silicon Graphics would come to dominate in arenas like Hollywood. Thanks for clarifying ;)
Being very pedantic, Linux was very much built for the 386 architecture. The 68K lacked a number of features that Linux originally leveraged from the x86 world. The Atari and Amiga were not initial targets for the runtime.
The very first port of Linux to a different architecture was 68k, but it was really rough. Linus did then do the port over to the DEC Alpha, as well as fix the aspects that were overly wedded to i386.
I would also argue that, to a very large degree, Linux’s success was driven by the rise of x86 as the dominant compute platform for server architectures. Its universality was very much a second-order effect, and the paying off of technical debt.
It has always fascinated me how many of the most famous architects and overall aesthetics of early computing and networked infrastructure began/occurred at MIT's Lincoln Laboratory, yet few of my fellow students/coworkers that I've discussed it with over the years were aware of its existence/contributions to computing before I brought it up.
- Ken Olsen and Harlan Anderson's idea for DEC (from the article)
- J. C. R. Licklider's Intergalactic Network
- Wesley A. Clark's TX-2/light pen/etc.
- Jay Forrester's SAGE consoles (which Margaret Hamilton worked on from 1961 to 1963, btw)
- Roberts and Marill's TX-2 <=> Santa Monica Q-32 circuit-switching proof of concept
- Marvin Minsky and John McCarthy founding MIT CSAIL in 1959, a year after he joined the Lab staff
- the 1965 packet switching experiment between two Lincoln computers
- Robert Fano's work in the Radar Techniques Group (yes, the Fano from Shannon–Fano coding)
This all began with the DEW line. I worked at the lab years ago, and I believe it was Edwin L. Key, as a guest of William W. Ward, who gave a very long and rather entertaining Friday morning lecture on the topic for us. Aside from the radar technology and logistics problems, there were three areas of computing innovation that came out of it. First, signal processing, which created demand for increasingly complicated circuits from Bell Labs. Second, the user interface to the data, which included both hacking large-format oscilloscopes into displays and converting radar signals into sound. Third, the communications systems linking remote stations thousands of miles away. These links had to stay reliable through wireless jamming or wire cuts, so this was the front line of coding theory, including Fano and then Solomon at LL, leading to digital switching systems like Arpanet. My first job there was to implement Solomon’s work for a space communications system, so I’m a little more familiar with that line of work at the lab.
> Digital Equipment Corporation, or DEC, who began paving the way for everyone starting in 1957.
I would love to know the funding model for the next 25 years. Altavista was their latest, but it looks like they got burned on their Rainbow 100 in 1982. Took 8 years to report their first loss, and 10 years for a founder retirement. Looks like Vax was their bread and butter for a couple decades.
I love this stuff. If I'm smelling this even close to right, DEC was basically funded to develop most of modern computing infrastructure when there was none. They never really came out with icons like a PC. They just crafted the way things get structured for decades....with the funding of....?
For most of their existence, DEC had very aggressive, IBM-like salespeople. They sold large systems. When they tried the Rainbow, they pretty much did everything wrong. They didn't have a sales staff that knew how to sell a $2000 computer. You'd call them about the Rainbow and they wanted to send a sales team. Also, the Rainbow alienated the hackers at the time by its closed nature compared to the IBM PC. For example, you could only use DEC floppies that cost significantly more than normal ones.
Apple is back because they learned their lessons and picked a market that is more forgiving for closed systems. But they won't succeed forever. Just now they had to maintain their lead by vertically integrating processors.
This has positioned them against the entire chip market where everybody is threatened by Apple. Can Apple outspend everybody else, including the entire Chinese and Korean economy?
Sooner or later, the open market will have the better components, just like the PC market. How many more buffers does Apple have to stay in front?
I feel like you're echoing Clayton Christensen in this comment. It may be possible to escape the inevitable destruction he describes by sustaining a culture that cares about innovation.
The thing that woke me up to the Apple way was when I learnt that an early macbook did not need a crossover cable, because its ethernet adapter would auto-negotiate. In design terms, this was low-hanging fruit that was lying around for years. Nobody gets paid for innovations like that, so nobody did it. Until Apple did.
Imagine working at a normal company, and trying to implement that. You would be scorned by colleagues and middle managers as someone who does not focus on the bottom line.
I doubt that IBM, HP or Dell could execute a transition like M1. Apple maintains some spark that makes it possible.
> Can Apple outspend everybody else
The best things in engineering come from small teams who care about quality, and who are given room to chase it. No amount of spend or manpower can compensate for a lack of spark.
> in design terms, this was low-hanging fruit that was lying around for years. Nobody gets paid for innovations like that,
That is perhaps not the best example. Auto-MDI/X was pretty much expected after auto-duplex and auto-speed. It was available in switches for years before it was available in NICs. This wasn't an Apple initiative. They probably sourced the autosensing NIC in your MacBook from some established vendor.
What held all three standards back, apart from the additional components and cost, was mainly compatibility issues. Early on, vendor incompatibilities sometimes caused autodetection to misfire, and link drops made for a frustrating experience.
You're articulating an investment thesis that has been fairly popular the last 20 years: That other tech companies are protected by impregnable moats, while Apple is perennially one botched product away from going out of business.
But it seems to me that the company is in reality demonstrating far greater resilience and versatility than the cliches would have it.
Consider "picked a market": In 2005, I was joking that my job (macOS Engineer) was "writing firmware for iPod docks", because back then Apple was perceived as a Music player manufacturer that built computers as a hobby. Then, for a good decade, everybody decided Apple was a phone manufacturer. The increasing share of iPhone revenue was seen as an alarming sign. The last couple of years, iPhone revenue share decreased slightly, and that was seen as an alarming sign as well... go figure.
Now (after having done a casual drive by on the fitness tracker and mid-range watch industry), Macs are getting talked about again. You see this as a desperate lunge for survival. I beg to differ.
As a more specific argument, competing against the open market was a disadvantage for Apple when they were low volume, in the PowerPC days. But at this point, I believe they have sufficient volume to maintain custom components. The rest of the industry still has greater volume, but they have to live on lower margins, and are lacking some of the synergies working for Apple, so it's not a given to me that Apple is truly at a disadvantage.
However, at the level of semiconductor manufacturing as opposed to design, the considerations are somewhat different (no clear long run advantages for anyone, no clear benefits to vertical integration, open market wins in the end), which is, I think, why Apple is still outsourcing that step.
Every other hurr-durr company seems to think that the only way forward is to keep piling crap on top of crap. Apple shows everyone when it's time to take something out and leave it in the past.
And people fucking love it. That's their biggest "buffer".
Microsoft & Samsung et al. are the kings of "Me Too", whereas Apple (like Nintendo) is "Not me".
There are people that don't give a fuck about specs and numbers, but what something can do and how pleasant it is to use. Until other companies can figure that shit out, Apple is always going to have a lead.
> And people fucking love it. That's their biggest "buffer".
Yes, Apple has a strange cult appeal. Their customers are happy even when they are being exploited.
> Disc drives, legacy ports, headphone jacks, blinking lights, chargers...
> Apple shows everyone when it's time to take something out and leave it in the past.
There are plenty of manufacturers refusing to throw away headphone jacks, and customers are making buying decisions based on it. It may yet turn out to be a poor decision for Apple. It certainly hasn't resulted in them clawing back any market share from Android.
I believe Asus' EeePC was the first massively popular computer to omit a disc drive, not anything from Apple. Ultrabooks were similarly an upmarket response to the popularity of netbooks.
Most of the legacy ports Apple has discarded were their own unpopular proprietary ones, adopting USB dug them out of their own hole and put them on an equal footing with PCs.
For years Apple was trying to push firewire and avoid USB2, to their customers' detriment, which was an obvious mistake to everyone at the time and proved to be a failure.
> Most of the legacy ports Apple has discarded were their own unpopular proprietary ones, adopting USB dug them out of their own hole and put them on an equal footing with PCs.
Uhh the first iMac was the first computer to ditch all legacy ports and go USB-only, and Apple got flak for that too.
Market cap is just public opinion; one bit of negative press can destroy it. To handle failures you need cash on hand, which can't suddenly get erased. Apple has plenty, but not $2T.
Apple survived in part by lasting long enough for there to be a consumer market for whom openness was irrelevant but who were willing to pay a huge premium for design.
My comment was more of an additional reason (on top of the openness of their earlier computers) for why arguing that closed systems didn't stop Apple is not a great argument. Apple got incredibly close to failing. Had it not been for Microsoft's investment they very well might have.
Their closed approach took a long time to start paying off, and so using Apple as an example for why DEC's closed approach wasn't a problem doesn't really add up.
I think the VAX was an icon. I did most of my CompSci undergrad on VAXen running Ultrix. But the VAX and even the later Alpha machines were too expensive for individuals or small businesses. Their entire business model was built around sales to corporations, government agencies, and academic institutions that wanted and could afford centralized multiuser machines.
As a sort of microcosm, my undergrad CS started off with a top end dual-VAX, and ended with Sequent Symmetry 386 multi-processor systems that had vastly more processing power and memory.
For correctness sake, Marvin Minsky and John McCarthy founded the MIT AI Lab in 1959. LCS, the Laboratory for Computer Science, was established in 1963 as Project MAC in the EE Department and was renamed LCS in 1976. The AI Lab and LCS joined in 2003 to become CSAIL.
The Alpha was a screamer, that’s for sure. This article ignores another influential architecture that arose from Gordon Bell’s 18-bit PDP-1: the 36-bit PDP-6, which turned into the 36-bit “mini mainframe” PDP-10 and later -20 series that were the default research machines for major research universities and corporate research labs, and in particular for ARPANET development. In fact, network byte order is big-endian because these machines were.
The PDP-10s were also the first “lisp machines”: two 18-bit addresses fit in a single machine word, and several lisp primitives were single instructions. The instruction set was compact, regular, and fun to program in.
Thanks for a great game! Used to play MSDOS Empire frequently back in the day. I actually started work on an Empire clone/derivative with network multiplayer support (a challenge for a turn-based game like Empire, but I'm borrowing ideas from Warlight - a network multiplayer version of Risk).
Dude, my GPA seriously suffered when we got Empire running on USC-ECLC (the DEC KI-10 that was running TENEX at the time, later upgraded to a KL-10 model B and TOPS20).
Yes, I believe it was the first machine Lisp ran on though that was before my time.
But the PDP-6 was a different class of device as every machine word was (could store) a cons; the common list manipulation functions (not just car and cdr but rplaca et al) were actual machine instructions etc (even if the names were different, e.g. CAR was HLRZ, as I can still remember).
In other words, lisp was implemented _on_ the IBM machine while the design of the 6/10 architecture was influenced by the needs of lisp. Both Marvin and Gordon independently confirmed this to me.
Is it, though? Can anyone compare/contrast the 2 designs? Just because the same people work on them, doesn't mean they're necessarily related (Alpha vs ARM).
Dave Cutler worked on VMS and NT and as far as I know NT is quite different, for example.
They were not very different under the hood. In fact, many units within the OS were similarly/equally named. The UI was a different beast altogether, and I remember the UI crashing in WinNT 3.51 while the kernel and services were happily chugging along.
As you'd expect for the failure of such a large company, it wasn't just one thing that went wrong. Among the reasons were: a too-rapid expansion into the commercial area, leading to huge redundancy costs when that had to be scaled back; management flailing about with competing projects; failures to exploit the PC and Unix markets; and selling off profitable arms to finance unprofitable ones.
As an example of the flailing, they cancelled their PRISM project for a new RISC architecture and OS (the immediate result of which was Cutler and others leaving for Microsoft), yet developed RISC later with Alpha. But they'd also been doing that kind of thing through the successful years. The last PDP-10 was cancelled when there was still a market for them, to the extent that two other companies made money by supplying PDP-10–compatibles that DEC wouldn't. I recall a DECUS conference from around the VMS 4.0 era in which we were told the cluster lock manager + RDB was going to eventually replace the FILES-11 file system; the lock manager was delivered, but RDB was eventually sold off.
One anecdote from the linked paper is particularly sad. John Sculley, of Apple, met with Olsen in 1991 to talk about using Alpha in Macs, when those were still 68k-based. But Olsen wanted to concentrate on VAX and wouldn't do it, so Apple went with PowerPC instead. The anecdote is referenced as being in an email from Sculley to the paper's author.
It's a real shame the Alpha didn't survive. They were pretty brutal about cutting out anything unnecessary and most things that were difficult for a compiler to take advantage of. The earliest versions couldn't load or store single bytes, and the compiler would insert the necessary bit manipulation to do single-byte accesses using 32-bit or 64-bit loads and stores (sketched below). Apparently, anyone who wanted to add a new instruction needed to show a good business use case with a simulated benchmark.
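To make that concrete, here is a small, purely illustrative C sketch (the function name and code are my own, not DEC's compiler output) of the kind of read-modify-write sequence a compiler for a pre-BWX Alpha had to emit for a plain one-byte store; the real machine code did the masking in registers with byte-manipulation instructions such as MSKBL and INSBL, but the idea is the same.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustration only: emulate a one-byte store using 64-bit loads and
     * stores, roughly what pre-BWX Alpha code generation had to do, since
     * the original ISA had no byte load/store instructions. Assumes
     * little-endian byte numbering, as on Alpha. */
    static void store_byte_via_word(uint8_t *p, uint8_t value)
    {
        uint64_t *word = (uint64_t *)((uintptr_t)p & ~(uintptr_t)7); /* aligned quadword containing the byte */
        unsigned shift = (unsigned)((uintptr_t)p & 7) * 8;           /* bit offset of that byte within it    */

        uint64_t q = *word;                  /* 64-bit load           */
        q &= ~((uint64_t)0xff << shift);     /* clear the target byte */
        q |= (uint64_t)value << shift;       /* insert the new byte   */
        *word = q;                           /* 64-bit store          */
    }

    int main(void)
    {
        uint64_t buf[2] = {0, 0};            /* aligned buffer, so the quadword access stays in bounds */
        uint8_t *bytes = (uint8_t *)buf;

        store_byte_via_word(&bytes[3], 0xAB);
        printf("byte 3 = 0x%02x\n", (unsigned)bytes[3]);
        return 0;
    }

A side effect of this scheme is that a byte store is no longer a single atomic access, so a concurrent store to a neighboring byte in the same word can be lost; that, along with byte-granular device registers, is part of why the later BWX extension added real byte and word loads and stores.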
The Alpha's memory model is notoriously low on guarantees, to give the processor maximal opportunities to re-order memory operations. It's the lowest-common-denominator that influenced what guarantees are made in the Java memory model. Of course, Java could have gone with a stronger memory model and forced Alpha implementations of the JVM to insert a lot more memory fence instructions.
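To illustrate what that means for software, here is a short, self-contained C11 sketch (the struct and function names are invented for this example) of the standard publish/consume pattern. On most CPUs the reader's data dependency, loading a pointer and then loading through it, provides enough ordering on its own; Alpha's model is weak enough that the reader still needs an explicit acquire fence or acquire load, which is roughly the extra barrier a JVM or kernel on Alpha had to emit in such places.

    #include <stdatomic.h>
    #include <stddef.h>

    /* Illustration only: a writer publishes a struct through a shared
     * atomic pointer; a reader picks it up and reads its contents. */
    struct msg { int payload; };

    static _Atomic(struct msg *) shared = NULL;

    static void publish(struct msg *m)
    {
        m->payload = 42;
        /* Release store: the payload must become visible before the pointer does. */
        atomic_store_explicit(&shared, m, memory_order_release);
    }

    static int consume(void)
    {
        struct msg *m = atomic_load_explicit(&shared, memory_order_relaxed);
        if (m == NULL)
            return -1;
        /* On Alpha, even though reading m->payload depends on the pointer we
         * just loaded, the CPU may still deliver a stale payload, so a read
         * barrier is required here. Most other architectures honor the
         * address dependency, and this fence costs essentially nothing. */
        atomic_thread_fence(memory_order_acquire);
        return m->payload;
    }

    int main(void)
    {
        static struct msg m;
        publish(&m);                 /* single-threaded demo of the calling pattern */
        return consume() == 42 ? 0 : 1;
    }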
One under-appreciated aspect of the Alpha was that its firmware (PALcode) was essentially a hypervisor that only supported one guest. The Tru64 or VMS kernel actually made upcalls to the PALcode to manipulate page tables, etc. I think it would have transitioned well into the hypervisor world, needing minimal modification to the PALcode, and all OS kernels on Alpha were already para-virtualized to use PALcode upcalls, so there would be very little overhead in running those kernels on a multi-guest hypervisor.
If I remember correctly, DEC also invented/patented hyperthreading, and successfully sued Intel over it. Though, as I remember, the settlement conditions forced Intel to buy the StrongARM business off of DEC, which I think was another case of selling off a profitable business to get operating cash for unprofitable businesses. It's the business equivalent of payday loans.
I started my history with computers on the PDP 11/40 at my Dad's university.
Years later when my company was writing Unix software and we were porting to all the major vendors, DEC loaned us an Alpha workstation so we could port to OSF/1. That machine was blindingly fast, way way better than any of the Sun, HP, IBM, etc. hardware we had in the office. It quickly became my desktop because it came with a gorgeous monitor. I strung out the loan as long as I could and was sad when it had to leave.
James Gosling spoke at our developer conference around that time, and talked about the Alpha design team (who got to create this fast RISC processor without worrying about the "embarrassingly large" backward compatibility footprint the modern x86 chips were saddled with), and how great the Alpha was as a platform for Java. DEC and Sun were my two favorite tech companies; I'm glad I got to use their products when they were at their height.
I worked scrap when I was young and helped a fair amount of DEC equipment on its way. In college I learned COBOL on a VAX-11/750.
When I needed a break I'd start poking around VMS, trying to find some way to cause trouble. Every so often the admin (diff campus) would msg me with a Hotter/Colder message or just encourage me to keep at it.
My Unix stories aren't as cheerful. Like when I brought down the only RS/6000 box for two weeks, because I redirected the entire manual into the email system. Day and night instructors made a point of informing every class who was responsible.
Proud to have compromised an 11/780 as an 18yo intern, by writing a trojan that perfectly emulated the VMS login service, leaving it running on an admin terminal after inventing some excuse to be in their office and to log in on a terminal there. Somewhere I still have the printout of sys$uaf with my account having "Change mode to kernel privilege".
Then I had to learn how to write memory resident keyloggers, and even that didn't help us catch the sysop password.
What did eventually help us cause a lot of trouble (mistakenly tearing down the entire school's Novell network and forcing them to rebuild from backups) was sneaking up behind the sysop while he was logging in and looking over his shoulder. I would have preferred one of the other solutions.
In return, the sysop used a similar technique to catch the guilty ones, by putting a PC speaker buzzer in his own login script and patiently waiting next door until we logged in to see the result.
Now you go to jail... I miss the old days, it was an innocent time. When I was in high school I worked as a telemarketer and we had old vt100 terminals and a classic SunOS system. The root password was easy to guess (root). I would subtly alter people's names and the scripts..... It went on for a year and was always entertaining to listen to the reactions.
DEC was incredibly innovative, and not just in hardware. No mention in the article of DECnet, which was the networking suite for the VMS operating system. [1] It was my first intro to inter-process communications. The libraries were easily accessible from VAX FORTRAN, another awesome DEC product.
Besides enabling processes on VAX/VMS hosts to share data easily, DECnet could also interoperate with networking on IBM operating systems like MVS. At one point DEC apparently could network IBM computers--which had wildly different architectures--better than IBM itself.
A few years ago, I met a former DEC guy. I told him the story of how Cisco Systems got its start by running bootleg versions of the TOPS-10 operating system on its very first routers. Even with the overhead of emulation, Unix of that time simply didn't have what it took to do the job under those hardware constraints. Later, Cisco came clean to DEC.
The guy hadn't heard that story, but he did remember that DEC got a really sweet deal on Cisco hardware.
A lot of ideas from DEC operating systems are due for resurrection.
> Cisco Systems got its start by running bootleg versions of the TOPS-10
> operating system on its very first routers
AFAIK, TOPS-10 only ran on 36-bit PDP-10 hardware. And the only PDP-10 clones I know of were made by Foonly and some even more obscure companies, all of which folded pretty quickly in the 80s.
I can't imagine Cisco doing a PDP-10 clone, or even emulating one. The networking software on TOPS-10 wasn't much to write home about, either.
The previous post made me want to go digging, and I ended up finding this gem[0].
Looks like you're both kind of right: Stanford wrote the code for their first routers to run on top of DEC/TOPS machines, and then the code was lifted from Stanford by Cisco.
Cisco got its start by pirating a competitor’s operating system?
The same thing happened decades later, when Huawei pirated Cisco’s operating system for their own routers, when they first started out.
They eventually settled out of court, and Huawei paid up for their malfeasance, so Cisco no longer complains about it, at least not publicly.
But the irony here is unreal.
Cisco here is claiming to be holier-than-thou, but in actuality, they did the same thing that Huawei did.
Also, I heard Huawei travelled into the future, and stole Cisco’s 5G technology. Otherwise, how else could Huawei have come up with their own cutting edge telecommunications technology on their own?
Sun should really have taken that market. People were already using workstations as routers. They were perfectly positioned to get into it. Scott McNealy called it one of their biggest failures.
Their slogan was literally 'The Network is the Computer' but they failed to go into routers.
Session management is one. In Unix, programs like GNU screen or tmux are needed for that. In most DEC operating systems, it was built into the system.
Another feature of the Tenex line of operating systems (and others of that era) was that the command shell also served as a 'system monitor,' like some of the 8-bit computers had. In other words, the CLI was also a rudimentary debugger.
An example of this was explained by Gerald Sussman. He told a story of how Marvin Minsky walked a student through making a full-featured program at the command line on an ITS system at the MIT AI Lab (or was it the Media Lab?).
Minsky simply made a program that consisted of nothing but the assembly language RETURN statement. Running that program brought the system back to the shell. Minsky then added a few more instructions to that program; not much, but enough to keep it in a working state. Then, Minsky—from the command line—examined the contents of certain registers, and incorporated them into the next few assembly language instructions which he added to the program.
At each stage of that loop, Minsky always had a working program. So, from the command line, one could develop an assembly language program in an environment that was more dynamic and responsive than that offered by many modern development environments for modern languages.
VMS also had automatic file versioning. It was pretty simplistic as I recall, like every time you saved a file, it got a .1, .2, .3, etc added to the file name. I don't recall how many versions it kept.
Files-11 was the file system that supported versioning. There was a variable in each directory that specified how many versions of a file to keep.
Files were named like
login.com;1
login.com;2
etc... where login.com was your login batch file to set prompt, etc.
My daughter's favorite sweatshirt is a bit of corporate swag from Data General, which she stole from me, which I myself stole from my father who worked there beginning in the 70s and then also for EMC another decade after they acquired DG. You want a tragic tale, Data General is a tragic tale.
I worked at DG for about 13 years. (Mostly product management on the computer systems side.) On my list of things to do is to scan the "A year in development" book that was put together in about 1985 or so. I left shortly after the EMC acquisition.
DEC had such a great branding and design language. They did merch like no other! So many different things from keychains to jewelry boxes and giant "d i g i t a l" name plates. You can still find old swag on Ebay.
(Disclaimer: I helped with this podcast episode and was interviewed for it but it wasn't used because we were able to get enough of the people with first hand experience; I joined a few years later.)
Re-read it a couple years ago after reading it when it first came out. It is a great narrative of the engineering process. Even now, as I approach retirement, the idea of working long, hard hours on something that you truly believe in is made so appealing by the “kids”. The opportunities to do that are rare, you need to grab them when you can.
Funny DEC trivia: their "personal computer" ran a variant of RSX-11 called P/OS. The joke on campus was, "It runs POS, and it is." It had some ungodly co-processor to run DOS or CP/M or something too. Mostly I've pushed it out of my memory.
Second, the book "Computer Engineering: A DEC View of System Design" is a treasure trove of information about early DEC machines and computer architecture in general. I have a copy that I got Gordon Bell to autograph at a Computer History event. If you're interested in system architecture I can easily recommend it.
Finally, Ken Olsen's inability to see how "personal" computers would change the world is pretty legendary. It is right up there with the folks who believed it would never be possible for someone to make a business out of flying people places.
The Alpha was such a fantastic chip. I thought it would take off since you could get them in an AT form factor system and Windows was ported over. They were so much cheaper than any Sparc or MIPS.
> The Alpha was such a fantastic chip. I thought it would take off since you could get them in an AT form factor system
At that time period it failed because the design wasn't sufficiently open to allow manufacturing of motherboards by ten different Taiwanese motherboard makers. The whole x86 AT form factor PC component ecosystem was there to support it, except for the motherboards, which were treated as a proprietary secret sauce.
The cost also made it prohibitive to attract a large base of independent software developers. At the time the first Alphas came out, you could buy or build a pretty nice Pentium-based desktop for around $2000 to $2500, when the Alpha desktops cost $4000+. And this was in 1994-1996 pre-inflation US dollars. Ultimately it had the same problem as the NeXT workstation and cube: very few ordinary people could actually afford to buy one.
Way back in the day when I worked at Microsoft I had a DEC Alpha, MIPS and X86 computer each running Windows. I miss the Alpha but I don't miss having to support so many processors for the same platform.
I worked on the System Management Server team. We had to support all of the platforms. I also worked on trying to get the product to compile for the PPC as they were working on NT for the PPC. They never got linking MFC as a DLL to work. They put the brakes on PPC not long after.
The problem related to function addresses being put in a table: when the functions were from the DLL and the call was from the EXE, it would wind up with a register set to execute functions in the EXE and not the DLL.
It was just after this that they were trying to get their server products compiled (SMS included) for PPC. But the tools were not ready for everything. They put on the brakes not long after I pointed out the problem with the register not being set properly for function calls using pointers into the DLL from the EXE.
It lived on in a much larger way than that by becoming, in a certain sense, the AMD Opteron. There's been a travelling team of hotshot CPU architects who move around the industry together and they gave us the later (i.e. the ones that were actually good) DEC Alpha, the AMD K8 "Opteron", the Apple A-series CPUs that we're all fawning over right now, and the AMD Zen series.
I remember keeping my eye out for the PASemi IPO, based on the track record of that team, and then Apple went and ruined it by buying the company to acqui-hire that team for their A-series CPUs.
I don't understand how this theory about Alpha and Sunway originated. Even if numerology suggested the Sunway chips were Alpha-derived, Dongarra[1] refutes it. TaihuLight is memory-constrained. The Alphas we used had enough cache to hold a crystallography image, and more-or-less a whole dataset in main memory. If people hadn't insisted on using PDP-11 software designs, visitors would have been even more impressed how fast they were.
1. https://performance.netlib.org/utk/people/JackDongarra/PAPER...
How does he know? Was Dongarra completely misled, and how come it appears nothing like the Alphas I used, which certainly didn't just have a small scratchpad? Dongarra must have been pretty familiar with Alphas, of course.
Hmm... so things started going downhill in 1990. What was happening around that time? Intel/AMD/Cyrix were churning out commoditized x86 CPUs. The 486 was the hot chip around then, and it was quite impressive. Meanwhile, HP was horning in on their business with PA-RISC, and HP was still a massively respected company at that point. Their workstations were rock-solid.
With all of these rising problems, DEC still had 120k employees... They probably should have specialized, laid off some product groups. Instead, they made a search engine, which was an unprofitable business back then.
Actually, it's strange that they made a search engine, because search engines show the power of the "everyone has a crappy computer, but they're all networked" model, which is the opposite of DEC's "one powerful computer for every N employees, who have to share" model.
I seem to recall the entire index was on one system and it showed off how fast and powerful a single Alpha system could be. There were still many markets in the early 90s that were highly constrained by chip and system performance.
The guy who coded Alta Vista gave a talk at Stanford that I was able to sit in on. The most interesting story was when a person contacted him and begged him to remove something from the index because his boss was going to see it the next morning. He wrote a filter just to take care of this guy. It was fun stuff back then.
Well I don’t know, Google is in the business of filtering track titles it doesn’t like on Frank Zappa albums so I guess people are still out there writing filters
Yes, cheap x86s were clearly coming for all the workstation vendors, but most people in that world couldn’t really believe it in their gut. Since the article mentions Christensen, I’ll say that those workstation vendors, most significantly Sun, faced the classic Innovator’s Dilemma-described failure: doing their best shortly before they died.
While search engines do exhibit the benefit of highly parallel divide and conquer, Google was really the first to take it to the limit. That was Sequoia’s motivation for investing. I know because I also had a divide-and-conquer startup around then and Sequoia was quite interested in the approach.
> I’ll say that those workstation vendors, most significantly Sun, faced the classic Innovator’s Dilemma-described failure: doing their best shortly before they died.
Sun died in part because they didn't negotiate the dot-com crash. Many startups following it wanted to buy Sun servers with Intel CPUs; they were of high design and build quality, but unless you could put your purchases on credit cards, or were buying an enterprise system's worth of hardware, Sun the company wouldn't give you the time of day. They insisted you use their 3rd party channel including VARs, which also usually wouldn't give you the time of day. So those startups bought kit from Dell, which actually wanted to sell you stuff, got used to dealing with its quirks and failures, and by the time they would be buying enterprise quantities of computers, were on balance happy with buying more of their cheaper Dells.
They also made a big enterprise push, and threw it all away when their failure to check results from off-CPU cache memory intersected with IBM's chip radiation problem. Instead of owning the problem and offering a solution like the IBM of old, they first blamed their customers, then made them sign NDAs before Sun would try to fix the problem. In other words, they proved they culturally were still a flaky workstation and superminicomputer company, which was not what the enterprise market then was looking for. Nor those who still buy IBM mainframes, z and i (AS/400) series. One of my banks uses the latter as a hosted solution; it's rock solid.
It was obvious to me in 1990 that the Intel 486 was the death knell of the minicomputer companies. My 486 PC could run UNIX, was 32-bit, was on par with the MIPS-based DECstations (also faster than VAXstations), and yet was an order of magnitude cheaper. It did take a while though: a decade later people were still using Sun workstations because the high-end EDA tools were not yet ported to Windows (well, 1990 was before even Windows 3.1).
About that time, Focus, the DG users-group magazine, mentioned a sorting benchmark on which a 386 PC had outdone, or nearly outdone, an MV/20000, which was the second-fastest minicomputer that Data General ever produced. Back then, 386 PCs were not cheap, but yes, they were an order of magnitude cheaper than just about any mini.
Even NeXT ported their OS to the 486. The legacy of that is likely why they were able to fairly easily transition MacOS from PPC to Intel years later. I had heard rumors that every version of Mac OS X had an Intel build secretly running at Apple.
> I had heard rumors that every version of Mac OS X had an Intel build secretly running at Apple.
Hardly a rumour, Steve Jobs said it at the announcement of the switch to Intel. "Mac OS X has been leading a secret double life", and then showed on a map at which Apple office they'd been working on it.
I had friends working on tools for Apple (gcc, gdb, etc) who had no idea. They got bug reports from this team that had been washed of apple-specific info and posted publicly on external sites.
> While search engines do exhibit the benefit of highly parallel divide and conquer, Google was really the first to take it to the limit.
Inktomi were doing it first - since their inception, and they started long before Google. Their founder, Eric Brewer, also came up with the CAP theorem.
By modern standards, the AltaVista search engine was practically “one powerful computer for everyone”. IIRC, at the height of its popularity it was running on only a few dozen Alpha servers. The Internet was very much smaller back then.
Apropos of nothing, I had a conversation with a taxi driver in Boston circa 2014; he said he drove Ken Olsen (CEO of DEC) to the airport a few times. In his words, "he was very intelligent, but not very smart."
In a much earlier encounter we had a couple of DEC engineers visit our small-to-medium-ish company in Canada to try to sell us on their technology. They basically defined the word supercilious, and we went with a different vendor.
Two explanations, both in the article near the end.
The first is the standard:
> In a Quora thread that asked the question “Why did Digital Equipment Corporation fail?” it was interesting to see so many previous DEC employees and members of the MIT community speak up about what they noted during their tenure there. Almost unanimously, they supported the theory—also commonly held by experts—that the failure of the company ultimately fell to the leaders who were unable to foresee what was coming in personal computing and were not able to take decisive or quick enough action in time to save the company.
The second is the Innovator's Dilemma:
> “Digital Equipment Corp. had microprocessor technology, but its business model could not profitably sell a computer for less than $50,000. The technology trapped in a high-cost business model had no impact on the world, and in fact, the world ultimately killed Digital. But IBM Corp., with the very same processors at its disposal, set up a different business model in Florida that could make money at a $2,000 price point and 20% gross margins—and changed the world.”
The latter seems to fit better with the evidence. The timeline in the article makes it clear that DEC saw where things were headed and tried moving in those directions.
But there's a big difference between little experiments and betting the company on an auto-cannibalization strategy. DEC didn't have the guts to go there.
> But there's a big difference between little experiments and betting the company on an auto-cannibalization strategy.
IBM didn't do "have the guts" to do that either. The IBM PC was a skunkworks project by a team of 12 lead by Don Estridge in Boca Raton Florida. The reasons for its success can be seen, in hindsight, to be a consequence of its lack of internal funding and status within IBM, which had a business model focussed on leasing mainframes to large corporate customers.
IBM didn't have resources to develop custom hardware or to write software for the PC. Estridge's team used off the shelf chips including Intel's 8088 cpu. To make it easy for 3rd party developers to support the system, IBM published the BIOS source code. This had the effect of making it possible for Taiwanese manufacturers to clone the PC and sell cheap copies. DEC had no matching ecosystem. Its Rainbow 100 sold for $4K whilst IBM PC clones sold for $1K.
The killer apps for the first generation of PCs were word processing and spreadsheets. PC's could perform word processing better than minicomputers, and spreadsheets were invented on PC's with VisiCalc and then later, Lotus 1-2-3. Consequently, IBM and the Taiwanese clones, the best known example of which was Dell Computers, were able to sell PC's into corporate accounts where IT managers would allow them to take the burden off mainframe data-processing systems by performing the personal office tasks which mainframes and minis couldn't efficiently do.
> Its Rainbow 100 sold for $4K whilst IBM PC clones sold for $1K.
The Rainbow also required special floppy disks that cost $5 each instead of a buck, with zero difference in performance or capacity. DEC aficionados I knew literally laughed at this and shook their heads, and abandoned DEC.
The floppy disk drives, known as the RX50, accepted proprietary 400 kB single-sided, quad-density 5¼-inch diskettes. Initial versions of the operating systems on the Rainbow did not allow for low-level formatting, requiring users to purchase RX50 media from Digital Equipment Corporation. The high cost of media ($5 per disk) led to accusations of vendor "lock-in" against Digital. However, later versions of MS-DOS and CP/M allowed formatting of diskettes.
Of note was the single motor used to drive both disk drives via a common spindle, which were arranged one on top of the other. That meant that one disk went underneath the first but inserted upside-down. This earned the diskette drive the nickname "toaster". The unusual orientation confused many first-time users, who would complain that the machine would not read the disk.
“But there's a big difference between little experiments and betting the company on an auto-cannibalization strategy. DEC didn't have the guts to go there.”
See also Kodak (invented the digital camera in 1975!), Xerox (We’re a photocopier company! We sell photocopiers!), Microsoft (owned the entire global PC market for damn near 20 years—till Apple overnight redefined what “Personal Computing” meant). All caught out while sitting atop their laurels.
The only thing today’s successful product is good for is funding the development of tomorrow’s—because if you don’t disrupt yourself first then sooner or later your rivals will do it for you, and then it’s already far too late to do anything about it.
Ken Olsen (DEC's co-founder) was quoted saying "There is no reason for any individual to have a computer in his home." He was openly skeptical of the desktop computers and thought of them as "toys" used for playing video games.
It's a shame, because DEC had the engineering discipline to be successful in this market, if only they didn't dismiss it.
In fairness, that quote seems to have been in reference to having a "home computer" that controlled everything in the home which was a popular notion at the time and did not, in fact, come to pass.
That said, DEC and others did not really recognize the shifting of the computing environment to horizontal layers from vertical stacks and the importance of compatibility. Even to the degree they introduced PCs, they didn't understand they played to an entirely different business model than minicomputers. (Source: I worked for Data General, a DEC offshoot, for over a decade.)
> In fairness, that quote seems to have been in reference to having a "home computer" that controlled everything in the home which was a popular notion at the time and did not, in fact, come to pass.
That is, the idea was that you would control dumb devices, including all your appliances, from a single centralized computer. Things like Alexa, Nest, and some light bulbs notwithstanding, it's all still mostly gadget gee-gaws rather than any kind of truly useful home control.
Anyone else sad that money and business took over IT? I feel like most of the commenters here would love to go back to a time when computers were fun to experiment with and were just bound to “lab work and play”.
In part, yes. But at the same time, all of that money and business has spurred research, development and investment which have improved computing technology dramatically.
Today, you can buy an STM32 Nucleo board for under £20 with all sorts of interesting capabilities. Then you have Arduino (ironically both more expensive and less capable than the STM32), then RPi and all the rest. All of these can be fun to experiment with, and they are all pretty open systems.
With Apple's launch of their ARM systems, I do think we're entering a new stage of computing evolution: it's the beginning of the end of the dominance of x86. I do wonder where it will go for the mainstream. It's only a matter of time for ARM desktops and laptops to become more generally available. We've already got systems like the RPi 3 and 4 which are totally usable as desktop replacements. But they are designed to meet a low price point, and lack general PCI-E, fast and reliable storage options etc., which limit their use. It wouldn't take much to make a slightly bigger ARMv8 board with PCI-E/NVME, and sell it as a general purpose system. (I'm sure there are already options for this, it's just not widely available or price competitive.)
Yeah, I hear what you’re saying... but it’s not as fun. And I assume that we would have gotten to these developments eventually, maybe just a little later.
I’m hoping the end of x86 dominance teaches people to not target single platforms. It would be nice to see a POSIX but for hardware (even a HAL)
Kinda sorta. The way people, even in this thread, talk about working at some of these pioneering companies sounds really engaging, which is something I'd kill for. Half of the stories make it sound like the problems they were working on were, if nothing else, technically interesting, and that they had genuinely smart people to help guide them.
Totally, money and business have taken over IT. Computers are still fun to experiment with, however; it's just that people now have a choice between experimenting and playing games, and most people choose games.
I worked on VAX/VMS about 19 years back. Their documentation was the best. There was a mailing list of VAX users, one of the best I have ever been a part of. The people were extremely helpful to a newbie like me. Part of the old internet that is dead now.
DEC made some critical mistakes when they produced the Rainbow microcomputer.
You couldn't format diskettes yourself. You had to buy pre-formatted diskettes from DEC.
And the only OS they supported was CP/M. They did not support DOS on the Rainbow.
Ironically, IBM made the same mistake with the PS/2, a machine with proprietary parts that could not be copied like the parts of a traditional 8086/8088-based PC.
"I will walk through your walls" is somehow incredibly intimidating despite being a totally nonsensical threat. Great book, but they didn't make it seem like a lot of fun to work on NT.
I think another problem for the minicomputer companies is that they were doing everything themselves: CPU design, board design & manufacturing, system design & manufacturing, OS, compilers, and other required software, sales, support, etc.
With microcomputers, Microsoft focused on the OS, Intel on the CPU, and a gaggle of other companies did board design, system design, sales and support.
I worked at Prime Computer, a competitor of DEC. Prime was a very innovative company too. But like DEC, they had built their business selling largeish minicomputer systems to largeish customers. Prime tried to make a small Unix tower system, a PC, a port of their OS to Intel CPUs, etc., but they had to keep supporting their existing proprietary customer base at the same time. And when making new products like a Prime PC, the product had to interface well with the proprietary systems or existing customers wouldn't be interested. That meant more engineering work and a higher price tag, and the higher price put customers off anyway.
It's tough to build a huge business, continue to support that, and at the same time start a completely new business. Company cultures usually can't change fast enough to make the transition.
> I think another problem for the minicomputer companies is that they were doing everything themselves: CPU design, board design & manufacturing, system design & manufacturing, OS, compilers, and other required software, sales, support, etc.
Aside from manufacture maybe, isn't Apple doing exactly all that? And not only succeeding, but dragging everyone else along with them; M1 Macs seem to be getting better third-party support at a faster rate in mere months than other attempts at ARM desktops/laptops did in years, like Microsoft/Windows.
The similarity of CP/M's behavior to DEC's operating systems is pretty obvious, which is why I was a bit bemused by claims that DOS copied CP/M. You can still see that DEC heritage in the Windows command prompt.
I bought an H-11, the Heathkit version of the PDP-11. It was the microcomputer that might have been, far superior to anything else. All DEC had to do was market it properly.
>> Still looking for a cheap MicroVAX or a PDP-11/93.
LSI 11/73?
Hopefully the SoL has arrived: I bought my first uVAX II motherboard, a KA630, from a Field Service Engineer I played lax with, for $500. Pulled out the half-width PDP board, made a few console changes, and voilà, VMS magic. In between was the interminable tape drive, but that messes up the story.
I left for Maynard/Nassau a few months later, for the 6 best/worst years of my life. Watching us piss away the workstation market to salvage big iron was the worst, but as GB much later pointed out, VMS was the gold and we had no idea how to mine it.
I remember leaving off the initial "I" for insert-mode a few times when writing long TECO edits. Hitting the double escape and realizing in horror that the editor was parsing my assembly language program as editor commands wasn't fun.
That got fixed eventually iirc by keeping your text input in a buffer.
By the way, TECO was a CHARACTER editor, not a line editor!
I dimly recall one was supposed to figure out what TECO would do when one typed one's name in (followed by the double-escape). Never did figure out what mine would do - was too afraid to try.
In my early 20s, which was the late 80s, I became friends with the same-age son of the then-CEO of DEC. About all I can say is that the larger extended family of the CEO, and what looked like the entire social set of their C-suite, was completely out of control with wealthy excess, including the CEO himself. Being the 80s, that wealthy excess meant luxury goods, travel, and expensive drugs. I remember his father learning I was a developer and saying he could get me in at a good spot, and my thought was "and join your excess club? no thanks." FWIW, not a single person I knew from back then in that crowd is successful today.
That was not Ken Olsen, I can assure you. It might have been Bob Palmer, but I don't think he was ru(i)nning the company until the early 90s.
Ken was a frugal and deeply religious man. The first time I met him, he was picking up trash in the parking lot outside the Mill, and he offered to hold the door for me because I was carrying a stack of books. Ken Olsen made Digital a great place to work, until he was forced out. The company would not subsidize birth control through its health insurance, but would pay for a couple to have up to 3 IVF procedures, which ran to 5 or 6 figures back in the day.
It's great that the current owners of the digital.com domain are paying tribute to its history.
I cut my teeth on VAXen back in the day and still have many fond memories of them running both VMS and UNIX. But their name is lost entirely on recent generations.
Now my question is, how did they make the site so fast? Checking my Firefox dev tools I'm seeing it's not even being requested on subsequent page views, just the cached version. How'd they do that?
> Now my question is, how did they make the site so fast?
Simple. First, it uses a CDN (Cloudflare). Second, the site doesn't have 20 useless tracking scripts (I only see Google Analytics) or 10 useless pictures; the entire web page is only 446.37 KiB. The website is more than qualified to join the "1 MB Club" (recently discussed on HN, https://news.ycombinator.com/item?id=25151773). Third, all images are lazy-loaded (I see the WordPress plugin a3-lazy-load was used), so heavy images won't be downloaded until you need them. Fourth, it uses an aggressive caching policy:
> Cache-Control: max-age=31536000, public
> Expires: Sat, 04 Dec 2021 17:06:47 GMT
So after the first click, 95% of the static resources will be cached, which significantly reduces the download size. This is good practice (add a version number to the file name so you can always cache aggressively). It's sad to see that digital.com has become a web-hosting marketing site, but at least the site owner did her SEO homework by not creating yet another toxic webpage, so I can consider it a positive (or at least non-negative) contribution to the web.
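For anyone curious what that policy looks like on the serving side, here's a minimal sketch (hypothetical, plain Python standard library; the real site presumably gets the same effect from its WordPress plugins plus Cloudflare) that sets a long max-age on versioned static assets:

    # Minimal sketch: serve static files with an aggressive Cache-Control policy.
    # Hypothetical example only -- not what digital.com actually runs.
    from http.server import SimpleHTTPRequestHandler, HTTPServer

    class CachingHandler(SimpleHTTPRequestHandler):
        def end_headers(self):
            if self.path.startswith("/static/"):
                # Versioned assets (e.g. app.3f2a1c.css) can be cached "forever";
                # a new file name is published whenever the content changes.
                self.send_header("Cache-Control", "max-age=31536000, public")
            else:
                # The HTML itself should be revalidated more often.
                self.send_header("Cache-Control", "max-age=300, public")
            super().end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8000), CachingHandler).serve_forever()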
OK, so the Expires header is set to 15 minutes in the future. I think this is the secret sauce. I've never seen this before, but I plan on trying it out heavily in the future. You don't even need to wait for a 304 Not Modified; there's simply no request at all. So cool!
edit: It seems Expires is a bit older than Cache-Control, so not as relevant as I thought. I'm also not seeing the 365-day Cache-Control header you listed; what I see is anywhere from 5 minutes to 30 minutes.
It's just weird to see a browser totally trust a cache and not even verify it with an ETag or otherwise. I'll probably have to do a lot of trial and error to get this to happen, but I really like it, because it feels so fast.
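If you want to see which policy a given resource actually carries, a quick sketch (assuming the third-party `requests` package is installed; the URL is just a placeholder) is to dump the relevant response headers. With a long max-age the browser serves the asset straight from cache, so there's no request and no 304/ETag round trip to observe:

    # Dump the caching-related headers of a resource (placeholder URL).
    import requests

    resp = requests.get("https://example.com/static/app.css")
    for name in ("Cache-Control", "Expires", "ETag", "Last-Modified", "Age"):
        print(f"{name}: {resp.headers.get(name, '(not set)')}")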
For me the point at which DEC screwed up was in not embracing Unix+, around 1985. If they had done so, there wouldn't have been a time when everyone had to buy Sun hardware to run Unix. There would instead have been a choice, and many DEC shops would never have converted to Sun. Commentators saying, as the article does, that it was all about the rise of the PC are wrong. PCs were useless toys at that time. Of course DEC may have gone down later anyway, just as Sun did, in the x86-64 takeover.
+I know: Ultrix, blah blah... but seriously, no, Ultrix wasn't embracing it.
It goes back to well before 1985. In fact, Ken Olsen explained DEC's Unix "strategy" very clearly back in 1984:
"One of the questions that comes up all the time is: How enthusiastic is our support for UNIX?
Unix was written on our machines and for our machines many years ago. Today, much of UNIX being done is done on our machines. Ten percent of our VAXs are going for UNIX use. UNIX is a simple language, easy to understand, easy to get started with. It's great for students, great for somewhat casual users, and it's great for interchanging programs between different machines.
And so, because of its popularity in these markets, we support it. We have good UNIX on VAX and good UNIX on PDP-11s.
It is our belief, however, that serious professional users will run out of things they can do with UNIX. They'll want a real system and will end up doing VMS when they get to be serious about programming.
With UNIX, if you're looking for something, you can easily and quickly check that small manual and find out that it's not there. With VMS, no matter what you look for -- it's literally a five-foot shelf of documentation -- if you look long enough it's there. That's the difference -- the beauty of UNIX is it's simple; and the beauty of VMS is that it's all there."
-- Ken Olsen, president of DEC, DECWORLD Vol. 8 No. 5, 1984
And I can, in fact, verify what Olsen said about five-foot shelves of manuals, too.
And VMS was (and OpenVMS still is) an impressive operating system, some features of which the Unix/Linux ecosystem is just gaining parity with.
That said, it was commodity microcomputer hardware that doomed DEC, not its faint-hearted support for Unix, IMHO.
I worked in a VAX shop early on. We had an intern who was pushing Linux on us. He said to me, "Sure, VMS is great, but the downside is you need a wall of manuals." My reply was, "Downside? That's the UPSIDE! You can get an entire wall of manuals for it!"
Old VMS programmer here. In addition to the above, the thing I remember is that there was more of everything in VMS, and it was all more complicated.
Unix has files which are just strings of bytes. VMS had those, but also several (or many) other file types with records and indices kind of like a database, and a library of functions for each different file type.
Unix has processes. VMS had processes and jobs -- a job is a group of processes; when you log in you start a job, and all the subprocesses you start in that login session belong to that job, etc. So there are functions for dealing with jobs as well as processes.
Unix has environment variables. VMS had logical names which could be used in a similar way, but there was a process logical name table, a job logical name table, and a system logical name table, and functions for dealing with all of them.
And on and on in this style. Whether you see this as an advantage or disadvantage depends on whether you have an application that can use some of these features, so you don't have to write them yourself as you might have to in Unix.
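As a rough analogy for the logical-name tables mentioned above, here is a sketch in Python (the table contents are made up, and this is not how VMS actually implements it): the difference is between one flat environment and a lookup that walks process, job, and system tables in order.

    import os

    # Unix: one flat namespace.
    data_dir = os.environ.get("DATA_DIR")

    # VMS-style logical names (rough analogy): several tables, searched in order.
    process_table = {"DATA_DIR": "/work/alice/data"}
    job_table     = {"SCRATCH":  "/work/alice/tmp"}
    system_table  = {"DATA_DIR": "/srv/data", "SCRATCH": "/tmp"}

    def translate(name, tables=(process_table, job_table, system_table)):
        """Return the first translation found: process, then job, then system."""
        for table in tables:
            if name in table:
                return table[name]
        return None

    print(translate("DATA_DIR"))  # /work/alice/data -- the process table wins
    print(translate("SCRATCH"))   # /work/alice/tmp  -- falls through to the job table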
In addition to that, as the link above explains, VMS had facilities that early Unixes just didn't have, such as clustering - you could connect several VAXes to a common shared file system and share the computational load. There was also DECnet, a complete networking stack, a parallel universe to TCP/IP (which was not so pervasive in the 1970s and 80s). Each of these contributed a couple of thick binders to that wall of documentation.
The VMS manual set was delivered on a pallet! It was bigger and heavier than most of the later VAX computers.
- At the time Olsen made that statement VMS had shareable libraries (DLLs), and none of the several Unixes I used did. The system administrator had to manually enable sharing of their code pages between processes if you wanted that, but they were there.
- The indexed files mentioned above were called ISAM. There were also utilities to reindex them, and load and save them to flat files with fixed column sizes; since that was the common way data was held in those days, it would be like having automatic conversion to and from JSON, XML and CSV now. At extra cost, there was a SQL-like query language, just different enough from IBM's SQL language not to get sued.
- The compilers were far ahead of Unix's. The FORTRAN77 had a lot of IBM extensions, so scientists could just compile their existing programs. It also had keyword extensions to work with the ISAM files. Later on, a newer and better Pascal came out, as well as an Ada.
- Terminal handling was handled by the kernel, in contrast to Unix where applications went off and read /etc/termcap and the user had to babysit eval tset in their dotfiles. If a command line program was nonresponsive, the input was queued instead of being half-duplex interleaved with program output as Unix still does it.
- The symbolic debugger was far ahead of Unix's adb.
My favorite feature of logical names was that they didn't just point to directories -- they could be a search path!
To test new code, all I had to do was create a directory to hold my changes and copy in just the files I wanted to update. Then I'd set the logical name aliasing the code location to search first in my "branch" directory, then in the main directory. Any code found in the top-level search location takes precedence over the main directory.
Boom! Instant branches, but supported at the OS level.
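A minimal sketch of the same idea (hypothetical paths, and in Python rather than at the OS level where VMS actually does it):

    from pathlib import Path

    # Search list: the "branch" directory is consulted first, then the main tree.
    SEARCH_PATH = [Path("/work/mybranch"), Path("/project/main")]

    def resolve(relative_name):
        """Return the first match, so files copied into the branch directory
        shadow the originals in the main directory."""
        for root in SEARCH_PATH:
            candidate = root / relative_name
            if candidate.exists():
                return candidate
        raise FileNotFoundError(relative_name)

    # Only the files you changed need to exist in /work/mybranch;
    # everything else falls through to /project/main.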
The Living Computer Museum has a VAX running the original VMS, accessible over the internet. Yeah, the museum is closed, but the VAX is running; I've just logged in to verify.
Jesus Christ, having used VMS and SunOS back in the day... this is some serious delusion. From a user/programmer perspective, in my experience VMS just plain sucked compared to SunOS. Probably this was down to the shell. The VMS shell just SUCKED.
My first exposure to non-personal computers was via VMS at the university. It was exciting. But then a few months later I got access to a SunOS lab (and shortly after, AIX and HP-UX labs). Mind blown! Instantly in my mind VMS became just a toy, UNIX is where the hacker ethic lived for real. Thirty years later, still on UNIX (BSDs and Linux now). VMS? Yeah..
They had real UNIX System V R2 (at least - R3, maybe?) deployed by 1990 at the latest. GTE Mobilnet had their usage tracking running on whatever the biggest VAX was at the time. I had to fly to Bothell in August 1990 to diagnose a problem with Informix not playing nicely with DEC's bleeding-edge disk array controllers.
Ultrix was lacking compared to SunOS. It only supported static binaries and needed more disk space as a result. Licensing was weird. Open source stuff didn't always "just work" and compile out of the box like it did on Suns. In the early 90's through the dot-com crash, everyone wanted Suns. Still, DEC did some awesome stuff... The Alpha was amazing.
I worked at a mostly DEC shop for a while. They had a lot of VMS systems, but when it came to Unix, they went with Sun.
I spent many years using PDP-11s. I remember DEC's attempt at a microcomputer, the LSI-11. It was twice the price of a PC, half as fast, and the size of a dishwasher. If you've never heard of it, it's no wonder.
We had one of those - a PDP11/23 - it ran our cyclotron. Handled about a thousand inputs and outputs, using RSX, a DEC real time OS. Ran almost continuously for about 30 years. I say again, it ran our cyclotron almost continuously for about 30 years. Outlasted DEC itself - eventually HP (which took over DEC's customers) told us they wouldn't renew the service contract. We kept it going with spare parts on hand. Eventually we replaced it with about a dozen x86 boxes running Linux.
Yes, with the 8" floppy drives, and a VT-52 clone. I gave it away to a father to give to his kid. A couple years later, I realized my mistake and asked to buy it back, but he'd thrown it in the trash :-(
I still have that Heathkit terminal, but I haven't turned it on in 37 years or so, though it was in perfect working order at the time.
I soldered all three kits together, and it all worked perfectly the first time I turned it on. Those were great kits.
I also had a very expensive Diablo 630 daisywheel printer to go with it, built like a tank. But I couldn't give it away, and finally it went to the dump around 10 years ago.
I did keep my original IBM PC, but the chips went bad in it while in storage and it won't power up anymore.
Most of the volume of the dishwasher-sized 11/23 was the two 14" removable hard disk packs - much higher capacity than the floppy disks of the day.
I recall the circuit board with the LSI-11 on it was quite a bit smaller than an IBM PC motherboard. Most of the volume of the packaged LSI-11 systems was I/O and disk storage - there were several models of different sizes, I recall the 11/23 was the largest.
The 11/23 was not a consumer-grade product. I don't think it was intended to compete with PCs.
I was thinking of the 11/03, not the 11/23. And the disks were in a separate enclosure; most of the dishwasher-sized box was empty air. Yes, the boards in the LSI series were much smaller than the equivalent PDP boards - they had a name for the form factor, but I can't remember what it was.
Of course I may be misremembering things, this was 40 years ago.
Curiosity got the better of me, and I decided to look it up on Wikipedia. The board form factor itself didn't have a name, but it was tied to the bus that was used. The PDP-11s used the Unibus, and the LSI-11 and its variants used the Q-bus.
They also have a picture of the 11/03, and it's indeed much smaller than I remember. Perhaps I was thinking of the 11/23.
The article had good chronological data in the form of DEC's history, but its analysis of the epic fall falls short of a well-researched article. However, the subsequent HN discussion is just awesome.