The Untimely Demise of Workstations (deprogrammaticaipsum.com)
146 points by ingve 29 days ago | 197 comments



I work in the workstation division of one of the companies listed in the article and the market for workstations is not going anywhere for a long time.

It's not that workstations died, it's that they look different and solve a different problem. Anyone can build a computer with off-the-shelf parts that has the absolute maximum specs that any vendor can produce. Anytime a new workstation makes the news (see Apple's latest workstation), the "PC Master race" gang is quick to point out that they can build the same system without the Apple/HP/Dell/Lenovo tax. What they somehow always forget is that, if I'm an ITDM and I need 100 or even 1000 systems and they need to be configured, validated, and ready to deploy from day one, custom-built computers aren't feasible in any sense of the word. The value add from workstation companies is a mix of scale, availability, validation, and uniformity.


On the other hand, the "PC Master Race" gang is a home/enthusiast demographic and from their perspective, they're probably correct.

They don't need that value add from workstation companies. And heck, they probably welcome excuses to tinker on their workstations.

And this is great. It keeps the open, buildable computer market going -- contrary to the alarming trend of locked-down computing devices.


I build my own desktops (which are workstations in all but name) for work because I want to know exactly what goes into them and make sure they're quality parts that are widely available, without vendor-specific motherboards and other stuff.

It's never bitten me; worst case, I'd have to next-working-day a part from Amazon.

For development workloads you simply can't beat that approach.

Recent example: unit tests on my work-issued MacBook Pro take 2 minutes; the same tests on my PC take 39s.

There simply isn't a laptop that fits my workloads better than a modern Ryzen with a crap-tonne of RAM


Dell can sell you a workstation with dual CPUs, dual GPUs, 3TB of ECC RAM, and a combination of up to 8 NVMe disks or 10 SATA/SAS disks. And the entire thing (including disks) can be covered by an onsite warranty that will send a technician, with parts, to your location within 4 hours, 24/7, for up to 7 years.

You cannot get those specs, or anything near that warranty, from commodity desktop hardware.


In some areas you could leverage "Amazon Prime Now" for 1-2 hour delivery on replacement computer parts

I did that back in 2016 when I needed to upgrade my GPU to play a new game after getting off work


You still have to pay for replacement parts yourself, though, which is not how warranties work.


Yes, but does it matter if the entire computer is 2x as expensive?

Anecdote: in Japan, SanDisk sells "genuine" SD cards at extremely high prices (about 3x-7x the US price). Importing SD cards from the US (or buying from a local importer, which is common) makes sense even though they come with no warranty.


Things are a bit different if you are running IT for a large organization instead of buying your own computer though.

Generally speaking, Amazon is not going to have 1000 units of a specific computer part available in a 2-hour delivery window. You could order different parts, but then you have an increasing number of variations of setups, and you don't want to be fixing lots of small unrelated problems rather than a widespread issue which has the same fix every time.

And for corporations it is better to have the cost paid for upfront; unexpected budget items are expensive and hard to get approved.


Fully agree. That's why even SanDisk can sell products at such prices.


This. It was buy-a-new-iMac time for me earlier this year, and I ended up with a Ryzen custom build.

The crate itself (a 3700X w/ 64GB of RAM, 1TiB NVMe, GTX 1660) didn't cost much more than the 64GB of RAM for the iMac would have cost on its own...


We do the same at our small company. We recently gave all the developers a budget to use to build a custom workstation. The only requirement was being able to do the work we need to do effectively. We also provided a few template configurations with Ryzens for those who didn't want to have to think about this. But many of us really enjoyed the opportunity to create a custom configuration using whatever internals we preferred, as well as monitors, keyboards, mice, headphones, etc. that fit our preferences.


Which Ryzen? What are the other specs?


2700X, 64GB DDR4-3200, RTX 2080, 2TB NVMe storage - I'm due a new one, thinking 5950X next time.


I have so many questions, please ignore if this is overwhelming.

But do you know how much of a cost savings that is ballpark?

Also do you have a guide that you follow? I'd like to replace my Macbook Pro but I don't really know where to start.

Also, is there a resource for a noob to run unit tests, to see if my performance is better when I'm done?


That PC is old now but its modern equivalent would run around £2K, give or take.

That would get you a Ryzen 3 which would be substantially faster than mine, and an RTX 3070 which would crush my 2080.

With two 27” 4K screens my PC came out at about the same price as a MacBook Pro, but it is much faster on the workloads I care about, and as nice as the MacBook Pro screen is, 2x 4K is better.

Bit less portable though.

Software-wise, Fedora is as stable as OSX and has everything I need (I've been on Linux as a primary dev platform since the turn of the millennium), and in fact things have never been better: pretty much everything most devs need in 2020 supports Linux (Xcode is an exception).

Gnome will feel most like OSX but I prefer Cinnamon.

Prices are substantially cheaper in the US.


The "PC Master Race" Community is booming, more people than ever are building their own rigs, customizing. The market is very very hot for hardware and the latest graphics cards. I can only see this continuing. Workstations/Desktops are continuing onward.

This is a different market though, Dell/HP/Lenovo are mostly targeted toward businesses or the common consumer just looking to get a laptop for school.


> Anytime a new workstation makes the news (see Apple's latest workstation).

If you were talking about workstations from the usual suspects (HPE, Dell) I'd agree, but Apple really do put a fantastic markup on their kit.

There are also a lot of people buying those Apple Workstations where they only need one or two of them and someone like Puget could build something faster for much less.


Surprisingly, the margin isn't as insane as you would think. Linus Tech Tips actually did a video comparing a homebrew Mac Pro killer vs the Mac Pro and they came out surprisingly close. Ref: https://www.youtube.com/watch?v=l_IHSRPVqwQ

Once you factor in the cost of stuff like support and hardware validation it becomes pretty much a moot point. That's without even considering that you would need to hire a supply chain expert(s) to acquire large quantities of parts if you needed anything more than a few machines. At my work, we have whole departments full of people dedicated to making sure we have the right mix of hardware at the right time to fulfill customer needs.

There are definitely some configurations that really don't make sense (the lowest end config comes to mind) but, at the same time, if you run a business with a team of people training on Macs, the amount of money it would cost in training and lost productivity to switch over to Windows for possibly lower prices makes even less sense.

I'm a custom PC and linux guy myself but this seemed like a good time to remind everyone that there's more to computers than just the cost of making a single machine.


You are so right. At the moment it's 14k for a 28-core 2.5GHz Xeon W with 32GB of RAM and 2TB of SSD storage. That's 10,000 over an equivalent machine, a 250% markup.


Insane. For that price you could build a monster Threadripper box.


I won't say it's insane, but the lack of Threadripper or EPYC options is now a serious lineup defect. I'm excited to see how Apple makes a competitive chip.


> the "PC Master race" gang is quick to point out that they can build the same system without the Apple/HP/Dell/Lenovo tax

They also pay for their workstations with post-tax money, vs. a business that can write off workstations as business expenses.

This suddenly makes the Mac Pro pricing a little more obvious.


That said, not all companies are that big. I did support at an engineering firm that built all its own machines for about 400 users.

It was kinda nice, we could replace any part ourselves same day. As parts got older the machines were reconfigured for people who needed less power (like HR).

That said, it wasn't all rosy there. The ticket system was passing sticky notes between people, and Active Directory and a few other Windows management things were replaced with... some sort of Lotus product? It replaced the login screen.


I did engineering at a firm that built all its own machines for PC-based industrial testing equipment, about 50 per year.

It was totally not worth it for us; it was penny-wise and pound-foolish. We switched to buying some Advantech machines, and while the BOM cost was one $1200 line item compared to a long DIY BOM off pcpartpicker.com that ran closer to $600, all the engineering time we wasted on component selection and ordering and progress bars and BIOS configs and Windows update and cable ties and debugging reliability issues was much harder to quantify and probably significantly more than $600.

I think there's a few inflection points on the quantity/process value curve - Building 4 machines? That's a little one-day project for somebody. Building 400? Hire a technician, set up an assembly station, and develop some work instructions. Building 40? That's going to have one or two that need warranty work, and you're not going to recoup the investment required to develop a good process - just buy them from someone who has. Building 4000? Your process is now multiple technicians, an engineer, a purchasing agent, and some management, and support needs after-hours on-call people, and you need an inventory of spare parts...developing that capability is again more expensive than just buying it. Building 40,000? At that level, you're building a PC-construction business and you can sell your spare capacity to the 40-unit guys.


> uniformity

In my experience, there is no uniformity. If you buy 10 machines in the same order with the same SKU, you may end up with 10 different combinations of components between them...


Linux killed the Workstation vendors stone dead. Sun seemed to be very aware of the problem and fought really hard, but they couldn't cut off Linux completely.

I went to a seminar by an Open Source hacker at a Unix user group conference in the late 90s, where he talked about the Sun funded work he'd done on getting the Linux kernel to run well on Sun workstations. He was absolutely clear that he thought Linux would kill Sun because it completely undermined their software license business, but that this was their problem. They just had no good options.

I suspect the reason IBM was able to reconcile with Linux and even go all-in on it was that it complemented their lucrative very high end server and mainframe businesses. Their workstation businesses benefited from a halo effect from the super high-end mega-systems that protected it from becoming total road kill, but Sun and the other vendors never really managed to establish themselves at the super high-end, so when Intel ditched Itanium and PCs caught up with RISC they were left without a market.


That and Intel hardware became better than what the workstation vendors had.

Around 1999, SGI Octanes were about 50K; at the time, NT machines (think Intergraph) appeared that were, for many, many purposes, just as capable. They were 10K.

In addition the MIPS chip was considerably worse than a dual proc x86 machine of the same vintage. I was working at a company where there was a C++ API. Compilation time on the SGI was 2 hours. It was 10 minutes on the Intergraph.

When cards like the Nvidia Quadro and FireGL came along they were better than SGI machines and cost a few thousand dollars.

This guy worked for NeXT, who were eaten early because they produced massively overpriced workstations for which there was only a small market. They didn't even survive the workstation market, let alone the coming wave of better hardware from the very large development budgets for mass-market parts.


> That and Intel hardware became better than what the workstation vendors had.

Exactly. As much as I liked the environment, the premium you paid for what was sub-par performance really killed the market. And when SGI ended up selling Intel Windows boxes, the writing was on the wall for them as well.

https://en.wikipedia.org/wiki/SGI_Visual_Workstation

I disagree that NeXT got 'eaten early' though; effectively every Mac you've used since those days was a NeXT workstation marketed under a different brand. NeXT morphed into Apple much more than Apple acquired NeXT.


Without the Apple deal NeXT were toast, just like all the other workstation companies. They just didn't have the resources needed to elevate NS to the level of polish of OSX, or the market power to engage that many developers on their own.


Conversely, without the NeXT deal Apple would have likely been toast. That worked out for the best for everybody involved.


I am not so sure. At the time I thought BeOS was a better purchase. I wonder what could have been if Apple had gone with Gassée instead of Jobs.


Steve knew how powerful and essential networking was to the compute paradigm we accept.

Be's TCP stack was never very good. It never had big-money applications like Frame or Illustrator, and it lacked the robust dev platform (and paying customers) that NeXT had.


BeOS was also a single-user OS, which made it a nonstarter for businesses. It wouldn't fit within corporate IT as it existed at the time. NeXT, however, and Mac OS X based in large part on it were both multi-user OSes which could actually be used by businesses more reasonably.

And I'm using "business" in a generic sense. This was an important quality for schools, as well, where Macs were incredibly popular (for a number of reasons, including favorable pricing from Apple and historical support for schools since the 80s). Single user OSes just don't fit within any organization trying to at least pretend to have decent IT.


I am mystified by your comment.

How has OS X's "multi-userness" helped organizations?

Do you literally mean two or more people using the same Mac?

Has it been useful for organizations that users can ssh into Macs? Or is Apple Remote Desktop involved? (Does Apple Remote Desktop even allow two people to use two instances of OS X's GUI at the same time?)

Or by "multi-user" do you refer to the services that can be enabled using the "Sharing" pane of System Preferences (e.g., file sharing, printer sharing, remote management, internet sharing)?


I mean that BeOS didn’t have a way to login as separate people and use the same computer at different times with a clear separation of access rights. Mac OS X did, like most Unix or Unix-related OSes.

It’s not about simultaneous use, but standard user management, access control, and permissions.

Though yes, technically this also allows multiple people to access it simultaneously via ssh and other things; that's not the important part for IT in this instance.


Thanks. For some reason I couldn't think of that on my own.


At a guess it wouldn't be the most valuable company in the world today.


Maybe BeOS was better, but I would bet money Gassée wouldn't have saved Apple. In the end it wasn't the NeXT OS, hardware, software, or engineers they were buying. It was Steve Jobs.


This entire subthread is silly.

- BeOS was not a great choice, though certain aspects were much better than NeXT, primarily the filesystem. There were a lot of things missing, but BeOS's lack of printer support was the headline issue; it seems ridiculous today, but "desktop publishing" still mattered to Apple at the time, and it spoke to the immaturity of the BeOS graphics stack.

- Aside from WebObjects, the vaunted NeXT software stack was bitrotting. It was buggy and unstable on commodity PC hardware, and it was never polished enough for non-Unix-savvy end users. Apple spent something like six years bringing it up to a consumer-ready spec before they felt comfortable dropping classic Mac OS.

- The original iPod used no NeXT-derived software.

- But without the NeXT purchase, Apple wouldn't have gotten Steve back.

- Without Steve back at Apple, the iPod wouldn't have existed.

- Without the iPod, you don't get the iPhone.


tl;dr: As a result, they bought back Jobs


Oh, I agree. But at the time, I thought BeOS had the edge from a technical standpoint. With Gassée, I doubt we would have seen the iPhone, iPad, and Apple Watch. Hindsight is 20/20.


I would argue that Jobs saved Apple with iPods. NeXT was just a way to be rid of a legacy business that was collapsing in on itself, replacing it with something that did not show a sad Mac face if you looked at it wrong.


That would be a bizarre argument to make given that Nextstep first became MacOS X, and then iOS, the most commercially successful operating system in history.


It did go on to be a success, but at the time Apple was cash-flow negative. iPods and iTunes changed that.


That is true.

This is not: “NeXT was just a way to be rid of a legacy business”

Apple needed an OS, they bought NeXT which was a superb investment given how critical the OS has been.

They also needed cashflow, and more importantly to appeal to consumers and differentiate themselves from Windows PC. The iPod was a successful part of that strategy.

The iPod served its purpose and now is gone. NeXT became the most commercially successful operating system of all time.


They could have continued to develop and sell WebObjects, a Java based application server and framework. It was amazing for what it did at the time and many major companies were using it. They were leaders in that market. Unfortunately, it died on the vine at Apple.

Also, according to Avie Tevanian, Steve Jobs lost interest in NeXT when it shed its hardware business and shifted to a software company primarily selling to the enterprise business community instead of education, its intended market. Jobs focused his attention on Pixar, leaving management to the VPs of sales, engineering, and finance.


That thing was very clearly a hybrid - SGI's graphics stuff glommed forcibly onto NT. I remember trying to get PC code to work on it and basic stuff definitely didn't work. Probably it worked better the other way around (SGI app -> Visual Workstation NT). I never tried that though.


For a practical example of this, see http://next.com


This is really the answer. From about 1990 to 1996, RISC workstations were several times faster than PCs, especially in floating point performance. However, when Intel came out with the Pentium Pro in 1995, it had better integer performance than almost all of the RISC chips and was about even on floating point.

With Windows NT and Linux being viable operating systems for the sorts of applications people used workstations for, it was only a matter of time before workstations died out. The main thing holding back Intel machines by 1996 or so was software support for workstation type applications such as EDA and CAD tools, but this was sorted out within a few years.

The last holdout was SGI, which was used in the visual effects industry, but NVIDIA caught up to them in about 1999 and SGI gave up on its performance leadership, and the applications followed.


It's a little bit funny because once upon a time MIPS CPUs powered workstations.

Today they're mainly used in cheap low-end routers, because the high-end market is occupied by ARM.


NeXT was bought by Apple who took its software, which still lives on as the basis for Objective-C in the NS... (Nextstep) classes.


It's actually NeXT-Sun... the original classes were NX, for NeXT.


I always wanted an SGI, but the price was prohibitive for me as a student. I learnt that those workstations got outdated really quickly, and the cheap PC market gained more and more power competing with SGI until it was eaten totally.


Looking back, it's not like there was anything uniquely fast about x86. They just had more money, so they won.


No, I don't think Linux killed the Unix workstations. The workstation market mostly shifted to x86 + Windows NT.

What Linux + x86 did kill was the entry- and mid-range Unix server market. Redhat grew very rich largely by offering a cheaper platform (x86 + RHEL) to run Oracle RDBMS. And when the web started taking off, Linux + x86 was the default platform (still is). Similarly, the technical computing market (clusters, supercomputers, etc.) more or less completely migrated to x86 + Linux.

IBM sort of survived this carnage by retreating into the high end, and by having a lot of other business (consulting, mainframes, software, etc.) that they could leverage.

Sometimes I wonder if Sun could have survived if they had gone for the x86 commodity hardware route. They already had built the first generation (https://en.wikipedia.org/wiki/Sun386i ) but instead decided to go all-in on SPARC and bespoke hardware.


I can't claim any authority on this topic, because I only witnessed the tail end of it as a clueless junior dev, but one thing I saw at a couple of companies was Linux creating a less risky path to commodity workstation hardware. People with Unix-based computing workflows saw that they could achieve something similar on Linux on commodity hardware, and the hardware was cheap so it wasn't a huge risk to try it out. After they were successfully working with Linux and could see that the commodity hardware was sufficient for their workflow, then it wasn't so radical to cast their eye over to Windows and see what software was available there, maybe be tempted by more polished desktop software and easier IT solutions for businesses. And they could try it out by dual-booting their existing Linux workstations. I'm sure some people made the double jump straight from commercial Unix workstations to Windows, but for people who did not want to take any big risks, Linux broke up the leap into smaller steps.


I think a large factor was that many engineers had a PC on their desk as well for MS Office, email, various corporate apps made in VB (before the web), and so forth.

When x86 & NT became good enough for whatever engineering application they were working with, there was a large incentive to switch that one to NT and consolidate everything on one machine.


Yeah. Windows NT and commodity x86 killed Unix workstations. That's where the software support was. I'm actually not sure to what degree Linux "workstations" became a big deal. You have Linux clients but most of the software ran on back-end servers.


> "Sometimes I wonder if Sun could have survived if they had gone for the x86 commodity hardware route."

They did try again in the late '00s: https://en.wikipedia.org/wiki/Sun_Ultra_series#x86 I always wondered why that didn't work out for them; seemed like a natural pivot.


Sun/Solaris peaked with the dot-com boom. They did put the dot in dot-com, after all. Linux was often seen as too immature, so many startups (at least the ones I am personally familiar with...) went with Sun hardware.

By the late 00's, Linux was much more solid and Sun didn't stand a chance.


Too little too late, fundamentally.

They did a Hail Mary effort with ZFS & dtrace, but in the end few people cared enough to switch.


If you look at the kinds of commercial engineering applications that used to be staples of the workstation market, this is spot on. Even today, it can be hard or impossible to run them in Linux.

- Matlab

- Pro/E

- Solidworks

- Ansys

- Autocad

- Altium

- STK


Can't speak about all those applications, but from my (brief) experience in supporting engineering applications running on the compute cluster of a large car manufacturer in central Europe, I can say that setting those applications up / configuring them for batch processing might not be exactly trivial, but we did succeed eventually (sometimes with the help of the original developer) with all requested ones. Windows is used in this context only on the engineer's workstation -- in the cluster it's Linux only (no other *nix in sight).


Anything that needs serious amounts of compute runs on Linux, because no one runs cluster/batch processing on something other than Linux.


Matlab runs on Linux just fine, AFAIK.


It does, and we use it at work this way without difficulty. But it's an exception. Windows is much more common among our users, even for Matlab.


Sun offered intel based workstations and servers.


> Linux killed the Workstation vendors stone dead

I don't buy this entirely. I'd agree it ate into the market share, yes of course it did. And the blurring of commodity server and desktop hardware ate further.

What Linux didn't replace was accountability: there were plenty of shops which needed an OS supplied and supported by the hardware vendor. It wasn't a technical decision, but a box their customers demanded be ticked as some sort of due diligence. Then Redhat arrived on cheap hardware and ate into that share further, because they checked some of the boxes.

Sparc and Solaris was still technically superior for some things and still had all the throats to choke.

What changed for Sun was Oracle buying them: jacking up prices; buying all the competitors like Sleepycat, TimesTen, and MySQL in order to control them; suing customers; suing security researchers; suing benchmarkers; and charging triple (not an exaggeration) to reinstate lapsed support contracts, etc.

"We're a Sun Shop" was not longer the appeal it once was on the server, and of course nobody would need a desktop Sun with those downsides.


> What changed for Sun was Oracle buying them

SUN were basically forced to sell or face bankruptcy in a year or so. Their market was already gone by then. They were completely uncompetitive against Redhat and commodity x86, and they knew it - hence the big opensource push, which was too little too late, and the overall feel of confusion they were projecting. There was just no reason left to buy SUN for anyone but the most hardcore Solaris geek.

In hindsight, they should have tried to pivot to services, instead of focusing on releasing free software in the hope it would drive hardware sales.


No need to pivot, they invented cloud computing.

https://www.computerweekly.com/news/2240061933/Sun-enables-o...


I'm sure Linux killed off proprietary server OSs. But "workstations" are specifically desktop computers, and Linux on the Desktop is not what took over.

There are many reasons for the demise of workstations, mentioned in other posts, but I don't think that Linux was one of them.


> ...and Linux on the Desktop is not what took over.

Depends on where you look. In academia and scientific computing, the landscape is completely different.

Most of the software is written Linux-first, both because it's easier to move on to clusters and because the open source philosophy (GPL & MIT licenses) is more prominent than ever.

A lot of the cutting-edge tools are open source, and there's a strange model in some: the software is free and the code is open, but for research you need a license. You need to give your license number in your publication, otherwise it might get rejected and/or retracted and you'd be fined by the company developing the software.

Forking is not feasible since both the software is well known and developing that kind of scientific software is very hard (to put it mildly).

Yes, MATLAB works everywhere, but MATLAB is not the peak of scientific software. It's generally the base camp where you start. If you want MPI, even for local runs, you're squarely in *NIX land.

There is also other interesting software, like Singularity, which can run containers as a non-privileged user, and some software (like OpenFOAM) is also available as a prebuilt container.


How can the software be free if you need a license for some purposes? Or does it just cost $0?


"Free for noncommercial use" is a common license for IP outside of software.


Virtualbox comes with PUEL (Personal Use and Evaluation) license, which makes it free for non-commercial use.

Java is (was? still is?) free for non-commercial use only. Even if OpenJDK is present.

There was some other software I was using, but I forgot the names (since I don't use it anymore).


VirtualBox is GPL; that license is for the extension pack, which for most use cases is not really necessary.


You're right, it's my foggy mind. Core VBox is indeed GPL, it's the extension's license.


That's not correct. OpenJDK is free for commercial use unless you are using Oracle's binaries. Since there's no difference between OpenJDK and Oracle's JDK these days there's generally no reason to use the Oracle tooling.


I wanted to say "Presence of OpenJDK doesn't invalidate licensability of Oracle's JDK" but, not being a native English speaker has its downsides :)

IIRC OpenJDK had to change/reimplement some image processing algorithms from Kodak et al., but I'm not sure whether Oracle is using the older closed-source libraries or OpenJDK's re-implementation in its version.


Per the following link there are "cosmetic and packaging differences" only as of Java 11:

https://blogs.oracle.com/java-platform-group/oracle-jdk-rele...


It depends on the market.

I have been designing semiconductors since 1997. All of the EDA (Electronic Design Automation) tools from Cadence, Synopsys, Mentor, and others are written for Unix/X11. Back then everyone had a Sun SPARC or HP PA-RISC on their desk. A few had an IBM RS/6000 or maybe a DEC Alpha. In the server room we had some Sun Enterprise E4000 class machines with 12 CPUs and 16GB RAM.

We started using Linux on the desktop as X terminals to the Suns in the closet and got rid of most of our Sun desktops.

From around 2002-2012 all of the EDA vendors ported their tools to Linux and we ran them straight on our Linux desktops. For multiple simulation runs we would send those jobs to the server cluster machines through LSF.

Since around 2013 the 3 companies I have worked for have used virtual desktop sessions using NX or X2Go running on the remote servers. All of the remote servers run Linux with a GNOME/KDE/XFCE desktop. Then we run the client on whatever machine and OS you want. Disconnect your session at work, go home, reconnect, and you have the exact desktop with everything open just as you left it.


The workstation market (IMHO) had three main pillars. One was vertically integrated solutions (the linked article talks about this), where you would buy the workstation to run the software, or even buy them together as a packaged solution.

I used to work for a company that did this called MSI. It developed an application for designing mobile telephone radio networks. You would load up a terrain map, specify the location, height and antenna type of your transmitters, and it would calculate signal strength, interference, traffic capture, etc. We'd sell the software, workstations, backup systems, storage arrays, etc. as a turn-key system.

The second pillar was as development systems for server or mainframe applications. You wanted to be developing on a system with the same architecture and software as the target system. To an extent you can lump in sysadmin workstations into this category. When I was a sysadmin I'd develop scripts on my workstation, and often compile and test open source software on it before deploying it to our environment, such as GhostScript, Apache and early versions of Python. Yep, back then these didn't come along with the OS.

The third pillar was academia and scientific institutions, where they were used for analytical work, or to develop custom software needed for particular research projects, or tools needed by the institutions.

Most of the vertically integrated ISV stuff moved to Windows, but some, such as CAD and VFX, is still rooted in Linux. The other two niches, dev workstations and scientific applications, also went either to Windows or to Linux. So yes, it didn't all go to Linux, but the slices of the pie that didn't go to Windows mostly went to Linux.


> and Linux on the Desktop is not what took over.

It did in certain industries like VFX/CG: it used to be SGI workstations running IRIX; now almost all large high-end VFX companies are running Linux on x86, and have been since ~2003 (there were one or two stragglers for a bit, but some companies like DD moved to Linux (and NT for a bit) on x86 in the late 90s).


I worked in a big lab of about 100 engineers and scientists in the early 2000s. From 2000 to 2005, more and more high-end SUN workstations were replaced by up-spec Linux machines. Only a handful of mostly CAD SUN workstations stayed.

The main reason was that the Linux and GNU development environment slowly overtook the Solaris one in convenience - in the end even the SUNs ran mostly gcc and the whole open source stack. Also, getting drivers and installing patches was fairly easy for Linux and a major headache with Solaris.


Argh, I just had a flash-back patching the Solaris kernel. Thanks for ruining my day.


There's a big company in California that occupies SGI's former offices, and they run on Linux workstations.

Another one does the same in Sun's former offices.


And these symbolize the transition of silicon valley away from tech. Sun & SGI were pure technology companies, the product was tech.

Whereas the current inhabitants are both pure advertising companies, who just happen to use tech as part of their operations.


I think that that's very poignant symbolism. It's taken such an ahem incredible journey from such an innovation hotbed (Fairchild Semi/Xerox PARC/Bell Labs) to companies that extract money from eyeballs.

That is not to say "real" innovation is not taking place, but I get the sense it's just optimization and refinement of internal processes. Maybe VR or high(er) speed networking, but with all the recent IP lawsuits (a copyright on APIs?!) it seems to me that the collaborative spirit of truly new discovery is gone. To use the cliche, I can't shake the feeling that we've discovered everything already.

But what do I know; it all happened before I was even born. I'm just amazed every time I read about all the discovery and experimentation that named it Silicon Valley in the first place.


NT killed the workstation market.

Once you saw the applications ported to NT (CAD, GIS, 3D graphics), people realized they could run them on much cheaper PCs. You would be able to buy 4 Pentium IIs running NT for what a single Sun/SGI would cost. Hell, you could buy two PC workstations for what a graphics upgrade cost on the SGI machine.


And you could always use a Hummingbird X server to run critical apps remotely off the *nix servers.


You could buy 20 x86 workstations for the cost of an EDS/Unigraphics workstation (running on DEC Alpha).


> Sun seemed to be very aware of the problem and fought really hard, but they couldn't cut off Linux completely.

Given how baffling Solaris x86 was as a product (is it supported? is anyone at Sun taking it seriously? is hardware coming? are they all breathing swamp gas over there?) over the years to those of us who really wanted to like Solaris x86, this seems like a pretty generous comment.

I think you're right about IBM. I don't know if they were the least poorly managed of the Unix vendors, but they had some real customers because of their mainframe business. They definitely deserve some credit for stretching Linux to that hardware.


IBM wasn't the least poorly managed. It's just that they weren't a workstation vendor in the same sense.

IBM had some Unix workstations, but that was never their core. Their core was mainframes. IBM could lose the Unix workstation business without blinking. Whereas Sun, SGI, etc., if they lost the Unix workstations, they were dead, because that was who they were.


There was a time when it wasn't crazy to think that Sun might be able to survive on the basis of a strong server business.

https://www.zdnet.com/article/server-sales-sun-wins-the-batt...

That was never really true of SGI.


Yeah. Sun wanted to be internet servers (along with workstations); SGI wanted to be render farms along with workstations. But internet servers were a lot bigger market than render farms. Which left SGI with the workstation business.

I remember that some guy at SGI wrote an article titled "Pecked to death by ducks", that argued that SGI would never be done in by commodity hardware - that commodity hardware wasn't powerful enough, so it would be as absurd as being pecked to death by ducks. Then dual CPU machines came out, and so he wrote a sequel: "Pecked to death by ducks with two bills".

In the end, though, SGI got pecked to death by 700-pound, 15-foot-tall ducks - the commodity hardware got better faster than SGI's did.

But in the mid-to-late 90s, I loved SGI machines. Not by 2005, maybe, but in their day, they were great.


SGI's workstations offered tons of internal bandwidth that commodity PC hardware couldn't touch until the early 2000s. So for some things and during a certain era SGI didn't have meaningful competition.

However the market for workstations that start at 50k is really tiny and saturated quickly. When the PC workstations with comparable I/O performance appeared and sold for half to a quarter of what SGI wanted, SGI was doomed.


> SGI's workstations offered tons of internal bandwidth that commodity PC hardware couldn't touch until the early 2000s.

It's funny, there was a brief and very painful period where you could point to things SGI's hardware could still do better than everyone else, but it was a rapidly shrinking category of things and everyone could see they were doomed. It was made more painful by the totally ineffectual ways they attempted to save themselves. Proprietary NT hardware! Itanium! Totally random Linux stuff!


In stories of the demise of workstations, don't forget September 11, 2001. Sun Micro had made a spectacularly successful push of their products into what we now call Fintech. But a vast installed base fell with the Twin Towers, along with a vast maintenance revenue stream for Sun.

A lot of replacement equipment was ordered from large manufacturers of commodity computing products. Are those highly capable devices workstations? PCs? I think they're both.


>I suspect the reason IBM was able to reconcile with Linux and even go all-in on it was that it complemented their lucrative very high end server and mainframe businesses.

Linux also gave IBM for the first time an OS that ran on all its hardware, from low-end x86 to POWER workstations to xSeries/iSeries minis to mainframes to supercomputer clusters, a goal it achieved with System/360 in the 1960s then soon had to abandon when the minicomputer and PC markets emerged.


Isn't this a perfect example of how "interoperability" can't possibly be a fair use exception if APIs were to be deemed copyrighted? The easier you make it for existing users to switch to your product, the greater the market effect you'll have on the original owner of the API.


So everybody using laptops and VMs didn't kill workstations?

Typing this from a workstation with 64 GB RAM. Maybe not a lot these days, but this comp is ancient by now and still chugs along nicely.


IBM and Red Hat weren't really in competition, so it makes sense that Linux mostly benefited them. I don't recall ever seeing anyone use an IBM workstation. The only places I've seen them used are kiosks and POS systems.


I used one a bit in the mid 90s when I worked for the Forestry Commission here in the UK. It was running a specialist scientific application I can't remember anything about. I only worked on it to get it connected to our network, access NFS shares and connect to printers. It was incredibly expensive compared to the Suns. It had an administration interface that was just a set of bash script menus for running various commands to do basic admin activities. I ripped it off and converted it to C Shell to simplify some stuff on the Suns for some of the less technical staff. That was one of my first moderately complex shell scripts, so thanks IBM.


A workstation is not a desktop PC. I can guarantee you've never seen a workstation used as a kiosk or a POS.


So.. I guess this really is nostalgia for a time when you could go out and spend $10-100k, and buy a workstation that had capabilities simply not available on a consumer pc.

That's kind of a hard thing to do these days... and there doesn't seem to be a market for it.

If you want to do that now you just look at "server" class equipment, or its smaller "workstation" siblings, to get "truck" features... multiple CPU sockets, more RAM, more PCIe lanes, etc. Add PCIe cards as required for the workload. All the old-school Unix workstation benefits, and it will still be able to run Excel and Outlook, which matters in lots and lots of environments.


> So.. I guess this really is nostalgia for a time when you could go out and spend $10-100k, and buy a workstation that had capabilities simply not available on a consumer pc.

Yeah, at the end of the day, commodity GPUs with commodity CPUs make up a workstation. But that's more to do with PCs "catching up", and becoming very similar to a small workstation. After all, absurdly huge SIMD units are useful for 3d video games, and everyone wants to play video games.

You can still buy the high-end GPU (A100) and/or CPUs (EPYC) if you wanted to build your own high-end workstation. But it's all commodity parts these days. I think that's a net benefit.


There is still a gap of sorts. Support for ECC memory in desktops is pretty hard to find, and not well documented.


All the big PC vendors make well supported high-end desktop computers running Intel Xeons or AMD Ryzen Threadripper processors supporting ECC Memory.

Noteworthy recent models include:

- HP Z4 (single-socket Intel Xeon) - HP's biggest-selling workstation

- Lenovo ThinkStation P620 - First workstation shipping AMD Ryzen Threadripper CPUs

- Apple Mac Pro 2019 - perhaps the most "bespoke" rack mountable workstation on the market, with a neat integrated GPU/thunderbolt implementation thanks to a seriously custom PCI-E implementation

One thing also of note is that a lot of these machines can be rack-mounted with a rail kit from the manufacturer, either as a factory option or an after-sales accessory.


https://www.apple.com/uk/mac-pro/specs/

‘Memory Configure up to 1.5TB of DDR4 ECC memory in 12 user-accessible DIMM slots’

And it can even cost over £10k just like the good old days.


Well, Supermicro sells workstation motherboards that you can use to assemble a workstation, and they can go even higher than 1.5TB.

The dumb thing is that most people compare the Mac Pro (a workstation) to a consumer PC, not to a workstation-class PC (ThinkStations from Lenovo or the HP Z8xx).

Example: The HP Z8 can be equipped with 3TB of memory: https://www8.hp.com/us/en/workstations/z8.html?jumpid=in_r12...


Every Ryzen 3000 chip with an X570 board supports unregistered ECC. And earlier Ryzens all have some support, but motherboard compatibility is patchy. These are the most popular desktop chips on the market right now, so I wouldn't call it hard to find. "Not terribly well documented" is fair, as I'd say the majority of the market for these chips doesn't want or need ECC.


That seems to fit my description, thanks!


https://www.dell.com/en-us/work/shop/workstations-isv-certif...

It's reachable in about two clicks from Dell's homepage. Not that I would buy those over building myself, but it's readily available for those who want it.


Not sure what you mean but if you type in "ECC motherboard" into your favourite parts supplier website then you get plenty of results.


Yes, I can build my own. That's not comparable to the workstation heyday.


Building your own was not an option back then, but it seems to be the more popular option today. It seems logical - who likes to lose the warranty because of a component swap?


It's been a while, but Supermicro has/had plenty of options for workstation or server mainboards with ECC, special power requirements, etc.


The trick I used for my last two workstations has been to buy a lower-end tower server to get ECC support for a low cost.


> and it will still be able to run excel and outlook

You can run those in a browser, so they're poor examples. The only packages that are really lacking are, annoyingly enough, precisely those that required workstations in the past, such as CAD tools. Even Fusion runs very poorly in Wine for some reason.


I read it all the way to the bottom, then had to scroll back to the top to check the date. Then I did a ^F to look for the word Linux: not there. Obviously an alternate universe. In this one, the use cases of the old workstations (personal experience: Apollo DN4500, a couple of Suns) have been replaced by commodity PCs running Linux. Works just as well, end of story.


Another piece of the puzzle is that PCs got a lot better.

Once PCs got the PCI bus they were no longer completely inferior to a Sun SPARC while under load. Before that, ISA was a serious bottleneck.


Well, today a cheap gaming PC runs circles around even the most advanced workstations of yesteryear. My cheap-as-chips consumer laptop easily accumulates an uptime of a month or more before I decide a kernel update would be a good idea and I have to reboot. Also, the computers in use by professionals today _are_ specialized. As a web developer, I don't need the expensive GPU(s) required by an AI researcher and thus my computer doesn't have one. If I was doing video editing I might have a different storage solution than my current SSD-only setup, etc.

I think the downside of the demise of workstations is mainly a psychological one: we no longer have the ability to marvel at and dream about highly specialized computers offering a significantly different user experience than that of our ordinary home computers. A small text about this: https://datagubbe.se/coolcomp.html


> we no longer have the ability to marvel at and dream about highly specialized computers offering a significantly different user experience than that of our ordinary home computers

I still dream of Nvidia-style personal supercomputers stuffed full of GPUs and RAM. (Not to mention other accelerators like Tensorflow, FPGAs, etc..) And quantum machines.


I went from an SGI workstation that, fully decked out, cost more than my car would have cost new, to a Linux-based PC that cost much less than $5K at the time, and ended up with twice the computing power. Graphics took a while to catch up though, and it never felt quite as integrated, but there is no way that justified the price difference.


At one point, 1990 or so, I used a high-end Sun workstation that cost about the same as the flat I lived in at the time. When the RAM was upgraded an engineer flew from London to Edinburgh to install the upgrade!

A few years later I downloaded Slackware onto multiple boxes of floppy disks and installed it on my PC at home, and got a comparable development environment for a tiny fraction of the cost. Mind you, the screen wasn't quite as nice...


> Mind you, the screen wasn't quite as nice...

Yeah, I remember as a kid when I visited my father's work. They had Sun workstations with massive 21" razor sharp (for the time) grayscale monitors, compared to the tiny 14" monitor we had at home for our PC. And they had magical preemptive multitasking where a misbehaving application didn't bring the whole system down! ZOMG!


Sun pioneered the "3M" model (where M meant "mega"):

  * 1 MB of RAM
  * 1 megapixel on the screen
  * 1 MHz CPU
In the day, it seemed like unimaginable power! :)


You might be misremembering. The 3M (megapixel/MiB/MIPS) was a set goal of project Athena. http://web.mit.edu/acs/athena.html

Edit: 3M was not a set goal of project Athena (they merely meant to use such), but a term coined by Raj Reddy of CMU. The SUN-1 workstation was indeed an early machine meeting that requirement.


The first time I saw a Sun was in '87/'88 and it was a demo during the Vision class in the final year of my CS degree - what completely blew me away was the size and clarity of the screen and the fact it was running Unix.


When an engineer takes a flight to install memory, it is clear that you are being overcharged. These companies milked their monopoly position quite well.


This is what it was like back then. I used to travel up and down the country installing systems and often wondered if there was somebody driving in the opposite direction to do the same job somewhere else.


Yeah - my first job (in 1988) was mostly travelling about installing new software and fixing problems with various breeds of Unix/Xenix systems. Remote access was pretty rare.


The memory was from a 3rd party supplier, not from Sun.

People on the same project drove from Milan to Edinburgh with their equivalent workstation in a van to give one demo...


the joke of it is that apple won the unix wars. i'd be shocked if they haven't shipped more unix workstations than all the other vendors combined and they're certainly the last vertically integrated workstation vendor standing. (they just didn't call them that)

as far as linux on the desktop, that never quite happened... but again i'd wager there are almost certainly more handsets running android on linux than desktop pcs in operating existence.


I’d argue they are not “vertically integrated”, since they’ve effectively killed their server offerings. I don’t know the specifics, but I bet they don’t sell anything like a bank-teller system either, the bread & butter of workstation sellers (Sun had some amazing stuff there).

But Apple are certainly the only large unix-workstation seller left.


You might be thinking of horizontal integration. Apple is most certainly an increasingly vertically-integrated company.


I'll be interested in seeing how their new computers turn out - as they're turning away from commodity hardware, and going back to bespoke silicon for their CPUs, it's like we're going back in time to the more heterogeneous Unix Workstation days.


it's going to be interesting. i think they captured a lot of the developer/systems market because most server infrastructure is x86-64 and folks can run the same vms/containers locally. i wonder how much of that they stand to lose by forcing those folks to maintain separate builds or use slow emulation. granted, a lot happens in higher level languages, but introducing a new supported cpu architecture is painful... there are always native libraries.

i think a lot of apple's comeback, at least in the early days, was fueled by developers and systems people evangelizing them as the usable unix...


> i'd be shocked if they haven't shipped more unix workstations than all the other vendors combined

Not to mention all the unix-based laptops, tablets, phones, and watches. ;-)


This all made sense to me until the end. It sounds like the author just wants to build PCs that are customized, which one can totally do without trouble. I'm not sure what there is to lament, except maybe a lack of processor and OS variety?


Got the same impression. He writes:

> In high-tech domains, an engineer could readily have a toolchest of suitable computers in the same way that a mechanic has different tools for their tasks. This one has an FPGA connected by both PCI-E and JTAG to allow for quick hardware prototyping. This one is connected to a high-throughput GPU for visualisations; that one to a high-capacity GPU for scientific simulations.

That's just different PCIe cards. You don't need a dedicated vendor for that, just a screwdriver.


Indeed, this is how it's done. Especially given how fragile/buggy the "professional" software is, or how it's limited to DOS/Win98/ME/XP/2000, or needs an exotic driver for a PCI-to-PCIe converter, a hardware license key, etc.

I worked in companies where EDA, HDL, and EE instrumentation machines were purposefully left running 24/7 just because of the lengthy ritual of applying the needed set of hacks to get them running. Or because a Windows update was killing a certain LabVIEW function.

My favourite was a PC with a sticker "DO NOT PLUG INTO INTERNET UNDER ANY CIRCUMSTANCES, WINDOWS UPDATE PIRACY DETECTOR WILL TRIGGER CADENCE ANTIPIRACY DETECTOR"


Even USB 3.0 is fast enough for many of those use cases and makes it much easier to mix and match devices to get the right balance of portability, power consumption, and functionality. The only remaining problem there is untangling the cable squid on your desk.


USB 3.0 maybe not, but Thunderbolt 4 is, as it can tunnel PCIe traffic at 32Gb/s.


I think the keyword here is "integration". The theory being, because the hardware and software are being done by the same company, it's easier to make them work together well.

This has been the case sometimes - Apple, Digital (VMS), and NeXT are good examples of this - but from what I've seen, in most cases the user experience was pretty poor anyway.


And while Apple was always great at integrated experience in their cathedral, they took decades to get to external compatibility and competitive horsepower.


Since the early nineties the Unix workstation vendors also had this problem - every now and then they managed to get ahead, particularly with floating point, but then quickly lost the lead, not being able to keep up with Intel. Most of them also loved weird hardware solutions - not just internal bus connectors, but also serial ports (e.g. the round Mac serial port connectors used by SGI) and sometimes just generally weird interfaces, like HP MUX boxes or HIL keyboards/mice.


> "I’m not sure what there is to lament except maybe a lack of processor and OS variety?"

Yeah, but that's something we should lament; we're stuck at a local maximum for both. The ability to try out new ideas and architectures has been curtailed because no competitor can get enough momentum to stay alive long enough to displace the entrenched incumbents.


Nothing killed the workstation market. It's still totally a thing. Lenovo, Dell, HP, Apple ... all of these companies will quite happily sell you a dual CPU machine with masses of cores, oodles of ECC RAM, extremely high performance graphics with certified driver support etc.

All that's changed is that the architectures for workstations and "business class" desktop machines have converged, and the distinctive operating systems that were needed to support those older workstation architectures have gone away.


The author seems to use the word "workstation" specifically to refer to machines running a proprietary Unix variant, usually using a specialty (non-x86) CPU architecture.

What you're describing are workstation-class PC-compatibles, not workstations (as the author defines the term).

What the author is really talking about is the death of systems where the hardware (sometimes all the way down to the CPU architecture) and the OS are produced by the same organization as a single, integrated solution targeted at a specific vertical.


> What the author is really talking about is the death of systems where the hardware (sometimes all the way down to the CPU architecture) and the OS are produced by the same organization as a single, integrated solution targeted at a specific vertical.

Isn't that the iPhone and iPad?

EDIT: Even the camera sensors + DSP for processing is specially built for phones these days.


Arm licensee and not a workstation.


RE the idea of having a toolchest of machines for specific tasks, between my mac and monitor is an eGPU. It's a separate box and I bought it to run blender better.

Maybe it's not that workstations are dead, but that their shape has changed:

At the core of the modern version is a general purpose computer, and then hardware modules (internal cards, or external peripherals via thunderbolt) are added to make task-specific workstation computing happen.

Most people don't need their computers to be workstations, but that's always been the case I suppose.


I find the article slightly confusing, but what I believe the author wants to say is that special-purpose machines allow for better optimization than having to support a wide variety of generic hardware.

So in effect, this is the same argument as game consoles vs. desktop PCs. Every PS5 game can use primitive shaders, because the developers know that every PS5 will have them. For a generic desktop PC game release, though, only 1% of all buyers will have a new enough AMD RX Vega GPU, so it doesn't make sense to invest much resources into supporting primitive shaders there.

That said, pretty much all of the specialized hardware that Workstations used to have in the past is now commonly available everywhere as vertex, compute, or pixel shaders.

So in my opinion, the workstation market died because suddenly everyone gained access to what was previously reserved only for workstation users.


One of the big selling points of early cloud products (like GMail) was that specialist hardware outperformed local hardware. It just happened to be in the cloud. You got to store your data on a limitless specialized server platform instead of your little hard drive. The irony now is that a terabyte hard drive for a laptop costs very little. A lot of business devices have plenty of space that is completely unused. So we spend our lives downloading from SharePoint/OneDrive/Outlook etc.


Cloud's pitch is elastic scaling and centralized IT.


I used Sun workstations and DEC Alphas in the early nineties. Apart from the responsiveness of the window system (due to the fact they weren't being asked to do much compared with today), what I remember most is the reliability - we didn't shut down our machines in the evenings or weekends and they'd happily carry on running without apps crashing for over a year, often without air conditioning.


My absolute favourite memories of using Suns were NeWS/HyperNeWS - I've never used anything like it since (pretty much the Lisp of the 2D graphics worlds).


> we didn't shut down our machines in the evenings or weekends and they'd happily carry on running without apps crashing for over a year, often without air conditioning.

I really don't understand this. Apart from the minor detail that modern (e.g. younger than a decade or so) PCs can go into standby and wake up within seconds, I've yet to come across commodity hardware and software that doesn't do the same.

The current uptime of my machine is 4 days, 13 hours, 20 minutes - because I ran some system updates last week. Without those, even my mediocre machine running all the software I use on a daily basis (IDEs, a web browser, several terminals) just keeps running for weeks on end and never ever crashes.

Am I just lucky or is this "crashes and blue screens"-business just a persistent memory from the late 90s and early 2000s?


There's a difference between weeks and years. I think the main reason was that the OS and nearly all the hardware were developed and tested by the same company - no 3rd party graphics card kernel drivers etc.

It was also a fact that a lot of these machines were expected to be up 24/7 - running mail/file servers etc. OS updates, including kernel updates wouldn't require rebooting.


Yes a kernel update would require rebooting. But that happened maybe once a year, or even less often. The update was delivered on tape and it was probably a multi-day project for the admins to prepare for the installation.


> Am I just lucky or is this "crashes and blue screens"-business just a persistent memory from the late 90s and early 2000s?

The choice of components determines whether a Windows machine bluescreens all the time or accumulates uptime like a pro.

It's hard to make an informed choice, though, because the market is moving very fast, so "luck" is a huge factor.


The current uptime on my PCs always dates from the last power outage - currently 275 days. My laptop has been up for 40 days (with a lot of suspending). I know Windows machines are forced to go down a lot, but I don't think other systems have that problem.
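
If you want to check that from a script instead of remembering to run uptime(1), a minimal sketch for Linux - it just reads the seconds-since-boot figure that /proc/uptime exposes:

    # Minimal sketch: on Linux, /proc/uptime's first field is seconds
    # since boot, which is what the uptime(1) command reports.
    with open("/proc/uptime") as f:
        seconds = float(f.read().split()[0])

    days, rem = divmod(int(seconds), 86400)
    hours, rem = divmod(rem, 3600)
    print(f"up {days} days, {hours}:{rem // 60:02d}")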


Everybody is required to go down a lot unless they're ignoring security patches. I see the Ubuntu "System restart required" message at least once a week.
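
For what it's worth, that "System restart required" state is easy to check from a script too. A small sketch, assuming the stock Ubuntu flag files (other distros signal pending reboots differently):

    from pathlib import Path

    def reboot_pending():
        # Ubuntu's update tooling creates /var/run/reboot-required after
        # kernel/libc updates; the .pkgs file lists the packages behind it.
        flag = Path("/var/run/reboot-required")
        pkgs = Path("/var/run/reboot-required.pkgs")
        if not flag.exists():
            return False, []
        reasons = pkgs.read_text().splitlines() if pkgs.exists() else []
        return True, reasons

    pending, packages = reboot_pending()
    print("reboot required:", pending)
    for p in packages:
        print(" -", p)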


If you wanted months of uptime in the early 1990s it had to be some kind of Unix - even Linux was not yet reliable enough.


Actually the Suns had an issue with their hard disks getting stuck after being turned off for two weeks; they'd be unable to start again.


I never shut down my 486 Linux box in the '90s, either. It had 6+ months of uptime under 1.x series kernels.


In a way, of course the "workstation" distinction is still here. Gamers aside, if you see a machine decked out in 27" or bigger, high-DPI monitors, double digit gigabytes of RAM and so on, it's probably because it's being used for work, be it graphic design or video editing or electronic design. The unwashed masses have mini laptops and big smartphones and Chromebooks.


... or writing native software in a compiled language.


CAPEX is the key.

I have fond memories of offering an x86+Tomcat+MySQL solution to clients, competing with other solutions based on SPARC+iPlanet+Oracle. The offer was so low compared with previous ones that some customers asked for confirmation.

This CAPEX difference was also mirrored in the development workstations. There was no need for Sun hardware; a commodity PC was enough to develop on, without surprises when deploying.


I came across a quote from Alan Kay that came to mind when reading this:

> It’s worth pondering this. One argument against mine, is that “people need and want ‘appliances’ “ that only have one function.

This was in response to a more widely known quote of his:

> Simple things should be simple, complex things should be possible.

The only people who need specialized personal workstations are people working with large data. But even then, the cost of the cloud has made workstations a poor investment in most cases. I don't think I'd agree with the notion that the demise of the workstation is "untimely." Maybe in a retro, nostalgic or cultural way its demise is upsetting, but in the larger pragmatic sense it's quite welcome. Workstations always felt like a stop-gap solution for what we have now.


I still like to pour one out for workstations, myself. With the cloud, we’re back to paying a big company for their time share machines in the sky.


For a long time I thought the market would bifurcate again, with consumption devices at one end and creation devices at the other.

Sadly it looks like that isn't happening, and we will end up with nothing more than a web browser on our desks and anything complex running in the cloud.

I don’t like this model at all and it isn’t about culture or nostalgia but rather the loss of control and agency.


Hasn't it? I think you're looking up-market, when the split happened down-market. Phones and tablets for consumption, laptops and desktops for creation. When you need more oomph, temporary cloud resources. A decent Linux box with a good monitor seems just like my old SPARC IPC. I have an optical mouse (wireless, and works on any surface!), ctrl-alt-meta keys on both sides, a compiler suite and an X server. 1989 is still here if you squint.


Why would you have thought that for so long, when the trend has clearly been towards where we are today? PCs kept getting more powerful and gained more sophisticated architectures. Workstations never had a future. I remember that even back in the 2000s personal workstations were considered legacy. PCs had networking capabilities that were more than enough for what many organizations were doing, and they didn't want to support or purchase workstations.


Whether it is called a PC or a workstation is immaterial; it's the capacity for work that makes the distinction.


I feel like I've read this article, every decade.


I got my company's HP 9000 workstation after they dumped all that for x86 and Linux. It was fully decked out, very heavy, and so loud I eventually had to get rid of it. Fun times - I learned assembler on it. Later I got one of the faster UltraSPARC-II workstations. It was pretty cool that you could drop into OpenFirmware and pretend to be a Forth programmer whenever you liked.


I'm not sure workstations did suffer a demise? If workstation vendors like SGI, NeXT, Sun, HP (with HP-UX) etc. still existed in 2020, what would their products look like? Surely just like a modern Mac or PC? Powerful multi-core CPUs, as much RAM as you have money to spend (e.g. >1 TB), high resolution full-colour graphics (e.g. 5k iMacs or similar PC monitors).


Basically, what the workstation companies did was sell capabilities that would be available years down the line, at a premium and with much better margins. The problem (for them) was that Wintel kept shrinking the gap. In 1981 they were selling stuff that PCs wouldn't have for 10-15 years. In 1991 they were selling stuff that PCs wouldn't have for 5 years. By 2001 they had been largely lapped on their core systems.

Had a company like SGI managed to survive, their business model would likely have remained the same[1]: they might have looked like a boutique nVidia, selling extreme high-end parallelism solutions (i.e. graphics, neural net hardware etc.) that GPU makers wouldn't yet be able to match with commodity hardware. Probably some combination of extreme transistor counts and boatloads of FPGAs/specialized silicon, coupled with extreme power consumption and cooling. They also would have been able to go after smaller verticals than nVidia can. That was always their game: selling solutions that were relatively harder to produce because they had to be engineered and hadn't been commoditized yet. The problem is, those doing the commoditizing can move faster than those doing the innovating. When those doing the innovating die off, the technology doesn't advance as quickly.

[1] I say this because all of these companies seemed to have a similar fault: they were never able to move down market so that they could scale up their volumes. The same thing is slowly killing Intel right now.


Essentially what IBM are doing with POWER.


I really enjoyed the time I got to spend with a POWER9 server all to myself, "remotely" accessed over the local intranet while at IBM's offices on a neat project they partner with us on. It was like an old-school minicomputer: parallelism out the wazoo, some absolutely silly amount of RAM, and I enjoyed playing with POWER's vector instructions.


There's still a market for supported, super high-end workstations. There just isn't a market for the weird operating systems these companies were selling. The operating systems may have been the best thing since sliced bread, but they are useless without applications.

And then without the operating system you get commoditized and it’s impossible to charge the amount of money these companies were charging.


A lot of proprietary Motif software and custom tools were recompiled for Linux with nearly no changes in the late 90s, at many times lower cost.


Workstations haven't died, they have evolved. Proprietary hardware and software have been mostly replaced by commodity hardware and open source software.

The jobs that once required powerful proprietary workstations are now done with powerful PC-based workstations running Linux, BSD, or even Windows.


There are a number of comments here dancing around the themes of The Innovator's Dilemma (the book by Clayton Christensen), so I thought I'd mention it specifically.

It describes how and why high-margin companies are unable to pivot to lower margins as technology advances, getting left behind.


I use a repurposed Dell Precision workstation here at home and it's great. The 6-core Xeon rips through applications and the 64 GB of RAM allows for multiple VMs as the case may be. In my mind, this setup is superior to ordinary desktops.


Proprietary, vertically-constrained systems are a weird thing to be nostalgic about


This is why I put together a business case for DGXs from Nvidia: the vertical integration saved us so much time.


I just built a hackintosh with a Ryzen 3900X, 32 GB of RAM, a 500 GB NVMe drive and a Sapphire Nitro GPU in an expensive case. Total cost: under 2,000 USD. It kicks the iMac Pro (11,000+ USD) to dust. So thank you, Apple. PS: My trashcan Mac Pro looks at me with silent blackness. But I don't care.


I'm curious to try some benchmarks on the very latest Raspberry Pi 400, the all-in-one Linux-based PC going for $70 from Raspberry Pi. If needed, a spare external HD - old but not spent - is already waiting in one of my office drawers.


When the Raspberry Pi compute module 4 was announced recently, I did a quick update to my Raspberry Pi / Sun E450 comparison: https://twitter.com/fanf/status/1318322060477763584?s=21 - the Pi is about 10x better in most performance metrics, and nearly 1000x less in size, weight, and price!
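
If anyone wants a quick, unscientific number to compare boxes with, a crude single-core micro-benchmark along these lines will do (a sketch in pure Python, so it mostly measures the interpreter - not what the comparison above used, just an easy way to get the same number on whatever machines are lying around):

    import time

    def busy_loop(n=5_000_000):
        # A fixed amount of floating point work; wall-clock time is the "score".
        acc = 0.0
        start = time.perf_counter()
        for i in range(n):
            acc += (i * 0.5) / (i + 1.0)
        return time.perf_counter() - start

    print(f"{busy_loop():.2f} s for 5M iterations")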


What counts as a workstation these days? If the goalposts haven't moved, then just about everything anyone uses today counts.

For me back in the mid 90s, the major differentiator was ethernet, and everything has that now.


Well, Windows + Intel became a much faster and cheaper computer than anyone else could produce. I think it's much better this way.

Only weird sysadmin geeks seem to believe there is any real value in using some "workstation" with a custom CPU and OS.


Off topic, but what a horrible font. I had to switch to reader view to actually be able to read the article.


In Firefox, under Preferences > Fonts and Colors > Advanced, uncheck "Allow pages to choose their own fonts, instead of your selections above". Problem solved.


Not just off-topic, against the guidelines: https://news.ycombinator.com/newsguidelines.html



