It's not that workstations died, it's that they look different and solve a different problem. Anyone can build a computer with off-the-shelf parts that has the absolute maximum specs any vendor can produce. Anytime a new workstation makes the news (see Apple's latest workstation), the "PC Master Race" gang is quick to point out that they can build the same system without the Apple/HP/Dell/Lenovo tax. What they somehow always forget is that, if I'm an ITDM and I need 100 or even 1000 systems that have to be configured, validated and ready to deploy from day one, custom-built computers aren't feasible in any sense of the word. The value add from workstation companies is a mix of scale, availability, validation and uniformity.
They don't need that value add from workstation companies. And heck, they probably welcome excuses to tinker on their workstations.
And this is great. It keeps the open, buildable computer market going -- contrary to the alarming trend of locked-down computing devices.
It's never bitten me; worst case I'd have to get a part delivered next working day from Amazon.
For development workloads you simply can't beat that approach.
Recent example: unit tests on my work-issued MacBook Pro take 2 minutes; the same tests on my PC take 39 seconds.
There simply isn't a laptop that fits my workloads better than a modern Ryzen with a crap-tonne of RAM.
You cannot get those specs, or anything near that warranty, from commodity desktop hardware.
I did that back in 2016 when I needed to upgrade my GPU to play a new game after getting off work
Anecdote: in Japan, SanDisk sells "genuine" SD cards at extremely high prices (about 3x-7x the US price). Importing SD cards from the US (or buying from a local importer, which is common) makes sense even though you get no warranty.
Generally speaking, Amazon is not going to have 1000 of a specific computer part available in a 2-hour delivery window. You could order different parts, but then you have a growing number of setup variations, and you don't want to be fixing lots of small unrelated problems rather than one widespread issue that has the same fix every time.
And for corporations it is better to have the cost paid upfront; unexpected budget items are expensive and hard to get approved.
The crate itself (a 3700X with 64GB of RAM, a 1TiB NVMe drive, and a GTX 1660) didn't cost much more than the 64GB of RAM for the iMac would have cost on its own...
But do you know, ballpark, how much of a cost savings that is?
Also do you have a guide that you follow? I'd like to replace my Macbook Pro but I don't really know where to start.
Also, is there a resource for a noob to run unit tests to see if my performance is better when I'm done?
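One low-effort way to compare, as a rough sketch: wrap whatever command runs your tests in a small timing script and run it on both machines. The pytest invocation below is just a placeholder; substitute your real test runner.

```python
# Minimal sketch: time the same test command on both machines and compare.
# The pytest invocation is a placeholder; point it at your real test runner.
import subprocess
import time

def run_suite() -> float:
    """Run the test suite once and return wall-clock seconds."""
    start = time.perf_counter()
    subprocess.run(["python", "-m", "pytest", "-q"], check=False)
    return time.perf_counter() - start

if __name__ == "__main__":
    runs = [run_suite() for _ in range(3)]   # a few runs to smooth out noise
    print(f"best: {min(runs):.1f}s  mean: {sum(runs) / len(runs):.1f}s")
```

Run it a few times on each machine and compare the best times; the first run often pays filesystem-cache and warm-up costs.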
That would get you a Ryzen 3 which would be substantially faster than mine and an RTX 3070 which would crush my 2080.
With two 27" 4K screens my PC came out about the same price as a MacBook Pro, but it is much faster on the workloads I care about, and as nice as the MacBook Pro's screen is, two 4K displays are better.
Bit less portable though.
Software-wise, Fedora is as stable as OSX and has everything I need (I've been on Linux as a primary dev platform since the turn of the millennium), and in fact things have never been better: pretty much everything most devs need in 2020 supports Linux (Xcode is an exception).
Gnome will feel most like OSX but I prefer Cinnamon.
Prices are substantially cheaper in the US.
This is a different market though, Dell/HP/Lenovo are mostly targeted toward businesses or the common consumer just looking to get a laptop for school.
If you were talking about workstations from the usual suspects (HPE, Dell) I'd agree, but Apple really do put a fantastic markup on their kit.
There are also a lot of people buying those Apple workstations who only need one or two of them, and someone like Puget could build something faster for much less.
Once you factor in the cost of stuff like support and hardware validation it becomes pretty much a moot point. That's without even considering that you would need to hire a supply chain expert(s) to acquire large quantities of parts if you needed anything more than a few machines. At my work, we have whole departments full of people dedicated to making sure we have the right mix of hardware at the right time to fulfill customer needs.
There are definitely some configurations that really don't make sense (the lowest end config comes to mind) but, at the same time, if you run a business with a team of people training on Macs, the amount of money it would cost in training and lost productivity to switch over to Windows for possibly lower prices makes even less sense.
I'm a custom PC and Linux guy myself, but this seemed like a good time to remind everyone that there's more to computers than just the cost of making a single machine.
They also pay for their workstations with post-tax money, versus a business that can write off workstations as business expenses.
This suddenly makes the Mac Pro pricing a little more obvious.
It was kinda nice, we could replace any part ourselves same day. As parts got older the machines were reconfigured for people who needed less power (like HR).
That said, it wasn't all rosy there. The ticket system was sticky notes passed between people, and Active Directory and a few other Windows management things were replaced with... some sort of Lotus product? It replaced the login screen.
It was totally not worth it for us, it was penny-wise and pound-foolish. We switched to buying some Advantech machines, and while the BOM cost was one $1200 line item compared to a long DIY BOM off pcpartpicker.com that ran closer to $600, all the engineering time we wasted on component selection and ordering and progress bars and BIOS configs and Windows update and cable ties and debugging reliability issues was much harder to quantify and probably significantly more than $600.
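To put a rough number on "penny-wise and pound-foolish": the BOM figures are from above, while the $100/hour loaded engineering rate is purely an assumption for illustration. The break-even is only a handful of engineer-hours per machine.

```python
# Back-of-the-envelope break-even using the BOM figures above.
# The loaded engineering rate is an assumption, purely for illustration.
diy_bom = 600        # USD, DIY parts list off pcpartpicker
integrated = 1200    # USD, pre-built Advantech machine
eng_rate = 100       # USD/hour, assumed loaded cost of engineering time

savings_per_unit = integrated - diy_bom
breakeven_hours = savings_per_unit / eng_rate
print(f"DIY saves ${savings_per_unit} per unit, wiped out by "
      f"{breakeven_hours:.0f} engineer-hours of component selection and debugging")
```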
I think there's a few inflection points on the quantity/process value curve - Building 4 machines? That's a little one-day project for somebody. Building 400? Hire a technician, set up an assembly station, and develop some work instructions. Building 40? That's going to have one or two that need warranty work, and you're not going to recoup the investment required to develop a good process - just buy them from someone who has. Building 4000? Your process is now multiple technicians, an engineer, a purchasing agent, and some management, and support needs after-hours on-call people, and you need an inventory of spare parts...developing that capability is again more expensive than just buying it. Building 40,000? At that level, you're building a PC-construction business and you can sell your spare capacity to the 40-unit guys.
In my experience, there is no uniformity. If you buy 10 machines in the same order with the same SKU, you may end up with 10 different combinations of components between them...
I went to a seminar by an Open Source hacker at a Unix user group conference in the late 90s, where he talked about the Sun funded work he'd done on getting the Linux kernel to run well on Sun workstations. He was absolutely clear that he thought Linux would kill Sun because it completely undermined their software license business, but that this was their problem. They just had no good options.
I suspect the reason IBM was able to reconcile with Linux and even go all-in on it was that it complemented their lucrative very high end server and mainframe businesses. Their workstation businesses benefited from a halo effect from the super high-end mega-systems that protected it from becoming total road kill, but Sun and the other vendors never really managed to establish themselves at the super high-end, so when Intel ditched Itanium and PCs caught up with RISC they were left without a market.
Around 1999, SGI Octanes were about $50K; at the time, NT machines (think Intergraph) appeared that were, for many, many purposes, just as capable. They were $10K.
In addition the MIPS chip was considerably worse than a dual proc x86 machine of the same vintage. I was working at a company where there was a C++ API. Compilation time on the SGI was 2 hours. It was 10 minutes on the Intergraph.
When cards like the Nvidia Quadro and FireGL came out, they were better than SGI machines and cost a few thousand dollars.
This guy worked for NeXT, who were eaten early because they produced massively overpriced workstations that there was only a small market for. They didn't even survive the workstation market, let alone the coming wave of better hardware built on the very large development budgets for mass-market parts.
Exactly. As much as I liked the environment the premium you paid for what was sub-par performance really killed the market. And when SGI ended up selling intel Windows boxes the writing was on the wall for them as well.
I disagree that NeXT got 'eaten early' though; effectively every Mac you've used since those days was a NeXT workstation marketed under a different brand. NeXT morphed into Apple much more than Apple acquired NeXT.
Be’s TCP stack was never very good. It never had big money applications like Frame or Illustrator, and lacked the robust dev platform (and paying customers) that Next had.
And I'm using "business" in a generic sense. This was an important quality for schools, as well, where Macs were incredibly popular (for a number of reasons, including favorable pricing from Apple and historical support for schools since the 80s). Single user OSes just don't fit within any organization trying to at least pretend to have decent IT.
How has OS X's "multi-userness" helped organizations?
Do you literally mean two or more people using the same Mac?
Has it been useful for organizations that users can ssh into Macs? Or is Apple Remote Desktop involved? (Does Apple Remote Desktop even allow two people to use two instances of OS X's GUI at the same time?)
Or by "multi-user" do you refer to the services that can be enabled using the "Sharing" pane of System Preferences (e.g., file sharing, printer sharing, remote management, internet sharing)?
It’s not about simultaneous use, but standard user management, access control, and permissions.
Though yes, technically this also allows multiple users to access it simultaneously via ssh and other things; that's not the important part for IT in this instance.
- BeOS was not a great choice, though certain aspects were much better than NeXT, primarily the filesystem. There were a lot of things missing, but BeOS's lack of printer support was the headline issue; seems ridiculous today, but "desktop publishing" still mattered to Apple at the time, and it spoke to the immaturity of the BeOS graphics stack.
- Aside from WebObjects, the vaunted NeXT software stack was bitrotting. It was buggy and unstable on commodity PC hardware, and it was never polished enough for non-Unix-savvy end users. Apple spent something like six years bringing it up to a consumer-ready spec before they felt comfortable dropping classic Mac OS.
- The original iPod used no NeXT-derived software.
- But without the NeXT purchase, Apple wouldn't have gotten Steve back.
- Without Steve back at Apple, the iPod wouldn't have existed.
- Without the iPod, you don't get the iPhone.
This is not: “NeXT was just a way to be rid of a legacy business”
Apple needed an OS, they bought NeXT which was a superb investment given how critical the OS has been.
They also needed cashflow, and more importantly to appeal to consumers and differentiate themselves from Windows PC. The iPod was a successful part of that strategy.
The iPod served its purpose and now is gone. NeXT became the most commercially successful operating system of all time.
Also, according to Avie Tevanian, Steve Jobs lost interest in NeXT when it shed its hardware business and shifted to a software company primarily selling to the enterprise business community instead of education, its intended market. Jobs focused his attention on Pixar, leaving management to the VPs of sales, engineering, and finance.
With Windows NT and Linux being viable operating systems for the sorts of applications people used workstations for, it was only a matter of time before workstations died out. The main thing holding back Intel machines by 1996 or so was software support for workstation type applications such as EDA and CAD tools, but this was sorted out within a few years.
The last holdout was SGI, which was used in the visual effects industry, but NVIDIA caught up to them in about 1999 and SGI gave up on its performance leadership, and the applications followed.
Today they're mainly used for low-end cheap routers, because the high-end market has been taken over by ARM.
What Linux + x86 did kill was the entry- and mid-range Unix server market. Redhat grew very rich largely by offering a cheaper platform (x86 + RHEL) to run Oracle RDBMS. And when the web started taking off, Linux + x86 was the default platform (still is). Similarly, the technical computing market (clusters, supercomputers, etc.) more or less completely migrated to x86 + Linux.
IBM sort of survived this carnage by retreating into the high end, and by having a lot of other business (consulting, mainframes, software, etc.) that they could leverage.
Sometimes I wonder if Sun could have survived if they had gone for the x86 commodity hardware route. They already had built the first generation (https://en.wikipedia.org/wiki/Sun386i ) but instead decided to go all-in on SPARC and bespoke hardware.
When x86 & NT became good enough for whatever engineering application they were working with, there was a large incentive to switch that one to NT and consolidate everything on one machine.
They did try again in the late '00s: https://en.wikipedia.org/wiki/Sun_Ultra_series#x86 I always wondered why that didn't work out for them; seemed like a natural pivot.
By the late 00's, Linux was much more solid and Sun didn't stand a chance.
They did a Hail Mary effort with ZFS & dtrace, but in the end few people cared enough to switch.
I don't buy this entirely. I'd agree it ate into the market share, yes of course it did. And the blurring of commodity server and desktop hardware ate further.
What Linux didn't replace was accountability: there were plenty of shops that needed a hardware-vendor-supplied and -supported OS. It wasn't a technical decision, but a due-diligence checkbox their customers demanded. Then Redhat arrived on cheap hardware and ate into that share further, because they checked some of the boxes.
Sparc and Solaris was still technically superior for some things and still had all the throats to choke.
What changed for Sun was Oracle buying them: jacking prices; buying all the competitors to control them like Sleepycat, TimesTen, and MySQL; suing customers; suing security researchers; suing benchmarkers; and charging triple (not an exaggeration) to reinstate lapsed support contracts etc etc.
"We're a Sun Shop" was no longer the appeal it once was on the server, and of course nobody would need a desktop Sun with those downsides.
Sun were basically forced to sell or face bankruptcy within a year or so. Their market was already gone by then. They were completely uncompetitive against Redhat and commodity x86, and they knew it; hence the big open-source push, which was too little too late, and the overall feel of confusion they were projecting. There was just no reason left to buy Sun for anyone but the most hardcore Solaris geek.
In hindsight, they should have tried to pivot to services, instead of focusing on releasing free software in the hope it would drive hardware sales.
There are many reasons for the demise of workstations, mentioned in other posts, but I don't think that Linux was one of them.
Depends on where you look. In academia and scientific computing, the landscape is completely different.
Most of the software is written Linux-first, both because it's easier to move onto clusters and because the open source philosophy (GPL and MIT licenses) is more prominent than ever.
A lot of the cutting edge tools are open source and there's a strange model in some: Software is free, code is open, but for research you need a license. You need to give your license number in your publication otherwise it might get rejected and/or retracted and you'd be fined by the company developing the software.
Forking is not feasible since both the software is well known and developing that kind of scientific software is very hard (to put it mildly).
Yes, MATLAB works everywhere, but MATLAB is not the peak of scientific software. It's generally the base camp where you start. If you want MPI even for local runs, you're in *NIX land, squarely.
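For anyone who hasn't touched it, even the minimal MPI sketch below (using the third-party mpi4py bindings, purely as an illustration) assumes an MPI runtime such as Open MPI or MPICH is installed and that you launch it with mpirun, which in practice is most comfortable on *NIX.

```python
# hello_mpi.py: minimal mpi4py sketch; assumes an MPI runtime
# (Open MPI, MPICH, ...) is installed, which in practice means *NIX.
# Run locally with:  mpirun -n 4 python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id within the communicator
size = comm.Get_size()   # total number of ranks launched by mpirun

# Each rank reports its host name; rank 0 gathers and prints a summary.
names = comm.gather(MPI.Get_processor_name(), root=0)
if rank == 0:
    print(f"{size} ranks running on: {sorted(set(names))}")
```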
There are also other interesting software, like Singularity which can run containers as non-privileged users and some software (like OpenFOAM) is also available as a prebuilt container.
Java is (was? still is?) free for non-commercial use only. Even if OpenJDK is present.
There were some other programs I was using, but I forgot their names (since I don't use them anymore).
IIRC OpenJDK had to change/reimplement some image processing algorithms from Kodak et al., but I'm not sure whether Oracle is using the older closed-source libraries or OpenJDK's re-implementation in its version.
I have been designing semiconductors since 1997. All of the EDA (Electronic Design Automation) tools from Cadence, Synopsys, Mentor, and others are written for Unix/X11. Back then everyone had a Sun SPARC or HP PA-RISC on their desk. A few had an IBM RS/6000 or maybe a DEC Alpha. In the server room we had some Sun Enterprise E4000 class machines with 12 CPUs and 16GB RAM.
We started using Linux on the desktop as X terminals to the Suns in the closet and got rid of most of our Sun desktops.
From around 2002-2012 all of the EDA vendors ported their tools to Linux and we ran them straight on our Linux desktops. For multiple simulation runs we would send those jobs to the server cluster machines through LSF.
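For the curious, dispatching a batch of runs to LSF looks roughly like the sketch below; it assumes the standard bsub submission command is on PATH, and the queue name, corner list, and run_sim wrapper are made up for illustration.

```python
# Sketch: submit a sweep of simulation jobs to an LSF cluster via bsub.
# Queue name, corner list, and the run_sim wrapper are illustrative only.
import subprocess
from pathlib import Path

corners = ["tt_25c", "ss_125c", "ff_m40c"]   # hypothetical PVT corners
Path("logs").mkdir(exist_ok=True)            # bsub -o needs the directory to exist

for corner in corners:
    cmd = [
        "bsub",
        "-q", "normal",                  # assumed queue name
        "-n", "1",                       # one slot per run
        "-J", f"sim_{corner}",           # job name
        "-o", f"logs/{corner}.out",      # stdout/stderr log file
        "run_sim", "--corner", corner,   # hypothetical simulator wrapper
    ]
    subprocess.run(cmd, check=True)
```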
Since around 2013 the 3 companies I have worked for have used virtual desktop sessions using NX or X2Go running on the remote servers. All of the remote servers run Linux with a GNOME/KDE/XFCE desktop. Then we run the client on whatever machine and OS you want. Disconnect your session at work, go home, reconnect, and you have the exact desktop with everything open just as you left it.
I used to work for a company that did this called MSI. It developed an application for designing mobile telephone radio networks. You would load up a terrain map, specify the location, height and antenna type of your transmitters, and it would calculate signal strength, interference, traffic capture, etc. We'd sell the software, workstations, backup systems, storage arrays, etc. as a turnkey system.
The second pillar was as development systems for server or mainframe applications. You wanted to be developing on a system with the same architecture and software as the target system. To an extent you can lump in sysadmin workstations into this category. When I was a sysadmin I'd develop scripts on my workstation, and often compile and test open source software on it before deploying it to our environment, such as GhostScript, Apache and early versions of Python. Yep, back then these didn't come along with the OS.
The third pillar was academia and scientific institutions, where they were used for analytical work, or to develop custom software needed for particular research projects, or tools needed by the institutions.
Most of the vertically integrated ISV stuff moved to Windows, but some of it, such as CAD and VFX, is still rooted in Linux. The other two niches, dev workstations and scientific applications, also went either to Windows or to Linux. So yes, it didn't all go to Linux, but the slices of the pie that didn't go to Windows mostly went to Linux.
It did in certain industries like VFX/CG: it used to be SGI workstations running IRIX, now almost all large high-end VFX companies are running Linux on x86, and have been since ~2003 (there were one or two stragglers for a bit, but some companies like DD moved to Linux (and NT for a bit) in the late 90s on x86).
The main reasons were that the Linux and GNU development environment slowly overtook the Solaris one in convenience; in the end even the Suns mostly ran gcc and the whole open source stack.
Also getting drivers and (patch) installation was fairly easy for Linux and a major headache with Solaris.
Another one does the same in Sun's former offices.
Whereas the current inhabitants are both pure advertising companies, who just happen to use tech as part of their operations.
That is not to say "real" innovation is not taking place, but I get the sense it's just optimization and refinement of internal processes. Maybe VR or high(er) speed networking, but with all the recent IP lawsuits (a copyright on APIs?!) it seems to me that the collaborative spirit of truly new discovery is gone. To use the cliche, I can't shake the feeling that we've discovered everything already.
But what do I know; it all happened before I was even born. I'm just amazed every time I read about all the discovery and experimentation that gave Silicon Valley its name in the first place.
Once you saw the applications ported to NT (CAD, GIS, 3D graphics), people realized they could run them on much cheaper PCs. You would be able to buy 4 Pentium IIs running NT for what a single Sun/SGI would cost. Hell, you could buy two PC workstations for what a graphics upgrade cost on the SGI machine.
Given how baffling Solaris x86 was as a product (is it supported? is anyone at Sun taking it seriously? is hardware coming? are they all breathing swamp gas over there?) over the years to those of us who really wanted to like Solaris x86, this seems like a pretty generous comment.
I think you're right about IBM. I don't know if they were the least poorly managed of the Unix vendors, but they had some real customers because of their mainframe business. They definitely deserve some credit for stretching Linux to that hardware.
IBM had some Unix workstations, but that was never their core. Their core was mainframes. IBM could lose the Unix workstation business without blinking. Whereas Sun, SGI, etc., if they lost the Unix workstations, they were dead, because that was who they were.
That was never really true of SGI.
I remember that some guy at SGI wrote an article titled "Pecked to death by ducks", that argued that SGI would never be done in by commodity hardware - that commodity hardware wasn't powerful enough, so it would be as absurd as being pecked to death by ducks. Then dual CPU machines came out, and so he wrote a sequel: "Pecked to death by ducks with two bills".
In the end, though, SGI got pecked to death by 700-pound, 15-foot-tall ducks - the commodity hardware got better faster than SGI's did.
But in the mid-to-late 90s, I loved SGI machines. Not by 2005, maybe, but in their day, they were great.
However, the market for workstations that start at $50K is really tiny and saturated quickly. When PC workstations with comparable I/O performance appeared and sold for half to a quarter of what SGI wanted, SGI was doomed.
It's funny, there was a brief and very painful period where you could point to things SGI's hardware could still do better than everyone else, but it was a rapidly shrinking category of things and everyone could see they were doomed. It was made more painful by the totally ineffectual ways they attempted to save themselves. Proprietary NT hardware! Itanium! Totally random Linux stuff!
A lot of replacement equipment was ordered from large manufacturers of commodity computing products. Are those highly capable devices workstations? PCs? I think they're both.
Linux also gave IBM for the first time an OS that ran on all its hardware, from low-end x86 to POWER workstations to xSeries/iSeries minis to mainframes to supercomputer clusters, a goal it achieved with System/360 in the 1960s then soon had to abandon when the minicomputer and PC markets emerged.
Typing this from a workstation with 64 GB RAM. Maybe not a lot these days, but this comp is ancient by now and still chugs along nicely.
That's kind of a hard thing to do these days, and there doesn't seem to be a market for it.
If you want to do that now you just look at "server" class equipment, or its "workstation" smaller siblings, to get "truck" features: multiple CPU sockets, more RAM, more PCIe lanes, etc. Add PCIe cards as required for the workload. All the old-school Unix workstation benefits, and it will still be able to run Excel and Outlook, which matters in lots and lots of environments.
Yeah, at the end of the day, commodity GPUs with commodity CPUs make up a workstation. But that's more to do with PCs "catching up", and becoming very similar to a small workstation. After all, absurdly huge SIMD units are useful for 3d video games, and everyone wants to play video games.
You can still buy the high-end GPU (A100) and/or CPUs (EPYC) if you wanted to build your own high-end workstation. But its all commodity parts these days. I think that's a net benefit.
Noteworthy recent models include:
- HP Z4 (Single Socket CPU Intel Xeon) - HP's biggest selling workstation
- Lenovo ThinkStation P620 - First workstation shipping AMD Ryzen Threadripper CPUs
- Apple Mac Pro 2019 - perhaps the most "bespoke" rack mountable workstation on the market, with a neat integrated GPU/thunderbolt implementation thanks to a seriously custom PCI-E implementation
One thing also of note is that a lot of these machines can be rack mounted with a rail kit from the manufacturer, either as a factory option or an after-sales accessory
"Memory: Configure up to 1.5TB of DDR4 ECC memory in 12 user-accessible DIMM slots"
And it can even cost over £10k just like the good old days.
The dumb thing is that most people compare the Mac Pro (a workstation) to a consumer PC, not to a workstation-class PC (ThinkStations from Lenovo or the Z8xx line from HP).
Example: The HP Z8 can be equipped with 3TB of memory: https://www8.hp.com/us/en/workstations/z8.html?jumpid=in_r12...
It's reachable in about two clicks from Dell's homepage. Not that I would buy those over building myself, but it's readily available for those who want it.
You can run those in a browser, so they're poor examples. The only packages that are really lacking are, annoyingly enough, precisely those that required workstations in the past, such as CAD tools. Even Fusion runs very poorly in Wine for some reason.
Once PCs got the PCI bus they were no longer completely inferior to a Sun SPARC while under load. Before that, ISA was a serious bottleneck.
I think the downside of the demise of workstations is mainly a psychological one: we no longer have the ability to marvel at and dream about highly specialized computers offering a significantly different user experience than that of our ordinary home computers. A small text about this: https://datagubbe.se/coolcomp.html
I still dream of Nvidia-style personal supercomputers stuffed full of GPUs and RAM. (Not to mention other accelerators like TPUs, FPGAs, etc.) And quantum machines.
A few years later I downloaded Slackware onto multiple boxes of floppy disks and installed it on my PC at home, and got a comparable development environment for a tiny fraction of the cost. Mind you, the screen wasn't quite as nice...
Yeah, I remember as a kid when I visited my father's work. They had Sun workstations with massive 21" razor sharp (for the time) grayscale monitors, compared to the tiny 14" monitor we had at home for our PC. And they had magical preemptive multitasking where a misbehaving application didn't bring the whole system down! ZOMG!
* 1 MB of RAM
* 1 megapixels on the screen
* 1 MIPS of CPU performance
Edit: 3M was not a set goal of project Athena (they merely meant to use such), but a term coined by Raj Reddy of CMU. The SUN-1 workstation was indeed an early machine meeting that requirement.
People on the same project drove from Milan to Edinburgh with their equivalent workstation in a van to give one demo....
as far as linux on the desktop, that never quite happened... but again i'd wager there are almost certainly more handsets running android on linux than desktop pcs in operating existence.
But Apple are certainly the only large unix-workstation seller left.
i think a lot of apple's comeback, at least in the early days, was fueled by developers and systems people evangelizing them as the usable unix...
Not to mention all the unix-based laptops, tablets, phones, and watches. ;-)
> In high-tech domains, an engineer could readily have a toolchest of suitable computers in the same way that a mechanic has different tools for their tasks. This one has an FPGA connected by both PCI-E and JTAG to allow for quick hardware prototyping. This one is connected to a high-throughput GPU for visualisations; that one to a high-capacity GPU for scientific simulations.
That's just different PCIe cards. You don't need a dedicated vendor for that, just a screwdriver.
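To the "just different PCIe cards" point: on a Linux host, every add-in card (FPGA, GPU, capture card) shows up as an ordinary PCI device, enumerable straight from sysfs. A minimal sketch, assuming a Linux machine:

```python
# Sketch: enumerate PCI(e) devices from Linux sysfs. FPGAs, GPUs and other
# add-in cards all appear here like any other device.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    vendor = (dev / "vendor").read_text().strip()     # e.g. 0x10de for NVIDIA
    device = (dev / "device").read_text().strip()
    pci_class = (dev / "class").read_text().strip()   # e.g. 0x030000 = VGA controller
    print(f"{dev.name}  vendor={vendor}  device={device}  class={pci_class}")
```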
I worked in companies where EDA, HDL, and EE instrumentation machines were purposefully left running 24/7 just because of the lengthy ritual of applying the needed set of hacks to get them running. Or because Windows Update was killing a certain LabVIEW function.
My favourite was a PC with a sticker "DO NOT PLUG INTO INTERNET UNDER ANY CIRCUMSTANCES, WINDOWS UPDATE PIRACY DETECTOR WILL TRIGGER CADENCE ANTIPIRACY DETECTOR"
This has been the case sometimes; Apple, Digital (VMS), and NeXT are good examples of this, but from what I've seen, in most cases the user experience was pretty poor anyway.
Yeah, but that's something we should lament; we're stuck at a local maximum for both. The ability to try out new ideas and architectures has been curtailed because no competitor can get enough momentum to stay alive long enough to displace the entrenched incumbents.
All that's changed is that the architectures for workstations and "business class" desktop machines have converged, and the distinctive operating systems that were needed to support those older workstation architectures have gone away.
What you're describing are workstation-class PC-compatibles, not workstations (as the author defines the term).
What the author is really talking about is the death of systems where the hardware (sometimes all the way down to the CPU architecture) and the OS are produced by the same organization as a single, integrated solution targeted at a specific vertical.
Isn't that the iPhone and iPad?
EDIT: Even the camera sensors + DSP for processing is specially built for phones these days.
Maybe it's not that workstations are dead, but that their shape has changed:
At the core of the modern version is a general purpose computer, and then hardware modules (internal cards, or external peripherals via thunderbolt) are added to make task-specific workstation computing happen.
Most people don't need their computers to be workstations, but that's always been the case I suppose.
So in effect, this is the same argument as game consoles vs. desktop PCs. Every PS5 game can use primitive shaders, because the developers know that every PS5 will have them. For a generic desktop PC game release, though, only 1% of all buyers will have a new enough AMD RX Vega GPU, so it doesn't make sense to invest much resources into supporting primitive shaders there.
That said, pretty much all of the specialized hardware that Workstations used to have in the past is now commonly available everywhere as vertex, compute, or pixel shaders.
So in my opinion, the workstation market died because suddenly everyone gained access to what was previously reserved only for workstation users.
I really don't understand this. Apart from the minor detail that modern (e.g. younger than a decade or so) PCs can go to stand-by mode and wake up within seconds at most, I've yet to come across commodity hard- and software that doesn't do the same.
The current up time of my machine is 4 days 13 hours, 20 minutes - because I ran some system updates last week. Without those, even my mediocre machine running all the software I use on a daily basis (IDEs, web browser, several terminals) just keeps running for weeks on end and never ever crashes.
Am I just lucky or is this "crashes and blue screens"-business just a persistent memory from the late 90s and early 2000s?
It was also a fact that a lot of these machines were expected to be up 24/7 - running mail/file servers etc. OS updates, including kernel updates wouldn't require rebooting.
the choice of components determines whether a windows machine bluescreens all the time or accumulates uptime like a pro.
hard to make an informed choice though, because the market is moving very fast and so "luck" is a huge factor.
I have fond memories of offering an x86+Tomcat+MySQL solution to clients, competing with other solutions based on SPARC+iPlanet+Oracle. The offer was so low compared with previous ones that some customers asked for confirmation.
This CAPEX saving was also mirrored in the development workstations. No need for Sun-based hardware; a commodity PC was enough to develop on without surprises when deploying.
> It’s worth pondering this. One argument against mine, is that “people need and want ‘appliances’ “ that only have one function.
This was in response to a more widely known quote of his:
> Simple things should be simple, complex things should be possible.
The only people who need specialized personal workstations are people working with large data. But even with that, the cost of the cloud has made workstations a poor investment in most cases. I don't think I'd agree with the notion that the demise of the workstation is "untimely." Maybe in a retro, nostalgic, or cultural way its demise is upsetting, but in the larger pragmatic sense it's quite welcome. Workstations always felt like a stop-gap solution for what we have now.
Sadly it looks that isn’t happening and we will end up with nothing more than a web browser on our desks and anything complex running in the cloud.
I don’t like this model at all and it isn’t about culture or nostalgia but rather the loss of control and agency.
Had a company like SGI managed to survive, their business model would likely have remained the same: they might have looked like a boutique nVidia, selling extreme high-end parallelism solutions (i.e. graphics, neural net hardware, etc.) that GPU makers wouldn't yet be able to match with commodity hardware. Probably some combination of extreme transistor count and boatloads of FPGA/specialized silicon coupled with extreme power consumption and cooling. They also would have been able to go after smaller verticals than nVidia can. That was always their game: selling solutions that were relatively harder to produce because they had to be engineered and hadn't been commoditized yet. The problem is, those doing the commoditizing can move faster than those doing the innovating. When those doing the innovating die off, the technology doesn't advance as quickly.
I say this because all of these companies seemed to have a similar fault: they were never able to move down-market so that they could scale up their volumes. The same thing is slowly killing Intel right now.
And then without the operating system you get commoditized and it’s impossible to charge the amount of money these companies were charging.
The jobs that once required powerful proprietary workstations are now done with powerful PC-based workstations running Linux, BSD, or even Windows.
It describes how and why high-margin companies are unable to pivot to lower margins as technology advances, getting left behind.
For me back in the mid 90s, the major differentiator was ethernet, and everything has that now.
Only weird sysadmin geeks seem to believe there is any real value in using some "workstation" with a custom CPU and OS.