
Oh, it's a lot longer story than that. I worked at SGI from just around its peak to its downfall, watching the company shrink to a tenth of its size while cutting products.

At the time, I was a fairly junior employee doing research in AGD, the Advanced Graphics Division. I saw funny things which should have led me to resign, but I didn't know better at the time. Starting in the late 90s, SGI was feeling competitive pressure from 3dfx, NVIDIA, 3DLabs, and Evans and Sutherland (dying, but big), and they hadn't released a new graphics architecture in years. They were selling Infinite Reality 2s (which were just a clock increase over IR1) and some tired Impact graphics on Octanes. The O2 was long in the tooth. Internally, engineering was working on next-gen graphics for both, and both projects were dying of creeping featuritis. Nothing ever made a deadline; schedules kept slipping by months. The high-end graphics pipes meant to replace Infinite Reality never shipped because of this, and the "VPro" graphics for Octane were fatally broken on a fundamental level, where fixing them would have meant going back to the algorithmic drawing board and taping out a new chip, not just some Verilog tweak. Why was it so broken? Because some engineers decided to implement a cool theory and were allowed to do it (no clipping, recursive rasterization, Hilbert space memory organization).

At the same time, NVIDIA was shipping the GeForce, 3dfx was dying, and these consumer cards processed many times more triangles than SGI's flagship Infinite Reality 2, which was the size of a refrigerator and pulled kilowatts. SGI kept saying that anti-aliasing was its killer feature and the reason we continued to sell into visual simulation and the oil and gas sector. The line rendering quality on SGI hardware was far better as well. However, given that SGI hadn't been able to ship a new graphics system in perhaps six years at that point, while NVIDIA was launching a new architecture every two years, the reason for big-money customers to use SGI quickly disappeared.

As for Rick Belluzzo, man, he was a buffoon. My first week at SGI was the week he became CEO, and in my very first all-hands ever, someone asked something along the lines of, "We are hemorrhaging a lot of money, what are you going to do about it?" He replied with, "Yeah, we are, but HP, E&S, etc., are hemorrhaging a lot more and they have less in the bank, so we'll pick up their business." I should have quit my first week.




Trying to be a seller of very high-end computer products while also doing your own chips and graphics at the same time is quite the lift. And meanwhile their market was under massive attack from the low end.

The era where companies could do all that and do it successfully kind of ended in the late 90s. IBM survived, but nothing can kill them; I assume they suffered too.

What do you think could have been done, going back to your first day, if you were CEO?

I always thought that for Sun, open-sourcing Solaris, embracing x86, becoming Red Hat, and eventually the cloud could have been the winning combination.


> What do you think could have been done, going back to your first day, if you were CEO?

Not quite sure. You correctly pointed out that SGI (and HP, Sun, everyone else in the workstation segment) was suffering with Windows NT eating it from below. To counter that, SGI would need something that could compete on price. IRIX always had excellent multiprocessor support and, with transistors getting smaller, adding more CPUs could give it some breathing room without doing any microarchitectural changes. For visualization hardware the same applies - more dumb hardware with wider buses on a smaller node costs about the same while delivering better performance. To survive, they needed to offer something different enough from Windows NT boxes (on x86, MIPS and Alpha back then) while maintaining a better cost/benefit ratio (and compatibility with software already created). I'd focus on low-end, entry-level systems that could compete with the puny x86's by way of superior hardware-software integration. The kind of thing Apple does, where you open the M1-based Air and it's out of hibernation before the lid is fully open.

> I always thought that for Sun, open-sourcing Solaris, embracing x86, becoming Red Hat, and eventually the cloud could have been the winning combination.

I think embracing x86 was a huge mistake by Sun - it helped legitimize it as a server platform. OpenSolaris was a step in the right direction, but their entry-level systems were all x86 and, if you are building on x86, why would you want to deploy on much more expensive SPARC hardware?

Sun never even tried to make a workstation based on Niagara (first gen would suck, second gen not so much), and OpenSolaris was too little, too late - by then the ship had sailed and technical workstations were all x86 boxes running Linux.


> IRIX always had excellent multiprocessor support and, with transistors getting smaller, adding more CPUs could give it some breathing room without doing any microarchitectural changes.

That's kind of exactly what Sun did, and it likely gave them legs. They might not have made it out of the 90s otherwise.

> I think embracing x86 was a huge mistake by Sun - it helped legitimize it as a server platform.

x86 was simply better on performance. I think it would have happened anyway.

> OpenSolaris was a step in the right direction, but their entry-level systems were all x86 and, if you are building on x86, why would you want to deploy on much more expensive SPARC hardware?

That's why I am saying they should have dropped SPARC already in the very early 2000s. They wasted so much money on machines that were casually owned by x86.


> That's why I am saying they should have dropped SPARC already in the very early 2000s

They had two SPARC architectures - the big-core, high-performance one and Niagara, the many-wimpy-cores one - and Sun never thought about combining both on the same machine, which is more or less what x86 is doing now because they are being forced to do it by Apple and its M1. Sun was there in the early 2000s.

There's no fundamental reason why x86 has to be faster than SPARC; in fact, SPARC machines trounced x86 ones.

Another thing that killed Sun was that it could never decide whether it was Apple or Microsoft - it never decided whether it wanted to make integrated hardware or become a plain software company.


> There's no fundamental reason why x86 has to be faster than SPARC; in fact, SPARC machines trounced x86 ones.

Other than Intel having 100x more money and more architects with better nodes...

SPARC was already worse by 1998.

It really only continued to make money because companies couldn't figure out how to scale horizontally and instead paid absolutely absurd amounts for these multi-core machines.

Some of the supercomputer people showed how they could totally destroy Cray, Sun, and so on with simple clusters of x86.

Sure, had they perfectly executed SPARC and hit on every investment they might have done a lot better, but that just wasn't the reality. Intel simply executed much better. SPARC had all kinds of development failures in the 90s, and by the late 90s Intel had better nodes in addition to a better architecture.

> Another thing that killed Sun was that it could never decide whether it was Apple or Microsoft - it never decided whether it wanted to make integrated hardware or become a plain software company.

I think they did want to be Apple, but they simply were not that good at making products. They were actually better at making software, but then didn't do very well at making products based on that.

They did some good stuff like Fishworks, the AMD x86 servers, and so on.

They should really have turned into Open Apple, Red Hat, and AWS. With OpenSolaris and Zones on custom x86 machines they could have dominated the cloud space (they even had products going in that direction early in the 2000s).


SGI also offered x86-based machines, of all things running NT or Win2K. That was when the writing really was on the wall.


Precisely. When you start offering x86 boxes with Windows, it's obvious your own architecture and OS are dead.

But I don't remember those. I remember Intergraph did (and they were quite good, but died quickly).



Oh wow... In my brain I was confusing it with the Intergraph one.

I also was not surprised to see Rick Belluzzo's name in the press release... The devastation that man caused has no parallel in the history of personal computing. It's comparable to what Stephen Elop did in the mobile space.

I used to joke that Microsoft had perfected the use of executive outplacement as an offensive weapon.


I think some kind of discipline around releasing products in a timely way by cutting features would have done wonders. However, the kinds of computers SGI built were on the way out, so they couldn't have survived without moving in the direction that people wanted. Maybe it was a company whose time had come. SGI wasn't set up to compete with the likes of NVIDIA and Intel.


Why couldn't they compete with NVIDIA? Were they not just as big?


The PC market grew bottom up to be 10x the size of the workstation market during the 90s. Even with thinner margins, eventually workstation makers couldn't compete any longer on R&D spend.

The book The Innovator's Dilemma describes the process.


^meant thinner margins of PC industry.


Engineering culture. SGI was not pragmatic in building hardware; it was more of an outlet for brilliant engineers to ship experiments.


I can see how that was your view if you came in on the tail end, but it definitely wasn't always so. I've owned quite a few of them, and if you had the workload, they delivered - at a price. For what they could do, they were 3 to 4 years ahead of the curve for a long time, and then in the space of a few short years it all went to pieces. Between NVIDIA and the incredible clock speed improvements on x86, SGI was pretty much a walking zombie that did not manage to change course fast enough. But building the CPU, graphics pipeline, machine, and software to go with it is an expensive game to play when your unit volume is smaller than that of any of your competitors who have specialized.

I'm grateful they had their day. I fondly remember IRIX and got many productive years out of SGI hardware; my career would definitely not have taken off the way it did without them. In fact, the whole 'webcam/camarades.com/ww.com' saga would never have happened if the SGI Indy hadn't shipped with a camera out of the box.


I wasn't familiar, so I searched and found your great account of the history! https://jacquesmattheij.com/story-behind-wwcom-camaradescom/


Fun times! Also frustrating, but an excellent school, and all's well that ends well.


Do you know anything about the rumor that an O2 successor was prototyped that used NVIDIA graphics? (I think I read that on Nekochan long ago).

The slow pace and poor execution of CPU and graphics architectures after ~1997 are crazy to think about. The R10000 kept getting warmed over; same for IR, VPro, and the O2.

The Onyx4 just being an Origin 350 with ATI FireGL graphics (and running XFree86 on IRIX) was the final sign that they were just milking existing customers rather than delivering anything innovative.


> NVIDIA was launching a new architecture every two years

At the peak of the NVIDIA-3dfx war, new chips were coming out every 6-9 months:

Riva 128 (April 1997) to TNT (June 15, 1998) took 14 months, TNT2 (March 15, 1999) 9 months, GF256 (October 11, 1999) 7 months, GF2 (April 26, 2000) 6 months, | 3dfx dies here |, GF3 (February 27, 2001) 10 months, GF4 (February 6, 2002) 12 months, FX (March 2003) 13 months, etc...


In many cases, an executive's behavior makes sense after you figure out what job he wants next.


Thank you so much for your inside story. Hilbert space memory organization sounds great :)


Texture memory is still stored like that in modern chips (presuming they meant Hilbert curve organization). It's so that you can access 2D areas of memory but still have them close by in the 1D layout, which makes it work well with caching.


Is it really Hilbert?

A project I worked on a couple of decades ago interleaved the bits of the x and y indexes to get that effect "for free"; I imagine a Hilbert curve decode would take quite a bit of silicon.
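
For illustration, a minimal sketch of that bit interleaving (Morton/Z-order) in C - hypothetical 16-bit coordinates, nothing to do with any particular chip's actual scheme. In hardware the interleave really is "free": it's just how the address wires get routed.

    #include <stdint.h>
    #include <stdio.h>

    /* Spread the bits of a 16-bit value into the even bit positions
       of a 32-bit word, e.g. 0b1011 -> 0b01000101. */
    static uint32_t spread_bits(uint16_t v)
    {
        uint32_t x = v;
        x = (x | (x << 8)) & 0x00FF00FF;
        x = (x | (x << 4)) & 0x0F0F0F0F;
        x = (x | (x << 2)) & 0x33333333;
        x = (x | (x << 1)) & 0x55555555;
        return x;
    }

    /* Morton (Z-order) address: interleave x and y bits so texels that
       are neighbors in 2D land near each other in the 1D layout. */
    static uint32_t morton2d(uint16_t x, uint16_t y)
    {
        return spread_bits(x) | (spread_bits(y) << 1);
    }

    int main(void)
    {
        /* A 2x2 block of neighboring texels maps to 4 consecutive addresses. */
        for (uint16_t y = 2; y < 4; y++)
            for (uint16_t x = 2; x < 4; x++)
                printf("(%u,%u) -> %u\n", (unsigned)x, (unsigned)y,
                       (unsigned)morton2d(x, y));
        return 0;
    }

Whichever curve the hardware actually follows, the payoff is the same: a cache line ends up holding a small square tile of texels instead of a long thin row.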


Ah right, you're right - it's probably the interleaved bits (aka Morton code) in actuality. Or, more likely, something tuned to the specific cache sizes used in the GPU.


I have no clue what Hilbert space memory organization could possibly be - arbitrarily deep hardware support for indirect addressing? - but it sounds simultaneously very cool and like an absolutely terrible idea.


The framebuffer had a recursive rasterizer which followed a Hilbert curve through memory, the thinking being that you bottom out the recursion instead of performing triangle clipping, which was really expensive in hardware at the time.

The problem was that when you take some polygons which come close to W=0 after perspective correction, their unclipped coordinates get humongous and you run out of interpolator precision. So, imagine you draw one polygon for the sky, another for the ground, and the damn things Z-fight each other!

SGI even came out with an extension to "hint" to the driver whether you want fast or accurate clipping on Octane. When set to fast, it was fast and wrong. When set to accurate, we did it on the CPU [1].

1 - https://www.khronos.org/registry/OpenGL/extensions/SGIX/SGIX...
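
A toy numeric sketch of the precision blow-up described above (made-up values, not the actual Octane interpolators): hold clip-space x fixed and let w shrink toward zero, and the unclipped screen coordinate x/w explodes past whatever fixed precision the interpolators have.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x = 1.0;                  /* clip-space x, arbitrary */
        for (int e = 0; e <= 6; e++) {
            double w = pow(10.0, -e);    /* vertex approaching w = 0 */
            printf("w = %.6f  ->  x/w = %.0f\n", w, x / w);
        }
        return 0;
    }

With clipping, such vertices never reach the rasterizer; without it, two nearly coplanar polygons (the sky and the ground above) can interpolate to depths that differ by more than the precision left, hence the Z-fighting.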


Nowadays all GPUs implement something similar (not necessarily Hilbert, but Morton order or the like) to achieve a high rate of cache hits when spatially close pixels are accessed.

3D graphics would have terrible performance without that technique.


Got it. I was imagining something else entirely.



