Great memories of a small team taking on huge challenges.
If it weren't for the Digital and AMD engineers we'd all be suffering with IA-64, etc... I think it's more likely PPC would have remained a force to be reckoned with in the desktop space.
I think the brief affordability of relatively recent-vintage non-Intel hardware was a combination of the sudden growth of Internet 1.0 funding, fierce competition between hardware companies (Sun was taking Java to the bank), plus Microsoft and IBM trying hard to get out from under Intel's desktop hardware monopoly.
I miss those days! It felt like CPU hardware was this amazing frontier that just kept expanding every day. On the other hand, now you can buy a $10 embedded ARM board that’s faster than a high end AlphaStation from the late 1990s, so that’s definitely just 100% pure nostalgia for my misspent youth.
When I got to the Media Lab as a 21 year old grad student in 1996 I had never actually physically seen a Sun or DEC box. Lots of my new colleagues had worked at tech companies between undergrad and grad school, so they sometimes had old workstations and terminals in their apartments. I remember seeing a headless SPARCStation 10 under a coffee table (presumably running mail, DNS, and Usenet for a few folks) and being blown away. I mean, you could put 512 MB of RAM in that thing.
If the 486 was the final nail in the m68k's coffin, the Pentium was the same for the high-end RISCs.
I have fond memories of building Gnome 1.x from source on it. It felt really fast compared to the Pentium 2s and 3s of the day.
The initial PowerPC port was done by three people: Jes Wills, Andy Giles and Tony Sumpter. When they traveled to Austin, TX to present the work, the AIX software manager who had been in charge of the ThinkPad Woodfield (6020) effort wanted to know where the rest of the team was. Her team was over 40 people, and they'd taken over 18 months to get to a port.
The Tadpole team for the initial port was 3 people, and had taken six months.
There was a ritual on the engineering team that you were to drink a "gallon of Abbot" on your 30th (birthday). The gallon, of course, was an Imperial gallon (8 20oz pints, or 4.54 litres). The Abbot was Greene King's Abbot Ale, which is 5% ABV.
You have to start early to get it done, and it still wrecks you.
The US Navy was very upset that the Tadpole case was made of AZ91D magnesium (alloy). They wouldn't let it on board anything because Mg + 2H2O → Mg(OH)2 + H2.
Tadpole used it because it has a similar density (1.81 g/cm^3, 1.046 oz/in^3) to ABS/PC blend with 20% glass fiber (1.25 g/cm^3), and it has high heat conductivity, so we used the case as a heat sink. AZ91D has an ignition temperature of approximately 468°C (875°F), which is difficult to reach and maintain due to magnesium's high heat conductivity.
A magazine (I forget which one) wanted to do a story on how the laptop would stop bullets. So we took a couple dead systems to the local gun range (Red's Indoor Range in Austin, TX) and tried it out. It wouldn't stop a .22, but that didn't stop us from blowing the hell out of the rest with larger calibre firearms. (Several of us were good friends with the people at Red's back then.) Whenever the Brits came over, they wanted to go to Red's, and we were all too willing to take them. The engineering area in Cambridge (England) was decorated with the silhouette targets they took back to show their efforts.
George Grey (CEO) and Bob Booth (CFO) went on to found and run GeoFox, which made a Psion 5 work-alike (http://www.ericlindsay.com/epoc/geofox.htm). It didn't work out, so George became the President of Psion USA, and then the President and CEO of SavaJe, which became JavaFX Mobile after it was acquired by Sun. SavaJe apparently inspired Android. https://www.bizjournals.com/boston/blog/mass-high-tech/2010/...
George and Bob were most recently at Linaro, and are now at foundries.io, both in the same positions they held at Tadpole.
Wow, I forgot about that machine until this post.
Really? The PowerBook in 1997 supported up to 160 MB: https://en.wikipedia.org/wiki/PowerBook_G3#Models
I don't think this was particularly unusual. HP's laptop of the day supported up to 160 MB, as well: http://www.computinghistory.org.uk/det/37569/HP-OmniBook-570...
Having the sockets to support up to 160 MB was one thing; having long enough arms and deep enough pockets to pay for it was something else. Hence 'unimaginable'.
Nowadays you get a couple of sockets in your laptop; back then, whether it was a laptop or a workstation, you'd have lots and lots of sockets for memory, usually with very few of them filled.
In the June 1998 MacWorld, you could order a 128 MB upgrade for your brand new PowerBook G3 for $389. Not cheap, but not that extravagant.
Sure, the chips were commodity, but the modules would have a different pinout. They would be advertised as better.
Another problem was that the more memory you bought, the more it cost per megabyte. A set of chips to fill the sockets with 128 MB would cost three times as much as a set that filled them to give 64 MB.
Remember that back then, hardware evolved really fast; I had a 128 MB laptop in 2000. So it went from "unheard of" to "standard" in 5 years or so.
My current work machines have had 8 GB for 7 years :)
I was using a TI TravelMate 4000M (made by Acer) from 1995 to 1998, it was a graduation gift from my estranged father. It propelled me into modern (for the time) computing and set me on the course to the career I have today. I never upgraded beyond 8MB of RAM (it came with 4MB on board and supported a 4MB or 16MB additional module) but that was enough to do what I needed at the time.
I would have loved to have an advanced workstation laptop like in the OP article, but I didn't have $21k lying around for something like that.
> Really? The PowerBook in 1997 supported up to 160 MB
I think it would have been pretty unusual to have that much RAM installed at that time. I remember my parents upgrading our PC from 8MB to 16MB at about that time (or maybe 16MB to 32MB), and that was more than any of my friends had.
I had a Toshiba Tecra that was loaded from a memory perspective and that device retailed around $4k.
Might have been something as crazy as installing RAM with a custom address line soldered on, etc...
The 3GX from 1994 also supported 128 MB, but the SIMMs had to be low-profile and fast enough... so a bit hard to source.
That brought back a memory. I was working at a place in the 90s that got one of those SPARCbooks that we were setting up for a customer, but we didn't have a SCSI CD-ROM drive to install with. I actually went to my previous employer (a Sun workstation support team in the computer center of a university) who kindly let us use one of theirs on site.
Netscape 4.76 on Solaris 8. It took a while to find a website that still rendered.
I think he did.
That might give you a glimpse into why it's useful to split off different datasets. I might want to tune my database (/var/lib/postgres) for throughput, while my home directory is tuned for maximum compression & encrypted, while my public fileshares are unencrypted, etc.
Also it's often useful to have different partitions if you need different filesystems, not every filesystem is suited for every particular usecase. Also sometimes you're constrained by other software: for instance many older bootloaders would typically have very limited filesystem support; even today your EFI system partition needs to be FAT formatted, which should serve as an obvious reason why you'd want to segregate `/boot/efi` from the rest of your system.
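To make the first point concrete, here's a rough sketch of the kind of per-dataset tuning I mean, in ZFS terms (the pool name and property values are just illustrative, not recommendations):

    # throughput-oriented dataset for the database
    zfs create -o recordsize=8k -o logbias=throughput tank/postgres
    # home directory: heavier compression plus encryption
    zfs create -o compression=zstd -o encryption=on -o keyformat=passphrase tank/home
    # public file share: cheap lz4 compression, no encryption
    zfs create -o compression=lz4 tank/share

And for the bootloader point, the EFI system partition still has to be FAT regardless of what the rest of the disk uses (mkfs.fat -F 32 on the ESP, mounted at /boot/efi).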
Modern smartphones do this. On Android, for example, you'll find a filesystem for system binaries generally mounted read-only, a filesystem for the base OS and system apps, and another filesystem for user-installed apps. If a rogue app fills the entire filesystem it resides on, the system apps can still function.
For the most part however people have decided that slicing up your disk into multiple partitions isn't worth the hassle anymore, and almost all distros just dump everything on a giant /.
If / or /var run out of space (even now) then lots of daemons get into quite a bit of trouble. Anything from hanging to consuming a lot of CPU in the disk allocator.
It's often not possible to ssh into a machine where / or /var has filled up.
There was also the problem of file system robustness. Whilst things were a lot better than the non-UNIX platforms of the day, things weren't quite as good as they are today. These filesystems often were not journalled. That meant that if you had a power failure you could lose the entire volume. (I've personally had at least one / partition be destroyed by fsck after a power failure. I was so, so glad I didn't lose /home, /opt, and /usr/local as well!)
Whether you were the user or the administrator dictated which partition you preferred to survive, but at least it added some robustness. It's nice to have things separated based on how you will restore them: / and /usr will come from the vendor; /opt will probably be a whole load of different media from all over the place.
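For anyone who never ran one of these systems, a typical multi-partition layout of that era looked something like this in /etc/fstab (device names and filesystem types are just placeholders):

    /dev/sda1   /       ext2   defaults   1 1
    /dev/sda2   /usr    ext2   defaults   1 2
    /dev/sda3   /var    ext2   defaults   1 2
    /dev/sda5   /home   ext2   defaults   1 2
    /dev/sda6   /opt    ext2   defaults   1 2
    /dev/sda7   swap    swap   defaults   0 0

A runaway /var/log or an overstuffed /home could fill its own slice without touching / at all.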
*Of course it was possible to have root and swap on NFS as well (for truly diskless).
The functional equivalent to this is iSCSI, which as far as I understand is similar to if not literally what AWS uses under the covers for EC2 block devices.
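As a rough sketch of what that looks like with the Linux open-iscsi tools (the portal address and IQN below are made-up placeholders), attaching a remote volume so it shows up as a local block device is roughly:

    iscsiadm -m discovery -t sendtargets -p 192.0.2.10
    iscsiadm -m node -T iqn.2004-01.com.example:storage.vol0 -p 192.0.2.10 --login
    # the LUN then appears as something like /dev/sdb, ready to partition and format

From there the kernel just sees a disk, which is why it can sit underneath a root filesystem much the way NFS root used to.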
Back when disk-SSD hybrid storage was still a novel idea (~2006), one of the guys at Sun had an interesting demo with ZFS.
He first created a bunch of storage pools on a server on the east coast from iSCSI volumes on the west coast, installed Postgres, and ran a benchmark. He next created a pool from the same iSCSI volumes but added a local SSD as a ZIL/slog device (for caching), and ran the same benchmark.
The SSD-enabled pool came quite close to local-disk performance, despite being backed by a remote SAN.
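If I'm remembering the mechanics right, recreating that demo is only a couple of commands (device names below are placeholders):

    # pool backed entirely by the remote iSCSI LUNs
    zpool create tank /dev/sdb /dev/sdc
    # add a local SSD as a separate intent log (SLOG) so synchronous writes land locally
    zpool add tank log /dev/nvme0n1

The idea, as I understood it, was that only the synchronous ZIL traffic needs local latency; the bulk writes can trickle out to the far end of the wire asynchronously.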
Depending on your compute needs, it could have made more sense to do fully-remote X.
So if many people were running intensive applications like MATLAB or Mathematica, then root/swap-over-NFS made sense, so that you could use your local workstation's CPU all to yourself and not bog down the central server. If most people were doing modest things (xterm, mail, browsing), then running everything on the central server and displaying it on dumb X11 terminals wasn't a bad idea.
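For the thin-client case, "running it on the server" was literally just pointing the program's DISPLAY back at the terminal on your desk, something like (hostnames are hypothetical):

    # logged in to the central compute server, UI drawn on the X terminal
    DISPLAY=xterm42:0 mathematica &

whereas on an NFS-root workstation the binary came over the network but burned your own CPU and RAM.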
Caveat - run a Unix machine you don't care about (i.e. a disposable VM) out of disk space.
It's much easier to recover a machine if the storage is segmented. Yeah, a single partition is more convenient, but multiple partitions are more resilient.
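If you do try the disposable-VM experiment, the triage usually goes something like this (GNU flags assumed):

    df -h                                          # which filesystem actually filled up?
    du -xh /var 2>/dev/null | sort -h | tail -20   # biggest offenders, without crossing into other filesystems (-x)

With everything on one giant /, a runaway log file and your home directory are in the same boat; with separate partitions the blast radius is at least contained.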
Sun was hugely into NFS, which was both a blessing and a curse -- you'd spend 4-5 figures each on a fleet of workstations with only 105 MB local drives, and they'd tend to all end up hanging frequently, because of a server burp or network burp. It didn't even have to be a burp on a filesystem you were using -- it only had to be (stale) mounted.
(I'm not this old, but I had the excessively good fortune to have access to super-cool computers and Internet as a kid.)
A modern Windows install has something like 4 partitions these days for the C drive: one's labeled restore, one's C, and I have yet to care enough to figure out the other two. On boot, though, you see just C.
Please tell me someone else has a screw loose and wants to do this too.