
The IBM Pentium 4 64-Bit CPU - sohkamyung
http://www.cpushack.com/2019/10/01/the-story-of-the-ibm-pentium-4-64-bit-cpu/
======
hapless
You may wonder, what was in this for IBM? The answer is fairly
straightforward. IBM used to make proprietary chipsets for Intel chips!

The pride of the xSeries line was its "complex" setups -- multiple
chassis -- with up to 32 sockets and 512 GB of RAM. These required a lot of
IBM internal engineering: they made pin-compatible sockets for what
Intel was offering at the time, and glued those chips into _hugely_ different
topologies than Intel had in mind.

These systems sound small today, but back in 2001, this was a really big deal
for x86. The IBM-proprietary chipsets were much more expensive than off-the-
shelf systems, but still a fair bit cheaper than going with NCR or Unisys,
competing vendors with proprietary x86 MP designs.

IBM had a lot at stake when that socket changed. Achieving pin-compatibility
is hard! Intel is very jealous of its documentation, and prefers to offer
paper only, with watermarked copies. It's like owning a Gutenberg Bible.
Engineering something pin-compatible with an Intel x86 CPU has never been
easy.

It was no doubt worth it to their "big" x86 server business to ask Intel to
make a special run of chips with the old socket layout but the new EM64T
extensions. I bet it was a complete no-brainer compared to the cost of
integrating a new socket!

~~~
walrus01
In the era of dual-socket, single-core Xeons, ServerWorks also produced
non-Intel chipsets -- north and south bridges -- for motherboard makers to use.
There were Tyan and Supermicro boards that used them. They were common at the
time in 1RU and 2RU servers, and also found on big quad-socket Supermicro
boards.

From 2002: [https://www.extremetech.com/extreme/73498-serverworks-
ships-...](https://www.extremetech.com/extreme/73498-serverworks-ships-grand-
champion-chipset)

I actually think the IBM systems used the Grand Champion chipset.

[https://www.supermicro.com/products/motherboard/Xeon/GC-
HE/P...](https://www.supermicro.com/products/motherboard/Xeon/GC-HE/P4QH6.cfm)

ServerWorks was later acquired by Broadcom.

~~~
jumpingmice
IBM was locked in a brutal fight with HP and Dell to take share in the early
x86 server market. Their chipsets were neat, but they looked silly next to an
HP Opteron box. Integrated memory controllers ended the chipset race.

Still, their chipsets had neat features worth remembering, such as the ability
to use local main memory as a last-level cache for memory read from remote
nodes. And of course they went to 64 sockets, which was respectable.

~~~
hapless
These are pretty bold words, my friend.

Opteron had a vastly better architecture than the Intel chipsets of the day,
but it topped out at four or eight sockets, I forget which.

IBM, at the time, could offer you Opteron-like architecture and performance
with up to 32 sockets, using Intel chips. That was worthwhile to some
customers. "Intel" wasn't the selling point; it was x86 or x86-64, with
"big" as the selling point.

I'm not here to apologize for Intel. I'm just saying, those IBM proprietary
chipsets had their nice bits.

~~~
p_l
Early Opteron (with Socket 940) topped out at 8 sockets _glueless_ -- with the
same chipset one could drive a Socket 754 chip.

However, with custom glue logic, one could expand it very far. One offering in
that space was the "Horus" chipset, which connected 4-socket nodes with an
external fabric (InfiniBand, IIRC) to create 64-socket systems. A similar
tactic was (and still is) used by SGI in its UltraViolet systems, which apply
the same principle using NUMAlink fabric and Xeon CPUs.

------
peter_d_sherman
Excerpt:

"Having such a unique processor at your disposal, it’s absurd not to build a
powerful x64-retro system on it. One of the options for using such a system in
general can be to build a universal “PC-harvester” that supports all Microsoft
operating systems from DOS to Windows 10."

All Microsoft OSes on the same PC? Cool! Also, if that's indeed the case, my
guess is that most versions of x86 Linux and other x86 OSes, historic to
present, would work too... which is no small feat for a single PC...

------
mmansoor78
I really admire all the lengths the author went to for this. Hard to see these
kinds of efforts nowadays.

~~~
calmworm
Right. I imagined this as a loose script to a Netflix special.

------
klingonopera
I was once looking for a used Intel Core 2 Duo processor on eBay -- not sure, I
think it was the E6300, 7x266 MHz, great FSB overclocking potential -- and then
settled on buying a CPU that was advertised as such, but its lid had " _Intel
Confidential_ " on it.

IDK, did the seller delid and relid it with something exotic? Or did I snag
one of the first prototypes? What was up with that lid? Does anyone have a
clue?

The motherboard identified it as an E6300, and it had amazing overclocking
potential: it went to 7x333 MHz without any voltage increase. Not sure what
the max was, but considering only a few people were bidding on it (I'm
guessing most were turned away by that lid), I was quite lucky to get a CPU
with a lot of potential for very little cash.

~~~
planteen
I've had Intel engineering samples from work (we got them under NDA and such).
They were tossing some Sandy Bridge engineering samples at some point and let
us take them home. The hardware was buggy and didn't get microcode updates,
IIRC. The case and mobo were leaf-blower loud and very unwieldy. I could run
Linux for a few hours before it would segfault. I ended up trashing it. So I
don't think an Intel engineering sample is better; I think it is worse.

~~~
klingonopera
Did those have "Intel Confidential" on them?

The CPU I bought was working fine though, the seller guaranteed it and he had
the reputation on eBay to back it.

My (possibly naive) logic then was that an early sample was likely to be made
from the best silicon, which often correlates with good OC potential...

------
earenndil
> Having such a unique processor at your disposal, it’s absurd not to build a
> powerful x64-retro system on it. One of the options for using such a system
> in general can be to build a universal “PC-harvester” that supports all
> Microsoft operating systems from DOS to Windows 10.

I thought all modern intel/amd cpus are backwards compatible back to the 8086,
and so capable of doing that?

~~~
userbinator
Unfortunately not, the biggest change being "pure" UEFI-without-legacy-BIOS
firmware, which a lot of motherboards already have.

That, and the question of drivers for OSes newer than DOS (which is less of a
problem, since drivers can still be written, and doing so is easier than
changing the BIOS; the existence of USB drivers for DOS and HD Audio drivers
for Windows 3.1x[1] are examples of that).

Modern x86 CPUs are _theoretically_ still backwards compatible, but I suspect
they don't test things like 16-bit mode and VME[2] much anymore.

[1]
[http://www.vcfed.org/forum/showthread.php?50867-Windows-3-1-...](http://www.vcfed.org/forum/showthread.php?50867-Windows-3-1-drivers-
for-newer-hardware)

[2]
[https://news.ycombinator.com/item?id=14328237](https://news.ycombinator.com/item?id=14328237)

~~~
mshook
Intel is supposedly dropping support for booting in 16-bit mode next year.

[https://arstechnica.com/gadgets/2017/11/intel-to-kill-off-
th...](https://arstechnica.com/gadgets/2017/11/intel-to-kill-off-the-last-
vestiges-of-the-ancient-pc-bios-by-2020/)

[https://www.zdnet.com/article/intel-were-ending-all-
legacy-b...](https://www.zdnet.com/article/intel-were-ending-all-legacy-bios-
support-by-2020/)

~~~
userbinator
Those disturbingly happy articles should really be titled something like "the
openness of the PC is about to end forever" --- because that was likely the
end goal all along.

~~~
core-questions
These new machines are not PCs. It's not out of the question for some company
to build a new BIOS-architecture machine in the future, though it will
probably eventually require sourcing x86 chips from vendors other than the Big
Two, as crypto keys will probably be required to even boot the chip, if they
aren't already.

------
Animats
What's striking is that 3.4 GHz 64-bit Intel CPUs were shipping 15 years ago.
That's still about where we are, but with more cores per package.

~~~
jcranmer
Frequency stalled because we stopped being able to efficiently cool the CPUs
at that point. Heat output is proportional to power consumption, which is
roughly proportional to the cube of frequency: dynamic power scales with
frequency times voltage squared, and to drive higher frequencies you often
need to raise the voltage as well, which compounds the effect.
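As a rough back-of-the-envelope (a sketch, assuming dynamic power P ≈ C·V²·f
and voltage rising roughly linearly with frequency, so P ~ f³), pushing a core
from 3.4 GHz to 4.6 GHz would cost about:

```shell
# Hypothetical frequencies; relative power under the rough P ~ f^3 scaling.
awk 'BEGIN { f0 = 3.4; f1 = 4.6; printf "%.2f\n", (f1 / f0) ^ 3 }'
```

That's roughly 2.5x the power for a ~35% frequency bump, which is why cooling
became the wall.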

~~~
aeyes
That can't be the only reason; with every shrink, power requirements have been
reduced. The P4 had up to a 115 W TDP. The same frequency and raw performance
could probably be achieved at 15 W today. But you can't get most current CPUs
to run stably beyond a 4 GHz base clock, even with liquid cooling.

~~~
fivefive55
Not sure what CPUs you're looking at, but everything in the desktop space is
over 4 GHz nowadays. My last CPU was at 4.6 GHz its entire life, and my 3900X
stays over 4 GHz on all 12 cores when doing a render.

~~~
aeyes
Your CPU has a 3.8 GHz base clock (guaranteed); it can't sustain a 4.6 GHz
turbo clock on all cores.

~~~
fivefive55
Yes, I know, but it never actually goes below 4.0 GHz for me on all cores when
doing a render. The 4.6 was referring to my 4770K, which ran at 4.6 GHz on all
cores all the time.

------
icodestuff
Prescott was 64-bit from the first stepping (C0), but EM64T was disabled with
fuses. It wasn't just introduced for E0, and it was disabled the same way in
the G steppings that were 32-bit. A change that big is too large to do in a
mask change (the letter incrementing in a stepping indicates a mask change,
the number a metal change); it would have to be a microarchitectural one.
There are probably engineering samples of C and D steppings floating around
out there that don't have those fuses blown.

The same is true for the LGA775-based 32-bit Prescotts: 64-bit disabled by
fuses.

------
joering2
Was actually a pretty good read. How far are we from a 128-bit architecture?

~~~
p_l
IBM's AS/400 (and all of its renames) is a 128-bit architecture. The huge
address space is beneficial for implementing capability-based security on
memory itself, plus a single-level store for the whole system (addresses span
RAM and secondary storage like disks, NVMe, etc.).

~~~
qubex
One of my mentors is one of the IBM engineers who developed the original
AS/400's capability-based security architecture way back in the early
eighties. I can confirm that (according to her) the 128-bit addressing was
indeed a very convenient way of implementing the system. However, nobody
ever expected (nor expects, I suspect) that those addresses will ever be used
to actually address that amount of memory. It's a truly astronomical amount of
memory, on the order of grains-of-sand-on-countless-planets...

~~~
simonh
To put it another way, it's not just enough to count the grains of sand on a
beach; it's enough to count all the atoms in all the grains of sand on planet
Earth. Give or take a few orders of magnitude[1].

[1][https://www.explainxkcd.com/wiki/index.php/2205:_Types_of_Ap...](https://www.explainxkcd.com/wiki/index.php/2205:_Types_of_Approximation)

~~~
exikyut
$ echo '2^128' | bc | rev | sed 's/.../&,/g;s/,$//' | rev

340,282,366,920,938,463,463,374,607,431,768,211,456

...rrright.

Actually let me line up the
[https://en.wikipedia.org/wiki/Orders_of_magnitude_(data)](https://en.wikipedia.org/wiki/Orders_of_magnitude_\(data\))

    
    
      340,282,366,920,938,463,463,374,607,431,768,211,456
              ..  ??  YB  ZB  EB  PB  TB  GB  MB  KB
    

(There's no meaningful notion of size here -- the labels are just to show how
much data you could fit in 128 bits of address space.)

 _Blinks a few times_

 _Ultimately fails to mentally grasp and make useful sense of the number due
to its sheer size_

As an aside, apparently DNA can store a few TB/PB (I don't remember which).
The age of optimizing for individual bytes as a routine part of "good
programming" is definitely over, I guess. (I realize this discussion is about
address space and not capacity, but still)

------
walrus01
This really isn't anything special; at the time of release it was a low-end
single-socket 1RU server. It overlapped with available 64-bit LGA775 versions
of the same product.

