AT&T Hobbit (wikipedia.org)
92 points by kick on Feb 5, 2020 | 21 comments

> They found, as RISC designers would have expected, that without a load-store design it was difficult to improve the instruction pipeline and thereby operate at higher speeds. They felt that all future processors would thus move to a load-store design, and built Inferno to reflect this. In contrast, Java and .NET virtual machines are stack based, a side effect of being designed by language programmers as opposed to chip designers

That's some solid trash talk for Wikipedia.

> Java's virtual machine and compiler are many times larger and slower than the Dis virtual machine and the Limbo (the most common language compiled for Dis) compiler. Android's Dalvik virtual machine, the Parrot virtual machine and Lua virtual machine are also register-based.

And the rant continues...
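For what it's worth, the stack-vs-register distinction the article is drawing can be sketched in a few lines. This is an illustrative model, not actual JVM or Dis bytecode: the same statement `a = b + c` evaluated by a tiny stack machine versus a tiny register machine.

```c
/* Stack machine (JVM/.NET style): operands are pushed,
 * and ADD pops two values and pushes the sum. */
int stack_add(int b, int c) {
    int stack[4];
    int sp = 0;
    stack[sp++] = b;          /* PUSH b */
    stack[sp++] = c;          /* PUSH c */
    int rhs = stack[--sp];    /* ADD: pop right operand */
    int lhs = stack[--sp];    /*      pop left operand  */
    stack[sp++] = lhs + rhs;  /*      push result       */
    return stack[sp - 1];     /* a = top of stack */
}

/* Register machine (Dis/Dalvik/Lua style): one three-address
 * instruction, no pushes or pops. */
int reg_add(int b, int c) {
    int r[4];
    r[1] = b;
    r[2] = c;
    r[0] = r[1] + r[2];       /* ADD r0, r1, r2 */
    return r[0];
}
```

The register form maps more directly onto a load-store CPU, which is the article's point.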

Fun fact from the See Also: https://en.m.wikipedia.org/wiki/Jazelle

"allows some ARM processors to execute Java bytecode in hardware"!

I wrote about the EO recently for Input (https://www.inputmag.com/features/fax-on-the-beach-the-story...), so I actually have a couple of Hobbit chips in my house at the moment, strangely enough. (No, the EO units don’t work, unfortunately … something about 27-year-old batteries.)

It’s a really fascinating chip that reflects AT&T’s tech ambitions of the time—which flopped, hard, almost immediately after they shut down the chip’s production. If the EO situation played out differently, who knows how much bigger the Hobbit would've been.

This. Given the Wired advocacy for newish tech at the time and AT&T's market and economic dominance, we could have seen this as the pager of the late 1990s. Dominant, and then reduced to a niche like BlackBerry as the smartphone took over. Who'd a thunk?

Was AT&T that dominant at the time? They had been broken up just a few years earlier and would go on to be acquired by one of their spinoffs a few years later.

One factor here worth keeping in mind: In 1991, they bought NCR, so they were trying to expand into the tech market beyond their traditional telecom base. (Bell Labs was also owned by AT&T during the period.) They were still very cash-rich in the ’90s and trying to use that leverage in other parts of telecom, even if the Baby Bells owned local telephone service.

"Hobbit was also used in the earliest (unreleased) versions of the BeBox."

This is an unfortunate dead end for this technology.

> It was based on the company's CRISP (C-language Reduced Instruction Set Processor) design. CRISP and Hobbit were optimized for running the C programming language.

Couldn't the same be said of any modern CPU short of the Java processors that have mostly failed?

I took a university assembly language course (UIUC CS225) on the WE32100, which was the predecessor to the Hobbit. We worked on a stack of AT&T 3B2s donated to the college.

I distinctly remember the STRCPY opcode, which did exactly what you expected it to do.


I have fond memories of that instruction set, but looking back now it's like looking at a loaded weapon.

Tons of classic CISCs have strcpy, memcpy, memmove, etc. implemented as instructions. S/360 and x86 do too. The reason is that the early incarnations didn't have instruction caches, so you could get a ton of perf on these common operations by executing the move almost entirely out of ucode, thereby keeping the instruction fetches from conflicting with the data accesses and stealing bus bandwidth. Like 2x-3x increased bandwidth typically.

That's still true with REP MOVS/REP STOS today in x86, it runs in an internal loop (and reads/writes cacheline-sized blocks if it can) while the rest of the CPU can continue executing other instructions in parallel. You can achieve similar results with vector instructions, but those tend to take up a lot of icache and also fetch bandwidth.
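You rarely write REP MOVS by hand anymore: a plain byte-copy loop is usually enough, since modern compilers recognize the pattern and emit memcpy or REP MOVSB themselves. A sketch of both forms (the inline-asm variant is GCC/Clang, x86-64 only):

```c
#include <stddef.h>

/* A plain byte-copy loop; compilers commonly turn this into
 * memcpy or REP MOVSB, so the copy runs out of microcode rather
 * than an explicit instruction loop. */
void copy_bytes(unsigned char *dst, const unsigned char *src, size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i];
}

#if defined(__x86_64__) && defined(__GNUC__)
/* The same copy expressed directly as REP MOVSB: RDI = dst,
 * RSI = src, RCX = count, all updated by the instruction. */
void copy_bytes_rep(unsigned char *dst, const unsigned char *src, size_t n) {
    __asm__ volatile("rep movsb"
                     : "+D"(dst), "+S"(src), "+c"(n)
                     :
                     : "memory");
}
#endif
```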

The WE32100 was very emphatically not-RISC; much more VAX-ish, if anything, with x86-style call gates stuffed on top.

Huh, interesting. I don't know a lot about the various architectures, but with instructions like that, my intuition tells me you could do some really interesting stuff.

Coming from an embedded background, I'm wondering, could you not, for example, do a machine where you offload bulk memory operations to the memory controller itself? Copying memory, zeroing memory, moving memory around, all these things could happen at the memory controller level and you'd never need to put them on the actual bus. Perhaps there is something like this for some exotic architectures?

Something like the 1985 Amiga blitter chip?

I've seen obscenely complicated DMA controllers that can do that and more. Check the NXP i.MX6 SDMA peripheral for instance.
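To make the idea concrete, here's a hedged sketch of the programming model such copy engines typically expose. The register layout and names are invented for illustration (real hardware like the SDMA or the Amiga blitter differs), and the "hardware" side is simulated in software so the example runs: the CPU just writes source, destination, and length, sets a go bit, and waits.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical register block for a bulk-copy engine sitting in
 * the memory controller. */
struct dma_regs {
    uintptr_t src;
    uintptr_t dst;
    size_t    len;
    uint32_t  ctrl;            /* bit 0: start; bit 1: busy */
};

enum { DMA_START = 1u << 0, DMA_BUSY = 1u << 1 };

/* Simulated "hardware" side: in a real system this lives in silicon
 * and the copy never touches the CPU at all. */
static void dma_tick(struct dma_regs *r) {
    if (r->ctrl & DMA_START) {
        memcpy((void *)r->dst, (const void *)r->src, r->len);
        r->ctrl &= ~(DMA_START | DMA_BUSY);   /* done: clear start/busy */
    }
}

/* Driver side: program the registers, kick it off, wait for done. */
static void dma_copy(struct dma_regs *r, void *dst, const void *src, size_t n) {
    r->src  = (uintptr_t)src;
    r->dst  = (uintptr_t)dst;
    r->len  = n;
    r->ctrl = DMA_START | DMA_BUSY;
    dma_tick(r);               /* stand-in for "poll until not busy" */
}
```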

It would still put it on the bus, tho, no?

I'm wondering now how these processors/MCUs handle bus access, since you have so many peripherals that probably want to access it, as well as the CPU accesses that need to happen. Certainly you have to have a priority scheme and some way of arbitrating the resource. The CPU probably doesn't use it all the time, but if you have a pipeline, it probably uses it in the background anyway.

On ARM there's a whole series of controller blocks that sit between the CPU caches and the outside world. Start looking up the AMBA (Advanced Microcontroller Bus Architecture) and you'll see how complicated it gets:


Modern CPUs can be an efficient target for C, but that's mostly because the C standard is full of "undefined behaviour" for a ton of stuff, so compilers can do weird things to optimize for the hardware.
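A stock example of what that buys the compiler: signed overflow is undefined behaviour in C, so the compiler may assume `i + 1 > i` always holds for a signed counter and optimize the loop as a simple counted loop with no wraparound checks. This is an illustration of the general mechanism, not a claim about any specific compiler's output.

```c
/* Because signed overflow is undefined, the compiler may assume the
 * induction variable i never wraps, which lets it strength-reduce,
 * vectorize, or precompute the trip count for this loop. */
long sum_to(int n) {
    long total = 0;
    for (int i = 0; i < n; i++)   /* assumed: i++ never overflows */
        total += i;
    return total;
}
```

Had `i` been unsigned, wraparound would be defined behaviour and the compiler would have to preserve it.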

Not like the CRISP! It implemented the C call stack in hardware, with no registers—almost every instruction directly addressed memory, or the register-like arguments on top of the stack.

No, certainly not. The ones that weren't optimized for C mostly failed, yes, but the ones that are aren't optimized in the same way CRISP was.
