
AT&T Hobbit - kick
https://en.wikipedia.org/wiki/AT%26T_Hobbit
======
lallysingh
> They found, as RISC designers would have expected, that without a load-store
> design it was difficult to improve the instruction pipeline and thereby
> operate at higher speeds. They felt that all future processors would thus
> move to a load-store design, and built Inferno to reflect this. In contrast,
> Java and .NET virtual machines are stack based, a side effect of being
> designed by language programmers as opposed to chip designers

That's some solid trash talk for Wikipedia.

~~~
luismedel
> _Java's virtual machine and compiler are many times larger and slower than
> the Dis virtual machine and the Limbo (the most common language compiled for
> Dis) compiler. Android's Dalvik virtual machine, the Parrot virtual machine
> and Lua virtual machine are also register-based._

And the rant continues...

------
weerd
Fun facts from the See Also...
[https://en.m.wikipedia.org/wiki/Jazelle](https://en.m.wikipedia.org/wiki/Jazelle)

"allows some ARM processors to execute Java bytecode in hardware"!

------
shortformblog
I wrote about the EO recently for Input
([https://www.inputmag.com/features/fax-on-the-beach-the-story...](https://www.inputmag.com/features/fax-on-the-beach-the-story-of-atts-eo-communicator-90s-ipad-flop)), so I actually have a couple of Hobbit
chips in my house at the moment, strangely enough. (No, the EO units don’t
work, unfortunately … something about 27-year-old batteries.)

It’s a really fascinating chip that reflects AT&T’s tech ambitions of the
time, ambitions that flopped, hard, almost immediately after they shut down
the chip’s production. If the EO situation had played out differently, who
knows how much bigger the Hobbit would've been.

~~~
jdkee
This. Given Wired's advocacy for newish tech at the time and AT&T's market
and economic dominance, we could have seen this as the pager of the late
1990s: dominant, then reduced to a niche like BlackBerry as the smartphone
took over. Who'd a thunk?

~~~
unlinked_dll
Was AT&T that dominant at the time? They had been broken up just a few years
earlier and would go on to be acquired by one of their spinoffs a few years
later.

~~~
shortformblog
One factor here worth keeping in mind: In 1991, they bought NCR, so they were
trying to expand into the tech market beyond their traditional telecom base.
(Bell Labs was also owned by AT&T during the period.) They were still very
cash-rich in the ’90s and trying to use that leverage in other parts of
telecom, even if the Baby Bells owned local telephone service.

------
jdkee
"Hobbit was also used in the earliest (unreleased) versions of the BeBox."

This is an unfortunate dead end for this technology.

------
slacka
> It was based on the company's CRISP (C-language Reduced Instruction Set
> Processor) design. CRISP and Hobbit were optimized for running the C
> programming language.

Couldn't the same be said of any modern CPU short of the Java processors that
have mostly failed?

~~~
joezydeco
I took a university assembly language course (UIUC CS225) on the WE32100,
which was the predecessor to the Hobbit. We worked on a stack of AT&T 3B2s
donated to the college.

I distinctly remember the STRCPY opcode, which did exactly what you expected
it to do.

[https://imgur.com/a/auZlxwG](https://imgur.com/a/auZlxwG)

I have fond memories of that instruction set, but looking back now it's like
looking at a loaded weapon.
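
For anyone who hasn't run into it: the semantics are basically the C
library's unbounded strcpy baked into silicon. A minimal sketch of what the
instruction amounts to (my own illustration, not the actual WE32100 behavior
verbatim):

    #include <stdio.h>

    /* A STRCPY-style operation: copy bytes until the NUL terminator,
     * with no notion of how big the destination actually is. */
    static char *str_copy(char *dst, const char *src)
    {
        char *d = dst;
        while ((*d++ = *src++) != '\0')
            ;   /* happily writes past the end of dst if src is longer */
        return dst;
    }

    int main(void)
    {
        char buf[8];
        str_copy(buf, "short");   /* fits, terminator included */
        /* str_copy(buf, "anything longer than seven chars");  <- the loaded weapon */
        puts(buf);
        return 0;
    }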

~~~
RealityVoid
Huh, interesting. I don't know a lot about the various architectures, but with
instructions like that, my intuition tells me you could do some really
interesting stuff.

Coming from an embedded background, I'm wondering, could you not, for example,
do a machine where you offload bulk memory operations to the memory controller
itself? Copying memory, zeroing memory, moving memory around, all these things
could happen at the memory controller level and you'd never need to put them
on the actual bus. Perhaps there is something like this for some exotic
architectures?
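
Sketching what that might look like from the CPU side, assuming a made-up
memory-mapped copy engine (base address and register layout are invented for
illustration, not any real part):

    #include <stdint.h>

    /* Hypothetical copy engine living in the memory controller.
     * All addresses and bit definitions below are invented. */
    #define MEMCTL_BASE  0x40001000u
    #define MEMCTL_SRC   (*(volatile uint32_t *)(MEMCTL_BASE + 0x00))
    #define MEMCTL_DST   (*(volatile uint32_t *)(MEMCTL_BASE + 0x04))
    #define MEMCTL_LEN   (*(volatile uint32_t *)(MEMCTL_BASE + 0x08))
    #define MEMCTL_CTRL  (*(volatile uint32_t *)(MEMCTL_BASE + 0x0c))
    #define MEMCTL_STAT  (*(volatile uint32_t *)(MEMCTL_BASE + 0x10))

    #define CTRL_START   (1u << 0)
    #define STAT_BUSY    (1u << 0)

    /* The CPU only touches a handful of registers; the bulk data
     * moves inside the controller and never crosses the CPU bus. */
    static void offload_memcpy(uint32_t dst, uint32_t src, uint32_t len)
    {
        MEMCTL_SRC  = src;
        MEMCTL_DST  = dst;
        MEMCTL_LEN  = len;
        MEMCTL_CTRL = CTRL_START;
        while (MEMCTL_STAT & STAT_BUSY)
            ;   /* or block on an interrupt instead of spinning */
    }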

~~~
rvense
I've seen obscenely complicated DMA controllers that can do that and more.
Check the NXP iMX6 SDMA peripheral, for instance.

~~~
RealityVoid
It would still put it on the bus, tho, no?

I'm wondering now how these processors/MCUs handle bus access, since you have
so many peripherals that probably want to access it, as well as the CPU
accesses that need to happen. Certainly you have to have some priority-settling
mechanism and a way of arbitrating the resource. The CPU probably doesn't use
it all the time, but if you have a pipeline, it probably uses it in the
background anyway.

~~~
joezydeco
On ARM there's a whole series of controller blocks that sit between the CPU
caches and the outside world. Start looking up the AMBA (Advanced
Microcontroller Bus Architecture) and you'll see how complicated it gets:

[https://developer.arm.com/architectures/system-architectures...](https://developer.arm.com/architectures/system-architectures/amba)
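
To make the arbitration idea concrete, here's a toy model of the simplest
scheme, fixed priority, in C (my own simplification; real AMBA interconnects
add round-robin, QoS weights, and outstanding-transaction tracking):

    #include <stdio.h>

    #define NUM_MASTERS 4   /* e.g. CPU, DMA, display, Ethernet */

    /* Fixed-priority arbiter: each cycle, the lowest-numbered master
     * with a pending request is granted the bus. */
    static int arbitrate(const int request[NUM_MASTERS])
    {
        for (int m = 0; m < NUM_MASTERS; m++)
            if (request[m])
                return m;
        return -1;          /* no requests: bus idle */
    }

    int main(void)
    {
        int req[NUM_MASTERS] = {0, 1, 0, 1};   /* DMA and Ethernet both asking */
        printf("grant -> master %d\n", arbitrate(req));   /* prints: grant -> master 1 */
        return 0;
    }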

