
MIPS Prospects (1992) - luu
https://yarchive.net/comp/mips_prospects.html
======
verisimilitudes
A higher clock rate means little. What each instruction does and how fast it
does it is the real question; MIPS fails horribly here, as its instructions
are both obscenely large and do very little. RISC in general is misguided,
and MIPS is a nice monument to how a ''reduced'' instruction set eventually
bloats for various reasons, such as increasing the register size.

From what I've seen, SuperH is much nicer, with multiple addressing modes and
sixteen bit instructions. RISC-V has nice qualities, but being among the nicer
RISC hardly means much.

It's unfortunate there's a dearth of high-level machines nowadays. Where are
the hardware implementations of abstract machines that aren't the JVM? Where's
the attribute memory that stores things such as array bounds and types? For
that matter, where's the real notion of an array instead of offset memory?

That's all in the past now, unfortunately. MIPS is hardly something worth
crying over. The only reason I give it any attention is because I own some
exotic MIPS hardware.

~~~
ernst_klim
>Where are the hardware implementations of abstract machines that aren't the
JVM?

Why would anyone like to have such a CPU? CPUs should be generic: able to run
any language and any runtime, an easy target for compilation, and open to
various optimizations (pipelining, branch prediction).

RISC is the best here. Abstract high level machines are slow, clumsy, useless
in real world applications.

~~~
verisimilitudes
>Why would anyone like to have such a CPU?

Well, the common theme is that once processors stop getting faster through
hardware improvements, they'll get faster through specialization. That's
tangential, however, because it's clearly better to execute, say, Lisp on a
machine with hardware support for lists, or Forth on a machine with a hardware
stack, and so on and so forth.

>CPUs should be generic: able to run any language and any runtime, an easy
target for compilation, and open to various optimizations (pipelining, branch
prediction).

Right, I forgot that if it runs C and UNIX, that automatically qualifies it as
''generic''. Firstly, anything that can emulate a Turing machine, memory
limits aside, can run ''any language and any runtime'', in general. I know
Genera had compilers for C and several other languages; ironically, it's my
understanding the C compiler had to use arrays of small integers to represent
many things, as C ties itself irreparably to such details, no matter how
tangential they are to the program.

Secondly, if you have an operation you'll be performing a great deal, such
as bounds checking, list manipulation, or arbitrary-length arithmetic, it's
worthwhile to have it supported in the hardware. Having smart instructions is
leagues better than working around dumb instructions with pipelines and
whatnot and, apparently, easier to get right, given the recent Intel flaws
and all of that.

~~~
ernst_klim
> they'll become faster through specialization

Do you have any evidence of that? Because from my naive understanding, simple
processors are easier to optimize: you can have hierarchical memory,
pipelines, speculative execution, branch prediction, and additional
instructions such as vector operations.

Doesn't specialization make such things harder to implement, due to the
higher complexity? Do you have an example of specialized hardware that beats
contemporary, highly optimized CPUs like ARM?

>Secondly, if you have an operation you'll be performing a great deal, such
as bounds checking, list manipulation, or arbitrary-length arithmetic, it's
worthwhile to have it supported in the hardware

Why would you like to have bounds checking or list manipulation in hardware?
It's a waste of transistors.

Code like

    x := array[1]

should have one bounds check, yet

    for x in 0..n:
        sum := sum + array[x]

should also have one bounds check, not many. Thus you have to either use an
inefficient instruction or implement a hardware iterator, bloating your CPU
beyond reasonable limits.

And why lists? Why not trees, hashtables, ephemerons?

~~~
verisimilitudes
>Do you have any evidences of that?

If you have a circuit that implements an algorithm, it can beat a program that
implements the algorithm for a machine that lacks such a circuit. This is
clear.

>Because from my naive understanding, simple processors are easier to optimize

I've already reasonably explained why that's not the case, but I want to note
that these machines aren't necessarily simpler. The Lisp machine processors
are far simpler than anything Intel, ARM, or MIPS have been putting out, and
yet did more, just not as fast due to major hardware production advancements.

>Do you have an example of specialized hardware that beats contemporary,
highly optimized CPUs like ARM?

I don't have an example in speed, no, but I can tell you that there are
simpler processors that avoid entire classes of programming errors these
machines are still vulnerable to. If you're only going to judge a machine by
speed, I wonder if you own a car and, if so, if it has air bags, seat belts,
and other such things.

>Why would you like to have bounds checking or list manipulation in hardware?
It's a waste of transistors.

Having a large amount of a processor dedicated to fast memory, to store
groups of instructions for things such as bounds checking and list
manipulation, isn't?

    x := array[1]

This could likely be determined at compilation.

    for x in 0..n: sum := sum + array[x]

It's my understanding there were hardware implementations, or at least
descriptions thereof, in which such a bound could easily be cached and
compared in tandem with other operations on each new iteration, making it
equivalent to checking when the loop should terminate.

>And why lists? Why not trees, hashtables, ephemerons?

Why not? It's not unreasonable to expect the machine to have canonical and
supported representations for commonly used forms of data and operations on
such.

~~~
ernst_klim
> If you have a circuit that implements an algorithm, it can beat a program
> that implements the algorithm for a machine that lacks such a circuit. This
> is clear.

You can't have all algorithms implemented in hardware. How well would your
CPU work on an algorithm not known in advance? Would it beat a generic RISC
CPU at some generic task, a video codec for example?

>The Lisp machine processors are far simpler than anything Intel, ARM, or MIPS
have been putting out

Do you have any benchmarks comparing Lisp processors to ARMs and MIPS in day-
to-day tasks like compression algorithms, codecs, 3d rendering?

> It's my understanding there were hardware implementations, or at least
> descriptions thereof, in which such a bound could easily be cached and
> compared in tandem with other operations

Do you have any resources on that? Would it work for complex convolutions
(e.g. arrays of arrays of complex numbers convolved with arbitrary kernels)?
In my statically typed programming language I know the matrix sizes in
advance and can easily omit the runtime checks. I'm not sure that letting the
CPU do so at runtime would be as efficient as doing it at compile time.

>Why not? It's not unreasonable to expect the machine to have canonical and
supported representations for commonly used forms of data and operations on
such.

Lists are not the main data structure anywhere beyond Lisp. And lists are
frequently used in Lisp because of how inexpressive Lisp is in terms of data
structures.

Even in functional languages like ML or Haskell we use other data structures
a lot: tuples, maps, hash tables, arrays, various ad-hoc inductive data
structures. All of these are more efficient when not implemented in terms of
lists. An architecture supporting even tuples (arbitrary cons cells) would
already be bloated beyond reason, at least if you want them to be as
efficient as cons cells on Lisp machines.

~~~
verisimilitudes
>You can't have all the algorithms implemented in hardware. How well would
your CPU work for any algorithm unknown in advance?

That depends. There are, however, clearly some algorithms common enough to
warrant inclusion in hardware. I feel you're not arguing in good faith if you
believe basic arithmetic is warranted, but extremely common things, such as
arbitrary-length arithmetic or exception systems, aren't.

>Do you have any benchmarks comparing Lisp processors to ARMs and MIPS in day-
to-day tasks like compression algorithms, codecs, 3d rendering?

No.

>Do you have any resources on that?

I dislike Wikipedia, but here:
[https://en.wikipedia.org/wiki/Lisp_machine](https://en.wikipedia.org/wiki/Lisp_machine)

>Would it work for complex convolutions (e.g. arrays of arrays of complex
numbers convolved with arbitrary kernels)?

If the hardware supported it. If not, you'd likely build it out of the
hardware support for simpler arrays. Having arrays of arrays is an easy
solution, since then each axis could be checked in turn.

>I'm not sure that letting the CPU do so at runtime would be as efficient as
doing it at compile time.

Again, it's not just about efficiency, but safety. This is an entire class of
errors prevented in hardware. That's useful.

------
Nursie
I did some work on a MIPS platform relatively recently (2014), for a payment
processing application. I don't know if the last 5 years have finally killed
it, but there still seemed to be enough of it about back then that it wasn't
massively exotic.

With the appropriate RAM (which we didn't have) it was even capable of
running Linux. It had various features that were useful in keeping our code
compact (we had 256K of flash memory to play with, and 128K of RAM).
Mixed-mode 16- and 32-bit instructions were one of the neater features I
remember.

MIPS's new owner as of last year appears to have announced an open and
royalty-free licensing model, which is pretty cool.

------
kijiki
The office postal addresses (and dates) on each of these usenet posts tell a
story of their own.

