
Do Object-Oriented Languages Need Special Hardware Support? (1995) [pdf] - luu
https://www.cs.ucsb.edu/~urs/oocsb/papers/ecoop95-arch.pdf
======
wsxcde
Processors over the last 10 or so years have included more and more
sophisticated indirect branch predictors. (I think the trend started with
Banias, which debuted in 2003.) This was primarily motivated by the
proliferation of virtual function calls in object-oriented code.
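To make the connection concrete, here is a minimal Python sketch of why a
virtual call is an indirect branch: the callee is fetched from a per-type
table at run time, so the branch target is data-dependent. (All names here
are hypothetical, purely for illustration.)

```python
# A virtual call loads its target from a per-type table, so the CPU
# cannot know the destination until that table entry has been read --
# the software analogue of `jmp [rax]`.

def circle_area(obj):
    return 3.14159 * obj["r"] ** 2

def square_area(obj):
    return obj["side"] ** 2

# Per-type "vtables": slot 0 holds the area method.
VTABLES = {"circle": [circle_area], "square": [square_area]}

def virtual_call(obj, slot):
    fn = VTABLES[obj["type"]][slot]  # load target from the table
    return fn(obj)                   # indirect call: target known only now

shapes = [{"type": "circle", "r": 1.0}, {"type": "square", "side": 2.0}]
areas = [virtual_call(s, 0) for s in shapes]
```

The same call site jumps to different code depending on the object's runtime
type, which is exactly what makes the branch target hard to predict statically.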

ed: If I remember my numbers right, these predictors buy a few percent of
performance across entire benchmark suites, which is quite hard to get from a
purely microarchitectural feature.

~~~
cokernel_hacker
What you are describing is a Branch Target Buffer (BTB).

While local and global predictors describe which direction a direct branch may
take, they do little to help indirect branches.

Take the example jmp [rax]. Whereas the condition codes a direct branch
depends on can be speculated, the value contained in rax cannot easily be.

However, we can make note of _where_ jmp [rax] took us the last time we
executed. This mapping from branch instruction address to target address is
precisely what a BTB is.
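As a toy model (a sketch of the idea, not any real microarchitecture), that
mapping is just a small table:

```python
# Toy BTB: remembers, per branch instruction address, the last target
# that branch jumped to, and predicts it will go there again.

class BTB:
    def __init__(self):
        self.table = {}

    def predict(self, pc):
        # Last observed target for this branch, or None on a cold miss.
        return self.table.get(pc)

    def update(self, pc, actual_target):
        self.table[pc] = actual_target

btb = BTB()
pc = 0x400A10                   # hypothetical address of our `jmp [rax]`
assert btb.predict(pc) is None  # cold miss: nothing recorded yet
btb.update(pc, 0x400B00)        # note where the jump actually went
assert btb.predict(pc) == 0x400B00
```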

As for when they were invented, I believe the first paper on them is Lee83;
I'm pretty sure the original Pentium had one.

Lee83
[http://www.eecs.berkeley.edu/Pubs/TechRpts/1983/CSD-83-121.p...](http://www.eecs.berkeley.edu/Pubs/TechRpts/1983/CSD-83-121.pdf)

~~~
wsxcde
No, I know quite well what a BTB is, and it's only somewhat useful for indirect
branches. Modern processors have dedicated predictors for indirect branches.
This paper:
[http://dl.acm.org/citation.cfm?id=279380](http://dl.acm.org/citation.cfm?id=279380)
describes one such structure. It also explains why such structures outperform
BTBs.
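A rough sketch of the idea behind such history-based indirect predictors
(simplified; this illustrates the principle, not the published design): index
the table by the branch address combined with recent target history, so a jump
that cycles between targets becomes predictable, where a BTB that stores one
target per branch keeps mispredicting.

```python
# Sketch of a history-based indirect-branch predictor: the table is
# indexed by (branch PC, recent target history), so patterns like
# A, B, A, B, ... become predictable after a short warm-up.

HIST_LEN = 4

class IndirectPredictor:
    def __init__(self):
        self.table = {}
        self.history = ()            # last few targets, newest last

    def _index(self, pc):
        return (pc, self.history)    # combine PC with path history

    def predict(self, pc):
        return self.table.get(self._index(pc))

    def update(self, pc, target):
        self.table[self._index(pc)] = target
        self.history = (self.history + (target,))[-HIST_LEN:]

pred = IndirectPredictor()
pc = 0x400A10
hits = 0
for target in [0xA, 0xB] * 8:        # jump alternates between two targets
    if pred.predict(pc) == target:
        hits += 1
    pred.update(pc, target)
# A last-target-only BTB would mispredict this alternating pattern on
# every iteration; the history-indexed table hits once it has warmed up.
```

In this run the predictor misses only while warming up (the first six
lookups) and then hits on every subsequent iteration.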

------
fractallyte
The article doesn't mention Rekursiv, the object oriented processor designed
in the 1980s by David Harland
([http://en.wikipedia.org/wiki/Rekursiv](http://en.wikipedia.org/wiki/Rekursiv)).

I was in touch with him recently, partly to find out why a professor of
computer science at St Andrews University would suddenly drop out of computing
entirely, and move to writing about space exploration...

~~~
rbanffy
I think I can imagine, but can you share your explanation?

Space exploration looks like an interesting endeavor.

~~~
fractallyte
(If he reads this, I hope he doesn't mind my presumptuousness!) It was mainly
due to frustration at the narrow-mindedness of the computing community.
Harland not only designed an alternative computing architecture, but also
presented heretical ideas about programming language design, formalized into
his 'principle of abstraction':

 _" Realising that calculus has actually got nothing at all to do with either
differentiation or integration, but is actually a set of rules for
lexicographical transformations, with differential calculus and integral
calculus being two examples, I came to the conclusion that there is a
fundamental principle for the design of a programming language based on an
interpretation of the process of abstraction in terms of a calculus of
semantics-preserving transformations reflexive over the domain of syntactic
clauses."_

He went on:

 _" The problem for most languages, of course, is that not all the clauses in
the grammar are abstractable over, and for many that are the semantics are not
strictly preserved when abstraction is performed, which sort of ruins the
point."_

It is summarized in one line:

 _Abstraction is a calculus of semantics-preserving transformations reflexive
over the domain of syntactic clauses._

I'm working my way through his book, 'Concurrency & Programming Languages', so
I can try to better understand this statement...

I came across one interesting paragraph on page 151: _"There is no need for a
macro facility if a language permits the meaning of application to be defined
differently for different types of applicable value, and if its
parameterisation mechanism can be defined differently for different types of
data (such as for the functions and operations above), or for values of the
same type but with different characteristics (such as for functions with
eager/lazy attributes)."_

Hah! So could it be that Lisp has deficiencies at a fundamental level...?

Harland replied to my query: _"The first thing of significance that I did was
to make types manipulable... Then I made the environments manipulable so that
I could abstract over declarations. This comes from having access to the
'current environment' whose meaning is defined dynamically, so that it can be
passed as a parameter. Next came access to 'current process' so that it
networks of processes can be dynamically established. Long after that, when I
fully appreciated the simplicity of abstraction as being over the clauses of
the grammar, I made the grammar a data structure that the program can modify
in real time, to add new grammatical forms (i.e. abstractions) and expand the
means of expression..."_

There are interesting ideas here that should be pondered further...

------
payne92
The lesson here (I think) is that the collective (and MASSIVE) R&D effort
behind general-purpose CPUs and optimizing compilers (and now, GPUs) almost
always trumps specialized computing architectures.

It's really hard to beat the raw performance of a modern Intel or AMD
processor.

I co-authored a paper in 1992 on this same issue with DSP chips, see:
[http://www.hpl.hp.com/techreports/Compaq-DEC/CRL-92-10.pdf](http://www.hpl.hp.com/techreports/Compaq-DEC/CRL-92-10.pdf)
We predicted the shift of "signal processing" applications into the main CPU
(e.g. the (in)famous winmodem).

And we were widely criticized :) e.g.,
[https://groups.google.com/forum/#!topic/comp.dsp/FWJlNGMbN9Y](https://groups.google.com/forum/#!topic/comp.dsp/FWJlNGMbN9Y)

I also believe the market is good at pulling functionality into the
instruction set when there's demand (e.g. encryption instructions, population
count, etc.)

------
taspeotis
TL;DR

> compiler optimizations can almost entirely eliminate the large semantic
> difference between a pure object-oriented language and C. As a result, we
> found little opportunity for object-oriented hardware

~~~
DiabloD3
"If an article headline asks a question, the answer is always no."

~~~
grndn
[https://en.wikipedia.org/wiki/Betteridge's_law_of_headlines](https://en.wikipedia.org/wiki/Betteridge's_law_of_headlines)

------
hcarvalhoalves
I wonder what kind of processors we would have today if the market had gone in
another direction, like LISP: tagged registers, optimizations for deep stacks,
optimizations for GC...

Why isn't hardware moving in that direction nowadays? Wouldn't the modern
languages that ship with a VM greatly benefit from hardware support?

------
chriswarbo
"Need"? No. "Benefit from"? Yes. For example, see Ian Piumarta's work on
[http://www.viewpointsresearch.org/html/writings.php](http://www.viewpointsresearch.org/html/writings.php)
, eg. "An association-based model of dynamic behaviour"

