
Piranha: A Scalable Architecture Based on Single-Chip Multiprocessing (2000) [pdf] - mpweiher
http://barroso.org/publications/isca00.pdf
======
mpweiher
One thing I found remarkable was the following passage:

"Meanwhile, commercial workloads such as databases and Web applications have
surpassed technical workloads to become the largest and fastest-growing market
segment for high-performance servers. A number of recent studies have
underscored the radically different behavior of commercial workloads such as
on-line transaction processing (OLTP) relative to technical workloads [4,7,
8,21,28,34,36]. First, commercial workloads often lead to inefficient
executions dominated by a large memory stall component. This behavior arises
from large instruction and data footprints and high communication miss rates
which are characteristic for such workloads [4]. Second, multiple instruction
issue and out-of-order execution provide only small gains for workloads such
as OLTP due to the data-dependent nature of the computation and the lack of
instruction-level parallelism"

To me this says: "computers don't compute" (Feynman) has become true.

And that raises a question: are our basic abstractions (subroutine, procedure,
method, function), which all share the core idea of call/return, calling
something and waiting for a result, still appropriate?

