
Chip Industry's Fundamental Shifts in 2018 - Lind5
https://semiengineering.com/fundamental-shifts-in-2018/
======
xvilka
Speaking about design challenges: it would be nice if the industry switched to
an LLVM-like low-level HDL (or RTL) language that would allow easier cross-tool
integration. So far every project implements everything itself. There is an
idea[1] to adapt FIRRTL[2][3] for this purpose. Here is a good paper[4]
describing its features. Apparently Intel is also using it, at least in its
research labs. This would help to integrate all parts of the FPGA and ASIC
design pipeline. For now chip design feels like programming in 1990: the
quality of the tooling is very bad. (A rough sketch of the idea follows the
references.)

[1]
[https://github.com/SymbiFlow/ideas/issues/19](https://github.com/SymbiFlow/ideas/issues/19)

[2]
[https://bar.eecs.berkeley.edu/projects/firrtl.html](https://bar.eecs.berkeley.edu/projects/firrtl.html)

[3]
[https://github.com/freechipsproject/FIRRTL](https://github.com/freechipsproject/FIRRTL)

[4] [https://aspire.eecs.berkeley.edu/wp/wp-content/uploads/2017/...](https://aspire.eecs.berkeley.edu/wp/wp-content/uploads/2017/11/Reusability-is-FIRRTL-Ground-Izraelevitz.pdf)
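
As a rough illustration (mine, not taken from any of the links above): Chisel
is a Scala DSL whose front end emits FIRRTL, so any FIRRTL-aware backend can
consume a design without caring which front end produced it. Assuming a Chisel
3.x toolchain, a minimal module might look like:

    // Minimal Chisel module; the Chisel front end lowers this to FIRRTL,
    // the IR discussed above. Sketch assumes a Chisel 3.x toolchain.
    import chisel3._

    class Adder extends Module {
      val io = IO(new Bundle {
        val a   = Input(UInt(8.W))
        val b   = Input(UInt(8.W))
        val sum = Output(UInt(8.W))
      })
      io.sum := io.a + io.b // one line of RTL, lowered to FIRRTL primitives
    }

    object Emit extends App {
      // Print the generated FIRRTL; a simulator, synthesis flow, or lint
      // pass could all consume this same tool-neutral form.
      println((new chisel3.stage.ChiselStage).emitFirrtl(new Adder))
    }

The printed FIRRTL is the shared, tool-neutral representation; that is the
"LLVM for hardware" idea in a nutshell.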

~~~
buckysock
It will stay like this for the foreseeable future. Chip design IP is too
obscure for mainstream OSS people to force an OSS ecosystem into existence.
There is also too much proprietary foundry information needed to even have an
OSS ecosystem that actually works properly.

Ironically, the closedness of chip design tools and foundry information is
also what prevents the field from truly growing. Entrenched players (the CAD
companies) benefit from this, but the losers are the actual chip design
companies. They are bleeding a generation of engineers who have forsaken this
climate for the cozier CS/OSS one.

------
danmaz74
Fairly interesting article. I'm a bit baffled by the reference to a supposed
attempt at abandoning the von Neumann architecture (full citation below):
what would these alternative architectures be? It doesn't seem at all likely
to me, and I'm wondering if this is more about marketing rather minor changes
as some kind of radical shift.

-------- "And the drive to put AI on the edge is causing new design
architectures. “There has been a rethink on compute architectures for cache
coherency, for heterogeneous computing,” points out Synopsys’ Nandra. “In 2018,
different approaches have been attempted to solve the ML inference challenge.
They are all trying to work out how to do inferencing on an edge device and
being able to quickly process data so that they don’t have to upload
information from the Cloud and to do things in real time. That has opened a
big debate about von Neumann compute architectures to different approaches
where you are separating memory, accelerator chips—be they dedicated FPGAs,
dedicated GPUs, application specific processors—and new chips that have blocks
that talk with each other.”"

~~~
PhantomGremlin
_what would these alternative architectures be?_

A traditional von Neumann computer reads instructions and data from memory
into something that does arithmetic, logic, and control (call it a CPU).

But consider that the "memory" nowadays is composed of multiple 8 Gb DRAM
chips. That's a lot of bits to be sitting there, read out perhaps 64 at a
time by the CPU.

Instead, can some logic be embedded within each memory chip? Some way to
bypass the "von Neumann bottleneck"?[1]

Ideas like that are the essence of "alternative architectures". Can a system
composed of these, let's call them "non-von Neumann", elements be faster or
cheaper than the computer architecture we've been using since Johnny's
seminal 1945 paper?[2] (A back-of-envelope sketch of the bottleneck follows
the links.)

[1]
[https://en.wikipedia.org/wiki/Von_Neumann_architecture#Von_N...](https://en.wikipedia.org/wiki/Von_Neumann_architecture#Von_Neumann_bottleneck)

[2]
[https://en.wikipedia.org/wiki/First_Draft_of_a_Report_on_the...](https://en.wikipedia.org/wiki/First_Draft_of_a_Report_on_the_EDVAC)
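
To put rough numbers on the bottleneck (the figures here are illustrative
assumptions, not measurements): streaming one 8 Gb chip through a 64-bit
interface at a DDR4-3200-like transfer rate takes tens of milliseconds, as
this sketch shows:

    // Illustrative numbers only: an 8 Gb DRAM chip read out 64 bits per
    // transfer at an assumed 3.2 GT/s (DDR4-3200-class) rate.
    object Bottleneck extends App {
      val chipBits      = 8L * 1024 * 1024 * 1024 // 8 Gb of storage
      val busBits       = 64L                     // bits moved per transfer
      val transfersPerS = 3.2e9                   // transfers per second
      val transfers = chipBits / busBits          // ~134 million transfers
      val seconds   = transfers / transfersPerS   // ~0.042 s per full sweep
      println(f"$transfers%d transfers, ~$seconds%.3f s to read the chip once")
    }

Processing-in-memory proposals try to avoid that full sweep altogether by
doing the arithmetic next to the bits, so only results cross the bus.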

