
News of Nvidia’s Pascal tapeout and silicon is important - nkurz
https://semiaccurate.com/2016/02/01/news-of-nvidias-pascal-tapeout-and-silicon-is-important/
======
svensken
I highly doubt that Nvidia dropped the ball this hard with Pascal.

A much more obvious and sensible conclusion is that Nvidia is currently
developing their next chip, called Volta. We already know that the Department
of Energy contracted Nvidia and IBM (for lots and lots of money) to provide a
good Volta GPU + POWER9 CPU combo for the new Summit and Sierra supercomputers
set for completion in 2017.[1] This means that Nvidia has known since 2014 (at
least) that they'd have very little time between their Pascal release and the
more pressing Volta release. It's been on their roadmap for a while now.

The Fermi, Kepler, and Maxwell architectures each had two or three years
between them. Pascal and Volta are set to have a year or less.

1: [http://www.anandtech.com/show/8727/nvidia-ibm-supercomputers](http://www.anandtech.com/show/8727/nvidia-ibm-supercomputers)

------
ChuckMcM
Short version: nVidia's PASCAL might see Christmas this year, maybe not.

The tenacity of fans to spin far reaching narratives out of small disconnected
events to support their dreams and fantasies never fails to amaze.

~~~
rl3
> _Short version: nVidia's PASCAL might see Christmas this year, maybe not._

How do you figure?

The article's two main claims are:

a) Had Pascal taped out in June 2015 as everyone had reported, it'd have
easily already made it to market by now.

b) At the time of CES 2016, Pascal hadn't yet taped out. Nvidia had only
received "bring up tools" in the last few days of 2015; actual silicon
typically arrives a few weeks after the tools.

Going by the article, Pascal probably taped out for real in late January or
early February. If anything, it seems on track for probably a late Q2 2016
release, maybe early Q3. No way it'll be Christmas unless something goes
catastrophically wrong.

------
cwyers
I have no idea about the author's sources, but even if you take every piece of
evidence he presents as true and his sources as accurate, he doesn't come
within a country mile of having enough evidence to claim with certainty that
an Nvidia executive flat-out lied about anything.

~~~
jjoonathan
Wait, is a CEO lying (Jen-Hsun in particular) supposed to be some sort of
scandal?

Maybe if you interpret "lie" using the courtroom definition, sure, it would be
a scandal, but that's not what the author was going for.

~~~
DiabloD3
Nvidia is a public company. What the CEO has said is enough for shareholders
and other investors to sue the company.

------
scriptdevil
Semi-accurate all right. The accurate part is not original and the original
part is mere fanciful speculation.

------
PascLeRasc
/r/hardware discussion seems to doubt this site's reliability:
[https://www.reddit.com/r/hardware/comments/43q5oa/news_of_nv...](https://www.reddit.com/r/hardware/comments/43q5oa/news_of_nvidias_pascal_tapeout_and_silicon_is/)

~~~
trsohmers
I can't believe how negatively everyone is viewing SemiAccurate... as a chip
designer who is deeply involved in the business side of the industry,
SemiAccurate is one of the best news sources I've got. Everyone in the
industry shits on NVIDIA's process because they have time and time again lied
about benchmarks, tapeout dates, et cetera.

I only use NVIDIA GPUs, and think that most of the time they are decent
products (except their Linux driver support), but I take every statement from
Jen-Hsun with a HUGE grain of salt and wait until I talk to a friend at
NVIDIA, who almost always confirms the team's displeasure at Jen-Hsun's
bullshitting.

~~~
corysama
Nvidia earns a lot of flak, and SemiA might be a great site overall. But, the
only times I see SemiA linked is when Charlie has written an article that
gives the distinct impression that at some point Nvidia ran over his dog. That
is my entire experience with the site over many years.

------
SideburnsOfDoom
Sorry, what does this story mean? Is Nvidia doing retrocomputing - writing
code in Pascal and it's out on tape? And also Silicon? I speak geek but not
this dialect of geek.

~~~
trsohmers
So there are two origins of the "tape" in "tape out". Back in the good ol'
days (pre-1980s), chip designs were done on paper by the engineers and then
transferred onto rubylith tape
([http://tingilinde.typepad.com/.a/6a00d83451b54669e2017ee846b...](http://tingilinde.typepad.com/.a/6a00d83451b54669e2017ee846b452970d-400wi)),
mostly by women. The rubylith was then moved to the fab, where it was used as
the mask for photolithography (the start of manufacturing the chip).

The other origin: from the 80s through the 90s, as EDA tools came into use
across the industry, designs were done entirely on computers, and the final
file containing the information for the fab (.GDS2) was put onto storage
media (tape) and sent to the fab.

In both cases, it is basically the final design step before you wait however
long for the silicon to come back from the fab. It is also a huge stressor,
since the fabrication runs are prepaid, so as the tape-out date approaches it
is typically extreme overtime for everyone involved.

I should probably get back to work; 2 1/2 months to tape out myself...
hopefully will get some sleep between now and then.

~~~
SideburnsOfDoom
Thanks for the reply; it both answers the question and is interesting in its
own right.

------
vivekian2
Charlie is known to be anti-Nvidia for some reason. Best to ignore his
articles on Nvidia.

~~~
rasz_pl
Nvidia is anti-consumer (cheating in tests, pushing proprietary tech, and
fighting against interoperability), so I don't blame him.

~~~
andromeduck
Two wrongs don't make a right.

------
rincebrain
It seems like he has a reasonable chain of logic for every component but one -
nobody has any evidence that the "BGA" component in here was specifically for
Pascal, Volta, or anything at all.

So it certainly could indicate they don't have real silicon yet for whatever
the component involved is, but nobody I've seen has presented a compelling
argument for it being Pascal in particular.

------
rasz_pl
Nvidia cheating, lying, and using dirty tactics? Nothing new for AMD; it's
just a repeat of Intel's 2000-2006 phase.

~~~
beedogs
AMD doesn't own Nvidia.

------
beedogs
Nvidia would do well to cozy up to Intel at this point, and pray for an
acquisition. I can't see such an inept company surviving for much longer on
its own.

~~~
vegabook
It's valued at $14 billion. Hardly distressed or inept.

~~~
beedogs
If you've kept up with their history at all, they're one of the most inept
companies out there. They seem to succeed in spite of themselves.

~~~
vegabook
I have been using Nvidia cards since the original GeForce 256 in 2000. I
purchased one of the first ATI 8514/Ultra cards in the early 90s. I own two
R9 290s, and at work I use Tesla K40s. I follow GPU computing avidly, as I do
machine learning mainly using OpenCL (for my sins), precisely because I want
choice in the market. It would be much easier for me to do CUDA. I resist
because I reject proprietary APIs.

I know _exactly_ what I am talking about. I don't forgive Nvidia its sins, I
actually prefer AMD, but I am not about to lie to myself that Nvidia is
somehow an unsuccessful company, when it is worth 7x more than AMD, and AMD
also includes an X86 line. It's a no-contest scenario in the eyes of the
market, much to my chagrin, but I cannot deny the reality:

Nvidia is crushing AMD.

