
Groq, a Stealthy Startup Founded by Google’s TPU Team, Is Raising $60M - exotree
https://news.crunchbase.com/news/groq-a-stealthy-startup-founded-by-googles-tpu-team-is-raising-60m/
======
writepub
Hardware is hard. For Groq to be successful:

\- The boards using their chips need to fit into commodity interfaces (PCIe?
DIMM?) in Open Compute hardware

\- Someone needs to buy hundreds of thousands of these to even minutely impact
their bottom line

\- Multiple such big-volume wins need to happen consistently.

\- Their IP needs to actually be defensible. Otherwise, an Intel/Samsung, whose
manufacturing prowess & channel reach are many times Groq's, will undercut the
pricing with _almost_ the same performance/watt. Oh, and they'll happily play
nice with Open Compute and other standards bodies.

\- Most of all, their product needs to work as advertised, at scale, in a
reliable fashion. This is easier said than done in semiconductors, especially
for the kind of performance gains they're marketing.

 _If_ they'd gotten here with ~$30M in capital and demonstrated traction in the
marketplace, I'd give them a chance, but expecting Google's pay while working
at an independent chip startup with $0 in revenue portends financial doom.
It's not the founders' fault, though - hardware is capital intensive and not
compatible with agile development, and an MVP just won't cut it - the product
needs to be fully functional & reliable right out of the gate. I sincerely hope
I'm wrong, as I'd like to see the silicon put back in Silicon Valley - but
having been in the semiconductor industry for 20 years, it just seems unlikely.

~~~
deepnotderp
> The boards using their chips need to fit into commodity interfaces (PCIe?
> DIMM?) in Open Compute hardware

I mean, it's not exactly like PCIe interfaces are ultra-hard, must-be-done-in-
house, cutting-edge technology anymore. And why the huge emphasis on OCP?
Nothing there necessitates any kind of extreme innovation.

> Their IP needs to actually be defensible. Otherwise, an Intel/Samsung, whose
> manufacturing prowess & channel reach are many times Groq's, will undercut
> the pricing with almost the same performance/watt. Oh, and they'll happily
> play nice with Open Compute and other standards bodies.

This has been true historically, but TSMC can currently out-manufacture both
Intel and Samsung at 7nm. In addition, the steamroller of Moore's law no
longer really holds, with Dennard scaling dead and wire/MOL RC and variation
soaking up the performance gains.

Btw, most of these criticisms could have been leveled against Nvidia way back
when.

~~~
writepub
Not criticisms, merely truths about the industry.

Also, from Nvidia's founding to now, things have radically changed. It's
depressing how winner-take-all semi has become. Nvidia could tape out for
$300k; today it's $2M. The number of semi players has shrunk drastically even
within the last 5 years.

------
nicodjimenez
Delivering an incremental speed boost vs Nvidia chips seems like a tough way
to win. Even if they can be 2x faster, people will still stick to CUDA. They
either have to be 5-10x faster or find a new market. Definitely possible.
Would be pretty embarrassing for Nvidia if that happened.

~~~
sytelus
Nope. People aren't addicted to CUDA. In fact, it's hell to work with CUDA.
Just to download the binaries you have to go through a registration process,
the docs are shit, and version dependencies are a nightmare. The only reason
CUDA is in use is that in the old days it was the only game in town, and the
Caffe framework integrated it from some very early code researchers wrote.
Then people kept using that baseline code all the way to TF and PyTorch.
Thanks to the TPU, frameworks are already being forced to be agnostic, and new
alternatives will be much easier to integrate.

If the Groq chip delivers what it's promising, then you can bet that it would
be integrated within a few months in most frameworks and people will soon
forget about CUDA. Most people who work with deep learning neither write code
specific to CUDA nor care that CUDA is being used under the hood, as long as
things are being massively parallelized.
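
To make that concrete, here's a minimal PyTorch sketch (my own illustration,
nothing to do with Groq's stack) of what backend-agnostic user code looks
like - swap the device string and nothing else changes:

```python
import torch

# Pick whatever accelerator the framework supports; the model code
# below is identical regardless of the backend.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(1024, 10).to(device)
x = torch.randn(32, 1024, device=device)

# No CUDA-specific code anywhere: the backend is an implementation
# detail hidden behind the tensor API, which is why a new chip only
# needs framework integration, not user-code changes.
logits = model(x)
```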

~~~
sanxiyn
> If the Groq chip delivers what it's promising, then you can bet that it
> would be integrated within a few months in most frameworks and people will
> soon forget about CUDA.

Groq seems careful not to promise any price point. Even if Groq delivers every
promise, if it's expensive its adoption will be chancy.

------
sytelus
The numbers on their website are quite stunning, if true:

\- 16X more power efficient than Titan X

\- 3X more ops than Titan X

\- 25K images/sec vs 5K images/sec inference on Nvidia

I'm completely bewildered why Nvidia hasn't come up with deep-learning-specific
chips yet that don't carry the crud of a massive rendering pipeline.

[https://blog.groq.com/2017/11/09/69/](https://blog.groq.com/2017/11/09/69/)
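
As a rough back-of-the-envelope check (my arithmetic; it assumes every figure
uses the same Titan X baseline and an inference workload, which the blog post
doesn't spell out):

```python
# Sanity check on the claimed figures (assumptions: same Titan X
# baseline throughout, inference workload, per-chip numbers).
titan_images_per_sec = 5_000     # claimed Nvidia figure
groq_images_per_sec = 25_000     # claimed Groq figure
claimed_perf_per_watt = 16       # "16X more power efficient"

speedup = groq_images_per_sec / titan_images_per_sec   # 5.0x
# If perf/watt is 16x but throughput is only 5x, the implied power
# draw is 5/16 ~ 0.31x of the baseline - roughly 78 W against a
# 250 W Titan X. Plausible for a fixed-function inference ASIC,
# but none of it has been independently verified.
implied_power_ratio = speedup / claimed_perf_per_watt
print(f"{speedup:.0f}x throughput at {implied_power_ratio:.2f}x the power")
```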

~~~
sanxiyn
NVIDIA already created such a chip: [http://nvdla.org/](http://nvdla.org/)

------
scottlegrand2
If it's FP32 or FP16/32, it's interesting. If it's INT8/32, it's incrementally
better than a 2080 Ti GPU, and if it's INT4/32, it's stillborn.
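
For anyone unfamiliar with the notation: "INT8/32" is usually read as 8-bit
operands accumulated into 32-bit integers. A minimal NumPy sketch of symmetric
INT8 quantization (purely illustrative - nothing is known about Groq's actual
scheme):

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization: map the float range to [-127, 127]."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.random.randn(64, 64).astype(np.float32)  # FP32 weights
a = np.random.randn(64, 64).astype(np.float32)  # FP32 activations

qw, sw = quantize_int8(w)
qa, sa = quantize_int8(a)

# INT8 multiplies feed an INT32 accumulator (the "/32" part), then a
# rescale back to float. The rounding above is where accuracy is lost,
# which is why low-bit formats tend to suit inference, not training.
acc = qw.astype(np.int32) @ qa.astype(np.int32)
approx = acc.astype(np.float32) * sw * sa
print("max abs error:", np.abs(approx - w @ a).max())
```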

~~~
monocasa
It depends. If the chips are for training, I agree with you. If they're for
inference, I think the jury's still out.

~~~
sanxiyn
It's an inference chip.

------
syntaxing
I'm kind of curious how many electrical engineers with the talent to design
ASICs are out there. Most electrical engineers I've met who designed ASICs at
one point or another did so mainly for military use or in the semi industry.
And the team that designed the chips was always super small compared to the
mechanical or electrical team.

~~~
deepnotderp
If you mean full custom, very few, and most are in I/O.

If you mean synthesis, a lot.

~~~
syntaxing
Yeah, it seems like full-custom ASICs kind of died off a little when
affordable, robust MCUs came out. Kind of nice to see it picking up again!

------
woodrowbarlow
Founded by _former members_ (albeit founding members) of Google's TPU team.

I wonder how they negotiated this from a legal standpoint. Every employment
contract I've ever signed certainly would not allow for starting a project
this similar to my employer's core business.

~~~
ttul
You can’t - in California anyhow - really stop employees from quitting and
then competing with you. Most forms of non-compete have been made unlawful,
which is a really good thing for competitiveness.

In any case, I don’t see any confirmation that this startup is pursuing
something that would be competitive with Google. The whole stealth thing
leaves us with little to go on.

~~~
ryandrake
But companies can (and Google does) prevent _current_ employees from working
on side projects—even on their own time and equipment. So I suppose the
relevant question is: did these guys leave and then start with a completely
blank slate? Presumably yes! Any IP ownership ambiguity that would arise from
moonlighting would have been flushed out during funding due diligence.

~~~
kornish
Is this true? Moonlighting is legal in CA [1].

[1]: [https://danashultz.com/2016/05/31/moonlighting-employees-protected-california-labor-code/](https://danashultz.com/2016/05/31/moonlighting-employees-protected-california-labor-code/)

~~~
williamscales
Depending on the IP assignment clause in one's employment agreement, even if
you are allowed to moonlight on your startup your employer may own the IP. I
just checked mine and if I make any inventions related to my employer's
business, then per the terms, I automatically grant the company the right to
that invention.

I've heard rumors that one can negotiate these clauses. But this is what I
expect would be relevant for these engineers.

~~~
ttul
They can try to enforce that, but my understanding is that your brain is
generally yours outside of working hours, and stuff you dream up is also
yours.

But, obviously it’s easier to quit and then do your inventing.

~~~
ryandrake
It seems less risky to just not moonlight, especially if that moonlighting has
the potential to turn into a high-$$$ business. Day-zero IP litigation is the
last thing you need to be involved with when you're trying to bootstrap a
technology startup. Even if you know with 100% certainty you would win, you'd
be bled dry fighting BigCompany's hundreds of lawyers.

------
Soundest
This looks interesting. Whilst I agree with other commenters that it's hard to
compete in hardware, I think there's a good niche for this product. Google
isn't going to start selling TPUs, so something off-the-shelf for machine
learning might gain some real traction. Best case, lots of sales to cloud
providers (Amazon, Microsoft, etc.) and lots of custom houses. Worst case
would be acquisition by MS or Amazon.

Having said that, it's certainly true that Nvidia is tough to beat. But right
now we're in a bubble: VCs will throw millions at companies and big
corporations will throw billions at acquisitions. So I think it's probably a
very profitable move in general.

~~~
boulos
Disclosure: I work on Google Cloud.

Local inference can be important and even a requirement, so at NEXT we
announced our intent to start shipping our Edge TPUs:
[https://cloud.google.com/edge-tpu/](https://cloud.google.com/edge-tpu/)

~~~
perrohunter
I’m still waiting patiently for those to arrive so I can order a couple

------
jmunjr
This is troubling:

[https://seekingalpha.com/article/4206948-nvidias-inference-problem-alarming-sell-side-ai-iq](https://seekingalpha.com/article/4206948-nvidias-inference-problem-alarming-sell-side-ai-iq)

------
HNNewer
I believe they got so much funding because they come from Google, not for the
product itself. They could have sold even crap hardware.

------
lquist
Is this competitive with Nvidia's chips? How worried should they be?

~~~
writepub
Not the least bit worried. It's not about performance, it's about reliability.
When you're a big cloud provider, only a miracle can get you to use a chip
from a no-name company that no one else has dogfooded. Then there are the
actual numbers being marketed - there's been no independent verification.
Right now, this is a fancy PR puff piece.

~~~
sytelus
No, Nvidia should be very worried. There is a huge uproar in the community
over some of the practices Nvidia has forced, like requiring the 3x-more-
expensive version of the same GPUs in data centers. Even consumer GPUs are in
short supply. Most people are not using the massive, complex rendering
pipelines these GPUs have, but they are paying for them in price and wattage.
There is huge demand for a consumer version of TPU-like chips, and the market
is going to eat up any similarly performing alternative. A lot of the gain in
Nvidia's revenue comes from the blockchain and deep learning segments, and
much of that gain is at risk from chips like the TPU or Groq. It is quite
surprising that Nvidia hasn't announced any competing product, and I hope they
don't fall asleep at the wheel while this big wave is about to hit the market.

~~~
twtw
> Most people are not using the massive, complex rendering pipelines these
> GPUs have, but they are paying for them in price and wattage.

This is not how compute on GPUs works now, nor has it been since G80 was
released in 2006. The "massive complex rendering pipeline" doesn't even light
up.

------
nobrains
Are they planning to create cryptocurrency miners?

------
person_of_color
Are they hiring?

~~~
jasondrowley
Author of the article here. Yes, they are apparently hiring, or so my internet
research suggests. One of the founders' LinkedIn profiles says they're mostly
building in Haskell.

~~~
person_of_color
Thanks. They must be using www.clash-lang.org/

~~~
sanxiyn
It's probably Bluespec instead.

------
css
> a company with a very spartan website

What does the word "spartan" mean in this context? "Serious?" "Utilitarian?" I
do not know this usage of the word.

Edit (from Webster):

> 2 b _often not capitalized_ : marked by simplicity, frugality, or avoidance
> of luxury and comfort. "A spartan room"

~~~
jasondrowley
Author of the article here. If you look at the company's website, it is very,
very plain.

~~~
css
Yeah, I like it! I just have never seen that word used in that context.

~~~
charmides
Are you a native speaker of English?

~~~
css
English and Mandarin, learned in parallel.

------
anonymous5133
Definitely seems interesting.

