I wonder if consumer VR would have fared better if gamers hadn't had to compete with miners and data centres for chips and had more reasonably priced cards a few years back.
But instead of stepping up to the plate and igniting a Red Queen's Race that would have benefited everyone, INTC first tried to discredit the technology repeatedly, then built an absolutely dreadful series of decelerators that demonstrated just how badly they misunderstood manycore. Eventually, they gave up, and now they're playing catch-up by buying companies that get within striking distance of NVDA rather than building really cool technology from within.
Now if someone threw a large pile of money at AMD again, things could get really interesting IMO. But the piles of stupid money seem biased towards throwing ~$5M per layer at the pets.com of AI companies these days.
It's not that Intel doesn't have people who recognize the issues; rather, the people who do have that foresight are drowned out by people who don't realize the game has changed. Intel, to be fair, does have the best autovectorizer--but designing vector code from scratch in a purpose-built vector language is still going to produce better results, as shown when ispc beat the vectorizer.
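To make that concrete, here's a minimal sketch (function names are mine, purely illustrative) of the difference in CUDA-flavored C: an autovectorizer must prove loop iterations are independent before it can vectorize, while an SPMD language like ispc or CUDA lets the programmer state that independence outright.

    // The autovectorizer must *prove* iterations are independent before
    // vectorizing; potential aliasing between a and b can block that proof.
    void scale_scalar(float *a, const float *b, int n) {
        for (int i = 0; i < n; ++i)
            a[i] = 2.0f * b[i];  // may or may not be vectorized
    }

    // In an SPMD language, independence is stated rather than inferred:
    // each thread owns exactly one element.
    __global__ void scale_kernel(float *a, const float *b, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) a[i] = 2.0f * b[i];
    }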
But Nvidia can also get drunk on its own kool-aid, just as Intel has. Nvidia's marketing would have you believe that switching to GPUs magically makes you gain performance, but if your code isn't really amenable to a vector programming style, then GPUs aren't going to speed your code up, and the shift from CPU-based to GPU-based supercomputers isn't going to leave you happy. There's still room for third-way architectures, and that space is anyone's game.
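As a hypothetical example of code that resists the vector style, take a first-order IIR filter: every output depends on the previous one, so you can't just fan the loop out across GPU threads; porting it means changing the algorithm (e.g., recasting it as a parallel scan), not just the annotation.

    // Sketch of a loop-carried dependency (names illustrative): y[i] needs
    // y[i-1], so iterations cannot simply be spread across GPU threads.
    void iir_filter(const float *x, float *y, float a, float b, int n) {
        y[0] = x[0];
        for (int i = 1; i < n; ++i)
            y[i] = a * x[i] + b * y[i - 1];  // serial chain through y
    }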
Bad management is when your company evaporates because you make one bad call.
Management gets points for surviving and then fighting back despite being wrong. And when you look at Intel's history, there are few companies on the planet who have managed to do that multiple times. They have a good mix of people who know what they are doing technically AND people who do whatever it takes to keep the company from sinking when those bad technical calls happen.
If Nvidia survives whatever their next bad call may be, expect them to start looking more and more like Intel.
That's a pretty low bar!
AMD could never have made the same gamble Intel did with Itanium. There's a long technical argument as to whether the world is better off, in a CPU-design sense, because of that, but I disagree that it's necessarily good management on Intel's part that allowed it to recover from disaster.
The best management can play the hand they're dealt perfectly and still lose. However, bad management can play the best hand poorly and still win.
CUDA's sweet spot lies between embarrassingly parallel (for which ASICs and FPGAs rule the world because these are generally pure compute with low memory bandwidth overhead) and serial (for which CPUs are still best), a place I call "annoyingly parallel." There are a lot of workloads in this space in my experience.
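A histogram is one example of what I'd put in that "annoyingly parallel" bucket (a rough sketch; the kernel name and bin count are illustrative): trivially parallel over the input, but the scattered, contended updates don't reduce to a clean ASIC datapath, and a CPU mostly serializes them.

    // Sketch of an "annoyingly parallel" workload: parallel over the input,
    // but with contended, data-dependent atomic updates to shared bins.
    // bins is assumed to hold NUM_BINS counters, zeroed before launch.
    #define NUM_BINS 256

    __global__ void histogram_kernel(const unsigned char *data, int n,
                                     unsigned int *bins) {
        // Grid-stride loop so any launch size covers the whole input.
        for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
             i += blockDim.x * gridDim.x)
            atomicAdd(&bins[data[i]], 1u);  // scattered, contended update
    }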
But if you don't satisfy both of the aforementioned requirements, and/or you insist on doing this all from someone else's code interfaced through a weakly-typed, garbage-collected, global-interpreter-locked language, your mileage will vary greatly (cough deep learning frameworks cough).
Finally, it doesn't matter who's doing it, marchitecturing(tm) drives me nuts too.
Controlling the language certainly helps Nvidia's economic moat.
Arguably, one of the things Nvidia really got right was learning from those past failures at other companies and making it easier for developers to use the platform: starting them from a standpoint they were familiar with and helpfully nudging them towards what would run fast in parallel.
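A minimal sketch of that on-ramp (a hypothetical toy program; assumes a CUDA-capable GPU and nvcc): the statement inside the kernel is the same one a C loop would execute, and only the loop itself becomes the thread grid.

    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];  // familiar C, one element per thread
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));  // unified memory: one more nudge
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
        cudaDeviceSynchronize();

        printf("y[0] = %f\n", y[0]);  // expect 5.0
        cudaFree(x); cudaFree(y);
        return 0;
    }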
I think the big disconnect is thinking they had to make people rewrite code. CUDA often targets entirely new codebases, and in some cases new types of applications.
The "rewriting of code" is mostly for things like AV processing and codecs where there was such a sellable benefit in performance it would have been insane for them not to invest the effort.
Intel was doubly hindered here, because they wanted everything to use x86. Intel had trade-secret and patent protections from competitors, and a critical mass of market share.
Parallel programming was something that had to fit into that "x86 for everything" mindset rather than being a separate/competing technology to x86. The company that pushed winmodems and software sound cards wasn't going to be able to lead the disruption there.
Only now, more than a decade later, do they realize their mistake and try to correct the juggernaut's course. Such glacial mistakes in this industry can deal death blows to even the largest entities.
They drowned out Pat Gelsinger in the late '00s; then Justin Rattner and many others retired in the early '10s. The rest is history.
Reminds me of "First they ignore you, then they laugh at you, then they fight you, then you win".
So many companies have this reaction (e.g. RIM with BlackBerry). Wondering if this is some kind of "corporate instinct".
There is a sort of survivorship bias in focusing on the 2% and assuming that's the norm. Corporations act this way because it generally works.
Like the OP said though, it does lead to arrogance over time, and that's when a fall happens.
Corporations can't really be as successful at innovating as a startup. A startup is free to build or reshape itself into anything, focus on a single thing, and pivot on a dime. In a corporation, the same structures that hold it up and keep it moving are the ones that resist changing direction or promoting something new. It's easy to lose focus and get lost in the red tape.
And that's before you consider the risk a CEO sees in potentially cannibalizing their own (currently successful) business or just throwing money down the drain at 98 losing ideas. Like you said, 2% of ideas may be successful so a corporation would rather let the startup play it out and then buy it if it has potential. Easier to justify to investors.
So corporations innovate when they have nothing to lose and any risk is worth taking. See MS somewhat successfully reinventing themselves after seeing mobile and FOSS booming. Private companies also have an easier time innovating because they have no investor pressure. They may be behemoths, but at least they can avoid suffering from "too many cooks in the kitchen" syndrome.
I know this to be true, however I cannot understand for the life of me why this is the case.
If I were the CEO or CTO and had, say, 5k people under me, you had better believe there would be dozens of little 3-4 person teams doing hard research on threats and coming up with recommendations to get in front of them.
I mean this is basic 1st year MBA SWOT Analysis stuff.
And having lots of teams "innovating" is also not that great. You'll just end up with a stack of 100 great ideas on your desk, but only 2 that might make money, and your job is to guess which 2. Any decision you take will be heavily scrutinized by everyone in the company and by shareholders. You may just go the safe way that worked over the past few years and puts a bonus on the table.
A 10-20-100 person startup with everybody in the same office and a very flat structure will be a lot more agile. The people are all there for that one single purpose, and the dynamic is quite different. Once the goal is reached many just move on. This provides a very different motivation vs. the typical corporate employee.
Holding the place of an incumbent has advantages and disadvantages. Sometimes you can't leverage the advantages, and that's when a company can get buried worst by the upstart.
That somebody might be Nvidia. I believe Intel was still battling and had not yet paid the 1.06 billion Euro fine imposed on it by the European Commission in 2009 over its conduct against AMD. Hearsay claims that it was the similar US settlement that basically paid for Zen R&D...
It seems gratuitously confusing for readers, and doesn’t seem to have any benefit I can see.
As far as I can tell, the hardware development continued and the next wave of VR will have much higher quality and more performant GPUs to make a good first impression.
It would look different if the entire VR field had collapsed due to low sales, but tbh it looks like it's maturing slow and steady, and that's how it should be.
So essentially, high GPU prices might have given the field just enough time to mature with patient early adopters before it goes mainstream. At least that's how I hope it will be.
Though afterwards, what got me to uninstall everything was a combination of Facebook/Oculus being untrustworthy and the low playerbase/matchmaking problem in Echo Arena.
I had much better luck getting people to try things like Daydream, which have extremely limited processing power but could do cartoony graphics just fine.
I think the industry really missed an opportunity to start with lightweight AR. Let the users walk around by showing them where they are and overlay things on top. There are plenty of useful applications; many require precision sensing that phones don't have, but there are others that can be done with just an accelerometer and a camera.
The problem with lightweight AR is that with current technology, it's not that useful. If you've ever tried Google Glass or Focals by North, there's not much you can do with them that couldn't be done better with a smartwatch.
And if you try to pack a larger FOV display or more processing power into the glasses, you end up with something like HoloLens, and then you've got a similar problem to the VR headsets -- it's probably not something that you'll want to wear for long periods of time (you can't anyway due to battery life), and certainly not something you'd wear walking down the street.
It's not that these are missed opportunities. On the contrary, there are a lot of people who have been working on them for years with billions of dollars spent in the process. But they're hard, and it will take time to get there.
 The one exception was being able to capture quick moments with the Glass camera -- but of course, that raised privacy concerns.
This is part of why I find Facebook's Quest so compelling.
And then CUDA emerged.
So it's not like they magically thought of GPGPU -- there were people working on it before CUDA, but yeah, they had the vision and invested the money and hours of work to make it happen.
I don't think VR changed the GPU demand landscape at all. Most of that market has high overlap to those already doing PC gaming. And if you're already doing PC gaming, you probably already have a GPU perfectly capable of doing VR.
Consumer VR would fare better if there were a halo game to justify the investment. Entry-level VR still starts at $400, and there aren't a lot of games where you go "yes, I must try this", and fewer still that are then also VR-exclusive to really force you into the VR experience.
Those non-gamers are still going to want all sorts of things going on in their VR scenes, with framerates high enough that 10% of the population doesn't get motion sick.
There is no such thing as a 'killer app' "ATM" - a 'killer app', by definition, is one that transforms technology/society such that afterwards we 'cannot do without' it (or at least think so) - e.g. personal computers + spreadsheets, smartphones + mobile internet, etc.
If a 'killer app' exists only 'at the moment', it isn't a killer app, because there is a 'next' moment, implying that the previous 'killer app' wasn't 'killer' at all, since we can now do without it.
As for crypto and AI: I don’t think anyone saw that coming, but it was a safe bet that, one day, a technology would emerge that could use this kind of calculation power. And that promoting and investing in GPU computing in general was thus a smart thing to do. IOW: making your own luck.
A couple of years back, when I was researching a stock investment, I looked into NVDA's product offering. I used to think NVDA was only good at video cards, but was surprised to find they had already fanned out to a number of different areas with their GPU technology. Basically, anything that needs massive parallel computing is a candidate for the GPU. From memory, they were into supercomputing, cloud computing, animation render farms, CAD, visualization, simulation, automotive (lots of cars have Nvidia chips), and of course VR/AI. Crypto wasn't on their product roadmap, I don't think. It just happened.
Not really. After the crypto kerfuffle, Team Green is very careful about jumping on minute trends like "AI," and very reasonably so. The first wave of cookie-cutter "AI" companies are already beginning to offload their GPUs.
Except in the short term, doesn't increased demand for chips generally decrease unit prices rather than increase them? Do you really think there was such a large shock, and prices rose so fast, that this permanently hobbled VR uptake even years later, when chips are cheaper than they would have been in the absence of crypto and DL?
I'm starting to come around to the idea that it's here now and the size of the market is just pretty much what it's going to be.
They come out amazed and wondering why these aren't everywhere. Then they ask how much it costs (~$4000) and their excitement vanishes.
You can get a setup for much less than that now, but it's blurrier, slower, uglier, more nauseating, and lacks important features like finger tracking. It won't be smartphone-level for a long time, if ever, but once the tech reaches affordable levels, I'm convinced there will be a larger audience.
Vive trackers $300
2080 Ti $1200 (I suppose I should count this by its new "low" price, but I got it early)
Overclocked i7-6700k and high-end motherboard, closed-loop cooler with better fans, SSDs, PSU, case, RAM, etc. ~$1400
The important parts are the Index, the 2080 Ti, and a CPU with high single-thread performance. If you lose the trackers (more trouble than they're worth, really) and go more budget on the other parts you can put together something equivalent for under $3000, but not by much.
If they can get rid of the headset requirement, then I think the potential is almost limitless.
It will make users go "wow" at first, then they toy a little with it, then recognize that there just isn't that much great stuff you can actually do with it (as a recreational user) especially considering how clumsy and annoying the gear is and will remain for the foreseeable future.
Sure, it will still have a following, and there still will be current and new special purposes where the technology actually makes sense, but I cannot imagine it will see true wide adoption on smartphone or even TV scale.
Glass & 3D TV never added anything meaningful to the mix. They were just extensions of technology that already existed. VR entirely changes the paradigm of how we interact with computers.
Even if it's "only" VR gaming that takes off, that is an entirely new medium of storytelling for artists to explore. We don't see those often.
Remember that DOOM was more popular than Windows - games are often all the system seller you need.
Yeah, it's clumsy and annoying now, but I don't think it'll be for much longer. The Quest is super usable already.
Still, the number of units sold is probably measured in the tens or hundreds of thousands, not in millions, let alone billions.
Besides, they haven't exactly replaced their supply chain and brought everything in house. Much of their hardware is coming from the same sources as their competitors.
This is a reasonable thing to outsource: let someone qualified build it for you.
What about building silicon and having Project Zero-level security experts in house?
Disclosure: also work at G.
Samsung has a Project Zero-style team that secures their smart TVs, for obvious reasons.
A friend of mine cracked all their security from his couch at home, and now works for them as an external security expert.
I can cite one of those "senior security expert architects" from Samsung, during their meeting with my friend:
"It's not possible you found a level zero vulnerability in our software, we are the best of the best".
A few minutes later, that security architect was fired.
TL;DR: in-house experts often don't mean much; there is a whole world of even better experts who sit on couches at home and hate the corporate world.
I sincerely doubt your entire story, especially the "few minutes later that security architect was fired" portion.
The security of Chrome is astounding (and certainly far better than Apple or Microsoft).
The only black stain is Android... (Edit: disclaimer: I use Android)
Google has a very fundamental conflict of interest w.r.t. privacy that stems both from revenue (advertising) and product development (e.g., labelling personal data acquired from consumers). Google will almost certainly act against its data producers' (individuals') best interests, given its very deep requirement to obtain private benefits from not protecting them.
How would you rate Google's use of Location History to tell advertisers when you visit their store in terms of "very good privacy controls"?
"The AP learned of the issue from K. Shankari, a graduate researcher at UC Berkeley who studies the commuting patterns of volunteers in order to help urban planners. She noticed that her Android phone prompted her to rate a shopping trip to Kohl’s, even though she had turned Location History off.
“So how did Google Maps know where I was?” she asked in a blog post."
"At a Google Marketing Live summit in July, Google executives unveiled a new tool called “local campaigns” that dynamically uses ads to boost in-person store visits. It says it can measure how well a campaign drove foot traffic with data pulled from Google users’ location histories."
"Google also says location records stored in My Activity are used to target ads. Ad buyers can target ads to specific locations — say, a mile radius around a particular landmark — and typically have to pay more to reach this narrower audience." 
It is virtually impossible to certify that any given disclosure, no matter how grouped, anonymized, fuzzed, etc. is incapable of being re-associated given other data and time.
I also do not subscribe to the rather expedient definition that your privacy hasn't been violated as long as no human has seen the data. That's an unsupportable claim. As long as my privacy is only a millisecond away from anyone's view on a whim, mistake, or trivial disclosure, or some automated system is making decisions that affect me based on my private data, my privacy has been violated.
* I have no way of knowing if they actually delete any personal data when I ask them. I suspect that they just don't show it to me anymore.
This is the kind of thing that you go over in laborious detail in privacy design docs at Google, to make sure you have it covered.
Source: I'm a Xoogler.
Unfortunately, not a lot of it is public.
I'm really disappointed, but I have hopes Kurian will shake it all up and move things up and to the right.
I just correct it when I see it, and don't worry about it otherwise.
I know they lay down their own network cables.
I can't readily find a link, but their rack-mount servers are, I believe, "custom" designs just for them (presumably done in-house too), although from what I know they use off-the-shelf CPUs from Intel and AMD (there was an announcement very recently that they were using Epyc now, for example), GPUs, etc.
It would not surprise me if the x86 CPUs & GPUs were the only external things they use, and I bet they're looking at their own custom ARM chips for certain workloads (like they use their custom TPUs for certain workloads).
1 - https://www.wired.com/2015/06/google-reveals-secret-gear-con...
2 - https://www.theregister.co.uk/2018/07/18/google_dunant_cable...
But they probably use standard chipsets.
This statement holds water only for the handful of companies that can actually afford to build their own silicon.
OTOH, the intended message could be “you can't fully understand the issues without the insight gained by building your own silicon”, in which case it would be just as true of companies without the resources to build their own silicon.
Building your own silicon is the last thing you do to ensure security, even if you are Google; you only do it when it's the last broad viable attack vector left for your adversaries to exploit.
As I’ve stated already it’s the last thing you do when you think about security not the first.
When the security posture of your code, configuration, facilities, supply chain, etc. is so good that the only attack vector left for adversaries to exploit to compromise your organization in a sufficiently broad manner is the hardware, that's when you start thinking about building your own silicon.
Or, of course, when there are actual business needs for it, e.g. you want to be independent of other SoC/ASIC designers, or there aren't any solutions that meet your needs.
Which brings us to the actual point: other than the Google key/HSM solutions, which look more like a rebadge than a homegrown design, Google is focusing on things like the TPU for business reasons, aka $$$, not security.
Google isn’t going away from Xeon/EPYC or x86 any time soon, and I find it very questionable if anyone would make a security argument against NVIDIA GPUs as far as Google goes since Google even wrote their own driver stack and CUDA compiler.
But that was not the message; it's about understanding risks.
As in relying on logical separation of client traffic rather than a physical one, so they don't need as many physical ports, switches, and most importantly network cables (often the highest actual cost in many data centers as far as networking goes)?
Buying gold plated cables?
I’m not sure if anyone actually uses VXLAN yet.
You didn't actually make a point, because you haven't provided proof that what Amazon did for AWS wasn't done because of operational requirements.
It's not to say that Apple doesn't matter to nvidia as a customer or potential customer, just that it has nothing to do with this article because Apple doesn't have as much datacenter footprint... yet.
But just as a point of scale, Apple spending $10B on datacenters globally over 5 years is less than Google spends in a single year in just the US.
However, I agree that eGPUs aren't a solution. I had a bad experience with Nvidia on my 2011 MBP, but my 2017 MBP with an AMD chip is as good as the 2011 was in its day.
What I can say about that, and about the current shift in computation, is that we will all be writing code for vector processors soon. It's a sea change, and the throughput is an edge for anyone who can use it now and in the near future. That's what this article is about. Google is the only other company making vector processors for compute at scale, and they have the BEST tools for people making use of them in their cloud service, even though generally they are provisioning Nvidia GPUs.
AWS doesn't ship computers and is lagging behind in their compute services. Nvidia with CUDA changes the landscape in a crazy way. You might not care about it now, but having an understanding of it is, IMO, critical to anyone that plans to be working on computers in the next 2-10 years unless you have a really untouchable position. Even if you think your position is untouchable, you might be in for a shock when someone blows the doors off your business logic with CUDA and you can't catch up.
Regardless of GPU choice, "write once" (or close to it) cross platform compute shaders are coming in 2020 and there's no way anyone is going to bump CUDA out of being at the front of that.
Was his field /r/bitcoin? I'm pretty sure the entire world saw that one coming.
Easy, it's luck.
Google mentioned open source drivers as one of the reasons for picking AMD GPUs for Stadia.
The cogs do turn, albeit slower than people prefer.
Microsoft's also been developing and using ASICs in Azure. I guess they're just not contracting NVIDIA for any part of the process.
Nvidia doesn't own fabs.
The article says nothing about Google contracting Nvidia to make chips for them, only that Nvidia is not particularly concerned about competition from Google.