
“Years of success” is not a moat.

“Years of success” means zero in the technology world.

Something better, with zero years of success behind it, can win almost instantly.




I'd beg to differ: migrations are extraordinarily expensive in tech. If you have a skyscraper, you don't tear it down and rebuild it when materials become 10% stronger. Big tech firms generally maintain market position for decades: Cisco remains the networking winner, IBM still dominates mainframes, and Oracle is going strong.

AI compute isn't something that snuck up on NVIDIA; they built the market.


> migrations are extraordinarily expensive in Tech

Is that really the case with deep learning? You write a new model architecture in a single file and switch to a new accelerator card by changing the device name from 'cuda' to 'mygpu' in your preferred DL framework (such as PyTorch). You can obtain the training dataset without NVIDIA, train on NVIDIA to get the model parameters, and then do inference on whatever platform you want. Once an NVIDIA competitor builds a training framework that works out of the box, how would migration be expensive?
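
To make that concrete, here's a minimal sketch of what the device swap looks like in PyTorch. 'cuda' and 'mps' (Apple GPUs) are real backend strings; something like 'mygpu' would be whatever a vendor registers, so treat the non-CUDA path as illustrative:

    import torch
    import torch.nn as nn

    # The whole "migration" at the modeling layer: pick a device string.
    # 'cuda' is NVIDIA; 'mps' (Apple) is a real alternative backend;
    # a hypothetical vendor backend would plug in the same way.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    model = nn.Sequential(
        nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)
    ).to(device)
    x = torch.randn(32, 784, device=device)
    loss = model(x).sum()
    loss.backward()  # autograd runs on whichever backend was chosen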


“Builds a training framework that works out of the box.”

This is the hard part. NVIDIA has built thousands of optimizations into cuDNN/CUDA. They contribute to all of the major frameworks and do substantial research internally.

It’s very difficult to replicate an ecosystem of hundreds to thousands of individual contributors working across 10+ years. In theory you could use Google/AMD offerings for DL, but for unmysterious reasons no one does.
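
For a sense of what those baked-in optimizations look like at the user level, here's one real, tiny example: PyTorch exposes cuDNN's convolution autotuner and tensor-core math behind backend flags, and a competitor needs an equivalent for every path like this.

    import torch

    # cuDNN benchmarks several convolution algorithms per input shape
    # and caches the fastest one; a deep CUDA-stack optimization hidden
    # behind a single flag.
    torch.backends.cudnn.benchmark = True
    # Allow TF32 tensor-core math for convolutions on Ampere+ GPUs.
    torch.backends.cudnn.allow_tf32 = True

    conv = torch.nn.Conv2d(3, 64, kernel_size=3).cuda()
    x = torch.randn(8, 3, 224, 224, device="cuda")
    y = conv(x)  # dispatched to an autotuned cuDNN kernel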


Yep, just look at SGI and Sun and all that.



