Hacker News

If anyone wants to port this over to ROCm / AMD MI300x, reach out to me: hello@hotaisle.xyz (we won't ever spam you).

Happy to donate the compute time for this work.




You're the AMD accelerator server company! Such cool work, hope someone takes you up :)


Not trying to be rude, but what is the thinking behind this offer? Why would someone do this port for free, aside from access to the hardware? What's the upside for them?


Not a rude question. I'm building public HPC supercomputers, currently focused on AMD hardware. The one I'm about to deploy is Top 150, which is a pretty good start.

The goal is to encourage a developer flywheel. The more developers working with AMD hardware, the more hardware that is needed, the more hardware I can justify buying, the bigger my super computers get.

Nvidia has been doing the flywheel for years and it has clearly worked. Why not do the same for AMD? As I said in another thread, anyone who thinks there should be a single provider for all AI compute needs will be on the wrong side of history.


Perhaps I phrased my question wrong; I think you answered what you are getting out of this. My question is what the person writing code for you gets out of it.


FA is open source. They aren't writing code "for me" and you would have to ask them.


No one person or one company SHOULD have huge control over humanity, I agree.

But practically speaking, this is a unique time in the history of technology, because there are quick feedback loops that turn that flywheel you mentioned into an insurmountable first-mover advantage.


> But practically speaking, this is a unique time in the history of technology, because there are quick feedback loops that turn that flywheel you mentioned into an insurmountable first-mover advantage.

I'm staking my career and business on you being wrong about the insurmountable part. This is just the beginning of a long road, and I'm not the only one who believes this. My partnerships with Dell, Advizex, and a huge soon-to-be-announced datacenter company aren't small beans.

Much like I didn't know what the internet would look like when I first joined in 1991. But what I can see very clearly, from my decades of experience in the tech field, is that history is repeating itself with what is happening in AI.

As I'm also prone to say... this isn't a football match where one team needs to "beat" the other. It really is enough to have multiple players in the market and nothing more than that. In fact, I'm more than happy to deploy any type of compute that my customers want me to deploy for them, including Nvidia.

Even Lamini, which was previously AMD-only, just announced [0] that they are partnering with Nvidia. Their software will run equally well on any system. Why? Because it builds a simple bridge from one platform to the next. It reminds me of Java's "write once, run anywhere" slogan, which actually worked pretty well.

[0] https://x.com/realsharonzhou/status/1811439958277927294


I'm not saying it is impossible for other companies to build good and profitable products. Google, AMD, Tesla all have good AI systems.

I'm saying NVDA uses their own chips to help build more chips, AND they are intricately involved in the buildout of the $100B data centers and in TSMC's roadmaps. That, combined with huge and growing profits, creates even more advantages over competitors.

Obviously this doesn't go on forever; NVDA will never have $100T of profit in a quarter. Years from now the feedback loops will hit diminishing returns, and there will eventually be commodity AI systems.


I did not use the word impossible. Nobody is arguing that Nvidia won't be the dominant player for a long time. That does not mean there isn't a good business in being in the game.

> Years from now the feedback loops will have diminishing returns and there will be commodity AI systems eventually.

Maybe, but the cat is out of the bag. Before, it was a question of Moore's law and speed, but nobody talks about that anymore... all they talk about is that the need for raw compute (not even the fastest) is officially boundless.




