oelang's comments

AMD is already competitive on inference


Their problem is that the ecosystem is still very CUDA-centric as a whole.


Any sensor that captures a ton of data that needs realtime processing to 'compress' it before it can be forwarded to a data accumulator. Think MRI or CT scanners, but industrially there are thousands of applications.

If you need a lot of realtime processing to drive motors (think industrial robots of all kinds), FPGAs are preferred over microcontrollers.

All kinds of industrial sorting systems are driven by FPGAs because the moment of measurement (typically with a camera) & the sorting decision are less than a millisecond apart.

There are many more; it's a very 'industrial' product nowadays, but sometimes an FPGA will pop up in a high-end smartphone or TV because it makes it possible to add certain features late in the design cycle.


If you're looking for fair comparisons, don't ask Nvidia's marketing department, those guys are worse than Intel.

What AMD did was a fair comparison, while Nvidia applies their transformer engine, which rewrites & optimizes some of the computation in FP8, and they claim no measurable change in output. So yes, Nvidia has some software tricks left up their sleeve, and that makes comparisons hard, but the fact remains that their best hardware can't match the MI300X in raw power. Given some time, AMD can apply the same software optimizations, or one of their partners will.

I think AMD will likely hold the hardware advantage for a while: Nvidia doesn't have any product that uses chiplets, while AMD has been developing that technology for years. If the trend towards these huge AI chips continues, AMD is in a better position to scale its AI chips economically.


Not my area, but isn't a lot of NVIDIA's edge over AMD precisely software? NVIDIA seem to employ a lot of software devs (for a hardware company) & made CUDA into the de facto standard for much ML work. Do you know if AMD are closing that gap?


They have improved their software significantly in the last year, but there is a movement broader than AMD alone that wants to get rid of CUDA.

The entire industry is motivated to break the Nvidia monopoly. The cloud providers, various startups & established players like Intel are building their own AI solutions. At the same time, CUDA is rarely used directly; code is typically written against a higher-level (Python) API that can target any low-level backend like CUDA, PTX or ROCm.
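
As a rough sketch of what that layering means in practice (assuming a PyTorch build against either backend; ROCm builds still expose the GPU through the 'cuda' device name):

    import torch

    # The same high-level code runs unchanged on Nvidia or AMD hardware;
    # the backend (CUDA or ROCm/HIP) is chosen when PyTorch is installed,
    # not in the model code itself.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    x = torch.randn(1024, 1024, device=device)
    y = x @ x.T  # dispatched to cuBLAS on Nvidia, rocBLAS/hipBLAS on AMD
    print(y.device)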

What AMD is lacking right now is decent support for ROCm on their consumer cards on all platforms. Right now, if you don't have one of these MI cards or an RX 7900 & you're not running Linux, you're not going to have a nice time. I believe the reason for this is that they have 2 different architectures, CDNA (the MI cards) and RDNA (the consumer hardware).


> Right now, if you don't have one of these MI cards or an RX 7900 & you're not running Linux, you're not going to have a nice time.

Are you saying that an RX 7900 + Linux = the happy path for ML? This is news to me, can you tell me more?

I would love to escape CUDA & the high prices of Nvidia GPUs.


That's what I have (an RX 7900 XT on Arch), and ROCm with PyTorch has been reasonably stable so far. Certainly more than good enough for my experimentation. PyTorch itself has official ROCm support and things are pretty much plug & play.
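
If it helps, the sanity check I'd run looks roughly like this (a small sketch, assuming the ROCm build of PyTorch; torch.version.hip is only set on ROCm builds):

    import torch

    print(torch.cuda.is_available())      # True if the ROCm stack sees the GPU
    print(torch.cuda.get_device_name(0))  # e.g. the RX 7900 XT should show up here
    print(torch.version.hip)              # ROCm/HIP version string; None on CUDA builds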


> Given some time, AMD can apply the same software optimizations, or one of their partners will.

Except they have been given time, lots of it, and yet AMD is not anywhere close to parity with CUDA. It's almost like you can't just snap your fingers and willy-nilly replicate the billions of dollars and decades of investment that went into CUDA.


That was a year ago. AMD is changing their software ecosystem at a rapid pace, with AI software as the #1 priority. Experienced engineers have been reassigned from legacy projects to focus on AI software. They've bought a number of software startups that were already developing software in this space. It also looks like they replaced the previous AMD top-level management with directors from Xilinx to re-energize the team.

To get a picture of the current state, which has changed a lot, this MS Ignite presentation from three weeks ago may be of interest. The slides show the drop-in compatibility they have at the higher levels of the stack and the translation tools at the lower levels. Finally, there's a live demo at the end.

https://youtu.be/7jqZBTduhAQ?t=61


The transformer engine is a fairly recent development (April this year, I think) so I don't think they're very far behind.


> refers to scalability of the syntax

And the flexibility of the type system and the advanced type inference.


Well, you're wrong, you can't run VS Code in the cloud.


It's not full VS Code, but the shell certainly runs in the browser, along with the editor, auto-complete, etc. https://news.ycombinator.com/item?id=14259463

I have a comment elsewhere in this thread imagining a headless VS Code with a remote UI in the browser. That would get you something close to Theia, with the full VS Code ecosystem.


VS Code's architecture is not a good fit for that, as it runs additional Node processes (e.g. the extension host process) that chat with the renderer process at a very fine-grained level.


Interesting. This is possibly too late to get a response, but can you give me any more detail? What constitutes chatty? What IPC mechanism? Is it just that the API for extensions is so wide open?

And still, do those things prevent something else from synchronizing UI state? Let VS Code run completely normally, even with a fake X server if needed... and then another extension that just syncs the current state of each UI panel to an emulated one in the browser?

Anyway, sounds like you've thought this through or see obvious holes... I'd really like to understand more of why it's unworkable, or how it could be doable with other compromises. Thanks in advance.



Theia uses Monaco. But there are other reasons for Theia too, like the governance of the project: VS Code is controlled by Microsoft, and they may have conflicts when, let's say, some feature would compete with their Visual Studio offering.

Theia wants to be free from those problems.


Not "may", it's already happened.

There are two versions of VS Code, the "official" one and "Visual Studio Code - Open Source", and the former contains proprietary bits that are kept that way so other people don't use them to compete with VS.

Project Rider from JetBrains accidentally ended up including one of these bits, unaware it was covered by a proprietary license, and had to remove the functionality that depended on it (CoreCLR debugging), later writing a new implementation to put the feature back in.

This blog post by JetBrains includes a few more details: https://blog.jetbrains.com/dotnet/2017/02/15/rider-eap-17-nu...


It's not a core component of VS Code, though (although at one time it was probably bundled); rather, it's something provided through the proprietary C# plugin and an additional download.


Whoa, that's an interesting blog post on licensing issues, thanks!


This is a very old competition (I think > 15 years ago); the results say nothing about the situation today, both languages have changed a lot.

It was a PR stunt by the Synopsys guys, who at that time wanted to kill VHDL. The VHDL guys had to work with slow & broken VHDL simulators, and the problem was devised by Verilog enthusiasts. All the VHDL engineers who showed up (in much smaller numbers than the Verilog engineers) felt like they didn't have a fair chance.


TL;DR:

1. Guy worked for Synopsys

2. Unlike the competition, Synopsys didn't have a VHDL product


Let me guess, you were downcasting everything to std_logic_vector? If you have to cast things all the time in VHDL you're not using it correctly.

But please enjoy (System)Verilog with its random, often undefined coercions, implicit wire declarations, non-deterministic simulation and lack of any meaningful compile-time checks. Honestly, as a huge Haskell fan, I can't believe you're a Haskell fan.


Geesh, does reporting my experience with the two languages really warrant this kind of snarky ad-hominem attack?


The verbosity of VHDL isn't 2x; it's more like 20% bigger on average, and since VHDL-2008 it's pretty much the same. VHDL can be wordy, but it also reads a lot more easily & looks more structured.

Verilog isn't C, it's C-ish: just different enough to make me make mistakes all the time, like 'not' in Verilog being '~' instead of '!', the lack of overloading, the weird rules for signed & unsigned numbers, the implicit wire declarations, etc. Verilog is full of surprises.

Do you like determinism? Have you ever tried running the same (System)Verilog design on multiple simulators? Almost every time you get different results; VHDL doesn't have this issue.


You seem to be taking this personally. What I described was my reasoning for choosing Verilog. It worked fine for me. I am not going to debate minutiae.

Also, who said Verilog is C? It feels like C but it isn't. It's simply a lot easier to context switch between C and Verilog than between C and VHDL. That's my opinion. You don't have to agree with it. There is no implicit obligation to agree with anything at all.

I have shipped millions of dollars in product very successfully using Verilog. Others have done so using VHDL. In the end it is a choice.

My intent was to give the OP one criterion he or she might be able to use in some way in making a similar choice.


No, they haven't, and they are not even close.

OpenCL is a bad language for synthesis. The strength of an FPGA is deeply pipelineable sequential algorithms. C-based languages just aren't good at expressing/controlling pipelining, so the synthesizer/compiler has to infer it, but that problem is too hard. You could change one line of code, a minor fixup, and the clever optimizer fails and your design is suddenly 10x as big.


VHDL is an old version of Ada combined with a built-in discrete-event simulator. The syntax, type system and general semantics are all copied from Ada.

This is actually a good thing; Verilog (and even more so SystemVerilog) was designed by people who don't have a clue about language design, resulting in an incredible mess of a language.

