
The title, "Nvidia is moving to build AI data centers", is misleading.

I read the press release, and it doesn't sound like NVIDIA is building data centers. It looks like NVIDIA is helping other companies build and design equipment for data centers. This is a very different thing.


I'll update it, thanks for pointing that out.

Really? Sam Bankman-Fried is a crook. None of the other family members have been convicted, and I have seen no evidence that they knew SBF was a crook or that they knew of FTX’s fraud. I have seen plenty of people lob baseless accusations against SBF’s parents, and none of them can justify their demands for punishment. My own guess is his parents did not know he was committing fraud, and they have suffered because of his crimes. I do not see any reason to punish two people who were misled and lied to.


>the Debtors allege that Joseph Bankman directed “donations of more than $5.5 million” to Stanford University in an attempt to “curry favor with and enrich his employer at the FTX Group’s expense.”


What? They were heavily involved in the whole thing, including helping themselves to FTX funds. It's almost impossible that they were unaware of the shady behaviour of SBF, even if they may not have been aware of the full extent to which he was screwing up.


If you want them jailed then identify the criminal code they broke. If it doesn’t exist - lobby for it to be changed.


NVIDIA is the best GPU manufacturer in the world. They have been an excellent company for decades and have produced most of the best (in terms of performance or performance per watt) GPUs. They started spreading out from graphics when they realized GPUs could be used for massively parallel floating-point or integer computations. They have been producing data center GPUs for years. NVIDIA's financial performance is not luck. It came from hard work and from producing the best products.


The question is what the performance per watt of AMD's and Intel's products is. My guess is both are significantly worse. Energy and cooling are huge data center expenses, and paying less for a product which requires more energy and cooling is not a good idea because it costs more overall.
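
A rough back-of-the-envelope sketch (all numbers hypothetical): suppose a cheaper accelerator draws 300 W more than a rival, electricity costs $0.10/kWh, and the data center runs at a PUE of about 1.5 to account for cooling overhead. Then the extra running cost per card is roughly:

    0.3 kW x 8760 h/yr x $0.10/kWh x 1.5 = ~$394/yr

Over a four or five year deployment that alone can erase a four-figure price advantage, before counting rack space and power-delivery limits.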


Nope. Not for LLM workloads at least. They’re competitive across the board.


If people take bribes, they deserve to be fired. Not much to say other than you are 100% right about this website being the wrong solution to the problem. I think people need to stop using software and services with awful support. It's legal and it sends the right message.


Here is the conclusion from the article: "Across 71 benchmarks run across all three Scaleway Elastic Metal instance types, the EM-RV1 was much slower than even the aging Intel and AMD x86_64 instances. The EM-A315X that is using a 10 year old Ivy Bridge Xeon with 4 cores was around 7.4x the performance of this RISC-V cloud server. Or the half-decade old Ryzen 5 PRO 3600 within the EM-A210R instance was 18.3x the performance of the EM-RV1. Comparing to the very latest Intel and AMD CPUs would be even more mind-boggling advantages for latest x86_64 performance over RISC-V. See all the benchmarks in full here.

The Scaleway Labs EM-RV1 is interesting for a number of reasons as noted and a great way to dabble with RISC-V cheaply in the cloud, but do so with realistic performance expectations."

Basically, the reviewed RISC-V chip is not competitive because it is about seven times slower than a 10-year-old Intel chip. My guess is its performance per watt is also lower than current chips from AMD, Intel, and ARM.


I don't know why they pay so much attention to when things were made, and no attention to the microarchitecture, which they so often like to analyse.

The EM-RV1 (which is made up of Sipeed LicheePi 4A clusters) uses 1.85 GHz THead C910 cores, with a uarch similar to the Arm A72 (used in the first Amazon Graviton servers) or maybe to an early low-end Core 2, e.g. in the first MacBook Air.

OF COURSE if all you want is the maximum performance and the ISA doesn't matter then you're much better off using something else for now.

But if you specifically want to run RISC-V code, for example to get your software ready for the coming much faster RISC-V machines, then this is a good way to do it, without having to spend $180 up front for the equivalent SBC.

RISC-V machines with cores 1.3x faster will be out in a few months, and 2x faster around the end of the year. By around 2026 there will be RISC-V with around Zen 2 or Apple M1 performance. This is all in the pipeline.


> But if you specifically want to run RISC-V code, for example to get your software ready for the coming much faster RISC-V machines, then this is a good way to do it, without having to spend $180 up front for the equivalent SBC.

If that's all you're using it for, isn't qemu good enough?

> RISC-V machines with cores 1.3x faster will be out in a few months, and 2x faster around the end of the year. By around 2026 there will be RISC-V with around Zen 2 or Apple M1 performance. This is all in the pipeline.

That's a great argument for prepping your software now, but also for not buying anything now. Like, textbook https://en.wikipedia.org/wiki/Osborne_effect


> If that's all you're using it for, isn't qemu good enough?

You can never quite be sure qemu correctly represents real hardware. There have been a number of cases where qemu was more permissive and programs developed on it didn't work on real hardware. The emphasis is on running valid code, not catching invalid code.

Qemu-user on recent PCs is a bit faster than this board for long-running scalar programs with a small amount of code, e.g. my primes benchmark:

     5.052 sec i9-13900HX, qemu-riscv64 @ 5.1 GHz
    10.430 sec Sipeed LM4A TH1520 4x C910 @1.848 GHz
https://hoult.org/primes.txt
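
For anyone who wants to reproduce this kind of comparison, a minimal sketch (assuming the stock Debian/Ubuntu cross toolchain and qemu-user package names; primes.c stands in for whatever benchmark source you use):

    # build a static RISC-V binary, then run it under user-mode qemu
    riscv64-linux-gnu-gcc -O2 -static primes.c -o primes
    qemu-riscv64 ./primes
    # the same binary runs unmodified on the real board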

However, this changes if you run a lot of short programs in quick succession (as e.g. autoconf does) as you get dominated by JIT time. Or if you run full-system emulation using qemu-system, which gets hit by MMU emulation on every load/store.

Also, this CPU core implements draft 0.7.1 of the RISC-V Vector spec with a 256-bit ALU. Qemu emulates such SIMD code element by element, which is far slower than real hardware.

NB there are significant differences between draft 0.7.1 and the final Vector 1.0, but the style is the same, most code doesn't hit the differences, and some code (e.g. memcpy/move/cmp, strcpy/cmp/len) is even binary compatible. Moreover, if you write code using the C intrinsics for the vector instructions then the newly-released GCC 14 can generate code for either 0.7.1 (as "XTHeadVector") or 1.0 from exactly the same source code with just a compiler flag change.
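
As a concrete sketch of what the intrinsics style looks like (a minimal illustration assuming GCC 14's __riscv_* intrinsics spellings, not code from the benchmark above), here is a strip-mined byte copy; per the above, the same source should build for either vector version with just a -march change:

    #include <riscv_vector.h>
    #include <stddef.h>
    #include <stdint.h>

    // vsetvl asks the hardware how many bytes it will handle this pass,
    // so the same source works for any vector register width.
    void *vec_memcpy(void *dst, const void *src, size_t n) {
        uint8_t *d = dst;
        const uint8_t *s = src;
        while (n > 0) {
            size_t vl = __riscv_vsetvl_e8m8(n);        // bytes this pass
            vuint8m8_t v = __riscv_vle8_v_u8m8(s, vl); // vector load
            __riscv_vse8_v_u8m8(d, v, vl);             // vector store
            s += vl; d += vl; n -= vl;
        }
        return dst;
    }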

> Osborne effect

I was at university at the time. Osborne computers cost as much as a new car. RISC-V SBCs in this class cost as much as a dinner date.

That makes a HUGE difference to people's willingness to buy something to throw away after a year.

All the more so as the engineer using it is probably on a salary sufficient to pay for one before morning tea.


You cannot self-diagnose, and only a trained medical professional can diagnose autism. By trained medical professional, I mean an autism expert who is either a therapist (PhD, PsyD, LMFT, etc.) or a psychiatrist.

Most people who use terms like autistic, bipolar, sociopath, narcissist, etc. are using them incorrectly.

Also, I suspect you are correct that most ASD diagnoses do not use brain scans and rely on a trained professional's judgement and observations. That fact does not mean that autism does not exist, or that autistic people have no physical differences from neurotypical people.


You absolutely can self diagnose. Your self diagnosis is not as reliable, thorough, or trustworthy as a professional diagnosis.

Self diagnosis is often the first step towards a professional diagnosis.

You can choose to believe self diagnosis is low or zero value, but that's your own value judgement, which is separate from "can" and "cannot".


I’m not sure you think the word “cannot” means what most people think it means.


My experience matches yours. I have had very few bad bosses and almost all teams I have worked on have been healthy. I have been on a few bad teams and groups. They usually failed. They were not bad because they failed but because lying was rewarded, political skill was rewarded, and solving the customer's problem was not valued.

When I see questions like "where are the healthy companies?", I think either the poster has been very unlucky, or the poster might be the problem. When I say the poster is the problem, I mean they typically fall into one of the following buckets:

1. The person is very critical and cannot accept humans for what they are. They demand perfection, demand their coworkers are the best in the field, etc. They may also minimize the positives.

2. They have a very cynical or negative outlook.

3. They do not like their field (computers, sales, accounting, medicine, etc.). As a result, they are always unhappy.

The main point is something inside the person causes them to view every organization as screwed up and awful. This includes organizations which are OK, good, or even outstanding.


4. For myself, I was incompetent as a developer, so I always landed in dysfunctional organizations. I invested a lot, but in learning the wrong things.

Since I became competent (it was 2006, there were no YouTube tutorials for everything), I have worked with awesome people. It also means my managers at the time didn’t coach me properly (unsurprising for dysfunctional orgs). Life is much easier when you’re on top of things, and much harder when you’re unlucky. Unluckiness compounds.


Understand that luck works in the same way as "unluck".


Your experience is not a scientific experiment so you also have to consider that you might have been lucky. Perhaps other people have been unlucky.

For example, I could paint your post in a negative light: "A poster who blames the victim perhaps wants to feel good about the company they work in and ignore the experiences of others, or perhaps they are now in a responsible position and don't want to think that they might be part of a problem."

This would be unempathetic but so is trying to blame the people who describe their bad experiences.


Nope - well-run companies want to know the truth, expect mistakes, and want people to learn from them. In most of the teams I have been on, being a yes person is not rewarded because yes people cannot deliver.


Most large companies aren't particularly well run.


>Most large companies aren't particularly well run.

What does it mean to be "well run" if our largest and most successful companies are not that?


They were well run when they were small, then as they grew large they could coast on being large and dominant and started to accrue organizational debt like this. After a couple of decades almost no company is particularly well run; they grow until they are no longer well run or until they have captured the entire market.


Well, I'd suggest you start by reviewing your idea of "success" and how companies achieve it.


They deliver pressure, crunch, and obedience. There are no well-run companies. The dysfunction is part of the process, a load-bearing pillar.


What is the size of your company?


Mine has 1.2k people and is the same. Very unionized; it's the IT part of a big energy company.


Side channels are a huge danger. For example, cryptographic functions have been cracked because of timing differences based on the key or data being encrypted. This is why cryptographic ciphers are implemented in constant-time code (i.e. code that always runs in the same amount of time regardless of its input).
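
A minimal illustration of the difference (a sketch, not a vetted crypto library):

    #include <stddef.h>
    #include <stdint.h>

    // Leaky: returns at the first mismatch, so the running time reveals
    // how many leading bytes of a secret matched.
    int leaky_eq(const uint8_t *a, const uint8_t *b, size_t n) {
        for (size_t i = 0; i < n; i++)
            if (a[i] != b[i]) return 0;
        return 1;
    }

    // Constant-time: touches every byte and accumulates differences with
    // OR, so the running time is independent of where the bytes differ.
    int ct_eq(const uint8_t *a, const uint8_t *b, size_t n) {
        uint8_t diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= a[i] ^ b[i];
        return diff == 0;
    }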


I'm talking about the specific side channel attack mentioned in their report. Not side channels generally ;)

