
I see it the other way around. They’re sabotaging AI in order to maintain their dominance in search. If AI were to advance in some future form, it might even replace search altogether. By making it less reliable, you keep your cash cow secure.

I can't replicate it. I asked Gemini 1.5 Pro and it answered just fine, with detailed instructions for various download scenarios.

I replicated it on Gemini Advanced just now

https://g.co/gemini/share/f60206ce657a


Why is a capital letter O (as in Oklahoma) used in the "1080p"? Is that part of the issue?

Edit: It looks like it doesn't care whether it's an O or a zero.

Here is the same example replicated to work on ChatGPT, with an O:

https://chatgpt.com/share/491a74fe-a0d1-4346-b5f7-60e7d76a8d...


I am able to ask any questions and get sufficient answers about yt-dlp from ChatGPT and Claude. But Gemini straight up refuses.
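
For what it's worth, the answers I get back are roughly along these lines (a minimal sketch using yt-dlp's Python API; the URL is just a placeholder):

  # Rough sketch of the kind of answer the chatbots give, using yt-dlp's Python API
  # (pip install yt-dlp). The URL below is only a placeholder.
  from yt_dlp import YoutubeDL

  opts = {
      # prefer video no taller than 1080 pixels plus the best audio, else best overall
      "format": "bestvideo[height<=1080]+bestaudio/best",
      "outtmpl": "%(title)s.%(ext)s",  # output filename template
  }

  with YoutubeDL(opts) as ydl:
      ydl.download(["https://www.youtube.com/watch?v=PLACEHOLDER"])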


Someone replied to my tweet saying that the censor is not active in AI Studio, only on the website. I've confirmed this myself as well.

You know that the best way to get support from Google is to make it to the HN front page don't you? :)

The problem with the current situation is that AI was oversold. Everyone expected to see Cortana and all they got was SpongeBob. If the product wasn’t overhyped, then perhaps people would pay more attention to how to utilize it in their everyday workflow without expecting spectacular results. Even a 5% boost in productivity is good long term.

They used to drop, up until last year, and then they went up 50%. Something about Samsung raising prices on a component that's used in most SSDs these days.

A 32TB NVMe drive costs around $3.5k.

Dell lists their read-intensive drives for about $1/GB, $3,000 for a 3.2TB model. That price point holds up to the 15TB models they have listed.

I see the Micron drives on CDW for the prices you state though:

https://www.cdw.com/product/micron-6500-ion-ssd-enterprise-3...


Someone will just pack this into a product and sell it to marketers.

And use it to market the shit out of it. If marketing finally collapses under the weight of its own bullshit, I'll be celebrating.

Even if the community provides support it could take years to reach the maturity of CUDA. So while it's good to have some competition, I doubt it will make any difference in the immediate future. Unless some of the big corporations in the market lean in heavily and support the framework.


If, and that's a big if, AMD can get ROCm working well for this chip, I don't think this will be a big problem.

ROCm can be spotty, especially on consumer cards, but for many models it does seem to work on their more expensive cards. It may be worth spending a few hours/days/weeks to work around the peculiarities of ROCm, given the cost difference between AMD and Nvidia in this market segment.
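
(For what it's worth, the usual workaround I've seen for consumer RDNA2 cards is overriding the GPU architecture the runtime sees before it loads. A rough sketch follows; the exact version string is an assumption that depends on your card.)

  # Common workaround reported for consumer RDNA2 cards that ROCm doesn't
  # officially support: override the GPU architecture the runtime sees, *before*
  # torch (and with it the HIP runtime) loads. "10.3.0" is the value usually
  # quoted for RDNA2; verify the right string for your card.
  import os
  os.environ["HSA_OVERRIDE_GFX_VERSION"] = "10.3.0"

  import torch  # the ROCm build of PyTorch still exposes the torch.cuda API
  print(torch.cuda.is_available(), torch.cuda.get_device_name(0))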

This all stands or falls with how well AMD can get ROCm to work. As this article states, it's nowhere near ready yet, but one or two updates can turn AMD's accelerators from "maybe in 5-10 years" to "we must consider this next time we order hardware".

I also wonder if AMD is going to put any effort into ROCm (or a similar framework) as a response to Qualcomm and other ARM manufacturers creaming them on AI stuff. If these Copilot PCs take off, we may see AMD invest in their AI compatibility libraries because of interest from both sides.


https://stratechery.com/2024/an-interview-with-amd-ceo-lisa-...

"One of the things that you mentioned earlier on software, very, very clear on how do we make that transition super easy for developers, and one of the great things about our acquisition of Xilinx is we acquired a phenomenal team of 5,000 people that included a tremendous software talent that is right now working on making AMD AI as easy to use as possible."


Oh no. Ohhhh nooooo. No, no, no!

Xilinx dev tools are awful. They are the ones who had Windows XP as the only supported dev environment for a product with guaranteed shipments through 2030. I saw Xilinx defend this state of affairs for over a decade. My entire FPGA-programming career was born, lived, and died, long after XP became irrelevant but before Xilinx moved past it, although I think they finally gave in some time around 2022. Still, Windows XP through 2030, and if you think that's bad wait until you hear about the actual software. These are not role models of dev experience.

In my, err, uncle? post I said that I was confused about where AMD was in the AI arms race. Now I know. They really are just this dysfunctional. Yikes.


Xilinx made triSYCL (https://github.com/triSYCL/triSYCL), so maybe there's some chance AMD invests in first-class support for SYCL (an open standard from Khronos). That'd be nice. But I don't have much hope.


Comparing what AMD has done so far with SYCL to what Intel has done with oneAPI, yeah, better not keep that hope flame burning.


this is honestly a very enlightening interview because - as pointed out at the time - Lisa Su is basically repeatedly asked about software and every single time she blatantly dodges the question and tries to steer the conversation back to her comfort zone of hardware. https://news.ycombinator.com/item?id=40703420

> He tries to get a comment on the (in hindsight) not great design tradeoffs made by the Cell processor, which was hard to program for and so held back the PS3 at critical points in its lifecycle. It was a long time ago so there's been plenty of time to reflect on it, yet her only thought is "Perhaps one could say, if you look in hindsight, programmability is so important". That's it! In hindsight, programmability of your CPU is important! Then she immediately returns to hardware again, and saying how proud she was of the leaps in hardware made over the PS generations.

> He asks her if she'd stayed at IBM and taken over there, would she have avoided Gerstner's mistake of ignoring the cloud? Her answer is "I don’t know that I would’ve been on that path. I was a semiconductor person, I am a semiconductor person." - again, she seems to just reject on principle the idea that she would think about software, networking or systems architecture because she defines herself as an electronics person.

> Later Thompson tries harder to ram the point home, asking her "Where is the software piece of this? You can’t just be a hardware cowboy ... What is the reticence to software at AMD and how have you worked to change that?" and she just point-blank denies AMD has ever had a problem with software. Later she claims everything works out of the box with AMD and seems to imply that ROCm hardly matters because everyone is just programming against PyTorch anyway!

> The final blow comes when he asks her about ChatGPT. A pivotal moment that catapults her competitor to absolute dominance, apparently catching AMD unaware. Thompson asks her what her response was. Was she surprised? Maybe she realized this was an all hands to deck moment? What did NVIDIA do right that you missed? Answer: no, we always knew and have always been good at AI. NVIDIA did nothing different to us.

> The whole interview is just astonishing. Put under pressure to reflect on her market position, again and again Su retreats to outright denial and management waffle about "product arcs". It seems to be her go-to safe space. It's certainly possible she just decided to play it all as low key as possible and not say anything interesting to protect the share price, but if I was an analyst looking for signs of a quick turnaround in strategy there's no sign of that here.

not expecting a heartfelt postmortem about how things got to be this bad, but you can very easily make this question go away too, simply by acknowledging that it's a focus and you're working on driving change and blah blah. you really don't have to worry about crushing some analyst's mindshare on AMD's software stack because nobody is crazy enough to think that AMD's software isn't horrendously behind at the present moment.

and frankly that's literally how she's governed as far as software too. ROCm is barely a concern. Support base/install base, obviously not a concern. DLSS competitiveness, obviously not a concern. Conventional gaming devrel: obviously not a concern. She wants to ship the hardware and be done with it, but that's not how products are built and released anymore.

NVIDIA is out here building integrated systems that you build your code on and away you go. They run NVIDIA-written CUDA libraries, NVIDIA drivers, on NVIDIA-built networks and stacks. AMD can't run the sample packages in ROCm stably (as geohot discovered) on a supported configuration of hardware/software, even after hours of debugging just to get it that far. AMD doesn't even think drivers/runtime is a thing they should have to write, let alone a software library for the ecosystem.

"just a small family company (bigger than NVIDIA, until very recently) who can't possibly afford to hire developers for all the verticals they want to be in". But like, they spent $50b on a single acquisition, they spent $12b in stock buybacks over 2 years, they have money, just not for this.


So I knew that AMD's compute stack was a buggy mess -- nobody starts out wanting to pay more for less and I had to learn the hard way how big of a gap there was between AMD's paper specs and their actual offerings -- and I also knew that Nvidia had a huge edge at the cutting edge of things, if you need gigashaders or execution reordering or whatever, but ML isn't any of that. The calculations are "just" matrix multiplication, or not far off.

I would have thought AMD could have scrambled to fix their bugs, at least the matmul related ones, scrambled to shore up torch compatibility or whatever was needed for LLM training, and pushed something out the door that might not have been top-of-market but could at least have taken advantage of the opportunity provided by 80% margins from team green. I thought the green moat was maybe a year wide and tens of millions deep (enough for a team to test the bugs, a team to fix the bugs, time to ramp, and time to make it happen). But here we are, multiple years and trillions in market cap delta later, and AMD still seems to be completely non-viable. What happened? Did they go into denial about the bugs? Did they fix the bugs but the industry still doesn't trust them?


It's roughly that the AMD tech works reasonably well on HPC and less convincingly on "normal" hardware/systems. So a lot of AMD internal people think the stack is solid because it works well on their precisely configured dev machines and on the commercially supported clusters.

Other people think it's buggy and useless because that's the experience on some other platforms.

This state of affairs isn't great. It could be worse but it could certainly be much better.


If we're extremely lucky they might invest in SYCL and we'll see an Intel/AMD open-source team-up.


This seems like the option that would make the most sense. If developers can "write once, run everywhere", they might as well do that instead of CUDA. But if they have to "write once, run on Intel, or AMD, or Nvidia", why would they bother with anything other than Nvidia, considering their market share? If you're an underdog, you go for open standards that make it easy to switch to your products, but it seems like AMD saw Nvidia's CUDA and jealously decided they wanted their own version, 15 years too late.


> Qualcomm and other ARM manufacturers creaming them on AI stuff

That's mostly on Microsoft's DirectML though. I'm not sure whether AMD's implementation is based on ROCm (I doubt it).


You do know that Microsoft, Oracle, and Meta are all in on this, right?

Heck, I think it's being used to run the ChatGPT 3.5 and 4 services.


I feel like people forget that AMD has huge contracts with Microsoft, Valve, Sony, etc to design consoles at scale. AMD is an invisible provider, as most folks don't even realize their Xbox and their PlayStation are both AMD.

When you're providing chip designs at that scale, it makes a lot more sense to folks that companies would be willing to try a more affordable alternative to Nvidia hardware.

My bet is that AMD figures out a serviceable solution for some (not all) workloads that isn't groundbreaking but is affordable to the clients that want an alternative. That's usually how this goes for AMD, in my experience.


If you read/listen to the Stratechery interview with Lisa Su, she spelled out being open to customizing AMD hardware to meet partners' needs. So if Microsoft needs more memory bandwidth and less compute, AMD will build something just for them based on what they have now. If Meta wants 10% less power consumption (and cooling) for a 5% hit in compute, AMD will hear them out too. We'll see if that hardware customization strategy works outside of consoles.


It certainly helps differentiate from NVIDIA's "Don't even think about putting our chips on a PCB we haven't vetted" approach.


Yeah, but they will be using internal Microsoft and Meta software stacks, nothing that will dent CUDA.


>I feel like people forget that AMD has huge contracts with Microsoft, Valve, Sony, etc to design consoles at scale.

Nobody forgot that; it's just that those console chips have super low margins, which is why Intel and Nvidia stopped catering to that market after the Xbox/PS3 generations, and only AMD took it up because they were broke and every penny mattered to them.

Nvidia did a brief stint with the Shield/Switch because they were trying to get into the Android/ARM space and also kinda gave up due to the margins.


A market that keeps being described as reaching its end, as newer generations aren't that much into traditional game consoles, and both Sony and Microsoft [0] have to reach out to PCs and mobile devices to achieve sales growth.

Among the gamer community, the discussion of this being the last console generation keeps popping up.

[0] - Nintendo is more than happy to keep redoing their hit franchises on good-enough hardware.


On the other hand, AMD has had a decade of watching CUDA eat their lunch and done basically nothing to change the situation.


AMD tries to compete in hardware with Intel’s CPUs and Nvidia’s GPUs. They have to slack somewhere, and software seems to be where. It isn’t any surprise that they can’t keep up on every front, but it does mean they can freely bring in partners whose core competency is software and work with them without any caveats.

Not sure why they haven’t managed to execute on that yet, but the partners must be pretty motivated now, right? I’m sure they don’t love doing business at Nvidia’s leisure.


Hardware is useless without software to show it off.


when was the last time AMD hardware was keeping up with NVIDIA? 2014?


It's been a while since AMD had the top-tier offering, but it has been trading blows in the mid-tier segment the entire time. If you are just looking for a gamer card (i.e. not max AI performance), the AMD is typically cheaper and less power hungry than the equivalent Nvidia.


It’s trading blows because AMD sells their cards at lower margins in the midrange and Nvidia lets them.


But, the fact that Nvidia cards command higher margins also reflects their better software stack, right? Nvidia “lets them” trade blows in the midrange, or, equivalently, Nvidia is receiving the reward of their software investments: even their midrange hardware commands a premium.


> the AMD is typically cheaper and less power hungry than the equivalent Nvidia

cheaper is true, but less power hungry is absolutely not true, which is kind of my point.


It was true with RDNA 2. RDNA 3 regressed on this a bit; supposedly there was a hardware hiccup that prevented them from hitting the frequency and voltage targets they were hoping to reach.

In any case they're only slightly behind, not crazy far behind like Intel is.


The MI300X sounds like it is competitive, haha


Competitive with the H100 for inference: a two-year-old product, and just one half of the ML story. The H200 (and potentially the B100) is the appropriate comparison based on when they're in volume production.


I have read in a few places that Microsoft is using AMD for inference to run ChatGPT. If I recall correctly, they said the price/performance was better.

I'm curious if that's just because they can't get enough Nvidia GPUs or if the price/performance is actually that much better.


Most likely it really is better overall.

Think of it this way: AMD is pretty good at hardware, so there's no reason to think that the raw difference in terms of flops is significant in either direction. It may go in AMD's favor sometimes and Nvidia's other times.

What AMD traditionally couldn't do was software, so those AMD GPUs are sold at a discount (compared to Nvidia), giving you better price/performance if you can use them.

Surely Microsoft is operating GPUs at large enough scale that they can pay a few people to paper over the software deficiencies so that they can use the AMD GPUs and still end up ahead in terms of overall price/performance.


Something like Triton from Microsoft/OpenAI as a CUDA bypass? Or PyTorch/TensorFlow targeting ROCm without user intervention.

Or there's OpenMP or HIP. In extremis, OpenCL.

I think the language stack is fine at this point. The moat isn't in CUDA the tech; it's in code running reliably on Nvidia's stack, without things like stray pointers requiring a machine reboot. Hard to know how far off a robust ROCm is at this point.
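
(Concretely, the ROCm wheels of PyTorch keep the torch.cuda namespace, so in principle the same code runs unchanged; a minimal sketch, assuming a healthy ROCm install:)

  # Minimal sketch of "targeting ROCm without user intervention": the ROCm build
  # of PyTorch is exposed through the same torch.cuda namespace, so this runs
  # unchanged on an Nvidia or an AMD box, assuming the ROCm install is healthy.
  import torch

  device = "cuda" if torch.cuda.is_available() else "cpu"
  a = torch.randn(4096, 4096, device=device)
  b = torch.randn(4096, 4096, device=device)
  c = a @ b  # plain matmul; on ROCm this dispatches to rocBLAS under the hood
  print(device, c.shape)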


The problem is that we all have a lot of FUD (for good reasons). It's on AMD to solve that problem publicly. They need to make it easier to understand what is supported so far and what's not.

For example, for bitsandbytes (a common dependency in the LLM world) there's a ROCm fork that the AMD maintainers are trying to merge in (https://github.com/TimDettmers/bitsandbytes/issues/107). Meanwhile, an Intel employee merged a change that added a common device abstraction (presumably usable by AMD, Apple, Intel, etc.).

There's a lot of that right now: a super-popular package that is CUDA-only navigating how to make it work correctly with other accelerators. We just need more information on what is supported.
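
(Roughly, on the user side such an abstraction boils down to backend detection like this; an illustrative sketch only, not bitsandbytes' actual internals:)

  # Illustrative only, not bitsandbytes' real internals: pick whichever
  # accelerator backend the local torch build actually exposes.
  import torch

  def pick_device() -> torch.device:
      if torch.cuda.is_available():                # covers both CUDA and ROCm builds
          return torch.device("cuda")
      if torch.backends.mps.is_available():        # Apple silicon
          return torch.device("mps")
      if hasattr(torch, "xpu") and torch.xpu.is_available():  # Intel GPUs, newer builds
          return torch.device("xpu")
      return torch.device("cpu")

  print(pick_device())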


Why is Win11 such a pain though? I don't get it. You had Win10, which worked just fine, and they went and fucked it up. To what gain? Did they manage to sell more of the other products in their lineup? And you do all that when Apple has its own silicon and produces superb laptops that could lure a lot of people into their ecosystem. I understand that Windows isn't the cash cow it used to be, they've moved on to greener pastures like Azure, but still, why mess up your main trademark product and piss off so many of your users? What's the end goal here?


The data they collect for AI and the advertising opportunities enable more profits and more revenue, to keep infinite growth happening. Even with Azure, even with GitHub and everything else. If you're not using all the profit opportunities, you are not delivering enough. The economic system doesn't reward or demand stable profits or sustainable growth; it demands compound growth.


This, exactly. Things will only continue to get more shitty as long as the expectation, and often requirement, for businesses with investors is to make infinite money, forever.


That has been the expectation the entire time computing has existed. It will always be the expectation. There is no future where that isn't going to be true.

What actually happens is a cycle. Things rise, things fall, things die.

On the initial upswing things tend to get better while profit and growth are aggressively pursued. Once market dominance is achieved, things tend to get shitty thereafter until someone topples the stagnant product.

This is a legitimate opportunity for Linux + hardware partners, if someone can finally realize they need to build and market to average consumers, rather than expecting average consumers to skill-up / knowledge-up just to use their software.


Or until Republicans ruin the earth and make things bad enough that even they agree to introduce legislation regulating what businesses can do, to the extent other developed countries have.


At least the version number increment is probably necessary because of the "breaking change" of requiring TPM, so they can move into a locked-down ecosystem like the iPhone, which I'm guessing is MS's goal within a few years.

But the terrible UI/UX? I guess they saw they had to entice users to move by changing the look and feel, but didn't give the programmers enough time, so it was mostly unfinished garbage when it was released, and maybe it still is.

On that note, I have Win11 and Office version whatever at work. I really fucking hate the new Outlook: it's a garbage web app, and the "NEW" label on the icon always makes me think "Oh, I've got some new emails. Oh no, it's just Microsoft being fuckwits." Also, the UI for modifying email signatures is a confusing maze of unfinished screens.


Microsoft painted themselves into a corner with Windows and backwards compatibility.

They do crap like UI mixing (old and new versions existing together) because backwards compatibility is what got them the market share they have in the enterprise (well, that and anti-competitive practices). At some point, to improve their products, they are going to have to break compatibility and just push forward, and yet they can't, because that would immediately alienate a significant portion of their enterprise customers who rely on that compatibility to run old software, from vendors that no longer exist, that is critical to operating some niche equipment. Hell, it's not uncommon for me to see air-gapped Windows XP machines still in production running some critical workload.

So they're stuck because they both simultaneously need to move forward but also can't break the old stuff without screwing over their customers.


Presumably, the Windows team has various targets to meet in terms of ARPU and things of that nature.


Oddly enough, it's not an unusual pattern for Microsoft, other motivations like data collection aside. Generally, only every other version of Windows is good.

98 good, ME bad, XP good, Vista bad, 7 good, 8 bad, 10 good. Par for the course, and I wouldn't be surprised to see a bunch of refinement, and some changes rolled back, for Windows 12.


One reason they hold their value well is that they have low mileage. They’re not practical cars to use on a daily basis, not to mention maintenance costs, which are quite high if you use the car often.


Meanwhile, a '93 Honda NSX recently sold for $60k showing 234,300 miles on the odo:

https://carsandbids.com/auctions/3OnRAn0v/1993-acura-nsx


I learned how to drive stick on an NSX.

I also wedged my skateboard in the back window when it was open, causing it to completely shatter when the owner tried to close it. Didn't appreciate what a bone-headed move that was at the time, but you've enlightened me.


Yeah, but aren’t these cars big with people who do aftermarket mods? Ferraris have to be serviced by licensed mechanics.


> Ferraris have to be serviced by licensed mechanics.

"Have to"? Says who? "Licensed"? By who?

As someone who is very close to both the "factory authorized" and "non-authorized" sides of the Ferrari service industry, this is incorrect or at best a gross oversimplification of things like warranty service or the Ferrari Classiche process.

There are a lot of misunderstandings and myths circulating about Ferrari ownership, but this is a new one to me.


Not that car. A '93 NSX today will be bought by a new-money millionaire in his 40s as a nostalgia piece, his dream car from when he was a teenager in the 90s. It will be kept as stock as reasonable. Even the photographs are designed for such a buyer. An NSX on a crisp Chicago day is the definition of 90s cool.


> Ferraris have to be serviced by licensed mechanics.

If you’re talking about special ones like the LaFerrari or some others, I can tell you that there are lots of 458 Italias, Californias, and Cali Ts that have been in no-name shops and are still being sold without any problem.


>> not to mention maintenance costs which are quite high.

I remember watching Gas Monkey Garage, where they bought a smashed Ferrari F40 for $400K. One of the funniest scenes was Richard Rawlings on the phone with the Ferrari parts dealer, being told how expensive the parts he needed to rebuild the car were. The best part was the juxtaposition: Richard, a guy who's used to haggling with people to get a good price on everything, being reminded that these were OEM Ferrari parts, with the quip, "How much for a quarter panel? Yeah, I KNOW it's a real Ferrari quarter panel!" and the standard eye roll that the cost was killing him.

The whole show gave a glimpse into owning one of these cars. If something does happen to it, in order for it to be "certified" as a legit Ferrari, you have to use all OEM parts and have a person from Ferrari oversee the repairs. It was a lesson in the amount of time and money needed to own one of these, even if you don't drive it very much.

Here's an article that detailed the whole process: https://www.hotcars.com/what-happened-to-ferrari-f40-from-fa...


There's a very popular video from a dissatisfied owner who talks about the maintenance nightmare of owning a Ferrari.

https://www.youtube.com/watch?v=-JgeU3X-2AM


Is this at 4-bit quantization? And how many tokens per second is the output?


I’m doing non-interactive tasks, but with the A6000 running Llama 3 70B in chat mode it’s as usable as any of the commercial offerings in terms of speed. I read quickly, and it’s faster than I read.
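
(Back-of-envelope, assuming 4-bit weights and an A6000's roughly 768 GB/s of memory bandwidth; those are my assumptions rather than the parent's exact setup:)

  # Back-of-envelope only; assumes 4-bit weights and an RTX A6000 (48 GB, ~768 GB/s).
  params = 70e9                  # Llama 3 70B
  bytes_per_weight = 0.5         # 4-bit quantization
  weights_gb = params * bytes_per_weight / 1e9
  print(f"weights: ~{weights_gb:.0f} GB")  # ~35 GB, leaves room for the KV cache in 48 GB

  bandwidth_gbs = 768            # A6000 memory bandwidth
  # decoding is roughly memory-bound: each generated token reads the whole weight set once
  print(f"rough upper bound: ~{bandwidth_gbs / weights_gb:.0f} tokens/s")  # ~22 tok/s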

