The 4060 (non-Ti) has only 8GB of VRAM, which is less than its predecessor, the 3060, which had 12GB and came out in February 2021. I wonder if this is unprecedented?
For anyone not really following the current issues with video cards that have only 8GB of VRAM, there are a bunch of modern games that stutter a lot with only 8GB. I have an 8GB 3060 Ti and regret the purchase since it's been a problem for me in several games so far.
AMD is expected to announce new mid-range models soon, which might provide more options to choose from that have 16GB+.
Just seems like a money grab. The 4060 Ti has half the memory bus width of the 3060 Ti (128-bit vs. the old 256-bit) and not much more than half the memory bandwidth: the old 3060 Ti does 448 GB/s, while the new 4060 Ti does 288 GB/s.
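For reference, peak GDDR6 bandwidth is just bus width times per-pin data rate; a quick sketch (the 14 and 18 Gbps pin speeds are the commonly quoted specs and are assumptions here) reproduces both numbers:

    # Rough peak-bandwidth math for GDDR6: (bus width in bits / 8) * per-pin data rate.
    # The pin speeds below are the commonly quoted specs and are assumptions here.
    def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
        """Peak memory bandwidth in GB/s."""
        return bus_width_bits / 8 * data_rate_gbps_per_pin

    print(peak_bandwidth_gb_s(256, 14))  # 3060 Ti: 448.0 GB/s
    print(peak_bandwidth_gb_s(128, 18))  # 4060 Ti: 288.0 GB/s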
This card depends on upscaled rendering (DLSS) and faked frames (frame generation) to show a performance improvement. Frame generation seems particularly lame since the additional frames are generated without any interaction with the game, thus increasing input latency and display latency.
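As a toy illustration (an assumption-laden sketch, not Nvidia's actual pipeline), here is why inserting an interpolated frame delays the newest real frame:

    # Toy timeline: a generated frame is interpolated from real frames N and N+1,
    # so it never sees new input, and the newest real frame gets held back so the
    # generated frame can be shown first. All numbers here are assumptions.
    render_ms = 16.7   # assumed render time per real frame (~60 fps)
    interp_ms = 2.0    # assumed cost to synthesize the in-between frame

    # Without frame generation: the newest real frame goes straight to the display.
    extra_latency_off = 0.0

    # With frame generation: the newest real frame waits while the interpolated frame
    # is synthesized and then shown in its own (half-length) display slot first.
    extra_latency_on = interp_ms + render_ms / 2

    print(f"extra display latency, frame gen off: {extra_latency_off:.1f} ms")
    print(f"extra display latency, frame gen on:  ~{extra_latency_on:.1f} ms")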
So after 2.5 years you get to pay more for a worse card with less memory bandwidth that depends on cache hits and fakery to show any noticeable performance improvements.
What we need is more competition. Nvidia has plenty of substantially faster cards they are shipping, they are just minimizing performance delivered and maximizing cost to the consumer.
The 4070 is comparable in bandwidth to the 3060 Ti and would have been a much nicer upgrade after 2.5 years, but instead they raised the price by 50% and called it a 4070 instead of a 4060 Ti.
Intel are just starting to enter the discrete GPU space in earnest - I'm hopeful they'll do something innovative, but I'll settle for competition and increased supply at this point.
Yes, but AMD GPUs haven't (so far) pulled off any of the triumphs that AMD CPUs have. The ones that spring to mind:
* Intel tried to reserve 64 bit for servers/Itanium, AMD brought out x86-64 across their line
* Intel was pushing front-side buses or dual buses; AMD moved the memory controller on-chip
As a result Nvidia has a lead in performance (like the 4090), in efficiency (perf/watt), in market share, and in software (drivers and CUDA).
I don't think it's fair to say they're winning in drivers. Their driver is an absolute pain to work with. It's to the point that I'd rather have no GPU than an Nvidia one. I'm very much not the common user but their situation is abysmal.
AMD seems to be providing good alternatives. Arguably the 6600 XT or the 6700 XT are better value than the 4060, and the 6800 XT is a good alternative to the 16GB 4060 Ti. And AMD's cards are actually purchasable right now.
6750 XT was on sale a few weeks ago for $330 - $20 rebate and came with a copy of The Last of Us.
Guessing it'll be a while before we see similar deals on Nvidia cards since so many people are determined to buy Nvidia over AMD for reasons I don't understand.
Yes the raytracing performance is better, but not enough to justify what they're charging.
Just using the card for gaming? Absolutely go with AMD. But most of the ML libraries and tooling are built on top of CUDA, so getting things to work with an AMD card can be annoying and can mean worse performance despite better hardware specs on paper.
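A minimal sketch of what that looks like in practice, assuming PyTorch: most ML code just asks for a "cuda" device. The ROCm build of PyTorch reuses the torch.cuda namespace, so the same code can pick up a supported AMD card, but anything that depends on CUDA-only libraries won't.

    # Minimal sketch (assumes PyTorch is installed): most ML code just asks for "cuda".
    # The ROCm build of PyTorch reuses the torch.cuda namespace, so this can also find
    # a supported AMD card, but CUDA-only dependencies will still fail on AMD.
    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print(f"using device: {device}")
    if device.type == "cuda":
        print(f"detected GPU: {torch.cuda.get_device_name(0)}")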
Yeah, a lot of that is inertia. Most people don't upgrade video cards all that frequently which explains the popularity of the GTX 1650 (2019), GTX 1060 and 1070 (2016), etc. Those cards still work reasonably well for people who don't need bleeding edge performance or ray tracing. I'd expect their market share to improve over the next couple years as some of those cards age out of circulation.
Availability is a huge problem for AMD cards. There weren't that many out there in the last 2 years, whereas Nvidia vendors at least had a bunch of overpriced options.
DLSS, NVENC, CUDA, better drivers (on Windows), and more.
I use a 6700 XT, and I enjoy its better support on Linux, but I absolutely don't enjoy that it's not supported by ROCm, while even a 3050 is fully supported by CUDA and can run any AI or professional program.
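For what it's worth, a commonly reported (unofficial, use-at-your-own-risk) workaround for RDNA2 cards that ROCm doesn't officially list, like the 6700 XT, is to override the reported GPU architecture before the ROCm runtime initializes; a sketch under that assumption:

    # Unofficial workaround reported by the community, not a supported configuration:
    # tell ROCm to treat the 6700 XT (gfx1031) as gfx1030 before importing torch,
    # since the override must be set before the ROCm runtime starts up.
    import os
    os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "10.3.0")

    import torch
    print(torch.cuda.is_available())  # True on a ROCm build if the override is accepted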
These are the latest generation, not mid-tier. Mid-tier is your 3060 IMO, and it will probably become the 3080 with the release of these new cards. That mid-tier 3060 gives you 4K gaming or HD gaming at 60+ Hz refresh. I think the release strategy mostly works: it keeps mid-tier gaming rigs on par with or better than latest-gen consoles, with about a 3-5 year lifespan. PC gaming would not exist if the pricing were exploitative, because the bulk of the market would switch to consoles even though the games themselves are more expensive.
I think the more amazing thing is the "only" 165 W power draw for the 4060 Ti. Although, given marketing these days, I am sure that number is lying, but I am still happy to see something reasonable. The 4090 is listed with a TDP of 450W.
Unless a better option comes out in the next few months, I might pick one up. It seemed a bit pointless to upgrade to anything below 16 GB, and up until now that meant a 3090 or 4090. And the prices for those are pretty steep for hobbyist use.
This is not a 4090, and even then the article you linked says experiments are better run on a cloud solution; they only recommend getting a top-of-the-line GPU if you're doing more than a year of work.
Since there is so much attention on just a few models now, AMD could probably make some big inroads by producing some affordable, very-high-memory cards and porting a few models to them... while Nvidia continues to limit consumer VRAM to differentiate from their high-end products.
I just wish they'd release a card with mid-range compute but much higher memory, to be honest. It would be cool to have an ML-focused series of cards with consumers in mind, but I guess it doesn't make financial sense for them.
Though I do find the binning interesting... I guess they bin on compute and then restrict the bus width of the memory controller based on that. It would be cool if they could also bin on memory controller/bus width and then just offer some average compute along with it. That way, people go from being able to run smaller models faster (3090/4090) to being able to run larger models (which they couldn't before) just a little slower.
Gimme a 4060-4070 with like 24GB of VRAM, hell, even 40-48GB.
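For a sense of why that much VRAM matters, here's a back-of-the-envelope sketch of the memory needed just to hold model weights (it ignores activations, KV cache, and framework overhead, so treat it as a rough lower bound, not a spec):

    # Rough rule of thumb: weight memory = parameter count * bytes per parameter.
    # Ignores activations, KV cache, and framework overhead.
    BYTES_PER_PARAM = {"fp16": 2, "int8": 1, "int4": 0.5}

    def weights_gb(params_billions: float, dtype: str) -> float:
        return params_billions * 1e9 * BYTES_PER_PARAM[dtype] / 1e9

    for params in (7, 13, 33):
        print(f"{params}B params: "
              f"fp16 ~{weights_gb(params, 'fp16'):.0f} GB, "
              f"int4 ~{weights_gb(params, 'int4'):.1f} GB")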
It's not profitable to mine Bitcoin on graphics cards any more; you need a dedicated ASIC. Ethereum mining was profitable, but it has moved to proof of stake now, so there's not nearly as much demand for GPU mining.
How is posting this link a legit use of Hacker News? I get that Hacker News is often used for important product releases/updates, but when it comes to Nvidia and AMD, this feels like product placement. Nothing in this is particularly newsworthy hardware, and we're talking about the Coke vs. Pepsi of the PC gaming adapter world. If it were someone reviewing it, that would be different, but this links straight to the vendor. I expect more from the mods.
People are very interested in generative AI which can be VRAM hungry. LLMs, Graphics, etc. One of these is a mid range SKU with 16GB of VRAM. The next notch up doesn't even have a 16GB VRAM SKU.
As far as the Coke vs. Pepsi comparison goes, Nvidia's CUDA isn't even in that kind of close race right now; it's a curb stomp in developer experience. And you can get in on that action for cheaper now.