AMD’s $499 Ryzen 7 1800X Beats $1700 i7 6950X with 1-Click OC on Air Cooling (wccftech.com)
280 points by antouank on Feb 25, 2017 | 201 comments



I just hope this lives up to the hype. I built my last machine around a Phenom II X6 1090T but always regretted it afterwards: the six-core advantage that I thought future games would use never materialized, and its per-core performance was not as good as the Intel CPUs of the same generation. Most of the benchmark leaks have focused on top-of-the-line Ryzens; I would be interested in seeing how the mid-level Ryzens compare to Intel i5s.

There are already reports of Intel CPUs getting price cuts, so this looks good for now at least.


You were just ahead of your time :)

The multicore advantage for gaming is starting to materialize with Vulkan, Mantle, etc.


Aren't Vulkan/Metal etc. about GPUs rather than CPUs?


Yes and no: here is a short article about it from Brad Wardell, the CEO of Stardock [0]. Vulkan/Mantle/DirectX 12 allow the game developer to really take advantage of multicore CPUs, which is why we see the biggest gains from these new low-level APIs on AMD's FX chips: they are better at multi-threaded workloads than at the single-threaded workloads that previous DirectX versions favored.

A quote from the linked article:

  To summarize:

  DirectX 11: Your CPU communicates to the GPU 1 core to 1 core at a time. It is still a big boost over DirectX 9 where only 1 dedicated thread was allowed to talk to the GPU but it’s still only scratching the surface.
  DirectX 12: Every core can talk to the GPU at the same time and, depending on the driver, I could theoretically start taking control and talking to all those cores.

  That’s basically the difference. Oversimplified to be sure but it’s why everyone is so excited about this.  
[0] http://www.littletinyfrogs.com/article/460524/DirectX_11_vs_...


Indeed, they are designed to reduce CPU load. Even if the game can't use multiple cores: where it previously ran the game logic and the OpenGL driver on the same core, now that core only runs the game logic. [1]

--

[1] nVidia actually implements Vulkan on top of their OpenGL driver, but nVidia's drivers have been relatively well optimized in terms of CPU usage already. AMD is the bigger winner here.


The OpenGL threading model is completely screwed up. You essentially have a global lock per GL context, which means you're restricted to 1 thread for issuing rendering. And in GL a context is for everything, including shaders and texture state; you can't even reasonably do uploads of new scene data in a separate thread.

Vulkan fixes this big time by allowing apps to construct GPU workloads for a single GPU in parallel. Only the final submission step (which is supposed to be very low overhead if the driver design is decent) is single-threaded per queue. And even for that Vulkan is better: it allows you to allocate separate queues for separate engines (e.g. rendering vs. compute vs. the copy engine for data uploads/downloads to/from GPU VRAM).

The lower CPU overhead is just the icing on the cake, the real deal is that Vulkan fixed the threading/locking model.
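
To make the threading model concrete, here is a minimal C++ sketch of the pattern described above, using the Vulkan C API: each worker thread records into its own command pool and command buffer (command pools are externally synchronized, so no locking is needed as long as each thread sticks to its own), and only the final vkQueueSubmit is serialized. It assumes a VkDevice, queue family index and VkQueue already exist from the usual instance/device setup; the actual recorded commands and cleanup are left as placeholders.

  #include <vulkan/vulkan.h>
  #include <thread>
  #include <vector>

  std::vector<VkCommandBuffer> record_in_parallel(VkDevice device,
                                                  uint32_t queueFamilyIndex,
                                                  unsigned numThreads) {
      std::vector<VkCommandPool> pools(numThreads);
      std::vector<VkCommandBuffer> cmdBufs(numThreads);
      std::vector<std::thread> workers;

      for (unsigned t = 0; t < numThreads; ++t) {
          workers.emplace_back([&, t] {
              // One command pool per thread: no lock needed, each thread owns its pool.
              VkCommandPoolCreateInfo poolInfo{};
              poolInfo.sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO;
              poolInfo.queueFamilyIndex = queueFamilyIndex;
              vkCreateCommandPool(device, &poolInfo, nullptr, &pools[t]);

              VkCommandBufferAllocateInfo allocInfo{};
              allocInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
              allocInfo.commandPool = pools[t];
              allocInfo.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
              allocInfo.commandBufferCount = 1;
              vkAllocateCommandBuffers(device, &allocInfo, &cmdBufs[t]);

              VkCommandBufferBeginInfo beginInfo{};
              beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
              vkBeginCommandBuffer(cmdBufs[t], &beginInfo);
              // ... record this thread's share of the frame here (draws, copies) ...
              vkEndCommandBuffer(cmdBufs[t]);
          });
      }
      for (auto& w : workers) w.join();
      return cmdBufs;  // pools are leaked in this sketch; real code destroys them later
  }

  void submit_single_threaded(VkQueue queue, const std::vector<VkCommandBuffer>& cmdBufs) {
      // Only this submission step is serialized per queue.
      VkSubmitInfo submit{};
      submit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
      submit.commandBufferCount = static_cast<uint32_t>(cmdBufs.size());
      submit.pCommandBuffers = cmdBufs.data();
      vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
  }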


and current gen console ports forced to work smoothly on multiple anemic 1.6GHz cores


The Phenom lineup was very competitive (even slightly better than Intel's) for most of its lifetime. It wasn't really until a couple of generations into the Core 2 Duos that Intel started taking the lead.


> (even slightly better than Intel's)

Competitive on price, sure. But it never held the performance crown. AMD hasn't held that since the Athlon 64 era.

The first gen Phenoms were outpaced by the Core 2 lineup, and the second gen Phenoms that became quite popular were handily beaten by the first gen i7's. The Phenom II was a great bang-for-the-buck chip though, no doubt.

I've owned all the relevant players here: the Phenom 9500, Core 2 Quad Q6600, Phenom II 955BE, and a Core i7-920 (which is still in the main work machine I'm typing this on right now).

The Q6600 and 955 were comparable in performance (the 955 much better at stock, slightly better when both chips were OC'd), but the i7-920 was out by the 955's release, essentially leaving AMD a generation behind in performance. The 9500 was a dog, well known for the TLB erratum whose microcode workaround hampered performance, while the i7 was leaps and bounds ahead of anything else at the time and is still pretty usable today. My little brother has the 955 and it's showing its age in games (KotK being the worst offender).


The Phenom X6 offered great value and solid raw performance for developer / thread-heavy workloads, although that is now ~seven years ago.


For performance / $ in HPC systems this will be a field day though.


Most HPC systems are focused on floating point though, and Zen's vector/AVX implementation doesn't match Intel's.


That only matters for applications that are fully vectorized, however. Most real-world HPC applications depend much more on memory bandwidth and/or multicore performance, and on how well the compilers can optimize for the vector units (typically not very well). If I can choose two cores with 128-bit vectors for the same money as one core with 256-bit vectors, I'll choose the two-core system any day of the week.


I certainly agree that vector performance is overrated, but it seems to dominate HPC spending, at least on the Top500. But maybe AMD can pick up some HPC spending for smaller clusters designed to run less specialized code? I don't know how big that market is.


I do think it's big. What you see on the Top500 is the high end of the market; the bulk is in small to medium-sized clusters or in cloud infrastructure, where people care more about benchmarks like requests per second for specific servers. There again, memory throughput, non-vectorized performance and cores per dollar are probably the most relevant characteristics of a processor. And those chasing the highest FLOP numbers have switched to accelerators and custom chips anyway.


For performance/$ in HPC systems, nowadays anyone can buy used Opteron 61XX processors for less than $20 each, and even Opteron 62XX for around $50.


I think this is a bit too simplistic for HPC. Yes, you can go really low cost if all you care about is theoretical performance / $. In practice, the cost efficiency of the system should be seen as 1 / (time to solution for typical applications) / (cost of ownership per year).

Even 20% higher memory bandwidth or computational performance per socket matters for the time to solution, since adding more nodes won't necessarily make up for it due to the communication overhead. As does the performance per watt (cost of cooling).

Furthermore, the biggest news for me regarding Ryzen is what they're doing with APUs. Affordable APUs with unified memory supporting HBM (stacked memory) could be a game changer. HBM means an order of magnitude higher memory bandwidth than what we're used to from CPUs, matching what the high end Nvidia Pascal cards are currently offering. Coupling this to a capable multicore x86 CPU with fat cores (as opposed to Knights Landing with its higher number of slow cores) could mean tremendous speedups without any programming work for bandwidth-bound applications, which includes pretty much any stencil application, e.g. atmospheric models, ocean models, earthquake prediction etc. By 'tremendous' I mean 5-10x per socket, which is a big deal in these fields.
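
To make the "bandwidth bound" point concrete, here is a minimal sketch of a stencil kernel (illustrative numbers, not Ryzen or HBM measurements): the flop count per grid point is tiny compared to the memory traffic, so throughput scales with memory bandwidth rather than with vector width.

  #include <vector>
  #include <cstddef>

  // 5-point Jacobi sweep on an nx*ny grid. Per interior point: 4 adds + 1 multiply,
  // yet even with ideal cache reuse each point still moves a few doubles (roughly
  // 16-32 bytes) between memory and the core. That is well under 1 flop/byte of
  // arithmetic intensity, so the loop is limited by memory bandwidth (e.g. HBM vs.
  // DDR4), not by SIMD width.
  void jacobi_sweep(const std::vector<double>& in, std::vector<double>& out,
                    std::size_t nx, std::size_t ny) {
      for (std::size_t j = 1; j + 1 < ny; ++j)
          for (std::size_t i = 1; i + 1 < nx; ++i)
              out[j * nx + i] = 0.25 * (in[j * nx + i - 1] + in[j * nx + i + 1] +
                                        in[(j - 1) * nx + i] + in[(j + 1) * nx + i]);
  }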


> Yes, you can go really low cost if all you care about is theoretical performance / $.

Yes, that's a fact, and if you're trying to evaluate a solution based on performance/$ then the comparison is made with regard to that factor, and not others.

> In practice, the cost efficiency of the system should be seen as 1 / (time to solution for typical applications) / (cost of ownership per year).

If you believe that criterion is relevant then you should know that Ryzen's advertised TDP is around 95W while some Opteron 61XX processors have a TDP of 85W.

> Furthermore, the biggest news for me regarding Ryzen is what they're doing with APUs. Affordable APUs with unified memory supporting HBM (stacked memory) could be a game changer.

For $40 you can buy four Opterons with 12 cores each. This means that for less than $1000 you can put together a 48-core system that supports up to 512 GB of RAM. For around $500, anyone can put together a 24-core system that also supports a couple of GPUs.

Anyone who cares about HPC on a budget knows quite well that used Opterons are where the optimal price/performance ratio can be found, particularly as Opterons already outperform Xeons in BLAS-based work.


Would you care to share how to get hold of such a system, and which sites to visit? Thanks!


Just a reminder that wccftech.com is notoriously pro-AMD. I would caution against taking their word as proof.

I'm not an AMD hater (actually own AMD stock), just cautious.


It's more of a rumor/gossip tech site with a reputation for immediately publishing almost anything you could put a click-baity title on.

As a result it does look pro-AMD, because there is significantly more hype (the infamous hype train) in the AMD community (just compare /r/amd and /r/nvidia on Reddit). I can't say if it's something that emerged organically in those communities and was embraced by corporate, or the other way round.


Techies love a hype train, but Ryzen is genuinely important. Intel have had a near-total monopoly on high-performance CPUs for many years. This has led to a dismal stagnation in the CPU market. If you bought an i7-2600k six years ago, you have little reason to upgrade - an i7-7700k is only about 30% faster in most workloads.

If Ryzen is anywhere near as good as the benchmarks suggest, then Intel have got a lot of work to do. They'll need massive performance increases, huge price cuts or both to remain relevant. That's good news for everyone except Intel.



And of course https://www.reddit.com/r/AyyMD/

Right now the hot topic is this gem:

https://www.youtube.com/watch?v=PjCVaMYdjgo


Haha. Amused to see the hype train Ryzen has created.


Holy crap that was a bit much. Good for AMD though, they need any break they can get..


>actually own AMD stock

Any insight into that? Why do you think AMD is a good investment (or speculation)?


As another investor, the stock has already gone up 650%. You should probably look at another company to buy. Never buy during the hype.


Maybe he/she bought when the stock was under 2 dollars last year? I know some people who invested their vacation money back then into AMD stock and are happy now.


I won't be replacing my i7 6800K build any time soon, but I'm very happy to see AMD looking competitive again. There hasn't been much excitement about the recent generations of i7 for good reason, and hopefully this forces them to start pushing the envelope again.


Exactly. Ridiculous Intel pricing and artificial limitations were a consequence of a de facto monopoly.

I absolutely hate how they cap maximum RAM on consumer machines, so that you need to pony up for a Xeon workstation if you need more than 64 GB.


Their worst crippling so far is ECC being unavailable even on costly consumer parts...


FWIW consumer Zen is also limited to 64GB; to go higher you'll need to step up to AMD's Xeon-esque Naples platform.


I know, but competition would hopefully raise these limits in consumer machines.


There is no free lunch. Moving to quad channel incurs more costs in pins, mobo traces, etc. It's targeted product development.


Nah, it's just artificial limits. Would totally just work if they flipped a bit in the processor. /s

Less sarcastically, what do people need >64 gigs for on a consumer desktop? I'm a dev and work on a pretty memory intensive service and don't need 128 gigs. Holding an entire double layer blu-ray in memory would only require 50 gigs.


Minecraft. That shit is heavy! On a more serious note, some gamers are using RAM disks to greatly improve performance of some games, especially modded games like Minecraft and Skyrim.



> Less sarcastically, what do people need >64 gigs for on a consumer desktop?

I use mine as a terminal server to some super cheap Raspberry PI machines that are basically just monitors running VNC software.

For good performance you need about 4 to 8GB per user. I personally don't have enough users to need 64GB, but if you have a bunch of kids, having terminals is much cheaper than a real computer for each user, plus you only need to keep one computer updated.

You could easily afford a computer in each room that way.


8-16 dedicated terminals? This is not a consumer scenario.


When did a server become a consumer desktop?


They are not artificial, but Intel seems to have big margins. Competition reduces margins.


Sure. I don't personally care if Intel's margins drop. But I also don't care if professionals making six figures have to pony up a couple hundred for a Xeon system. The idea that Intel should make casual consumers subsidize professional needs so that professionals can save a hundred bucks is absurd.


Machine learning model training. A large buffer of data waiting to be trained on speeds up training. Especially with training on the GPU, the biggest bottleneck is reading the data from disk.


Forgive me, but is training ML models really a common consumer need?


Are you kidding? I eat AI for breakfast.


i can't imagine why anyone needs more than 640k personally


Did you miss every example people gave?


The other day, I was struggling to emulate a Hadoop cluster in a series of VMs on a 16G laptop with zero work being performed - just keeping all the services and Cloudera's management scripts and tools running made everything swap everywhere.

VMs are my biggest memory hog, particularly when they are running lots and lots of separate Java processes.


And the problem with your 16GB system was that it didn't support >64GB?


The problem with my laptop was that it didn't support >16GB; I'd take more than 64GB if I could have it.

It's usually foolish to say that we don't have need for more X in computing because nothing today warrants more than X; the thing is, if we have more of X, we find uses for it. Most of what we do can trade space for time, or incorporate things in cache hierarchies; and sometimes quantitative change adds up to a qualitative change because new architectural approaches become possible.

If I could have 1TB in my laptop, it would change the way we at my company develop software. In fact we'd rewrite our product to take advantage of that much memory, the case would be overwhelming; we'd introduce a new tier in our architecture for keeping a copy of the customer's working set (typically several GB) in memory. As it is, we rely on the database and its caching; it's OK, but it can't support all the operators we'd like to support.


Recent i7 chips support 128GB (my 6800K does).


A 64GB workstation ought to be enough for anybody. Jokes aside, what do you do if you are interested in >64GB workstations?


Quantum chemistry. There are a number of high accuracy wavefunction based methods that are compute-tractable on desktop machines but very demanding of storage. Even when there's a disk based alternative to RAM storage it's a lot slower when you can't just allocate huge arrays in memory. I could easily use 256 GB, maybe 512, on a machine with 8 physical cores.

You can get commercial software that is a bit better tuned for running on "ordinary" desktop hardware, but it's so expensive that you don't really come out ahead on total cost.


Forget about desktop processors and buy yourself a double-socket LGA2011v3 board and some engineering-sample Xeons. Then you can address that kind of memory and run ECC.

The clocks are a little lower than the retail Xeons, and you do need to pay attention to compatible boards/BIOS versions, but you can get a big Xeon E5 for like $200-300.


Thank you for the suggestion. I am surprised how cheap these can be. It has been years since I assembled a system from parts so I had been looking at complete workstations from vendors, but for that much cost saving it is worth considering parts-assembly.


> Forget about desktop processors and buy yourself a double-socket LGA2011v3 board and some engineering-sample Xeons. Then you can address that kind of memory and run ECC.

If you follow this very thread, you'll notice that another user already complained that "you need to pony up for a Xeon workstation if you need more than 64 GB."


"Ponying up" here means, in context, Xeon processors that are several thousand dollars each. In contrast, the grandparent was wishing that he could use a Zen that cost several hundred dollars instead.

I was pointing out that there were other processors already on the market that met his requirements: Xeon engineering samples, which most people don't think about when they are talking about the costs of high-end Xeon workstations.

(This is partially because they are not "finished" products like retail chips. Clocks are lower, and you need to use motherboards/BIOSs from a specific compatibility list, and they are not something you can source from Intel. Thus, they slip many people's minds because they aren't something you would consider for an actual server deployment - but they are perfect for a workstation with specialty needs that needs to hit a tight budget)

So yes, I did actually read the thread. You just didn't grasp enough of the context to make a sensible response and figured it was the perfect time for a superslam "did you even read bro?" gotcha response.

Typically this sort of response is not welcome on HN. If you don't have something substantive to post, please don't post at all.


Honestly that application sounds like you should be able to just get that 512G machine. Buffered ECC DDR4 RAM has gotten cheaper over the last year too.


If I still did research full time I'd probably get a Xeon workstation loaded with RAM for home. But it's just a hobby for me now that my day job is writing non-scientific software, so it doesn't seem like a justifiable expense. Might go up from my current 4 cores and 32 GB to something Zen-based later this year.

(Tangentially: I wouldn't bother with ECC RAM for this application unless that were required by the rest of the hardware. Memory errors may slightly affect time to a converged solution, but it's rare that a flipped bit would actually cause a job to converge to a different solution.)


If this is just a hobby why not rent a large VM from us (GCE) or AWS? It sounds like you don't run enough to want a box for anything like 24x7, but that it'd be great to have tons of memory and lots of cores (or a giant SSD). We just announced our Skylakes yesterday ;).

Disclosure: I work on Google Cloud.


I work on NLP projects and have similar issue with memory.

The reason I don't use cloud computing is that you constantly have to worry about moving data around, turning stuff on and off, API changes, data movement costs, product lineup changes (should I store this stuff on S3 or local HD), locally compiling all your stuff again or containerization and associated time overhead.

A local large box makes way more sense if you're doing solo projects. And if traveling, it would make way more sense to rent a VPS from Hetzner. Flat cost structure and tons of bang for the buck.

BTW, you can get 128 GB LGA2011-v3 motherboards.

https://www.newegg.com/Product/Product.aspx?Item=N82E1681315...

and it seems like it works: http://www.pcworld.com/article/2938855/hardcore-hardware-we-...

even though the spec says max 64gb: https://ark.intel.com/products/82932/Intel-Core-i7-5820K-Pro...


I can confirm, I run machine learning workloads on a home machine with 128GB of RAM on an X99-E WS/USB 3.1 motherboard. My manual explicitly states that it has 128GB RAM support, as well.


Some examples that can easily push you beyond 64 GB:

- High-resolution photo editing: when you start with an 80 mpix / 48-bit photo from a medium format camera, pushing beyond 16 GB requires only a couple of layers and a couple of undo steps being available.

- High-resolution video processing: 1 second of uncompressed 4K 60fps video is almost 1.4 GB (rough arithmetic below).

- Very large application compilation: building Android requires 16GB of RAM/swap, and I'm sure there are apps that push that requirement even further.

- Development environments for a complex system that require you to run dozens of VMs if you want all components running locally (I once had to run 2 VMs with 12GB requirements each).
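
To put rough numbers on the photo and video cases (a back-of-the-envelope sketch; the pixel formats below are assumptions, not tied to any particular camera or tool):

  #include <cstdio>

  int main() {
      const double GiB = 1024.0 * 1024.0 * 1024.0;
      // One full-resolution layer of an 80 Mpix photo at 16 bits per RGB channel
      // (6 bytes per pixel), before alpha channels, masks and undo history.
      double photo_layer = 80e6 * 6 / GiB;                  // ~0.45 GiB
      // One second of uncompressed 3840x2160, 60 fps, 8-bit 4:4:4 video
      // (3 bytes per pixel).
      double uhd_second = 3840.0 * 2160.0 * 3 * 60 / GiB;   // ~1.39 GiB
      std::printf("photo layer: %.2f GiB, 1 s of UHD60: %.2f GiB\n",
                  photo_layer, uhd_second);
  }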


If you can afford a $30,000 medium format camera, you can afford the Xeon workstation it usually needs. The kind of person who works with those systems would pay $6,000 for an umbrella that doesn't even have any electronics in it: https://www.adorama.com/bcb3355203.html?gclid=CLKvmd2erNICFY...

Absolutely no one that owns one of these professional cameras uses them with non-Xeon CPUs.

Really the only use for greater than 64GB on non-Xeon CPUs would be student animation or machine-learning projects.


It's 25 pounds, so not really an umbrella. It's like calling Xeons just sand.


While still true, it might not be for long: "advanced amateur" small-format SLRs with 36mpix @ 42bpp now start below $2000, and 24mpix @ 36bpp are in the lower part of the $500-1000 bracket.

https://www.adorama.com/ipxk1.html https://www.adorama.com/inkd810.html https://www.adorama.com/inkd7100.html

Which brings a slightly unrelated note: people seem to be blissfully unaware that you need less than 10mpix (3600x2700) for a 12x9" 300 dpi print, or that the zoom lens they bought with the camera has sharpness that effectively limits the resolution to 5-10, maybe 12 mpix if they are lucky. But megapixel count is easy to sell -> race for more megapixels -> smaller physical pixels -> more signal amplification needed -> more noise / generally lower photo quality.


I used to shoot 33-megapixel medium format cameras 10 years ago, on 8GB machines with 1GB GPUs, just fine, doing multilayer Photoshop work on 48-bits-per-pixel TIFFs.

64GB is completely unnecessary for photography. Photographers don't even go over 16GB.


Just because someone can afford to rent a $30,000 camera doesn't necessarily mean they have unlimited cash to burn on other things.


No, but the difference between i7 and Xeon is significantly less than what you'd have to pay for that extra RAM. We're talking O($100).


So O($1)?


If you are buying that level of professional gear, it's probably for a profession. The rates you are charging ought to cover the cost of all the necessary equipment.


It's about $2000/week to rent an IQ180. If you can afford the rental price, you can afford a computer to process the images.


That's not how money works. Just because you can afford one expensive luxury doesn't mean you can afford all the other expensive stuff. Sometimes you sacrifice everything else to get that one important expensive thing.


Also, the camera rental cost goes to the client. The computers to process the images are generally owned by the company doing the work. Big difference. I will regularly spend $1-2k on cameras for ~1 week, and pass that on, but still use our own gear for processing.


"I can't afford the premium gasoline my Ferrari needs."


Pro photographers don't do that.


That still doesn't follow. Having a certain amount of money doesn't imply that you have even more money, nor that you want to spend more than is truly necessary.


Sure it does. If someone can't afford the machine necessary to process the images that they got from the camera they spent thousands to rent, the problem isn't actually money. It's that they're an idiot.

You don't buy/rent a camera you can't afford to process images from just like you don't buy a car you can't afford to service and you don't buy a house you can't afford taxes for. Or if you do, I have little to no sympathy.


The specifications from the client are quite particular, and usually only one or a few cameras meet the spec.

The computers often are owned by the company renting the gear, where the end customer pays for the camera rental.

So it's not so simple. And one thing - go easy on using the word idiot... I work with some very smart people who fit your description of idiot on a regular basis.


You're a professional whose clients are paying multiple thousands of dollars for photography, and you cannot afford the hardware to process their images? Color me skeptical.

If it's really true, then your pricing is clearly wrong.


Having >64GB is far from necessary to edit any kind of multimedia, but it would be helpful to have. If your goal is to capture high-quality imagery on a $2,500 budget, the camera is more important than the RAM. It's just that you might end up with $500 unspent if you can't afford both a $500 RAM upgrade and the whole new computer needed to use it. There are far more photographers and indie filmmakers in a situation where they have to make these kinds of tradeoffs than there are in the tiny minority making tons of money.


Rent an Azure G5 and remote in perhaps?


I am an FEA consultant and work with Abaqus and LS-Dyna. I need >64 GB now.


What camera is capturing 16GB photos? An IQ180 captures 80 megapixels but the images are something like 500 MB each.

All of your hypothetical examples are also for very niche, high-resource-usage professionals. It's absurd to expect consumer processors to support extremely high memory for these cases. Intel is not obligated to subsidize anyone, especially not people who can afford to pay for high-end systems.


To address the core of your argument: no one is saying Intel should subsidize anything. The assertion is that using monopoly status to do additional work to gimp functionality in one product line and artificially drive up the price of another is at least abusive in the short term, probably makes Intel less competitive in the long run, and increases the volatility of the market.


> The assertion is that using monopoly status to do additional work to gimp functionality in one product line and artificially drive up the price of another

1. Are they actually doing that? I'll be honest, I don't know for sure, but I do know that more capacity rarely comes for free. I assume that supporting 256GB of memory efficiently relative to 64GB requires either a faster clock speed or some more pins. Is Intel "gimping" the consumer product or simply not building in the extra capacity?

2. Is it necessarily a bad thing if they are? If consumer demand peaks at, say, 32GB and professional demand reaches, say, 512GB, Intel could develop two entirely different architectures, which seems wasteful and more costly to everyone. They could ship just one chip with professional capacity supported, which drives up consumer costs effectively subsidizing professionals (because professionals are no longer paying the premium for the "pro" chip; they're just buying the consumer one). Or they could ship a consumer chip that doesn't support professional needs and ship a professional chip at a price that makes pros pay for the capacity Intel had to engineer for them.

The last option seems like a good option for everyone except the people who think everyone else should pay a premium for unnecessary pro support so that a few people can get cheaper pro chips.


I think he meant 16GB memory used when editing. Not the filesize.


Same question still applies.


You asked in another sub-thread "what do people need >64 gigs for on a consumer desktop?" but jeff_vader started this sub-thread asking "what do you do if you are interested in >64gb workstations?"

AFAICT nobody on this sub-thread is claiming that consumer desktops should cater to our niche use cases. We're just offering examples of application domains where a high RAM-to-CPU ratio is useful.


You're right. I was following up on the "I absolutely hate how they cap maximum RAM on consumer machines" comment further up the chain but jeff_vader's question pivoted the conversation and I missed the memo. There are definitely cases where professionals need (or can greatly benefit from) >64GB workstations and I don't dispute that.


Not really, as it's the actual editing applied on top of the base image that could push it that high. I think the question you're looking for is: "what amount of photo editing, effects, etc. would make the editing process of an 80mpix photo consume >16GB of memory?"

Additionally, as for your original question: For hobby or even 'normal' professional photography, I'd guess none. But rigs like the ones used for Gmaps Street View, the recent CNN gigapixel, etc. would probably have the capability to approach that size.


I actually want to know what camera is capturing 80 megapixels at 48 bits per pixel. The IQ180 is 16 bits per pixel. Where's the camera that triples this? Or are we doing Bayer interpolation in a way that requires 48 bits per pixel for some reason? Because there definitely isn't 48 bits of actual signal there.

And yes, I also want to know what turns a 500MB photo into >16GB in memory. That's 32 full-res copies of the photo in memory. Just "a couple layers and a couple undo steps", really?


Bits per pixel: A lot of small frame SLRs (your Canons and Nikons) offer 12 or 14 bits per channel for a combined 36 or 42 bits per pixel. Medium format digital backs now offer 16 bits per channel.

Layer size: Layers add more data on top of this: alpha channel (8-16 bits/pixel), clipping mask and maybe half a dozen other things, easily bumping the whole thing from 6 to 9 or 10 bytes per pixel.

Layer count: It's fairly common to have multiple layers that are copies of the original photo, with different processing applied, blending them into the final one. I'm very much an amateur, and when I'm serious about just making a photo look good (not even creating something new) I end up with around 10 layers.

Undo step memory: a lot of work in the photo processing workflow is global: color correction, brightness, contrast, layer blending settings and modes, and filters (including sharpening or blur) apply to every pixel of a layer. Each confirmed change (by releasing the mouse button / lifting the stylus / confirming a dialog) is likely to have to store an undo step for the entire layer.

Of course you can just persist some of that to disk, but if a single layer/undo step can be 800MB this will hurt productivity - only very recently have we gotten drives that can write that much fast enough, which is why just a couple of years ago, when having enough RAM was not really an option, a lot of pro photographers had 10k or 15k RPM HDDs running in RAID 0 in their workstations.
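
Putting those numbers together (a rough sketch; the layer count and undo depth are assumptions, not measurements from any particular editor):

  #include <cstdio>

  int main() {
      const double mpix = 80e6;         // 80 Mpix medium format frame
      const double bytes_per_px = 10;   // 16-bit RGB + alpha + mask/extras per layer
      double layer_gb = mpix * bytes_per_px / 1e9;   // ~0.8 GB per layer or undo step
      int layers = 10, undo_steps = 15;
      std::printf("per layer: %.1f GB, %d layers + %d undo steps: %.0f GB\n",
                  layer_gb, layers, undo_steps, layer_gb * (layers + undo_steps));
      // -> roughly 20 GB of working set before the OS, caches and scratch buffers.
  }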


I replied about the 48 bit thing to the sibling comment. That may be how the Bayer interpolation is done so I won't argue that.

For the layers/undo steps, the "couple" of each was not my phrase. It was yours. If by "a couple" you actually meant dozens, then sure, maybe an 80 mpix photo really needs that much memory to process.


> I actually want to know what camera is capturing 80 megapixels at 48 bits per pixel. The IQ180 is 16 bits per pixel...

Presumably that's 16-bits for each channel of Red, Green, Blue. Since every pixel is composed of the three RGB components, that's how you get 48-bits for every pixel (and internally, Photoshop refers to 16-bits per color channel as 'RGB48Mode').


There aren't actually three channels per pixel on most sensors. It's one channel (and only one color) and then there's a mixing process. I'm doubtful that you need 48 bits to do that mixing correctly, but maybe that is the standard way to do it.


I use a sound sampler (Hauptwerk) that loads tens of thousands of wave files and needs a polyphony (simultaneous samples being played) of thousands of voices to simulate a pipe organ. The minimum configuration for the smallest instrument (one voice) with natural acoustics (i.e. the original ambient reverberation recorded) is 4GB. The normal requirements for some good instruments are 64-128GB for 24-bit 6-channel uncompressed audio.


> 24-bit (...) uncompressed audio

Do you really need this much headroom, e.g. for producing music professionally? (If so, why are you arguing over the $50-100 extra for a Xeon?)

Because for listening, it's a proven fact that even professional musicians and mixing engineers using equipment costing >$100k can't tell 24-bit from 16-bit in a proper double-blind A/B test. Presumably you use a high sample rate as well (maybe 192 kHz), which is equally useless. Drop both of those, and you're easily within 64GB with no noticeable SQ loss.


For editing, you want to use the highest possible starting bandwidth (sampling rate * bit width) because every intermediate edit and calculation leads to round-off and other errors. You also need them to be WAVs or some other raw format since you cause minor errors every time you compress + decompress something, not to mention the performance loss.

It's the same rationale for doing calculations in full 80-bit or 64-bit FP even if you don't need the full bits of precision -- if you start your calculations at your target precision, your result will have insufficient precision due to truncation, roundoff, overflow, underflow, clipping, etc. in intermediate calculations.

In theory, if you knew what your editing pipeline was, you could mathematically derive the starting headroom required to end up with acceptable error, but in practice that's probably a very risky proposition because 1) you don't know every intermediate calculation going on in every piece of software and 2) errors grow in highly non-linear ways, so even a small change in your pipeline might cause a large change in required headroom [1].

[1] https://en.wikipedia.org/wiki/Numerical_analysis#Generation_...
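
A toy demo of that error accumulation (not a claim about any specific DAW; the sample rate and gain values are arbitrary): apply a mathematically reversible gain change many times, once in full precision and once re-quantized to 16 bits after every step, and compare.

  #include <algorithm>
  #include <cmath>
  #include <cstdio>
  #include <vector>

  // Quantize a sample in [-1, 1) to 16 bits and back, as if bounced to a 16-bit file.
  static double q16(double x) {
      double clamped = std::max(-1.0, std::min(x, 32767.0 / 32768.0));
      return std::round(clamped * 32768.0) / 32768.0;
  }

  int main() {
      const int n = 48000;   // 1 s of a 1 kHz sine at 48 kHz (arbitrary choice)
      std::vector<double> ref(n), lo(n);
      for (int i = 0; i < n; ++i)
          ref[i] = lo[i] = 0.5 * std::sin(2.0 * 3.14159265358979 * 1000.0 * i / 48000.0);

      // 20 editing passes: attenuate, then undo the attenuation. Mathematically a no-op.
      for (int pass = 0; pass < 20; ++pass)
          for (int i = 0; i < n; ++i) {
              ref[i] = (ref[i] * 0.7) / 0.7;          // full precision throughout
              lo[i]  = q16(q16(lo[i] * 0.7) / 0.7);   // re-quantized after each step
          }

      double err = 0.0;
      for (int i = 0; i < n; ++i) err = std::max(err, std::abs(lo[i] - ref[i]));
      std::printf("max accumulated error: %.6f (%.1f dBFS)\n",
                  err, 20.0 * std::log10(err));
  }

With these settings the re-quantized path ends up well above the roughly -96 dBFS floor of a single 16-bit quantization, which is the kind of accumulation being described; starting from 24-bit or floating point keeps that floor far lower.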


Yes, that's exactly what I was trying to ask. Does OP really need 24 bits because they are producing music professionally? In that case, $100 on the i7->Xeon upgrade is a cheap no-brainer. If they are not producing audio professionally, I'm saying they probably can't tell the difference between 16bit and 24bit audio.

Also, you don't cause errors when you compress and decompress using a lossless compression format (as the name sort of implies).


> If they are not producing audio professionally, I'm saying they probably can't tell the difference between 16bit and 24bit audio.

That's not how it works. You can totally hear clipping or noise even as an amateur, just as an amateur photographer first learns to look out for overexposed skies and highlights that aren't really recoverable at all unless they perhaps have access to the camera's RAW files. But unlike in photography, in sound work you typically need to process as well as sum multiple recordings, all while not blowing out the peaks that might just happen to line up.

If anything, dealing with the large dynamic range of 24-bit is easier for the amateur. An experienced pro would probably have a good bunch of tricks in his bag should he really have to mix music in 16 bits, enough to produce something that doesn't sound horrible. The amateur would be more likely to struggle.


I understand that in OP's case, someone else (a professional) has already recorded and mastered the sound of a real live pipe organ. That person of course benefits from working at 24-bit, but once he's done mastering, OP is "just" playing back the mastered audio and should be fine with 16 bits, I think.


But OP is not just playing back a single mastered recording; it's thousands of simultaneous samples that get mixed together live, presumably. In that case the professional who recorded all these samples can do nothing to prevent the peaks from lining up during playback.


Would using fixed-point arithmetic[1] help with these issues at all?

[1] - https://en.wikipedia.org/wiki/Fixed-point_arithmetic


Fixed-point arithmetic essentially gives you more resolution at the cost of not having a dynamic range, so in practice you reduce some round-off errors, but overflow errors are much more likely. Overflow ends up as saturation in the context of audio editing and usually sounds like really bad clipping.

In practice, whether fixed-point or floating-point is better depends on the operations you plan to use and on whether you have a good idea ahead of time that you'll be able to stay within a fixed range [1].

[1] The loss of 'footroom bits' corresponds to the extra resolution lost whenever the FP mantissa increases - http://www.tvtechnology.com/audio/0014/fixedpoint-vs-floatin...
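
For reference, a minimal sketch of Q15 fixed-point add/multiply with saturation (a common convention, not tied to any particular DSP library): overflow doesn't grow an exponent as in floating point, it clamps, which in audio comes out as hard clipping.

  #include <cstdint>
  #include <cstdio>

  // Q15: 16-bit signed fixed point, 15 fractional bits, representing [-1, 1).
  static int16_t sat16(int32_t v) {
      if (v > INT16_MAX) return INT16_MAX;   // saturate instead of wrapping
      if (v < INT16_MIN) return INT16_MIN;
      return static_cast<int16_t>(v);
  }

  static int16_t q15_add(int16_t a, int16_t b) {
      return sat16(static_cast<int32_t>(a) + b);
  }

  static int16_t q15_mul(int16_t a, int16_t b) {
      // 16x16 -> 32-bit product has 30 fractional bits; shift back to 15.
      return sat16((static_cast<int32_t>(a) * b) >> 15);
  }

  int main() {
      int16_t x = 0x6000;   // ~0.75 in Q15
      int16_t y = 0x6000;
      // 0.75 + 0.75 = 1.5 is not representable: the sum saturates to ~0.99997.
      std::printf("add: %d (saturated), mul: %d (~0.5625)\n",
                  q15_add(x, y), q15_mul(x, y));
  }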


Yes. Bit depth is almost entirely about dynamic range. Run out of bits at the top of the range and you get clipping; run out of bits at the bottom end and the signal disappears below the noise floor of dithering.

For mixed and mastered music, the ~120dB perceived dynamic range of properly dithered 16-bit audio is more than sufficient. If you're listening at sufficient volume for the dither noise floor to be higher than the ambient noise floor of the room, you'll deafen yourself in minutes.

For production use, the extra dynamic range of 24 bit recording is invaluable. You're dealing with unpredictable signal levels and often applying large amounts of gain, so you'll run into the limits of 16-bit recording fairly often. In a professional context, noise is unacceptable and clipping is completely disastrous. Most DAW software mixes signals at 64 bits - the computational overhead is marginal, it minimises rounding errors and it frees the operator from having to worry about gain staging.

You can probably get away with 16 bit recordings for a sample library, but it's completely unnecessary. Modern sampling engines use direct-from-disk streaming, so only a tiny fraction of each sample is stored in RAM to account for disk latency. The software used by OP (Hauptwerk) is inexplicably inefficient, because it loads all samples into RAM when an instrument is loaded.
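
For anyone wanting the numbers behind the 16 vs. 24 bit dynamic range argument, the usual rule of thumb is about 6 dB per bit; the noise-shaped-dither figures quoted above for the perceived range of 16-bit are higher than this raw number.

  #include <cstdio>

  int main() {
      // Theoretical quantization SNR of an N-bit converter for a full-scale sine:
      // approximately 6.02 * N + 1.76 dB.
      const int depths[] = {16, 24};
      for (int bits : depths)
          std::printf("%2d-bit: ~%.0f dB\n", bits, 6.02 * bits + 1.76);
  }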


> Because for listening, it's proven fact that even professional musicians and mixing engineers using equipment costing >$100k can't tell 24 bit from 16 bit in a proper double-blind A/B test.

Digital samples can and will generate intermodulation distortion, quantization and other audio artifacts when mixed. Using 24-bit lessens or eliminates the effect versus 16-bit.

Experiments and results on this software are here: http://www.sonusparadisi.cz/en/blog/do-not-load-in-16-bit/


Most people who believe this is a proven fact have never worked in pro audio and know a lot less about it than they assume.

While people here like to quote that infamous looks-like-science-but-isn't Xiph video, the reality is that a lot of professional engineers absolutely can hear the difference.

If you have good studio monitors and you know what to listen for, the difference between a 24-bit master and a downsampled and dithered 16-bit CD master is very obvious indeed, and there are peer-reviewed papers that explain why.

Dither was developed precisely because naively interpolated or truncated 16-bit audio is audibly inferior to a 24-bit source.

Many people certainly can hear the effect of dither on half-decent hardware, even though it's usually only applied to the LSB of 16-bit data. From a naive understanding of audio a single bit difference should be completely inaudible, because it's really, really quiet.

From a more informed view it isn't hard to hear at all - and there are good technical reasons for that.

For synthesis, you don't want dither. You want as much bit depth as possible so you can choose between dynamic range and low-level detail. So 24-bit data is the industry standard for orchestral samples.


> Dither was developed precisely because naively interpolated or truncated 16-bit audio is audibly inferior to a 24-bit source.

Dither was developed because it's technically better. Audibly inferior though? Perhaps, with utterly exceptional source material (e.g. test tones) and when the output gain is high enough for peak levels to be uncomfortably loud.

In reality, many recording studios have a higher ambient noise level than the dither, making it redundant in practice — the lower bits are already noise, so audible quantisation artefacts weren't going to happen anyway. That said, dithering is pretty much never worse than not dithering, and almost all tools offer it, so everyone does it.

24 bits is important because it gives the recording engineer ample headroom, and it gives the mixing and mastering engineers confidence that every last significant bit caught by the microphone will survive numerous transformation stages intact. Once the final master is decimated to 16 bits per sample, you know that your 16 bits will be as good as they could have been.


A high noise floor is not even close to being the same as dither, never mind noise shaping.

Have you actually heard the difference between simple dither, noise-shaped dither, and non-dithered 16-bit audio? A test tone is the worst possible way to hear what they do.

24-bit audio is used because you want as clean a source as possible.

This also applies to mastering for output at the other end. With the exception of CD and MP3, most audio is delivered as 24-bit WAV at either 44.1, 48, 88, or 96k.

Even vinyl is usually cut from a 24-bit master. Here's a nice overview of what mastering engineers deliver in practice:

https://theproaudiofiles.com/audio-mastering-format-and-deli...


> A high noise floor is not even close to being the same as dither, never mind noise shaping.

Dither is noise. Well chosen noise, very quiet noise, but noise nonetheless. Whether the signal is noisy for one reason or another, the consequences at the point of decimation/quantization are the same. Either way the least significant bits are filled with stochastic values and the desired signal isn't plagued with quantization noise artifacts.

> Have you actually heard the difference between simple dither, noise-shaped dither, and non-dithered 16-bit audio? A test tone is the worst possible way to hear what they do.

...is the sort of thing someone who hasn't done a blinded listening test would say. Stop assuming the commercially successful "experts" are also technical experts, because few are. I doubt more than a tiny fraction could describe what a least significant digit is.

(Cutting vinyl, a hilariously lossy process that requires compression and EQ to avoid the needle jumping the groove, doesn't need a 24 bit master. Barely needs a 14 bit master. But since an extra hundred megabytes of really accurate noise floor doesn't make anything worse, they do it anyway.)


There are professional audio "engineers" who claim they can hear the difference in the types of audio cables used. And when you challenge them on it, rather than defend it or demonstrate it, they'll just name-drop musicians they've worked with... most of whom are producing albums these days that sound like garbage because of the loudness war.

Their words mean very little.

I totally get going for 24 bit over 16 for the noise floor, though.


I knew some audio engineers who had different cables (Cardas copper, pure silver, some others I can't remember) on identical headphones (Beyerdynamic), and everybody agreed you could hear not-insignificant differences that they would describe as sharper transients, tube-like saturation, etc. I consider that they were pro "engineers": they designed hardware and software for Avid.


Undoubtedly. Such obvious delusions are commonplace and don't necessarily correlate to one's scientific literacy or electrical engineering skills.

Such audible differences constitute a testable claim. So far, the number of claims that have withstood reasonable test conditions is zero.


Do you subscribe to the AES journal? This explains why 16-bits isn't enough:

http://www.aes.org/tmpFiles/elib/20170226/18046.pdf


Nice 404.

Nobody is arguing that 16 bits is enough during the recording and mixing process. If they are claiming that 16 bits is insufficient for consumer distributed music, they need to go back to remedial science class.


While many people can hear a difference, and while one is technically better... most couldn't tell you which is actually better. There have been many blind tests on this.

I'm all in favor of keeping sample sizes bigger... but raising the price for everyone by 10-20% so that less than 1% can take advantage of it is a bit ridiculous.


Genomics. E.g. building a genome index of k-mers (n-grams) usually requires more than 64 GB.

It can be quickly done on a consumer workstation, but you do need 128 GB in many scenarios.
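
A rough sketch of why that overflows 64 GB (the numbers are illustrative assumptions for a human-sized genome and a naive hash/position index, not figures from any particular tool):

  #include <cstdio>

  int main() {
      // Illustrative only: a naive k-mer position index for a human-sized genome.
      double genome_bp      = 3.1e9;   // ~3.1 Gbp
      double distinct_kmers = 2.5e9;   // rough count of distinct 31-mers
      double bytes =
          distinct_kmers * 8      // packed 31-mer key (2 bits/base fits in 64 bits)
        + distinct_kmers * 8      // offset into the position array
        + genome_bp * 8;          // one 64-bit position per k-mer occurrence
      std::printf("~%.0f GB before hash-table slack and other overheads\n", bytes / 1e9);
  }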


Who is doing this and can't afford a Xeon system? And why should Intel and AMD be making everyone else pay more so that these people can pay a little less?


Grad students :). More specifically, a professor with a research grant can afford to fund N grad students where N is inversely proportional to equipment cost.

The complaint is not about adding the capability to all chips. It's about smoothing out the step function between consumer machines and professional workstations.


The steps from consumer to professional seem pretty smooth. The higher-end consumer machines cost more than the lower-end professional machines. You can buy Skylake Xeon processors from Newegg for as low as a couple hundred dollars.


Maybe to play around with genome data?

http://i.imgur.com/UYqaVoD.jpg


You don't need a Xeon for more than 64GB of RAM. I have a 5820K with 128 GB of RAM on an X99 mobo. It runs very well for its price, though it's a little power hungry.


Yup, I have the same. But the spec sheet says 64gb https://ark.intel.com/products/82932/Intel-Core-i7-5820K-Pro...

Which is what confuses people. I am actually not sure what that max memory refers to. Possibly max per module?


The recent generations can address 128GB as long as your motherboard supports it (my X99 board does).


This is false; I have an i7-6900K and 128GB of RAM on an X99 motherboard. You could do the same with a significantly cheaper CPU, too.


Are you sure? I bought an i7-6850K for a personal GPU server, and 'top' sees all 128GB that I put in.

https://www.newegg.com/Product/Product.aspx?Item=N82E1681911...

I see an i7 6950X should also support 128GB:

http://www.overclockers.com/intel-i7-6950x-broadwell-e-cpu-r...

I wish it had ECC, but you can't have everything.


With that much RAM, you'd be insane not to use ECC, which is another good reason for Xeon.


Or AMD, which supports ECC across all (most?) CPU lines historically. Again, the argument is about limitations of Intel consumer lines.


Question is, is buying those historical AMD CPUs which support ECC a good idea?

So far, none of the modern ones are confirmed to support ECC.


You won't be replacing your i7 6800K anytime soon, but will you be changing your username to skylake?


How dare you make a joke on HN?


They will more likely just sell at a loss to undercut AMD; it will cost them less than R&D.


If most of the outlay on tooling changes has already been paid off, it won't necessarily be a loss, but it will mean much thinner margins.


That TDP is unreal. 95 Watts?

Looks like AMD has made a real performance breakthrough here. When is general availability expected?


And 65W for the Ryzen 7 1700, which is an absolute feat for 8 cores


Would the 1700X and 1800X also be 65W if they were downclocked to 3GHz, or is the 1700 fundamentally different from them?


That's caught my eye as well... I will be interested in seeing how the 1700 compares to my current i7-4790K, as I'd like to stay at or get lower on the power usage... though 95W wouldn't be too far off if I can get what I want out of it.


They will also be 65W in that case; they are made from the same dies.


That depends; the binning process means that products using the same die layout can have different characteristics. We don't yet know to what extent Ryzen is binned, but here's a possibility: the 65W 1700 is made from cherry-picked dies that tolerate especially low voltages, the premium 1800X is made from dies that tolerate especially high clocks, and the 1700X is made from leftovers that didn't qualify for either.

https://en.wikipedia.org/wiki/Product_binning


It's a hard launch on March 2. I believe preorders are already sold out though.


The TDP would be for stock clocks, but still, that is very good.


Lots of skeptical comments below. I tend to believe the result, given that Intel has slashed prices on its Kaby Lake/Skylake processors in anticipation of the Ryzen launch:

http://wccftech.com/intel-amd-price-war-ryzen-processors/


That shows 'Microcenter' slashing prices, not Intel.

If Intel slash prices it will presumably be at the bulk "tray" level and reflected on their ARK website.


Microcenter has a tradition of getting really good cuts before/just after a new product comes out. I remember the Surface RT + Keyboard sale they had for Black Friday:

https://www.neowin.net/news/micro-center-selling-surface-rt-...

Not saying this is predictive of an Intel price cut, but I do think that MicroCenter knows its audience.


This article doesn't actually list any benchmarks and the link/URL is totally broken for the "Ryzen offers even better single-threaded performance per clock than Intel’s Kaby Lake." - http://single-threaded/


In the video, they say they can't release any benchmarks until March 2nd. Not sure why.


There's a publishing embargo set by AMD. If you break it, they won't send you pre-release hardware in the future. And probably no one else will either since you can't be trusted to play ball.


Yes, consumer Ryzen CPUs support ECC.


Unfortunately even the most expensive ASUS board I could find at the time I pre-ordered my Ryzen claims no support for ECC: https://www.asus.com/us/Motherboards/ROG-CROSSHAIR-VI-HERO/s...

AMD Ryzen™ Processors

4 x DIMM, Max. 64GB, DDR4 3200(O.C.)/2933(O.C.)/2666/2400/2133 MHz Non-ECC, Un-buffered Memory

I'm guessing the demand isn't there and won't be there unless some company can market it as some new gimmick with more mainstream appeal.


I pre-ordered 1800x + the Asus TOTL mobo for Ryzen. It doesn't support ECC: https://www.asus.com/us/Motherboards/ROG-CROSSHAIR-VI-HERO/s...

  Memory:
  AMD Ryzen™ Processors: 4 x DIMM, Max. 64GB, DDR4 3200(O.C.)/2933(O.C.)/2666/2400/2133 MHz Non-ECC, Un-buffered Memory
  AMD 7th Generation A-series/Athlon™ Processors: 4 x DIMM, Max. 64GB, DDR4 2400/2133 MHz Non-ECC, Un-buffered Memory
  Dual Channel Memory Architecture
  Refer to www.asus.com for the Memory QVL (Qualified Vendors Lists).

AFAIK, Ryzen won't support ECC, sadly.


Hmm, the cheaper ASUS boards are listed as supporting ECC, according to this page: http://geizhals.eu/?cmp=1582178&cmp=1582183&cmp=1582185&cmp=...


This page does not match the information given on the ASUS website: https://www.asus.com/us/Motherboards/PRIME-B350M-A/specifica... As such, all we can say for now, in my opinion, is: inconclusive.


More history on this, which supports your inconclusive conclusion: https://community.amd.com/thread/210870

I'm not holding my breath for ECC support. My thinking is that if it could run ECC RAM with the actual corrections enabled, they'd be talking about that as a selling point. I'd sure like it with the 32GB I have going into this build, but I just went with "G.SKILL F4-3000C15D-32GTZ TridentZ Series 32GB (2 x 16GB) 288-Pin DDR4 SDRAM DDR4 3000 (PC4 24000)" for now, with the idea that I'll get a second of the same kit in a couple of years when it's half the price of today.


It's a mistake. They used to list ECC but they rolled it back on the US pages. They missed the German pages.

https://www.reddit.com/r/Amd/comments/5vlg7o/asus_updated_it...

Even if you can run with ECC RAM, I doubt you can enable the ECC functionality; it will just be running in non-ECC mode.


No confirmation on ECC with ECC checks enabled so far though. Just the ability to use the RAM.


> Yes, consumer Ryzen CPUs support ECC.

No, they don't.

> We did ask about a potential single socket Ryzen/ Zen part with ECC memory support and were told that AMD was not announcing such a product at this time alongside the Ryzen/ Zen launch.

https://www.servethehome.com/amd-ryzen-7-parts-available-for...


Ryzen does support ECC. Confirmed by the CEO of AMD in the Reddit AMA.

https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_crea...


I can't wait to see the effect this will have on the CPU market, especially with the rest of their CPU lineup coming later this year.


Intel could just drop their prices, but it looks like Intel will [also] rush out their 8th-gen stuff this year, even though it's still on 14nm: https://arstechnica.com/gadgets/2017/02/intel-coffee-lake-14...


It will be a very interesting match to witness. Intel will have to act under pressure while saying goodbye to a huge sum of money.


Just saw this, looks like Intel is already dropping prices significantly: http://wccftech.com/intel-amd-price-war-ryzen-processors/

If you were on the verge of getting a 7700k or something, it looks like now is the time to buy (or give Ryzen a shot!).

Personally I'm willing to pay a bit more to give Ryzen a chance, because hopefully it will lead to sustained competition in this space again. A $500 8-core CPU is more than I need for gaming right now, but given that my 2500k has been great for 6 years, I don't have a problem spending $2k on a PC every 4-6 years for really great performance.


> Just saw this, looks like Intel is already dropping prices significantly: http://wccftech.com/intel-amd-price-war-ryzen-processors/

Those are Microcenter's normal prices - they have always sold Intel processors significantly under normal pricing as a loss-leader (not sure if it's an actual loss). They will also knock another $30 off if you buy a motherboard at the same time.

https://www.reddit.com/r/buildapcsales/comments/5cvz4z/cpu_i...

For example, they have been running 6700Ks for $260-280 since before Christmas. They sold Kaby Lake 7700Ks for $330 once those launched, now they're down to $300. This is just the normal Microcenter pricing life-cycle.

But, that's the kind of "journalism" you come to expect from WCCFTech. They will literally publish any random garbage someone writes up.


Those prices match the promotions and bonus gifts Intel ordinarily offers to big distributors for pushing high-end product (CPUs and chipsets). There is nothing suggesting Microcenter is running at a loss on this; they might have a very thin margin, or even sell at zero and only count Intel's gifts as potential profit (Intel will give distributors gifts like "free" SSDs for selling X number of expensive parts). It's all in the spirit of the tactics that were killing AMD around 2002 and led to Intel's monopoly trial, just made a little more subtle this time.


If you were thinking of this as a cheap CPU for a GPU training rig, note that it only has 24 PCI Express lanes. It might be better to stick with Xeon CPUs, because most have 40 lanes.


Thanks for that. I was thinking about building something similar to what I currently have, which uses a 5960X for the 40 lanes.


Usually you don't need x16 lanes for each GPU. x8 or even x4 in PCIe 3.0 will be enough, but there is no way to know without checking.
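
For a sense of scale, here's the raw PCIe 3.0 arithmetic (8 GT/s per lane with 128b/130b encoding); whether x8 suffices still depends on how much data the training loop actually moves over the bus versus keeping it resident on the GPU.

  #include <cstdio>

  int main() {
      // PCIe 3.0: 8 GT/s per lane, 128b/130b encoding -> ~0.985 GB/s usable per lane.
      const double gb_per_s_per_lane = 8.0 * 128.0 / 130.0 / 8.0;
      const int widths[] = {4, 8, 16};
      for (int w : widths)
          std::printf("x%-2d: ~%.1f GB/s per direction\n", w, w * gb_per_s_per_lane);
  }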


Does anyone know where I can get detailed information about these processors? I tried searching the AMD page but the information is very scant.

Namely, do these processors support ECC, and what virtualisation capabilities do they have (for KVM with full GPU access)?


No official statement yet, but I think they will support virtualization. There hasn't been an AMD CPU in ages without AMD-Vi/IOMMU.

But lack of ECC support is a problem if you want to use virtualization for security, as Rowhammer can be used for attacking other VMs and ECC is often the last line of defense against Rowhammer.

https://fahrplan.events.ccc.de/congress/2016/Fahrplan/events...


None of the launch motherboards support ECC, so it seems doubtful that Ryzen supports ECC.


IIRC they will not reveal this until March 2.


Finally some competition. I'm sick of Intel monopoly. Good job AMD!


Micro Center is selling Intel i7 7700k for $299

Those 4 cores are stronger than the first 4 cores of the 1700X or the 1800X. If you are a gamer, then most if not all of your games will use 4 or fewer cores. Why pay more money for worse gaming performance?

https://www.techpowerup.com/230638/amd-ryzen-benchmarks-leak...


This post is pretty full of bad logic and could benefit from comparative benchmarks in place of supposition.

- One retailer's sale price on one chip is a poor way to judge the relative value of 2 entire lines of products.

- The Intel i7-7700K is normally north of $350, which is what a lot of retailers are selling it for. There is no reason to suppose that one retailer's temporary sale right now ought to be compared to the projected sale price of AMD chips.

- This price is without a cooler/paste; the AMD 1700 includes both for $329, so most would end up paying almost $400 for Intel, $70 more if you include the next item.

- Historically, Intel motherboards have been more expensive and have been worthless as far as upgrading even to the very next generation of CPU. AMD motherboards are often upgradable.

- It would be more reasonable to compare it to the 1700, which uses 2/3 the power to run twice the cores, with substantially better multi-core performance, as much as 40% better in fact.

http://wccftech.com/amd-ryzen-7-1700-gaming-performance-benc...

While single-core performance isn't quite as good, if you followed PC gaming you would know that either chip is going to be more than enough to keep up with any mid-range and most high-end GPU setups you could throw at it.

In summary, going Intel on your next mid-range to lower-high-end gaming rig will probably cost you slightly more, use a lot more power, and give you substantially worse multi-threaded performance with no better gaming performance.


That doesn't seem to be the case for modern games [0]. And this comes from a site which is sometimes called NvidiaBase or IntelBase. With the consoles having 8 (slow) cores, and Mantle/Vulkan and DirectX 12 emerging, this will probably get more pronounced over time, especially when Unreal 4, Unity and CryEngine optimize for more threads, which means even indie games should take advantage of more cores.

BTW: the 65-watt TDP Ryzen 7 1700 for $329 is matched against the Intel i7-7700K and has double the cores. Yes, it will be slower in single-threaded applications like ARMA 3 (having only Broadwell-level IPC and being clocked lower), but even at stock you will have better performance in modern games (where the GPU is more important anyway).

[0] https://translate.google.com/translate?sl=de&tl=en&js=y&prev...


FTA:

“With our overclocked 1800X sample cooled by the Noctua unit AMD provided in the reviewer’s kit we managed to surpass the 7700K in single threaded performance and the temperatures were great."

The 7700k might be a reasonable value choice for gaming today, but I have little doubt that the extra four cores of the 1800X will come into play quite soon.

Not so long ago, the same argument applied to dual vs quad core processors. Game developers didn't start optimising for four cores until there were a significant number of users with quad core processors.

We're also yet to see the performance of the hex-core and quad-core Ryzen parts. At $199, the 4c/8t 1400X could demolish the 7700k.


Core count also seems to partially fall into the classic chicken-and-egg problem. Intel has shown no interest in increasing core counts below the enthusiast-grade segment. Most people buy at most quad cores because that's what Intel offers, and games keep whatever work they can split across cores at 4 or below.


Not sure, but some gamers are YouTubers and they need to process videos. What about having an extra core or two for handling recording while playing live? That takes serious real-time processing.


When I'm streaming, I let my GPU do the video capture and compression for me (Nvidia NVEnc). You can do this with ShadowPlay if you feel like hurting yourself, but Open Broadcaster Studio supports it too.


Extra cores might help with that; Intel Quick Sync or some GPU-based video encoding might be even better.


Quick Sync is a marketing bullet point. Nobody sane uses this crap because the quality is abysmal. It's bad for streamers (they usually run a second PC just for stream compression), and bad for archiving video (quality again).


How is the single-core performance in the real world, outside of benchmarks?

AMD mentions some AI technology to improve the performance. If one runs the same software many times, will the performance change? It could be good if it learns and improves the performance, but results might not be reproducible. Is it like the Pentium 4, with its long pipelines that ideally resulted in better performance but meant more misses?

Good that AMD has something up its sleeve to compete with Intel again.


The AMD strategy seems to just be "give the customer what they want". Revolutionary indeed.


While I recognize it is a 'what is there to lose' kind of move, I think it is really awesome that AMD is unlocking all of their Ryzen parts. Seems like a win for the "I want to smoke my own processor, thank you very much" generation.


I'm maybe the only AMD fan who is going to miss the power/$ of my 8350 at $150 (or was it even cheaper?...)


I can say I don't miss the power cost on my electric bill now that I've shut down my 8350 home server/NAS box.


That 65w does look pretty impressive!


Beating a 10-core doesn't sound real, but it sure kicks the Intel 8-core's ass.


What're the chances Apple puts Ryzen into their lineup?


Power requirements aside, AMD is still lacking a Thunderbolt 3 implementation so Apple will have to stay with Intel when it comes to Mac CPUs.


I wonder if AMD's patent cross licensing with Intel is broad enough to even allow them to do Thunderbolt.


Zero.

While the performance of Ryzen is pretty good, the overall power consumption and efficiency (idle, semi-load and load) is still nowhere near Intel's. They won't sacrifice 2-3 hours of battery life on MacBooks for an AMD CPU.


How do you know this? The only Zen-based chips we have any information about are desktop parts and we don't have any power usage benchmarks.

The Raven Ridge chips, which are meant for laptops, would have the added advantage of removing the need for a discrete GPU in the 15" MacBook Pro. Switching to AMD seems unlikely but I'd be interested if they did.


Absolutely, nothing is really known at this point. The leaked power consumption results that we have show that a Ryzen engineering sample uses slightly less power than a comparable Intel CPU (i7-6900K). But we don't know if the leak is real.

So let's wait for real benchmarks (on March 2) before saying Intel is better in power consumption. But keep the following in mind: https://news.ycombinator.com/item?id=13736809


They could bifurcate their lineup and put Ryzen in the desktops, or is that silly too?


Looking at previous lineups, they tend to use the same chips for a generation. E.g. when they switched from AMD GPUs to Nvidia, they did it on all products.


If the 32 core Naples server Zen, due Q2, has anything like this kind of price/perf versus Xeon, Intel is in trouble.


Meaningless: oh look, our overclocked CPU beat a non-overclocked one that could have been overclocked.


Fake news?


Intel probably have something ready to go from 5+ years ago (that was never released because they didn't need to)...



