
I won't be replacing my i7 6800K build any time soon, but I'm very happy to see AMD looking competitive again. There hasn't been much excitement about the recent generations of i7 for good reason, and hopefully this forces them to start pushing the envelope again.



Exactly. Ridiculous Intel pricing and artificial limitations were a consequence of their de facto monopoly.

I absolutely hate how they cap maximum RAM on consumer machines, so that you need to pony up for a Xeon workstation if you need more than 64 GB.


Their worst crippling so far is ECC being unavailable even on costly consumer parts...


FWIW consumer Zen is also limited to 64GB; to go higher you'll need to step up to AMD's Xeon-esque Naples platform.


I know, but competition would hopefully raise these limits in consumer machines.


There is no free lunch. Moving to quad channel incurs more costs in pins, mobo traces, etc. It's targeted product development.


Nah, it's just artificial limits. Would totally just work if they flipped a bit in the processor. /s

Less sarcastically, what do people need >64 gigs for on a consumer desktop? I'm a dev and work on a pretty memory intensive service and don't need 128 gigs. Holding an entire double layer blu-ray in memory would only require 50 gigs.


Minecraft. That shit is heavy! On a more serious note, some gamers are using RAM disks to greatly improve performance of some games, especially modded games like Minecraft and Skyrim.
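
(For anyone curious how that works: a RAM disk on Linux is just a tmpfs mount. A minimal sketch follows, assuming a Linux box with root access; the size and all paths are made-up examples, not from any guide.)

    import os, subprocess

    # Mount a 32 GB tmpfs RAM disk (needs root), then copy a modded game
    # install into it so asset reads come from RAM instead of disk.
    # All paths here are hypothetical.
    os.makedirs("/mnt/ramdisk", exist_ok=True)
    subprocess.run(["mount", "-t", "tmpfs", "-o", "size=32G",
                    "tmpfs", "/mnt/ramdisk"], check=True)
    subprocess.run(["cp", "-r", "/games/skyrim", "/mnt/ramdisk/"], check=True)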



> Less sarcastically, what do people need >64 gigs for on a consumer desktop?

I use mine as a terminal server for some super cheap Raspberry Pi machines that are basically just monitors running VNC software.

For good performance you need about 4 to 8GB per user. Personally I don't have enough users to need 64GB, but if you have a bunch of kids, having terminals is much cheaper than a real computer for each user, plus you only need to keep one computer updated.

You could easily afford a computer in each room that way.


8-16 dedicated terminals? This is not a consumer scenario.


When did a server become a consumer desktop?


They are not artificial, but Intel seems to have big margins. Competition reduces margins.


Sure. I don't personally care if Intel's margins drop. But I also don't care if professionals making six figures have to pony up a couple hundred for a Xeon system. The idea that Intel should make casual consumers subsidize professional needs so that professionals can save a hundred bucks is absurd.


Machine learning model training. A large in-memory buffer of training data speeds up training; especially when training on the GPU, the biggest bottleneck is reading the data from disk.
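
A minimal sketch of the idea, if it helps: decode each sample from disk once and serve later epochs straight from RAM. The class name and the loader callback are hypothetical, not from any particular framework.

    import os

    class CachedDataset:
        """Dataset wrapper that keeps every decoded sample resident in RAM."""

        def __init__(self, directory, loader):
            self.paths = sorted(os.path.join(directory, f)
                                for f in os.listdir(directory))
            self.loader = loader   # function that reads and decodes one file
            self.cache = {}        # index -> decoded sample; grows toward RAM size

        def __len__(self):
            return len(self.paths)

        def __getitem__(self, idx):
            if idx not in self.cache:
                self.cache[idx] = self.loader(self.paths[idx])  # disk hit only once
            return self.cache[idx]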


Forgive me, but is training ML models really a common consumer need?


Are you kidding? I eat AI for breakfast.


i can't imagine why anyone needs more than 640k personally


Did you miss every example people gave?


The other day, I was struggling to emulate a Hadoop cluster in a series of VMs on a 16GB laptop with zero work being performed - just keeping all the services and Cloudera's management scripts and tools running made everything swap everywhere.

VMs are my biggest memory hog, particularly when they are running lots and lots of separate Java processes.


And the problem with your 16GB system was that it didn't support >64GB?


The problem with my laptop was that it didn't support >16GB; I'd take more than 64GB if I could have it.

It's usually foolish to say that we don't have need for more X in computing because nothing today warrants more than X; the thing is, if we have more of X, we find uses for it. Most of what we do can trade space for time, or incorporate things in cache hierarchies; and sometimes quantitative change adds up to a qualitative change because new architectural approaches become possible.

If I could have 1TB in my laptop, it would change the way we at my company develop software. In fact we'd rewrite our product to take advantage of that much memory, the case would be overwhelming; we'd introduce a new tier in our architecture for keeping a copy of the customer's working set (typically several GB) in memory. As it is, we rely on the database and its caching; it's OK, but it can't support all the operators we'd like to support.


Recent i7 chips support 128GB (my 6800K does).


64gb workstation ought to be enough for anybody. Jokes aside, what do you do if you are interested in >64gb workstations?


Quantum chemistry. There are a number of high accuracy wavefunction based methods that are compute-tractable on desktop machines but very demanding of storage. Even when there's a disk based alternative to RAM storage it's a lot slower when you can't just allocate huge arrays in memory. I could easily use 256 GB, maybe 512, on a machine with 8 physical cores.

You can get commercial software that is a bit better tuned for running on "ordinary" desktop hardware, but it's so expensive that you don't really come out ahead on total cost.
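
As a rough illustration of where the memory goes (a back-of-envelope sketch only; which tensors you actually keep in RAM depends on the method): anything that stores the full two-electron integral tensor scales as N^4 in the number of basis functions.

    # Rough estimate, not a statement about any particular code.
    N = 400                        # basis functions for a medium-sized molecule
    bytes_needed = N ** 4 * 8      # N^4 double-precision values
    print(bytes_needed / 2 ** 30)  # ~190 GiB before any intermediates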


Forget about desktop processors and buy yourself a double-socket LGA2011v3 board and some engineering-sample Xeons. Then you can address that kind of memory and run ECC.

The clocks are a little lower than the retail Xeons, and you do need to pay attention to compatible boards/BIOS versions, but you can get a big Xeon E5 for like $200-300.


Thank you for the suggestion. I am surprised how cheap these can be. It has been years since I assembled a system from parts so I had been looking at complete workstations from vendors, but for that much cost saving it is worth considering parts-assembly.


> Forget about desktop processors and buy yourself a double-socket LGA2011v3 board and some engineering-sample Xeons. Then you can address that kind of memory and run ECC.

If you follow this very thread, you'll notice that another user already complained that "you need to pony up for a Xeon workstation if you need more than 64 GB."


"Ponying up" here means, in context, Xeon processors that are several thousand dollars each. In contrast, the grandparent was wishing that he could use a Zen that cost several hundred dollars instead.

I was pointing out that there were other processors already on the market that met his requirements: Xeon engineering samples, which most people don't think about when they are talking about the costs of high-end Xeon workstations.

(This is partially because they are not "finished" products like retail chips. Clocks are lower, and you need to use motherboards/BIOSs from a specific compatibility list, and they are not something you can source from Intel. Thus, they slip many people's minds because they aren't something you would consider for an actual server deployment - but they are perfect for a workstation with specialty needs that needs to hit a tight budget)

So yes, I did actually read the thread. You just didn't grasp enough of the context to make a sensible response and figured it was the perfect time for a superslam "did you even read bro?" gotcha response.

Typically this sort of response is not welcome on HN. If you don't have something substantive to post, please don't post at all.


Honestly that application sounds like you should be able to just get that 512G machine. Buffered ECC DDR4 RAM has gotten cheaper over the last year too.


If I still did research full time I'd probably get a Xeon workstation loaded with RAM for home. But it's just a hobby for me now that my day job is writing non-scientific software, so it doesn't seem like a justifiable expense. Might go up from my current 4 cores and 32 GB to something Zen-based later this year.

(Tangentially: I wouldn't bother with ECC RAM for this application unless that were required by the rest of the hardware. Memory errors may slightly affect time to a converged solution, but it's rare that a flipped bit would actually cause a job to converge to a different solution.)


If this is just a hobby why not rent a large VM from us (GCE) or AWS? It sounds like you don't run enough to want a box for anything like 24x7, but that it'd be great to have tons of memory and lots of cores (or a giant SSD). We just announced our Skylakes yesterday ;).

Disclosure: I work on Google Cloud.


I work on NLP projects and have a similar issue with memory.

The reason I don't use cloud computing is that you constantly have to worry about moving data around, turning stuff on and off, API changes, data movement costs, product lineup changes (should I store this stuff on S3 or local HD), locally compiling all your stuff again or containerization and associated time overhead.

A local large box makes way more sense if you're doing solo projects. And if traveling, it would make way more sense to rent a VPS from Hetzner. Flat cost structure and tons of bang for the buck.

BTW, you can get 128 GB LGA2011-v3 motherboards.

https://www.newegg.com/Product/Product.aspx?Item=N82E1681315...

and it seems like it works: http://www.pcworld.com/article/2938855/hardcore-hardware-we-...

even though the spec says max 64gb: https://ark.intel.com/products/82932/Intel-Core-i7-5820K-Pro...


I can confirm, I run machine learning workloads on a home machine with 128GB of RAM on an X99-E WS/USB 3.1 motherboard. My manual explicitly states that it has 128GB RAM support, as well.


Some examples that can easily push you beyond 64 GB:

- high resolution photo editing: when you start with an 80 mpix / 48-bit photo from a medium format camera, pushing beyond 16 GB requires only a couple of layers and a couple of undo steps

- high resolution video processing: 1 second of uncompressed 4K 60fps video is almost 1.4GB (rough arithmetic below)

- very large application compilation: building Android requires 16GB of RAM/swap, and I'm sure there are apps that push that requirement even further

- development environments for complex systems that require you to run dozens of VMs if you want all components running locally (I've once had to run 2 VMs with 12GB requirements each)
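
To make the video number concrete (assuming 8-bit RGB; 10- or 12-bit footage would be proportionally larger):

    # Uncompressed 4K (3840x2160), 3 bytes per pixel, 60 frames per second.
    width, height, bytes_per_pixel, fps = 3840, 2160, 3, 60
    per_second = width * height * bytes_per_pixel * fps
    print(per_second / 2 ** 30)    # ~1.39 GiB for each second of footage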


If you can afford a $30,000 medium format camera, you can afford the Xeon workstation it usually needs. The kind of person that works with those systems would pay $6,000 for an umbrella that doesn't even have any electronics in it: https://www.adorama.com/bcb3355203.html?gclid=CLKvmd2erNICFY...

Absolutely no one that owns one of these professional cameras uses them with non-Xeon CPUs.

Really the only use for greater than 64GB on non-Xeon CPUs would be student animation or machine-learning projects.


It's 25 pounds, so not really an umbrella. It's like calling Xeons just sand.


While still true, it might not be for long: "advanced amateur" small format SLRs with 36mpix @ 42bpp now start below $2000, and 24mpix @ 36bpp sits in the lower part of the $500-1000 bracket.

https://www.adorama.com/ipxk1.html https://www.adorama.com/inkd810.html https://www.adorama.com/inkd7100.html

Which brings up a slightly unrelated note: people seem to be blissfully unaware that you need less than 9mpix for a 12x9" 300 dpi print, or that the zoom lens they bought with the camera has sharpness that effectively limits the resolution to 5-10, maybe 12 mpix if they are lucky. But megapixel count is easy to sell -> race for more megapixels -> smaller physical pixels -> more signal amplification needed -> more noise / generally lower photo quality.


I used to shoot 33-megapixel medium format cameras 10 years ago, on 8GB machines with 1GB GPUs, just fine, doing multilayer Photoshop on 48-bits-per-pixel TIFFs.

64GB is completely unnecessary for photography. Photographers don't even go over 16GB.


Just because someone can afford to rent a $30,000 camera doesn't necessarily mean they have unlimited cash to burn on other things.


No, but the difference between i7 and Xeon is significantly less than what you'd have to pay for that extra RAM. We're talking O($100).


So O($1)?


If you are buying that level of professional gear, it's probably for a profession. The rates you are charging ought to include the costs of all necessary equipment.


It's about $2000/week to rent an IQ180. If you can afford the rental price, you can afford a computer to process the images.


That's not how money works. Just because you can afford one expensive luxury doesn't mean you can afford all the other expensive stuff. Sometimes you sacrifice all the other things to get that one important expensive thing.


Also, the camera rental cost goes to the client. The computers to process the images are generally owned by the company doing the work. Big difference. I will regularly spend $1-2k on cameras for ~1 week, and pass that on, but still use our own gear for processing.


"I can't afford the premium gasoline my Ferrari needs."


Pro photographers don't do that.


That still doesn't follow. Having a certain amount of money doesn't imply that you have even more money, nor that you want to spend more than is truly necessary.


Sure it does. If someone can't afford the machine necessary to process the images that they got from the camera they spent thousands to rent, the problem isn't actually money. It's that they're an idiot.

You don't buy/rent a camera you can't afford to process images from just like you don't buy a car you can't afford to service and you don't buy a house you can't afford taxes for. Or if you do, I have little to no sympathy.


The specifications from the client are quite particular, and usually only one or a few cameras meet the spec.

The computers often are owned by the company renting the gear, where the end customer pays for the camera rental.

So it's not so simple. And one thing - go easy on using the word idiot... I work with some very smart people who fit your description of idiot on a regular basis.


You're a professional whose clients are paying multiple thousands of dollars for photography, and you cannot afford the hardware to process their images? Color me skeptical.

If it's really true, then your pricing is clearly wrong.


Having >64GB is far from necessary to edit any kind of multimedia, but it would be helpful to have. If your goal is to capture high-quality imagery on a $2,500 budget, the camera is more important than the RAM. It's just that you might end up with $500 unspent if you can't afford both a $500 RAM upgrade and the whole new computer you'd need to use it. There are far more photographers and indie filmmakers in a situation where they have to make these kinds of tradeoffs than the tiny minority making tons of money.


Rent an Azure G5 and remote in perhaps?


I am an FEA consultant and work with Abaqus and LS-Dyna. I need >64 GB now.


What camera is capturing 16GB photos? An IQ180 captures 80 megapixels but the images are something like 500 MB each.

All of your hypothetical examples are also for very niche, high-resource-usage professionals. It's absurd to expect consumer processors to support extremely high memory for these cases. Intel is not obligated to subsidize anyone, especially not people who can afford to pay for high end systems.


To address the core of your argument: no one is saying Intel should subsidize anything. The assertion is that using monopoly status to do additional work to gimp functionality in one product line and artificially drive up the price of another is at least abusive in the short term, probably makes Intel less competitive in the long run, and increases the volatility of the market.


> The assertion is that using monopoly status to do additional work to gimp functionality in one product line and artificially drive up the price of another

1. Are they actually doing that? I'll be honest, I don't know for sure, but I do know that more capacity rarely comes for free. I assume that supporting 256GB of memory efficiently relative to 64GB requires either a faster clock speed or some more pins. Is Intel "gimping" the consumer product or simply not building in the extra capacity?

2. Is it necessarily a bad thing if they are? If consumer demand peaks at, say, 32GB and professional demand reaches, say, 512GB, Intel could develop two entirely different architectures, which seems wasteful and more costly to everyone. They could ship just one chip with professional capacity supported, which drives up consumer costs effectively subsidizing professionals (because professionals are no longer paying the premium for the "pro" chip; they're just buying the consumer one). Or they could ship a consumer chip that doesn't support professional needs and ship a professional chip at a price that makes pros pay for the capacity Intel had to engineer for them.

The last option seems like a good option for everyone except the people who think everyone else should pay a premium for unnecessary pro support so that a few people can get cheaper pro chips.


I think he meant 16GB memory used when editing. Not the filesize.


Same question still applies.


You asked in another sub-thread "what do people need >64 gigs for on a consumer desktop?" but jeff_vader started this sub-thread asking "what do you do if you are interested in >64gb workstations?"

AFAICT nobody on this sub-thread is claiming that consumer desktops should cater to our niche use cases. We're just offering examples of application domains where a high RAM-to-CPU ratio is useful.


You're right. I was following up on the "I absolutely hate how they cap maximum RAM on consumer machines" comment further up the chain but jeff_vader's question pivoted the conversation and I missed the memo. There are definitely cases where professionals need (or can greatly benefit from) >64GB workstations and I don't dispute that.


Not really, since it's the editing applied on top of the base image that could push it that high. I think the question you'd be looking for is, "what amount of photo editing, effects, etc. would make the editing process of an 80mpix photo consume >16GB memory?"

Additionally, as for your original question: For hobby or even 'normal' professional photography, I'd guess none. But rigs like the ones used for Gmaps Street View, the recent CNN gigapixel, etc. would probably have the capability to approach that size.


I actually want to know what camera is capturing 80 megapixels at 48 bits per pixel. The IQ180 is 16 bits per pixel. Where's the camera that triples this? Or are we doing Bayer interpolation in a way that requires 48 bits per pixel for some reason? Because there definitely isn't 48 bits of actual signal there.

And yes, I also want to know what turns a 500MB photo into >16GB in memory. That's 32 full-res copies of the photo in memory. Just "a couple layers and a couple undo steps", really?


Bits per pixel: A lot of small frame SLRs (your Canons and Nikons) offer 12 or 14 bits per channel for a combined 36 or 42 bits per pixel. Medium format digital backs now offer 16 bits per channel.

Layer size: Layers add more data on top of this: alpha channel (8-16 bits/pixel), clipping mask and maybe half a dozen other things, easily bumping the whole thing from 6 to 9 or 10 bytes per pixel.

Layer count: It's fairly common to have multiple layers that are copies of the original photo, with different processing applied, blended into the final one. I'm very much an amateur, and when I'm serious about just making a photo look good (not even creating something new) I end up with around 10 layers.

Undo step memory: a lot of work in the photo processing workflow is global: color correction, brightness, contrast, layer blending settings and modes, and filters (including sharpening or blur) apply to every pixel of a layer. Each confirmed change (by releasing the mouse button / lifting the stylus / confirming a dialog) is likely to have to store an undo step for the entire layer.

Of course you can persist some of that to disk, but if a single layer/undo step can be 800MB this will hurt productivity. Only very recently have we gotten drives that can write this much fast enough; that's why, just a couple of years ago, when having enough RAM was not really an option, a lot of pro photographers ran 10 or 15k RPM HDDs in RAID0 in their workstations.
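
Putting those numbers together (a rough sketch; the per-layer overhead and undo-step count are illustrative guesses, not Photoshop internals):

    pixels = 80e6                    # 80 mpix frame
    base_layer = pixels * 6          # 3 channels x 2 bytes -> ~480 MB
    working_layer = pixels * 9       # + alpha, mask, misc. -> ~720 MB per layer
    layers, undo_steps = 10, 15
    total = base_layer + (layers + undo_steps) * working_layer
    print(total / 2 ** 30)           # ~17 GiB before the application's own overhead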


I replied about the 48 bit thing to the sibling comment. That may be how the Bayer interpolation is done so I won't argue that.

For the layers/undo steps, the "couple" of each was not my phrase. It was yours. If by "a couple" you actually meant dozens, then sure, maybe an 80 mpix photo really needs that much memory to process.


> I actually want to know what camera is capturing 80 megapixels at 48 bits per pixel. The IQ180 is 16 bits per pixel...

Presumably that's 16-bits for each channel of Red, Green, Blue. Since every pixel is composed of the three RGB components, that's how you get 48-bits for every pixel (and internally, Photoshop refers to 16-bits per color channel as 'RGB48Mode').


There aren't actually three channels per pixel on most sensors. It's one channel (and only one color) and then there's a mixing process. I'm doubtful that you need 48 bits to do that mixing correctly, but maybe that is the standard way to do it.


I use a sound sampler (Hauptwerk) that loads tens of thousands of wave files and needs a polyphony (simultaneous samples being played) of thousands of voices to simulate a pipe organ. The minimum configuration for the smallest instrument (one voice) with natural acoustics (i.e. the original ambient reverberation recorded) is 4GB. The normal requirements for some good instruments are 64-128GB for 24-bit 6-channel uncompressed audio.
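
A rough sketch of where a figure like that comes from (the sample rate, per-pipe length and file count below are illustrative guesses; only the 24-bit / 6-channel figures come from the description above):

    sample_rate = 48_000              # assumed
    bytes_per_frame = 3 * 6           # 24-bit samples x 6 channels
    seconds_per_sample = 10           # note plus release with its reverb tail (guess)
    wave_files = 15_000               # "tens of thousands" of pipe recordings (guess)
    total = sample_rate * bytes_per_frame * seconds_per_sample * wave_files
    print(total / 2 ** 30)            # ~120 GiB if everything is held uncompressed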


> 24-bit (...) uncompressed audio

Do you really need this much headroom, e.g. for producing music professionally? (If so, why are you arguing over the $50-100 extra for a Xeon?)

Because for listening, it's a proven fact that even professional musicians and mixing engineers using equipment costing >$100k can't tell 24 bit from 16 bit in a proper double-blind A/B test. Presumably you use a high sample rate as well (maybe 192 kHz), which is equally useless. Drop both of those, and you're within 64GB easily with no noticeable SQ loss.


For editing, you want to use the highest possible starting bandwidth (sampling rate * bit width) because every intermediate edit and calculation leads to round-off and other errors. You also need them to be WAVs or some other raw format since you cause minor errors every time you compress + decompress something, not to mention the performance loss.

It's the same rationale for doing calculations in full 80-bit or 64-bit FP even if you don't need the full bits of precision -- if you start your calculations at your target precision, your result will have insufficient precision due to truncation, roundoff, overflow, underflow, clipping, etc. in intermediate calculations.

In theory, if you knew what your editing pipeline is, one could mathematically derive the starting headroom required to end up with acceptable error, but in practice that's probably a very risky proposition because 1) you don't know every intermediate calculation going on in every piece of software and 2) errors grow in highly non-linear ways, so even a small change in your pipeline might cause a large change in required headroom [1].

[1] https://en.wikipedia.org/wiki/Numerical_analysis#Generation_...
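
A toy illustration of the round-off argument (not any real DAW's pipeline, just two ways of applying the same chain of gain edits):

    import math

    def quantize16(x):
        """Round to the nearest 16-bit sample value, with clipping."""
        return max(-32768, min(32767, round(x * 32767))) / 32767

    signal = [0.1 * math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
    gains = [0.5, 3.0, 0.2, 2.5]      # an arbitrary chain of edits

    ideal = signal[:]                 # full-precision reference
    stepwise = signal[:]              # quantized to 16 bits after every edit
    for g in gains:
        ideal = [s * g for s in ideal]
        stepwise = [quantize16(s * g) for s in stepwise]
    final_only = [quantize16(s) for s in ideal]   # quantized once at the end

    err_stepwise = max(abs(a - b) for a, b in zip(stepwise, ideal))
    err_final = max(abs(a - b) for a, b in zip(final_only, ideal))
    print(err_stepwise, err_final)    # the per-step pipeline drifts several times further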


Yes, that's exactly what I was trying to ask. Does OP really need 24 bits because they are producing music professionally? In that case, $100 on the i7->Xeon upgrade is a cheap no-brainer. If they are not producing audio professionally, I'm saying they probably can't tell the difference between 16bit and 24bit audio.

Also, you don't cause errors when you compress and decompress using a lossless compression format (as the name sort of implies).


> If they are not producing audio professionally, I'm saying they probably can't tell the difference between 16bit and 24bit audio.

That's not how it works. You can totally hear clipping or noise even as an amateur, just as an amateur photographer learns early on to look out for overlit skies and highlights that aren't really recoverable at all unless they have access to the camera's RAW files. But unlike in photography, in sound work you typically need to process as well as sum multiple recordings, all while not blowing out the peaks that might just happen to line up.

If anything, dealing with the large dynamic range of 24-bit is easier for the amateur. An experienced pro would probably have a good bunch of tricks in his bag should he really have to mix music in 16 bits, enough to produce something that doesn't sound horrible. The amateur would be more likely to struggle.


I understand that in OP's case, someone else (being a professional) has already recorded and mastered the sound of a real live pipe organ. That person of course benefits from working at 24 bit, but once he's done mastering, OP is "just" playing back the mastered audio and should be fine with 16 bits, I think.


But OP is not just playing back a single mastered recording, it's thousands of simultaneous samples that get mixed together live, presumably. In that case the professional who recorded all these samples can do nothing to prevent the peaks from lining up during playback.


Would using fixed-point arithmetic[1] help with these issues at all?

[1] - https://en.wikipedia.org/wiki/Fixed-point_arithmetic


Fixed-point arithmetic essentially gives you more resolution at the cost of not having a dynamic range, so in practice you reduce some round-off errors, but overflow errors are much more likely. Overflow ends up as saturation in the context of audio editing and usually sounds like really bad clipping.

In practice, whether fixed-point or floating-point is better depends on the operations you plan to use and on whether you have a good idea ahead of time that you'll be able to stay in a fixed range [1].

[1] The loss of 'footroom bits' corresponds to the extra resolution lost whenever the FP mantissa increases - http://www.tvtechnology.com/audio/0014/fixedpoint-vs-floatin...
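
A toy sketch of the saturation problem (16-bit fixed point standing in for "no dynamic range"; the gain values are arbitrary):

    import math

    FULL_SCALE = 32767

    def to_fixed(x):
        """Saturating 16-bit quantizer: values past full scale simply clip."""
        return max(-32768, min(32767, round(x * FULL_SCALE)))

    signal = [0.9 * math.sin(2 * math.pi * 440 * n / 48000) for n in range(480)]

    # Fixed-point path: a 2x boost clips the peaks, and a later 0.5x cut
    # cannot bring the lost information back.
    boosted = [to_fixed(to_fixed(s) * 2 / FULL_SCALE) for s in signal]
    fixed_result = [x * 0.5 / FULL_SCALE for x in boosted]

    # Float path: same gains, quantize only at the very end.
    float_result = [to_fixed(s * 2 * 0.5) / FULL_SCALE for s in signal]

    print(max(abs(a - b) for a, b in zip(fixed_result, float_result)))  # ~0.4 at the clipped peaks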


Yes. Bit depth is almost entirely about dynamic range. Run out of bits at the top of the range and you get clipping; run out of bits at the bottom end and the signal disappears below the noise floor of dithering.

For mixed and mastered music, the ~120dB perceived dynamic range of properly dithered 16-bit audio is more than sufficient. If you're listening at sufficient volume for the dither noise floor to be higher than the ambient noise floor of the room, you'll deafen yourself in minutes.

For production use, the extra dynamic range of 24 bit recording is invaluable. You're dealing with unpredictable signal levels and often applying large amounts of gain, so you'll run into the limits of 16-bit recording fairly often. In a professional context, noise is unacceptable and clipping is completely disastrous. Most DAW software mixes signals at 64 bits - the computational overhead is marginal, it minimises rounding errors and it frees the operator from having to worry about gain staging.

You can probably get away with 16 bit recordings for a sample library, but it's completely unnecessary. Modern sampling engines use direct-from-disk streaming, so only a tiny fraction of each sample is stored in RAM to account for disk latency. The software used by OP (Hauptwerk) is inexplicably inefficient, because it loads all samples into RAM when an instrument is loaded.
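
For reference, the usual rule of thumb behind those numbers: each bit buys roughly 6 dB of dynamic range, and the ~120 dB perceived figure for 16-bit comes from noise-shaped dither on top of that.

    # Quantization SNR of an N-bit full-scale sine: ~6.02*N + 1.76 dB.
    for bits in (16, 24):
        print(bits, "bit:", round(6.02 * bits + 1.76, 1), "dB")
    # 16 bit -> ~98 dB, 24 bit -> ~146 dB (beyond what any converter or room delivers)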


> Because for listening, it's proven fact that even professional musicians and mixing engineers using equipment costing >$100k can't tell 24 bit from 16 bit in a proper double-blind A/B test.

Digital samples can and will generate intermodulation distortion, quantization and other audio artifacts when mixed. Using 24-bit lessens or eliminates the effect versus 16-bit.

Experiments and results on this software are here: http://www.sonusparadisi.cz/en/blog/do-not-load-in-16-bit/


Most people who believe this is a proven fact have never worked in pro audio and know a lot less about it than they assume.

While people here like to quote that infamous looks-like-science-but-isn't Xiph video, the reality is that a lot of professional engineers absolutely can hear the difference.

If you have good studio monitors and you know what to listen for the difference between a 24-bit master and a downsampled and dithered 16-bit CD master is very obvious indeed, and there are peer reviewed papers that explain why.

Dither was developed precisely because naively interpolated or truncated 16-bit audio is audibly inferior to a 24-bit source.

Many people certainly can hear the effect of dither on half-decent hardware, even though it's usually only applied to the LSB of 16-bit data. From a naive understanding of audio a single bit difference should be completely inaudible, because it's really, really quiet.

From a more informed view it isn't hard to hear at all - and there are good technical reasons for that.

For synthesis, you don't want dither. You want as much bit depth as possible so you can choose between dynamic range and low-level detail. So 24-bit data is the industry standard for orchestral samples.


> Dither was developed precisely because naively interpolated or truncated 16-bit audio is audibly inferior to a 24-bit source.

Dither was developed because it's technically better. Audibly inferior, though? Perhaps, with utterly exceptional source material (e.g. test tones) and when the output gain is high enough for peak levels to be uncomfortably loud.

In reality, many recording studios have a higher ambient noise level than the dither, making it redundant in practice — the lower bits are already noise, so audible quantisation artefacts weren't going to happen anyway. That said, dithering is pretty much never worse than not dithering, and almost all tools offer it, so everyone does it.

24 bits is important because it gives the recording engineer ample headroom, and it gives the mixing and mastering engineers confidence that every last significant bit caught by the microphone will survive numerous transformation stages intact. Once the final master is decimated to 16 bits per sample, you know that your 16 bits will be as good as they could have been.


A high noise floor is not even close to being the same as dither, never mind noise shaping.

Have you actually heard the difference between simple dither, noise-shaped dither, and non-dithered 16-bit audio? A test tone is the worst possible way to hear what they do.

24-bit audio is used because you want as clean a source as possible.

This also applies to mastering for output at the other end. With the exception of CD and MP3, most audio is delivered as 24-bit WAV at either 44.1, 48, 88, or 96k.

Even vinyl is usually cut from a 24-bit master. Here's a nice overview of what mastering engineers deliver in practice:

https://theproaudiofiles.com/audio-mastering-format-and-deli...


> A high noise floor is not even close to being the same as dither, never mind noise shaping.

Dither is noise. Well chosen noise, very quiet noise, but noise nonetheless. Whether the signal is noisy for one reason or another, the consequences at the point of decimation/quantization are the same. Either way the least significant bits are filled with stochastic values and the desired signal isn't plagued with quantization noise artifacts.

> Have you actually heard the difference between simple dither, noise-shaped dither, and non-dithered 16-bit audio? A test tone is the worst possible way to hear what they do.

...is the sort of thing someone who hasn't done a blinded listening test would say. Stop assuming the commercially successful "experts" are also technical experts, because few are. I doubt more than a tiny fraction could describe what a least significant digit is.

(Cutting vinyl, a hilariously lossy process that requires compression and EQ to avoid the needle jumping the groove, doesn't need a 24 bit master. Barely needs a 14 bit master. But since an extra hundred megabytes of really accurate noise floor doesn't make anything worse, they do it anyway.)


There are professional audio "engineers" who claim they can hear the difference in the types of audio cables used. And when you challenge them on it, rather than defend it or demonstrate it, they'll just name-drop musicians they've worked with... most of whom are producing albums these days that sound like garbage because of the loudness war.

Their words mean very little.

I totally get going for 24 bit over 16 for the noise floor, though.


I knew some audio engineers who had different cables (Cardas copper, pure silver, some others I can't remember) on identical headphones (Beyerdynamic), and everybody agreed you could hear not-insignificant differences that they would describe as sharper transients, tube-like saturation, etc. I consider that they were pro "engineers"; they designed hardware and software for Avid.


Undoubtedly. Such obvious delusions are commonplace and don't necessarily correlate to one's scientific literacy or electrical engineering skills.

Such audible differences constitute a testable claim. So far, the number of claims that have withstood reasonable test conditions is zero.


Do you subscribe to the AES journal? This explains why 16-bits isn't enough:

http://www.aes.org/tmpFiles/elib/20170226/18046.pdf


Nice 404.

Nobody is arguing that 16 bits is enough during the recording and mixing process. If they are claiming that 16 bits is insufficient for consumer distributed music, they need to go back to remedial science class.


While many people can hear a difference, and while one is technically better... most couldn't tell you which is actually better. There have been many blind tests on this.

I'm all in favor of keeping sample sizes better.. but raising the price for everyone by 10-20%, so that less than 1% can take advantage of it is a bit ridiculous.


Genomics. E.g. building a genome index of k-mers (n-grams) usually requires more than 64 GB.

It can be quickly done on a consumer workstation, but you do need 128 GB in many scenarios.
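
A very rough sketch of why such an index gets that big (all of the constants below are illustrative guesses; real tools use far more compact structures, but 64 GB still disappears quickly):

    distinct_kmers = 2.5e9            # most 31-mers in a ~3.1 Gbp human genome are distinct
    bytes_per_entry = 8 + 8 + 16      # packed 31-mer key, count, hash-table overhead (guess)
    print(distinct_kmers * bytes_per_entry / 2 ** 30)   # ~75 GiB for the index alone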


Who is doing this and can't afford a Xeon system? And why should Intel and AMD be making everyone else pay more so that these people can pay a little less?


Grad students :). More specifically, a professor with a research grant can afford to fund N grad students where N is inversely proportional to equipment cost.

The complaint is not about adding the capability to all chips. It's about smoothing out the step function between consumer machines and professional workstations.


The steps from consumer to professional seem pretty smooth. The higher end consumer machines cost more than the lower and professional machines. You can buy SkyLake Xeon processors from Newegg for as low as a couple hundred dollars.


Maybe to play around with genome data?

http://i.imgur.com/UYqaVoD.jpg


You don't need a Xeon for more than 64GB of RAM. I have a 5820K with 128 GB of RAM on an X99 mobo. Runs very well for its price, though it's a little power hungry.


Yup, I have the same. But the spec sheet says 64GB: https://ark.intel.com/products/82932/Intel-Core-i7-5820K-Pro...

Which is what confuses people. I am actually not sure what that max memory refers to. Possibly max per module?


The recent generations can address 128GB as long as your motherboard supports it (my X99 board does).


This is false; I have an i7-6900K and 128GB of RAM on an X99 motherboard. You could do the same with a significantly cheaper CPU, too.


Are you sure? I bought an i7-6850K for a personal GPU server, and 'top' sees all 128GB that I put in.

https://www.newegg.com/Product/Product.aspx?Item=N82E1681911...

I see an i7 6950X should also support 128GB:

http://www.overclockers.com/intel-i7-6950x-broadwell-e-cpu-r...

I wish it had ECC, but you can't have everything.


With that much RAM, you'd be insane not to use ECC, which is another good reason for Xeon.


Or AMD, which supports ECC across all (most?) CPU lines historically. Again, the argument is about limitations of Intel consumer lines.


Question is, is buying those historical AMD CPUs which support ECC a good idea?

So far, none of the modern ones are confirmed to support ECC.


You won't be replacing your i7 6800K anytime soon, but will you be changing your username to skylake?


How dare you make a joke on HN?


They will more likely just sell at a loss to undercut AMD; it will cost them less than R&D.


If most of the outlay on tooling changes has already been paid off, it won't necessarily be a loss, just much thinner margins.



