The main difference between white and pink noise is that white noise has the same power at every frequency (constant power per hertz), whilst pink noise has the same power in every octave.
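A quick way to see what "equal power per octave" means: with white noise (equal power per hertz), each octave band spans twice the bandwidth of the one below it, so it carries twice the power. A small NumPy sketch (frequencies here are normalized, not tied to any particular sample rate):

```python
import numpy as np

# White noise has equal power per Hz, so each octave band (f to 2f)
# covers twice the bandwidth of the octave below it, and therefore
# holds roughly twice the power (+3 dB per octave on an analyzer).
rng = np.random.default_rng(0)
n = 1 << 18
x = rng.standard_normal(n)              # white noise, unit variance
spectrum = np.abs(np.fft.rfft(x)) ** 2  # power per frequency bin
freqs = np.fft.rfftfreq(n)              # normalized 0 .. 0.5

def octave_power(f_lo):
    """Total power in the octave band [f_lo, 2*f_lo)."""
    band = (freqs >= f_lo) & (freqs < 2 * f_lo)
    return spectrum[band].sum()

low = octave_power(0.01)    # one octave
high = octave_power(0.02)   # the octave above it
print(high / low)           # ~2: each octave holds about twice the power
```

Pink noise is exactly the spectrum shaped so that ratio comes out to ~1 instead.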
There's far more bandwidth in the high octaves, so white noise naturally sounds considerably higher pitched.
e.g. a C6 on a keyboard sits at around 1 kHz (1046.502 Hz to be exact) and a C#6 is 1108.731 Hz (a difference of 62.229 Hz),
while a C3 is 130.8128 Hz and a C#3 is 138.5913 Hz (a difference of only 7.7785 Hz).
Then using an entire octave as an example: C3 (130.8128 Hz) to C4 (261.6256 Hz) spans 130.8128 Hz (the same width as everything from 0 Hz up to C3),
vs C6 (1046.502 Hz) to C7 (2093.005 Hz), which spans 1046.503 Hz (everything from 0 Hz up to C6, give or take 0.001 Hz).
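Those figures all fall out of the equal-temperament formula f(n) = 440 · 2^((n − 69)/12), where n is the MIDI note number and A4 = 69 is the 440 Hz reference. A short Python sketch reproducing the numbers above:

```python
# Equal-tempered note frequency from a MIDI note number (A4 = 69 = 440 Hz).
# Every octave doubles the frequency, so semitone gaps grow with pitch.
def note_freq(midi_note):
    return 440.0 * 2 ** ((midi_note - 69) / 12)

c3, cs3 = note_freq(48), note_freq(49)   # C3, C#3
c6, cs6 = note_freq(84), note_freq(85)   # C6, C#6

print(round(cs3 - c3, 4))            # ~7.7785 Hz between C3 and C#3
print(round(cs6 - c6, 3))            # ~62.23 Hz between C6 and C#6
print(round(note_freq(60) - c3, 4))  # octave C3 -> C4 spans ~130.8128 Hz
```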
The orders of magnitude involved in power/loudness are pretty astonishing.
When introducing decibels to new audio engineers, we generally teach that a 3 dB increase is a doubling in power, and a 10 dB increase is a 10x increase in power.
It gets silly when you start talking sound pressure level, because a 10x increase in output power is perceived as only about a doubling of loudness.
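The underlying relationship is just dB = 10 · log10(P2/P1). A quick Python check (note the "perceived loudness doubles per 10 dB" part is a psychoacoustic rule of thumb, not something the math itself gives you):

```python
import math

# Decibels compare power ratios: dB = 10 * log10(P2 / P1).
def db(power_ratio):
    return 10 * math.log10(power_ratio)

print(round(db(2), 2))    # ~3.01 dB: doubled power
print(round(db(10), 2))   # 10.0 dB: tenfold power, ~twice as loud
# So "twice as loud" costs roughly ten times the amplifier power,
# and "four times as loud" (20 dB) costs a hundred times the power.
print(round(db(100), 2))  # 20.0 dB
```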
The question was whether audio displays apply a correction so that pink noise renders as a flat horizontal line, whereas a general test-and-measurement spectrum analyzer would show a tilted line.
I'd say it depends on how much they're willing to dive into "growth" mode for the company. If they're willing to spend those Microsoft dollars on getting product usage embedded everywhere, then sacrificing some short-term monetary gain for businesses built around your product would be valuable.
I think the arguments against fall into two trains of thought.
First, some people are worried about copyrighted or private material being included in the training data.
I've not read up on Copilot recently, so I'm not sure whether that's a reasonable thing to be worried about.
The other is that people might be using GitHub to share what they've come up with with other developers, but having an AI parse that information creates a disconnect between giver and receiver. It removes a chunk of the possible feedback loop, so rather than a community of developers it becomes something more akin to content creators and lurkers.
That's not necessarily a bad thing, since it opens things up to a far larger number of people who end up using something, but it would minimize community feedback.
An alternative for a local network is running NDI.
That's how we stream a bunch of remote cameras (and even computers on the network) into visual displays at events.
Also, NDI-native cameras are slowly becoming a thing. I'm really in love with the Logitech Mevo. It's targeted at use with phones, but it works great with computers and OBS too. Drop-dead simple to use, and with the PoE kit very, very stable. I'd go with something like it over a mirrorless-camera hack any day.