Mark Russinovich justified it by explaining that the network interrupt routine was just too expensive to guarantee glitch-free media playback, so it was limited to 10 packets per millisecond whenever any media was playing:
but obviously this is a pretty crappy one-size-fits-all prioritization scheme for something marketed as the most sophisticated, best-ever OS of its time:
Many people had perfectly consistent mp3 playback while copying files over the network ten times as fast in other OSes (including Win XP!)
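As a rough back-of-the-envelope (assuming full-size ~1,500-byte Ethernet frames), 10 packets per millisecond works out to about 15 MB/s, or roughly 120 Mbit/s, so any transfer faster than that was visibly capped whenever audio was playing.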
Often a company will have a "sophisticated best-ever algorithm" and then put in a hacky, lazy work-around for some problem, and obviously doesn't tell anyone about it. Sometimes the simpler, less sophisticated solution just works better in practice.
(The anecdote really starts at 2:30).
To some extent he's onto something. Sometimes you are in this zone where everything will work if you just do this one quick little hack, and then the hack never gets replaced with a proper solution, and you end up with a hard drive right next to a speaker >.<
So it's important to figure out what kind of manager you have, and try not to be the slow guy on a team full of quick-hack people. You won't look good and nobody will be happy.
I'd also add that it's important to understand the context in which your team is working.
If I were managing this hypothetical team in a pre-product-market-fit startup I could see the 'slow' person as a potentially greater risk than the others. Not because of speed per se, but because he may be investing too much time in directions that don't help the company learn about their market, and he may be building grand architectural visions that are hard to delete when we realise the product is going in a direction that person didn't anticipate.
On the other hand if I were managing the same hypothetical team in a highly defined context, for example a mature product or an open source library with a large userbase, the 'quick hack' people would need to change their ways.
Obviously this analysis is incredibly shallow and would need a ton of conversation and observation; I'm just making the point that different phases of product-market fit require very different approaches, and it's worth being aware of that.
isn't that called 'cultural fit'?
Now, fixing these issues can often lead to really lean, mean software that flies, but if you're in the all-too-typical situation of overambitious deadlines, then you're already in triage mode and long, hard fixes are just about at the bottom of the pile.
Of course, I fully agree. And yet I have witnessed enough developers stating the first stage almost verbatim as a response to a complaint that I grew really quite fond of the link.
It is, actually, quite useful in that regard - it allows you to point out the absurdity of the statement in a humorous way.
That, and it wouldn't be a good parody of the Kübler-Ross model if it didn't start with Denial :-)
(This is all very vague memory. I do know how this stuff works on Linux, and Linux does not have interrupt priorities.)
Interrupt prioritization doesn't help much, because the sound data is likely being generated in user mode, while the playback-complete interrupt is handled in kernel mode, which requires a transition back to user mode for further processing. When receiving data from a network card, no transition back to user mode is required, so network processing has implicit priority over sound generation. Therefore, network processing is likely to starve out sound generation barring choices like those described.
(This is somewhat hearsay, corrections welcome :))
Plus, typically, general-purpose OSes don't map hardware interrupt priority mechanisms to their own priorities (and very soon the hardware interrupt controllers do not even know whether an ISR is running or not). I believe neither Linux nor Windows can benefit from hardware assistance for interrupt priorities (and they mostly don't care at the software level anyway).
Mostly PnP decides the levels, unless you have multiple IRQs and you need to specify the priority.
It's amazing how important the scheduler is in an OS. I'm no expert on OS-level programming, but couldn't they have just used a simple round robin scheduler?
Both the Network Interface Card (or wireless chip) and the Audio hardware generate interrupts for a variety of reasons. Network cards generate interrupts when data is finished sending or receiving. Audio hardware generally generates its own interrupts when its buffers have emptied out, to signal the OS to provide it more audio to play.
In this case, it sounds like Vista's network interrupt was running for so long that it was taking priority over other interrupts (audio) and causing them to not be serviced quickly enough. This is definitely a problem, but the fix should not have been to limit the amount of work done per interrupt, but instead to delegate that work to some other process with lower priority.
It's weird to think of audio as higher priority than networking traffic, but in this case that actually makes a ton of sense. Audio generation only requires ~88.2 KB of data per second, but its latency requirements are much tighter, and a human listener is far less tolerant of glitches. Modern network connections are working with multiple MB per second, and are (relatively) more tolerant of slight delays. Same with mouse and keyboard input; it's why your mouse tends to move smoothly on most Windows machines even when the rest of the OS is obviously struggling with some heavy task.
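(For reference, and assuming the figure refers to CD-quality audio: 44.1 kHz × 2 bytes per 16-bit sample is 88,200 bytes per second per channel, so even stereo playback needs well under 200 KB/s; the catch is that a single late buffer refill is immediately audible.)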
But the top comment was about how interrupts from the network card interfere with interrupts from the sound card so it can't keep its buffer filled. That shouldn't happen.
Unless the CPU just isn't fast enough to do networking and decode or generate the audio at the same time, but then it's never going to work properly.
The strange part of the story is that it took so long to download that the next day I went and bought the game (Battlefield 4) from the shop, brought it back home, installed it, and started playing it, all whilst the original purchase from the PlayStation store was still downloading.
I asked Sony if they would refund the game that I bought from the PlayStation store, given that I had gone and bought it elsewhere from a physical store during the download, and they said "no".
So I never want to buy from the PlayStation store again.
Why would Sony not care about this above just about everything else?
So you will continue to suffer abominable download speeds on your PS4, because the cost of switching is too great for most. Basically, suck it up.
I hate to get all PCMR on you, but this is why everyone should game on PC. If my steam purchase downloaded super slow, I could simply get a refund on steam (an automated process) and purchase the same game on GOG, or on Humble Store. Sometimes Origin or Uplay. PC means freedom, or not being at the mercy of a single uncaring corporate entity.
I like to sit on the couch and I want my gaming device to be a black box that does its job and is dedicated to couch based entertainment (I use it for TV too).
PC gaming requires a... PC... usually sitting at a desk, much more powerful than I need, lots of configuration, etc. etc. Too hard... I'm a couch gamer in the evening after the programming is done.
I don't care much about the PlayStation download thing. I generally prefer to buy my games on DVD anyway.
Sony should care, however... Despite not wanting to spend money on digital downloads, I find I spend ever more with the Apple App Store because, despite my hesitations, they make it easy and it works, whereas I don't want to buy digital from Sony and they make it easy not to.
>>>PC means freedom, or not being at the mercy of a single uncaring corporate entity
I just don't care about that stuff any more... life's too short to worry about who our corporate overlords are.
Yeah, wait until you have a 30GB update for a game to download.
Put in game, 20GB update. Cool, guess I'm not playing that today then.
Sold that thing a couple of months later, what a waste of money the whole rigmarole was. Bloodborne wasn't even that good imo either.
Oh, you mean the 15 mins swap of a graphics card every 3 years?
It's not as bad as it used to be… but it's still not like an appliance.
"Maintaining" a PC is super easy...
But consoles drop frames as well, people just don't care. You can also not care on PC, like lots of people do.
And hardly anyone uses SLI or even recommends it, precisely because it's annoying.
I don't see how you can state that it's any easier to troubleshoot a console. Usually your only feedback is a generic error code which if you Google refers to 20 different problems all with the same error, or an LED blink code which is equally vague.
Sure, PCs have their own unique problems that consoles don't suffer from (e.g. malware on Windows) but I find that PCs typically have much more descriptive error codes, and usually lack the kind of design flaws present in consoles (e.g. Xbox 360 RRoD due to insufficient cooling).
And since the hardware is fixed, games don't have incompatibilities. None of the "this version of the driver causes this but that version of the driver causes that" issues. No worrying about system requirements. If it's sold for the system, it will play.
Like you said, it's an appliance and I never have to think about it. You buy a game for the PS4 and it works on your PS4 and that's all there is to it.
You can treat your PC like a console. Just glue everything together so that you can't maintain and upgrade it.
Now you have a PC that is like a console. Buy a new one when this one is not good enough.
This is not what freedom of choice is about (to me). What I care about is the amount of power they wield over my experience. In Linux their power is low. Windows? Mac? Too high for my taste.
Right now my infant child wields "absolute power" over my life so the convenience and time savings of a console is worth it to me.
Oh, this state of things can change well within your lifetime, so you should care.
I'm with you in general (PC gamer here) but your example does not really work well. There are many games only available on Steam and not GOG, and while you can get most Steam games (but not all) on Humble Bundle, Humble Bundle only gives you a Steam key, so you are back to Steam and its potential issues.
Unfortunately, because of Steam's predominance in PC gaming, you are more or less stuck with the same problems as on the PS4, as in "if it does not work, suck it up". There is no other store platform on PC that has the breadth of choice of Steam.
No, you can download the game directly from Humble Bundle but sometimes (often) they also give you a steam key.
Edit: Steam puts your account in a restricted mode when you issue a chargeback.
A few months ago, my account was hacked and someone charged $50 to my card. I contacted support and they told me that "normally all sales are final", but they would do me a one-time courtesy and refund the money since my account was indeed hacked. WOW, THANKS FOR BEING SO KIND, SINCE THIS WAS CLEARLY MY FAULT.
Sony no longer receives money from me.
Case in point: Microsoft shipped an EXPLORER.EXE with Windows 10 that is 20% whitespace because of an Adobe-invented metadata format that I bet nobody involved with Windows ever thought about. See
As time goes by, minimum hardware requirements for operating systems increase. This isn't because it is more difficult to write software, or the hardware stopped being capable of doing the same thing it used to do. It's because software design wastes more resources as those resources become increasingly available.
Today, people ship statically compiled binaries of tens to hundreds of megabytes in size because it is convenient to do so. That would have been absolutely insane before increased bandwidth and storage space made it practical. I have an internet connection so fast my wifi card can't use all the bandwidth.
What a time to be alive.
But - I like that. Not having to worry about some library version, because I know the one that is supplied is the same one the developer put there. Nice.
However, more often than not, you've just lost visibility into what libraries and versions you're actually running.
The actual benefit to containers is abstracted and unified software components in a complex system. But as we create new, different cloud computing tech, this becomes more difficult, and defeats the purpose somewhat.
Slowdown issues. On a game that is literally nothing but 90s graphics.
Anyway, it was about 500MB in size and it runs Sonic 2 on the Xbox One's Xbox 360 emulator, which is then emulating a Sega Mega Drive.
That just blows my mind! 8 x86 cores to emulate 3 PPC cores to emulate a Mega Drive.
Also, Sonic on the Mega Drive had multiple framerate drops, e.g. when you lose lots of rings.
Dreamcast would probably work, given enough effort by people who know the platform inside and out. But then, a much lower barrier to entry is probably a significant reason as to why Sega was willing to fund another 2D Sonic game in the first place.
There really is no excuse whatsoever for a 2D game like Sonic Mania to run badly on even low-end PC hardware, even with built-in graphics hardware.
Memory bandwidth is the main concern when blitting several layers of pixels 1:1, unless you have some kind of crafty system in place for avoiding overdraw. Although Sonic Mania seems to genuinely use a 320x240-ish internal framebuffer, so my wild guess is that the bottlenecks relate to scaling and post-processing, or possibly some funky driver use causing sync issues.
From your link - so I guess the situation on linux isn't different? Or am I missing something, other than that it depends on the embedded PNG?
Yes, you could embed images in ELFs, too, if you really wanted, but why? There's no standardized way to do it, and putting the images in separate files is easier.
I explain how to do this in the context of embedding a version file here: https://stackoverflow.com/questions/16349557/does-gcc-have-a...
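For the curious, one non-standard way to do it (not necessarily what the linked answer describes) is objcopy's extra-section support; the section name .logo and the file names here are made up:

    # Stuff an arbitrary file into its own ELF section
    objcopy --add-section .logo=logo.png \
            --set-section-flags .logo=noload,readonly app app-with-logo
    # ...and extract it again later
    objcopy -O binary --only-section=.logo app-with-logo logo-extracted.png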
Well, that's just a job. It's the people spending their time voluntarily on a project who are the few that really care.
Also, project teams often develop blind spots; I'm on an internal initiative at work, helping out another team with some long-standing performance problems, and it's remarkable how much a fresh pair of eyes helps. It challenges assumptions and asks basic questions like "how does this work exactly" and "why are you doing that".
I'm not surprised; Google is better at these kinds of things.
Anecdotally, when I switched DNS servers from my ISP's to Google's, PS4 download speeds improved significantly (20 minutes vs. 20 hours to download a typical game).
But CDN configuration errors can happen to anyone. I think this client-side mess is a much more creative way to screw up :)
There's a similar thing happening (different DNS giving vastly different experience) with Netflix as well: Netflix has local nodes _inside_ many ISP's networks, close to ISP subscribers. If you use your ISP's DNS (and if they're doing their job properly), they'll send you to one of the local nodes and it'll take much less time to buffer, your stream will start faster, you'll drop less etc. Whereas if you use Google's DNS, they don't (necessarily) know the inner workings of your ISP so they send you to a generic Netflix CDN node that's near-ish but still probably 10-100x "farther" as the photon flies.
If you want to have a mix of DNS settings per domain name, you can set up a dnsmasq inside your network and configure it to recurse to Google's DNS normally, but to your ISP's DNS for just netflix.com, nflxvideo.com, etc. See for instance  for one way to set that up. They have it running on a dd-wrt router, but I have it running just on some rando Linux machine somewhere in the house, and I configure my DHCP to tell everything to use that machine's IP as the primary DNS.
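A minimal sketch of what that dnsmasq config could look like, assuming the ISP's resolver lives at 192.168.1.1 (that address, and the exact Netflix domains, are placeholders; check what your devices actually look up):

    # /etc/dnsmasq.conf (sketch)
    no-resolv                          # don't also use /etc/resolv.conf
    server=8.8.8.8                     # default upstream: Google Public DNS
    server=8.8.4.4
    # Send Netflix lookups to the ISP resolver so they land on the in-ISP cache
    server=/netflix.com/192.168.1.1
    server=/nflxvideo.com/192.168.1.1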
It's been a while since I've looked into it, but I thought that Netflix's devices peer with the provider's routers and intercept traffic that way, so the DNS server shouldn't matter (because they would share the same IPs as the publicly accessible CDN load balancers).
I thought most CDNs worked that way now - using DNS never really worked well.
I guess it's location more than anything. I have a fiber connection in Norway and it always downloads games at maximum speed.
It's definitely not an ideal method when throughput is more important than latency, as in the case of 40GB game downloads (or streaming video, for that matter).
Sony could easily implement an alternate system where DNS based GSLB is used to first locate a nearish node that then issues a redirect to an actually nearby node after doing some basic lookups (to resolve the "I am not where my DNS resolver is" problem).
Sony could also implement additional logic on the client (since they control the client after all), where it obtains a list of possible sources and runs races against them before deciding. Some server control is still desirable, in the event of a business reason to prefer a less optimal path or an overloaded source. But again, since you control the client, you can have it respond to your "chill out" packets and move to its next nearest target.
That Sony doesn't appear to do any of this probably speaks more to a lack of expertise than anything else. They run a pretty decent online service with PSN, but I get the sense that they aren't really hiring people with a ton of Internet content distribution experience.
But yes, there's a lot of client side smarts they could benefit from.
Anycast isn't perfect either though. It may be more likely to get you a route with fewer hops, but that's not necessarily the lowest latency link. And if there's a congested link in the path, it's much harder to steer a portion of users, but not all to alternate servers. You also have to deal with the potential of routing changing from packet to packet, and potentially different locations getting different parts of the same connection. It's not 'just announce the same IPs from multiple locations' and take a nap.
Anyway, unless you have thousands of datacenters well distributed throughout the globe, a lot of people are going to have high latency to whatever server is chosen, and high latency and very small windows == poor throughput, regardless of how you get to a server.
If there is a set of choices and a decision to make, letting the client choose would be the best option.
I think I'd start with a client shuffling the list of results and asking for small fragments of the dataset from each. Based on the latency, bandwidth detected, and the size of the requested data it would then pick the most optimal set of servers. This also allows for fallback in case servers are unresponsive.
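Something along these lines, purely as a sketch (the mirror URLs, probe size, and ranking are all made up; this is not how Sony's client actually behaves):

    import random
    import time
    import urllib.request

    # Hypothetical candidate mirrors for one piece of content
    MIRRORS = [
        "http://cdn-a.example.com/game.pkg",
        "http://cdn-b.example.com/game.pkg",
        "http://cdn-c.example.com/game.pkg",
    ]

    def probe(url, nbytes=256 * 1024):
        """Fetch a small byte range and report observed throughput (bytes/s)."""
        req = urllib.request.Request(url, headers={"Range": "bytes=0-%d" % (nbytes - 1)})
        start = time.monotonic()
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                data = resp.read()
        except OSError:
            return 0.0                      # unresponsive or erroring mirror
        elapsed = time.monotonic() - start
        return len(data) / elapsed if elapsed > 0 else 0.0

    def pick_mirrors(mirrors, keep=2):
        """Shuffle the candidates, probe each, keep the fastest few as sources."""
        candidates = list(mirrors)
        random.shuffle(candidates)
        scored = sorted(((probe(url), url) for url in candidates), reverse=True)
        return [url for _, url in scored[:keep]]

    print(pick_mirrors(MIRRORS))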
Sure, but it's better than a DNS-based solution.
> It may be more likely to get you a route with fewer hops, but that's not necessarily the lowest latency link. And if there's a congested link in the path, it's much harder to steer a portion of users, but not all to alternate servers.
This is also true for a DNS-based solution, because of DNS caching. Smart networks will change routes before your DNS changes get propagated.
> Anyway, unless you have thousands of datacenters well distributed throughout the globe, a lot of people are going to have high latency to whatever server is chosen, and high latency and very small windows == poor throughput, regardless of how you get to a server.
Depends on your definition of high latency. If you live in a desert in Africa or some forgotten village in India, then you will always have high latency, because no one will build a POP there.
One could imagine a system where one datacenter forwards the packets (encapsulated) to the other so that the stream isn't broken.
However, with both the PS4 and the Xbox One it's amazingly slow to browse the stores and much of the dashboard. Anyone else experience that? It's so bad I feel like it must just be me... I avoid it as much as possible, and it definitely decreases the number of games I buy.
Same, my PS4 can take up to a few minutes to load store pages. And sometimes it just hangs, or times out, and I need to quit the app and try again. It's really not a good experience at all.
(I use steamcache/generic myself, but should probably move to caching my 2 consoles as well).
Using a local DNS resolver instead of Google DNS helped my PS4 speeds.
The other "trick" if a download is getting slow is to run the in built "network test". This seems to reset all the windows back even if other things are running.
I did not see that in my testing. The built in speedtest runs with a large receive window, but the store downloads are not affected. (You can see an example in the last graph; there's a speedtest early on that has a receive window 100x larger than the PSN downloads that are crawling along).
It's probably just a placebo.
Possibly, but I think the OP above was onto something about it 'resetting' the windows. I just tested downloading a large game while I had another game running. The PS4 showed it was going to take 6 hours. I closed the other game I had running, and it still showed 6 hours. I ran the speed test, and now it shows it's going to take 50 minutes.
BTW, thanks for running these tests as I have some stuff to try instead of waiting hours for a game to download :)
The measurement really needs to be done on the wire.
That's not a surprise: when using Google DNS, the CDN has much less information about you to try and route you to a good choice of CDN endpoint. Results will vary based on where the Google DNS endpoint is and where you are (plus the connectivity of a given CDN to both of those places).
More information about that here, including "EDNS Client Subnet", a solution they deployed to send a partial client IP to the server, but only for specific CDNs they have set it up with:
"I've read claims that Google Public DNS can slow down certain multimedia applications or websites. Are these true?"
But also, you want to keep the pipe full (as explained in the article). So, to balance these opposing requirements, the receiver generally has a buffer which is only somewhat larger than the capacity of the pipe (the bandwidth-latency product).
In TCP, the receiver reports the size of this buffer as the receive window. This way the sender knows exactly how much it can send before the receiver's buffers are full. And the receiver's buffer must be large enough to hold at least an RTT's worth of packets, since that is how long it takes (at a minimum) after the sender started sending the data for the sender to hear back from the receiver that it has processed some data and made room in its buffer. Any less and the sender has to stop and wait (like in the article).
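To put illustrative numbers on that (the values below are made up, not measured): the sender can never have more than one window of unacknowledged data in flight, so throughput is capped at window / RTT, and the window needed to keep the pipe full is roughly bandwidth × RTT:

    # Illustrative bandwidth-delay-product arithmetic (all values made up)
    link_mbit = 100        # bottleneck bandwidth, Mbit/s
    rtt_ms    = 60         # round-trip time, ms
    window_kb = 64         # receive window offered by the client, KB

    # Window needed to keep the pipe full: bandwidth * RTT
    bdp_bytes = (link_mbit * 1_000_000 / 8) * (rtt_ms / 1000)
    print("bandwidth-delay product: %.0f KB" % (bdp_bytes / 1024))                   # ~732 KB

    # Throughput ceiling imposed by the window: window / RTT
    max_bytes_per_s = (window_kb * 1024) / (rtt_ms / 1000)
    print("window-limited throughput: %.1f Mbit/s" % (max_bytes_per_s * 8 / 1e6))    # ~8.7 Mbit/s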
The blog post talks about "receive windows" which comes into play here. The server will send up to the number of bytes specified in the receive window before needing to start seeing some ACKs.
So the shorter the round-trip time, the less likely the server is to spend time twiddling its thumbs waiting for an ACK because it has exhausted the receive window. Of course, increasing the size of the receive window would also help.
(I read this answer to get here, I could be way off: https://stackoverflow.com/questions/9613197/what-determines-... )
Shrinking the window is the wrong tool for the job here, like hammering in a screw. It's sensitive to round-trip-time in a way that makes it impossible to equate a given window with a desired speed. A better method might be to monitor download speed and delay ACKs when it exceeds the limit, but traditionally Unix doesn't have a standard API to do this.
Say for example that they clamped it down to a single packet in flight. In that case, if the server was 30ms (round trip time) away then you could only get a packet every 30ms, and if the server was 60ms away then it would cut your speed in half.
In normal situations this doesn't happen: your max-packets-in-flight limit (aka your receive window) will eventually scale large enough that it doesn't matter how far you are from a server, the speed will eventually reach the bandwidth limit of the slowest part of the route.
You will have data inside the pipe all the time; the amount of it is the product of bandwidth and latency (pipe length), where latency is measured as RTT (round-trip time).
The sender puts data on the pipe and must wait for acknowledgment from the receiver. How much data should it put on the pipe before the first acknowledgement arrives?
If the pipe is very long, the sender will have a lot of data in transit, and this amount will grow over time based on some algorithms (congestion control).
TCP keeps a sliding congestion window that relies on RTT.
If the delay is longer, and the receive window is the same size, then total throughput is lower.
See this commit:
I'm in Singapore and my normal download speed is around 250 Mb/s, sometimes getting closer to 300.
However, I sometimes download from the Swedish store as well, and those download speeds are always very slow. I don't think I've ever gone above one tenth of what I get with local downloads.
That said, bandwidth between Asia and Singapore is naturally more unpredictable, so I don't know if I can blame Sony here. My point is that PS4 downloads can be very fast, and the Singapore example is evidence of this fact.
I hope this is addressed by Sony in the future, or at least that they let us select whether a download is high priority or not.
Turns out I was right.
The Switch firmware even states that it will halt downloads if a game attempts to connect to the network.
Win10 and Xbox One have peer-to-peer downloads. Who would want that? It's bad for users, wasting upload bandwidth, and it counts against your monthly internet consumption. https://www.reddit.com/r/xboxone/comments/3rhs4s/xbox_update...
The proliferation of legal peer-to-peer systems will hopefully drive the spread of symmetric connections.
So if you were downloading or streaming and uploaded anything more than a small image your download speed would plummet as you ran out of upstream bandwidth.
Online backups/syncing? Hope you're not trying to do anything else that requires bandwidth.
You can work around bufferbloat if you have a DSL modem with fq_codel (rare), or if you insert a router with a rate limiter and fq_codel, so basically you never fill up your DSL modem's queue.
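For example, on a Linux router upstream of the modem, the usual recipe is to shape egress to a bit under the uplink rate so queuing happens where fq_codel can manage it (a sketch; eth0 as the WAN interface and the ~1 Mbit uplink rate are assumptions):

    # Shape upstream to ~90% of the uplink so the DSL modem's own buffer never fills
    tc qdisc add dev eth0 root handle 1: htb default 10
    tc class add dev eth0 parent 1: classid 1:10 htb rate 900kbit ceil 900kbit
    tc qdisc add dev eth0 parent 1:10 fq_codel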
I 'solved' it by going to a provider with more upstream (cable), then later fiber. Even without this issue the download speed wasn't very good for the price, so leaving was an easy decision.
I get the same on Sky (UK) ADSL (fibre [FTTC] seems to be unaffected). When I'm uploading, e.g. a video to YouTube, even something like Hacker News will often time out.
What organization has multiple XBOs, though? Game dev studios? Hospitals? It must be much rarer than the Win10 case.
Is it P2P or more like a torrent, where there are multiple peers? That would make more sense.
At the end of the day, aren't the Switch and Xbox faster at downloading? That's what really matters.
For example, my home connection (in Western Canada) is 150 Mbit down and 15 Mbit up. For general home usage (streaming Netflix and whatnot), you definitely need more download than upload, but if I'm working from home over VPN, syncing pictures to Dropbox, etc., you can definitely saturate 15 Mbit. And most North American households have much less bandwidth than I do.
Maxing out the standard 3 Mbit uplink is easy when photos sync, or in any number of other use cases.
Maybe you can find another ISP. I'm in northern Germany and my ISP provides 100 Mbps down, 40 Mbps up via DSL.
Here in America we believe that competition involves one cable company and one legacy phone company who offers 2/0.5 DSL for $70/mo, and will not actually install it if you try!
See my other comment in this post for an example.