Flying is jerky because we do no client-side processing. A real MMO would use dead reckoning, interpolation, etc.
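For anyone unfamiliar with the term, here is a minimal sketch of what client-side dead reckoning might look like; the class, fields, and blend factor below are made up for illustration and are not taken from the demo's code:

```java
// Minimal sketch of client-side dead reckoning (hypothetical fields/values,
// not the demo's actual code): between server snapshots the client extrapolates
// each entity's position from its last known velocity, and blends toward the
// authoritative position when a new snapshot arrives instead of snapping to it.
public final class RemoteEntity {
    double x, y;          // position currently being rendered
    double vx, vy;        // last velocity reported by the server
    long lastUpdateNanos; // when we last touched the position

    // Called every render frame: extrapolate from the last known state.
    void predict(long nowNanos) {
        double dt = (nowNanos - lastUpdateNanos) / 1e9;
        x += vx * dt;
        y += vy * dt;
        lastUpdateNanos = nowNanos;
    }

    // Called when a server snapshot arrives: correct gradually, which is what
    // hides the network jitter that makes un-smoothed movement look jerky.
    void applySnapshot(double sx, double sy, double svx, double svy, long nowNanos) {
        final double blend = 0.2;  // fraction of the error corrected per snapshot
        x += (sx - x) * blend;
        y += (sy - y) * blend;
        vx = svx;
        vy = svy;
        lastUpdateNanos = nowNanos;
    }

    public static void main(String[] args) {
        RemoteEntity e = new RemoteEntity();
        e.vx = 3.0;
        e.lastUpdateNanos = System.nanoTime();
        e.predict(e.lastUpdateNanos + 16_000_000L);      // one ~60fps frame later
        System.out.printf("predicted x = %.3f%n", e.x);  // about 0.048
        e.applySnapshot(0.10, 0.0, 3.0, 0.0, System.nanoTime());
        System.out.printf("after snapshot x = %.3f%n", e.x);
    }
}
```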
At the top left of the screen you can see your network latency. Obviously, it depends on where you are. The server is a single US-EAST EC2 instance, so this is entirely a network issue. I can tell you that the server isn't starting to feel the load yet, and is currently running at 32.5Hz (pretty much the maximum rate). That's a much higher rate than most MMOs, which run at 4-10Hz.
I have to concur with Ryan, but for slightly different reasons. For me, the biggest scaling concerns are not with efficient use of hardware, but with guaranteeing user experience. I can see how the actor model can ensure efficient use of hardware. I can also see how it could be leveraged to give responses to user input almost as fast as network latency will allow. I'm not yet sure that it will make performance monitoring or understanding performance bottlenecks any easier or that it won't make such activities more complicated.
I don't think it will make identifying performance bottlenecks easier, but I can't see why it would make it harder, either. Modern software systems are complicated; they're built on complicated middleware, on top of a complicated OS, on top of complicated hardware. There are tools to help you find your bottlenecks, but handicapping your application just so you can understand it the way we used to understand performance in the 80s and 90s seems like the wrong choice. If you want a simple mental model of your application's performance, just run it on a 386. With instruction-level parallelism, multiple cores with MESI coherence protocols, optimizing JITs, automatic memory management, and more, the days of running the code in our head to find problems are long gone. You have to measure with the new tools and learn to trust them.
> I don't think it will make identifying performance bottlenecks easier, but I can't see why it would make it harder, either.
Well, in the example of your demo, the system can detect individual actor-loop overruns. How do I know that this will result in graceful degradation? What if all of the overruns happen so close together in time that the system has insufficient time to mitigate them and keep them from being noticed by players? If having game loops on individual actors is more efficient but otherwise semantically equivalent to having a single game loop, I would like to know more about that. However, my understanding of the actor model is that it comes with built-in indeterminacy.
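To make the question concrete, here is one way a per-actor loop could detect its own tick overruns and back off locally. This is purely my own sketch under my own assumptions, not the demo's actual mechanism; the tick values and back-off policy are invented for illustration:

```java
// Purely illustrative sketch (not the demo's mechanism): a per-actor game loop
// that detects its own tick overruns and degrades locally by widening its tick
// interval, so a burst of overruns lowers simulation fidelity for that actor
// instead of stalling everyone.
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.LockSupport;

public final class ActorLoop implements Runnable {
    private long tickNanos = TimeUnit.MILLISECONDS.toNanos(30);             // ~33Hz target
    private static final long MAX_TICK_NANOS = TimeUnit.MILLISECONDS.toNanos(250);

    @Override
    public void run() {
        long next = System.nanoTime();
        while (!Thread.currentThread().isInterrupted()) {
            long start = System.nanoTime();
            step();                                    // this actor's slice of the simulation
            long elapsed = System.nanoTime() - start;

            if (elapsed > tickNanos) {
                // Overrun: back off (up to a cap) and report it, so an operator
                // can see trouble building up before players notice.
                tickNanos = Math.min(MAX_TICK_NANOS, tickNanos * 2);
                System.err.printf("overrun: %d us, new tick %d ms%n",
                        elapsed / 1_000, tickNanos / 1_000_000);
            }

            next += tickNanos;
            long sleep = next - System.nanoTime();
            if (sleep > 0)
                LockSupport.parkNanos(sleep);
            else
                next = System.nanoTime();              // already behind; don't try to catch up
        }
    }

    private void step() { /* simulate this actor's entities for one tick */ }

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(new ActorLoop(), "actor-1");
        t.start();
        Thread.sleep(200);                             // let it run a few ticks
        t.interrupt();
        t.join();
    }
}
```

Whether this actually degrades gracefully under a burst of simultaneous overruns is exactly the kind of thing I'd want to see measured, not argued.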
> Modern software systems are complicated; they're built on complicated middleware, on top of a complicated OS, on top of complicated hardware. There are tools to help you find your bottlenecks, but handicapping your application just so you can understand it the way we used to understand performance in the 80s and 90s seems like the wrong choice.
A system one can analyze is always preferable to a system that's potentially more efficient but less understandable.
I'm not trying to shoot down your architecture. I considered it for a couple of weeks, but I turned it down because I couldn't understand the risks involved. You can leave this as "an exercise for the reader" but we both understand this isn't going to win most people over. (Maybe that's your goal?)
> A system one can analyze is always preferable to a system that's potentially more efficient but less understandable.
Yes, but analyze how? In your head? If that's the case, then the last hardware+software architectures you could analyze in your head date from about ten years ago. If that level of performance is always enough for you, then fine. But no one can mentally analyze complex software performance on modern OSes and hardware any more. On the other hand, there are tools that analyze performance for you.
Because game developers are among those who care most about analyzing performance, and because they like working close to the metal, they want to keep using that approach, but it's no longer viable. There is no more "close to the metal" when the CPU itself is so complex it's almost non-deterministic. Unless your game runs on a console, there is so much interference from other processes that even believing you understand how your code works in a vacuum is not good enough to guarantee real-world behavior.
What I'm trying to say is that writing single-threaded code, combined with network calls and queues for asynchronous processing, is not really more understandable. It's just similar enough to 90s code that people have the illusion they can understand its performance as they did back in the 90s.
For example: suppose your thread is now idle. What do you do, sleep or spin? If you sleep, you're at the mercy of the OS scheduler; if you spin, modern Intel CPUs can actually decide to power down your core. In either case you'll have increased latency when waking up.
And forget virtual memory and multi-level caches: there's ILP, TLBs, core boosting and power management, SSDs, modern thread schedulers, and more.
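To illustrate the sleep-or-spin trade-off with something runnable, here is a toy example of mine (not from this thread, and the nap interval is arbitrary): both waiters watch the same flag, the parked one pays the scheduler's wake-up latency, and the spinning one keeps a core busy, which modern power management may throttle anyway.

```java
// Toy illustration of the sleep-vs-spin trade-off described above.
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

public final class WaitStrategies {
    static final AtomicBoolean ready = new AtomicBoolean(false);

    // Strategy 1: park in short naps and let the OS wake us. Cheap on CPU,
    // but wake-up latency is whatever the scheduler decides it is.
    static void sleepingWaiter() {
        while (!ready.get())
            LockSupport.parkNanos(100_000);      // ~100 microsecond naps
    }

    // Strategy 2: busy-spin. Lowest latency in principle, but pins a core.
    static void spinningWaiter() {
        while (!ready.get())
            Thread.onSpinWait();                 // CPU hint, Java 9+
    }

    static long measureWakeupMicros(Runnable waiter) throws InterruptedException {
        ready.set(false);
        Thread t = new Thread(waiter);
        t.start();
        Thread.sleep(5);                         // let the waiter settle into waiting
        long flip = System.nanoTime();
        ready.set(true);
        t.join();
        return (System.nanoTime() - flip) / 1_000;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.printf("parked waiter woke after   ~%d us%n",
                measureWakeupMicros(WaitStrategies::sleepingWaiter));
        System.out.printf("spinning waiter woke after ~%d us%n",
                measureWakeupMicros(WaitStrategies::spinningWaiter));
    }
}
```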
Of course not. But I can work out in my head how the performance-measurement tools are going to work. I can also work out, with some degree of reliability, whether the performance degradation of a game loop will be gradual, and I know how the measurement tools to verify that are going to work.
> There is no more "close to the metal"
Long time Smalltalker and Clojurist here. Preaching to the choir, you are. However, to minimize risks, one should still understand what one is doing as well as one can beforehand. I'm not saying that your architecture is bad; I'm trying to explain my decision-making process with regard to it. Furthermore, if I had more information about how to implement pragmatic performance guarantees, or at least a high probability of gradual degradation, and if I knew enough about performance monitoring in the actor architecture, I would probably change my assessment of the risks involved in trying it.
> Unless your game runs on a console, there is so much interference from other processes
It's probably going to run on a dedicated server.
Basically, what I've been trying to get across in these threads is: Your architecture sounds really cool but I don't quite have enough information about it to make me want to try it.
Or to put it another way:
"It's really great because of Z!"
"Yes, but I also really need X and Y and I don't know how it would be with X and Y."
"Yeah, you should be able to work all that out, and can't you see that it's really great with Z!?"
Over the holiday I did the usual: fixed and cleaned up my grandmother's computer. She's been using Chrome because I explained to her how much safer it is.
I did a Google search and realized something wasn't right. Uninstalled all the crapware apps that had wormed their way in. And then I looked at the Chrome extensions and, lo and behold, there it was: more crapware.
I removed them and they re-added themselves. I had to run Spybot S&D to remove it completely.
Moral of the story: Chrome extensions are in some ways worse than toolbars.
Freddie Mac and Fannie Mae do way more than mortgage houses, first of all. Second, they are an example of a single entity among hundreds or thousands that also played a role. Freddie Mac and Fannie Mae are subject to the same regulatory and market pressures as any other financial institution.
You say "government" like the "market" and the "government" are separated by a vacuum. Spoiler alert: the "government" is the single largest purchaser in the "market"; it drives the most demand and ensures a stable supply for a large majority of the commodities that prop up your "market". It also seems to provide a safe and stable marketplace.
Freddie and Fannie are organizations that are subsidized by the government and held accountable to the government. In the same way farmers are subsidized and held accountable to the government. In the same way hospitals and eldercare are largely subsidized. In the same way weapons manufacturers are subsidized. In the same way fire departments, police, road construction, rail, utilities, and air travel are all subsidized by the government.
And whoa, lo and behold, they are all also held to the same regulatory and market pressures any other company is held to. It's just that society decided these institutions are too important to THE PEOPLE as a whole to let them go to shit because they don't attract investment, the margins being too thin.
It's not as black and white as you want to make it. Look hard enough at any "market" and you will find the government subsidizing it in some way, even while it regulates it.
The most interesting part was trying to decipher what was obtained cleanly and what was obtained via PRISM.
Stack Overflow is one they don't specifically say they "obtained records from"; they just say those actions happened on Stack Overflow. Non-public actions, at that. Also, why would Stack Overflow keep a record of each username and email change, but not IPs and access times? They never mentioned how he connected to SO or whether he masked it, BUT they mentioned that in every other case.
I imagine they had access to his Gmail account, and Stack Overflow emailed him when he changed his account information. I would bet that the Stack Overflow information came after they identified his Gmail account (which had his full name in it!).