What's interesting to me is that Apple products generally don't have margins anywhere near this large. Apple's products aren't generally built the 'easiest' way possible for fast assembly and such. The optimization/cheapness on these is amazing.
Until recently I was teaching for another 'coding bootcamp' (or something that could be described as such) and when I was reading the linked article/pdf my jaw was on the floor. We surely weren't perfect, but this type of stuff makes everyone look bad.
It does seem that if there's no way to buy from the USA, there's a hell of a business opportunity there. I know nothing about this, but if that gap exists and anyone wants to work on the problem together... let's do this?
The business opportunity exists in helping mainland investors move their money outside China.
Matching those who want to invest in China with those moving their money out already happens, mainly via internet-banking password exchanges and sufficient trust in the middleman. Swapping passwords is a simplification (and not itself legal), but it's the crux of it: what actually happens is an authorized exchange of ownership of bank accounts, which is legal.
I work in this space; the only way to get exposure to these is via an equity swap using a QFII's capital. I.e. you must be a foreign institutional investor with an existing relationship to a QFII. Retail flow is not supported :) (although that's kind of what northbound Stock Connect is for).
Strange question: why didn't many games in this era exploit this? I'd guess the knowledge just wasn't as easily shared? These systems seem to have been capable of pretty amazing things, but those things were frequently overlooked.
However, the era of CGA composite color graphics was very short-lived. By the time the IBM-PC started to become a popular home computing platform, 16-color Tandy/EGA graphics and RGB monitors were the norm. Any composite color hacks would be rendered obsolete and unusable on these later machines.
If I'm reading correctly between the lines, they are pouring a ton of CPU into this effect. I wouldn't be surprised if you told me what we saw in the demo is literally almost all this effect can do, and there's not a lot of power left over for actually running a game.
Same thing for all the things you see Commodore 64 demos do... by the time you're creating the awesome graphical effect there's often not a lot left over for the game itself. (Though there are some interesting exceptions... there appear to be some surprisingly high-quality side-scrolling platformers now based on "bad lines", which are explained, and the platformers shown, at https://www.youtube.com/watch?feature=player_detailpage&v=fe... . The entire presentation is fascinating and shows a few other demoscene effects used in real programs.)
The CPU usage isn't that bad compared to some of the other things we did in the demo - with some help from interrupts it could be done with maybe 20% of CPU. There is also a much easier ~500 colour variant which doesn't take any CPU time at all once set up.
I think the real reason it wasn't discovered earlier is that most CGA PCs were not connected to composite monitors or TVs (people who could afford the big expensive IBM machine could generally afford a dedicated digital monitor as well). A few games used 160x200x16 composite but even those generally had modes for RGBI monitors as well (which wouldn't work so well with the 500/1K colour modes, though I guess there are the dithering characters 0xb0 and 0xb1 which might have worked). These +HRES modes also suffer from CGA snow, which might have been a deal-breaker.
I think if you asked most game makers of that time, they would simply say, "You're crazy. You're planning to exploit the exact CGA chips and the exact mux and the exact nature of the monitor's response to the horizontal porch. The failure mode is that the whole screen goes wavy and maybe lets out a puff of smoke and dies. You're crazy!"
But also, the reality is that the demoscene didn't even figure any of this out until 2013-ish.
Some did use the 160x100x16 graphics mode (using some of the same techniques described in the OP), but one drawback was that they didn't always look 100% correct on clones, and some eventually broke when VGA cards came out.
We are human beings and we push things to the limit, constantly testing boundaries and how far we can go. It is not always smart, but finding new beneficial paths is part of what differentiates us. There may be some good innovation that comes out of this, but right now it is a problem.
So far they've relied on voluntary reductions, which has worked well in some areas and less well in others. It's not like they just woke up to the drought now, but they considered restrictions to be a measure of last (or later) resort.
Well, Texas is probably drier on average than California. California has wide swings between very wet winters and very dry winters. See this chart of Fresno, which is pretty dry on average, but ranges from a high of 22" in a year to a low of 4" over the last 80 years or so. http://www.bytemuse.com/post/drought-historical-rainfall-cal...
It was already being done, mostly at the state/county level. For example, in my part of the Bay Area, they have raised rates, especially for disproportionate usage. Also, you cannot water your lawn more than once per week.
A serious question: did developers in the '90s simply not know things we now hold to be good and true (mutability can induce bugs, consistent naming is good, global state is almost always bad, huge switch statements are bad, various code smells, etc.), or were some of these practices just too inefficient on the computers of the day for various reasons?
Like, there's definitely some smart thinking in this code, but there are also several things that just floor me. A 3000-line switch?
Haven't seen it, but a large switch can be a way of implementing a state machine, which is a good structure to put a whole lot of application code in. You can tell whether all the states and events are handled, and you can find exactly where the code for a state transition lives.
Yeah, but my understanding is that it helps a lot. CloudFlare masks the actual IP address of the web site, and distributes the load through the CloudFlare platform.
It's still possible to overload that capacity, but it's a lot more than what any standalone web server can take. Furthermore, the web server itself will stay online, as it isn't actually getting hit by the flood of requests.
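The masking idea can be sketched as a bare-bones reverse proxy (a toy illustration using only the standard library; the hosts and ports are invented, and this is of course nothing like CloudFlare's actual implementation):

```python
# Toy reverse proxy: clients only ever see the proxy's address, so the
# origin server's IP stays hidden, and flood traffic can be absorbed or
# filtered here before anything reaches the origin.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

ORIGIN = "http://127.0.0.1:9000"  # hypothetical hidden origin server

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the request to the origin and relay the response back.
        with urlopen(ORIGIN + self.path) as resp:
            body = resp.read()
            status = resp.status
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

# To run: HTTPServer(("0.0.0.0", 8080), ProxyHandler).serve_forever()
```

In the real setup the proxy layer is a globally distributed fleet with far more capacity than any single origin, which is where the load distribution comes from.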