As I say probably too often, the modern usage of "full stack" basically equates to "iceberg tip".
For other areas, we usually talk about "systems programming" or "application programming". I suppose "full stack" there would mean doing both?
Even the omission of the word "web", as you have pointed out, seems to have the smell of arrogance, as if web programming is all that's worth working on these days. But the web programming world, IMHO, is full of this kind of CV-inflating jargon, lingo, and buzzwords, whether it's "full stack", "Agile", "DevOps", etc.
Obviously, there is an element of old man "get off my lawn" that I am invoking. I am almost certainly projecting here, but it still rubs me the wrong way.
Even if they don't have the authorization to fix it themselves, they should at least be able to tell the ops folks what they need.
I'd never heard it prior to five or ten years ago.
It's always been a pretty ambiguous term, IMO. I think it may have been coined to make a genuine distinction, but just how broad was it intended to be? The question is moot, because it quickly spread to mean very different things. You need a lot of context to make a good guess at what it refers to.
Important bits almost always left out: provisioning of hypervisors, network switching/trunking/routing, security, embedded on the backend, etc.
(Should I link my professional life to my HN account in my bio? Is that the "done thing"?)
The keyword being "near". It's like all the "Java is fast" articles out there: in theory, you can write microbenchmarks that show your language is fast, but in practice, everyone knows what it's really like.
Of course, with a powerful enough CPU and enough RAM you could emulate any system at a decent speed in any language. But I'm willing to bet that if you ran this C# emulator on the best hardware from ~2000 (roughly when the first PSX emulators appeared) and compared it to the native emulators that were around at that time, you'd see a huge difference in performance. As I recall, a 1GHz Pentium III with 256MB of RAM was sufficient for those.
It's really hard to do because you can't easily see when you pay for an abstraction (is this foreach slower than a for loop? Does it allocate an enumerator? Can it unroll? Will this call allocate a heap array for its params? Will this call be devirtualized?).
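As a minimal sketch of the kind of hidden costs I mean (the names here are illustrative, not from any particular codebase): these three methods all compute the same sum, but they allocate very differently.

```csharp
using System;
using System.Collections.Generic;

class AbstractionCosts
{
    // foreach over List<T> directly uses List<T>.Enumerator, which is a
    // struct, so no heap allocation happens in this loop.
    static int SumList(List<int> xs)
    {
        int sum = 0;
        foreach (int x in xs) sum += x;
        return sum;
    }

    // The identical loop through the IEnumerable<int> interface boxes the
    // struct enumerator: one hidden heap allocation per call, plus
    // interface dispatch for MoveNext/Current unless the JIT devirtualizes.
    static int SumEnumerable(IEnumerable<int> xs)
    {
        int sum = 0;
        foreach (int x in xs) sum += x;
        return sum;
    }

    // params silently allocates a fresh int[] at every call site that
    // passes loose arguments instead of an existing array.
    static int Sum(params int[] xs)
    {
        int sum = 0;
        for (int i = 0; i < xs.Length; i++) sum += xs[i];
        return sum;
    }

    static void Main()
    {
        var xs = new List<int> { 1, 2, 3 };
        Console.WriteLine(SumList(xs));       // 6 (no allocation)
        Console.WriteLine(SumEnumerable(xs)); // 6 (boxed enumerator)
        Console.WriteLine(Sum(1, 2, 3));      // 6 (new int[3] allocated)
    }
}
```

Same observable result every time, which is exactly why the cost is so easy to miss in review.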
The "Burst compiler" in Unity takes it a step further: it restricts the language to those things that are actually always fast.
I really wish the C# team could do something similar: add a fast AOT subset to C#.
However, C# has better potential for performance than Java right now, because it has value types and supports pointers in unsafe mode, which are very useful for high performance code. Java is catching up in this area, with Project Valhalla and Project Panama it should get there at some point too.
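To show what value types buy you (a toy sketch, not taken from any benchmark): an array of a C# struct is one contiguous block of memory, where the Java equivalent today is an array of references to separately heap-allocated objects.

```csharp
using System;

// A value type: an array of these is laid out inline, with no per-element
// object header, no pointer chasing, and no per-element GC work.
struct Point
{
    public double X, Y;
    public Point(double x, double y) { X = x; Y = y; }
}

class Demo
{
    static void Main()
    {
        var pts = new Point[1_000_000]; // one single allocation
        for (int i = 0; i < pts.Length; i++)
            pts[i] = new Point(i, i);   // written in place, no GC pressure

        Console.WriteLine(pts[999_999].X);
    }
}
```

Until Project Valhalla lands, the closest Java equivalent is `Point[]` holding a million separate objects, which is the gap I mean by "catching up".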
Also, sometimes “fast enough” is just that - fast enough. It is not necessarily true in every case, but sometimes it is.
Well, at least we know how a NIC driver written in C# performs. I was pleasantly surprised at how well it performed.
Not that you would want to replace C/Rust for that task, but anyway, it is on par with Go for this particular task: https://github.com/ixy-languages/ixy-languages
I remember running games on Bleem (yes, I actually bought Bleem) on significantly less. I know it ran on my family's 166MHz Pentium with 64MB of RAM, but probably not at full speed. I'm fairly confident it ran at full speed on my 200-300MHz P2 laptop, though.
I'm just most familiar with Eto off the top of my head... mostly because of PabloDraw.