While you probably don't want to write a whole program like that, sometimes it is helpful to use these features when porting code from C or for performance reasons. See for example this implementation of memmove that is used for copying strings and other things:
    // Managed code is currently faster than glibc unoptimized memmove
    // TODO-ARM64-UNIX-OPT revisit when glibc optimized memmove is in Linux distros
One could argue that C# wasn't the best choice, but that was out of my hands.
TLDR: arguably the most popular game engine of our time has already implemented exactly that (although it's still at an experimental and unstable stage).
The real issue with GCs is the lack of control, which makes it very difficult if not impossible to do things like grouping allocations together. That capability comes at a cost, though: once you have it, existing allocations can't be moved without going to extreme lengths, which can easily lead to memory fragmentation.
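To be fair, .NET has grown a few escape hatches for exactly this. A minimal sketch using the real System.Buffers ArrayPool API (the 4096-byte size is just an arbitrary example): it doesn't let you choose where objects live, but it cuts allocation churn and the fragmentation that comes with it:

    using System;
    using System.Buffers;

    // Rent a reusable buffer from the shared pool instead of allocating
    // a fresh array (and future GC work) on every call.
    byte[] buffer = ArrayPool<byte>.Shared.Rent(4096);
    try
    {
        // Use the buffer; note it may be longer than the size requested.
        Console.WriteLine(buffer.Length >= 4096);
    }
    finally
    {
        // Return it so the next caller can reuse the same memory.
        ArrayPool<byte>.Shared.Return(buffer);
    }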
As far as the language goes, you can do anything in C# that you can do in C in terms of memory-unsafe programming - raw pointers, pointer arithmetic, unions, allocating data on the stack - it's all there, and it's not any harder to use. Since semantics are the same, it should be possible to implement it just as efficiently as in C. At that point, the only overhead that you can't remove by avoiding some language feature or another is GC, although you can avoid making any allocations through it. So the real question is whether a language with GC can be considered low-level.
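To make that concrete, here's a hedged sketch of those features side by side - raw pointers, pointer arithmetic, a C-style union via explicit struct layout, and stack allocation. (The type names are mine, and it needs unsafe compilation enabled.)

    using System;
    using System.Runtime.InteropServices;

    // A C-style union: both fields share the same four bytes.
    [StructLayout(LayoutKind.Explicit)]
    struct IntFloatUnion
    {
        [FieldOffset(0)] public int AsInt;
        [FieldOffset(0)] public float AsFloat;
    }

    class Demo
    {
        static unsafe void Main()
        {
            // Stack allocation: no GC involvement at all.
            int* values = stackalloc int[4];

            // Raw pointer arithmetic, just like C.
            for (int* p = values; p < values + 4; p++)
                *p = (int)(p - values);

            var u = new IntFloatUnion { AsFloat = 1.0f };
            // Prints "3 3F800000": the last index, then the bit pattern of 1.0f.
            Console.WriteLine($"{values[3]} {u.AsInt:X8}");
        }
    }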
I think it's important to separate the language from the runtime; people come away less confused that way than when the two are conflated to cover the 90% use case.
I don't consider this pedantic in this case. But the line isn't a hard one, so we'll have some disagreement about how to teach (or not teach) this.
Even when you write in a low level style using the .Net/Mono VM, you're still fully protected by the guarantees of the language and runtime. You can't chase bad pointers around through memory or overflow the stack. It's impossible to access an array without a bounds check unless the JIT can prove it's not needed. The safety this provides is night and day to what it's like coding close to the metal (I develop games in C# and have written a fair amount of C/asm).
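For the curious, the canonical case where the JIT can prove it is a loop indexed directly against the array's own Length - well-documented RyuJIT behavior, though the method here is just an illustrative sketch:

    static int Sum(int[] data)
    {
        int total = 0;
        // i is provably within [0, data.Length), so the JIT emits this
        // loop without a per-element bounds check.
        for (int i = 0; i < data.Length; i++)
            total += data[i];
        return total;
    }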
But as you say, we can agree to disagree about that...
Many native languages have these features without having a managed runtime.
The memory safety goes out the window when pointers are in play; this is why we have 'unsafe' blocks.
If you used the same code in a full .NET runtime deployment it would still be very efficient, and most likely vectorized as well.
Unity's Burst compiler is not nearly as mystical as their marketing team would have you believe at first glance.
.NET Core 3.0 will bring in CrossGen for AOT, and you also missed Xamarin/Mono support for AOT.
Moreover, Unity has invested heavily in the C#/.NET/Mono ecosystem; the editor runs on Mono, and the runtimes are all either IL2CPP (an in-house translation from .NET bytecode to C++) or Mono as well. It would be a truly crazy amount of work to switch, for little or no clear benefit. If one were picking a language today to write an engine in, Rust might be a reasonable competitor, but it's also fairly new and is arguably less noob-friendly. C# offers a nice ramp from new programmer to performance-critical AAA with Burst.
C++ is terrible for fast iteration times and is capable of hard-crashing on simple mistakes; Lua is not type-safe and its library support is weak; and Java is still not a portable language in 2019 (PS4, Xbox, Switch, and even iOS).
C# has ended up being a wonderful choice to build games in, and now that the 2019 version of Unity has an incremental GC we are entering a golden age with the tool.
It's not about what you like, what you know, what is trendy, or what is top of the pops this week - it's about getting shit done, and that happens when you use the right tools. The tech industry, like every other industry, ultimately involves people, and people are subject to fashion, peeves, and emotional attachment to their investments (i.e. the languages they know well) - a comment on C# vs C++ seems to start wars just as religious as vi vs emacs, or COBOL vs RPG3 back in the 80s!
End of rant.
* yes, I know ‘low-level’ is a subjective term
Then again some people will say anything. It's how things are used in practice that matters.
Garbage collection traditionally means you can't meet hard latency contracts. And by definition you have no control over where garbage-collected objects are in memory.
This is where the term "low level" is a bit ambiguous and relative. If you're trying to bit bang some serial protocol on a microcontroller with tight timings, C# and the GC are going to give you a hard time. But even C might give you a hard time here. I've had to drop down to inline assembly because it's much easier to count the cycles it'll take and insert a few nops where necessary to get the timing right.
Then, if you're on a non-realtime OS like Windows or Linux, no language is going to give you hard latency guarantees.
Going further, it would be difficult to bit bang with tight timing constraints on x86 altogether due to speculative execution.
So from that perspective, C, x86, and modern operating systems might all be too high level to achieve your goal.
But for other goals, you might have a different answer. One interesting example was Xbox 360 games.
Game developers have traditionally used C++, because they have performance and latency constraints: if you're targeting 60 fps, you don't want any frames taking more than 16.6 ms. Indie developers had to use C# to make games for the 360. Unfortunately, the .Net Compact Framework it used had an old, slow, non-generational GC, which would cause hitches. So games would do one of two things to avoid hitches. Either they'd design their games to make zero allocations per frame, or they'd design their memory layout to be very simple so the GC would run fast. Structs are very helpful here because they don't generally allocate on the heap.
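As a sketch of that zero-allocations-per-frame style (names and sizes invented, not from any real engine): allocate one contiguous array of structs up front and mutate it in place, so the per-frame path never touches the GC:

    // A plain struct: an array of these is one contiguous heap object,
    // not ten thousand separate objects for the GC to trace.
    struct Particle
    {
        public float X, Y, VelX, VelY;
    }

    class ParticleSystem
    {
        // Allocated once at load time, reused every frame.
        readonly Particle[] particles = new Particle[10_000];

        // Zero allocations per call: everything is mutated in place.
        public void Update(float dt)
        {
            for (int i = 0; i < particles.Length; i++)
            {
                particles[i].X += particles[i].VelX * dt;
                particles[i].Y += particles[i].VelY * dt;
            }
        }
    }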
So was C# low level enough for game development on the Xbox 360? I'd say mostly. It gave you some hassle with the GC, but ultimately you had enough control over performance and latency to make it work.
If you're inputting hand-assembled hex, then an assembler is high level. If you're writing ASM, C is high level.
C is now thought of as low level; I'm sure many formerly 'high-level' languages will come to be considered low level as well.
I agree that it may not be the best choice, but it is still interesting to see what is possible.
The problem occurs when you liberally allocate a dynamically-sized list of dynamically-sized lists (possibly of a union of different data types). That situation will always throw up an engineering trade-off between memory size, algorithm performance, and code complexity.
The solution will usually consist of severely constraining the runaway flexibility of the data structure being managed, simply because doing so won't badly affect real-world use cases. Once in a while the strategy will exact a serious and real price anyway, but hey, you can't please everybody.
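A common shape for that constraint (a hedged sketch with invented names): flatten the list of lists into one flat array plus an offsets table, trading cheap per-row growth for compactness and a single allocation lifetime:

    using System;

    int[] items   = { 1, 2, 3, 4, 5, 6 };
    int[] offsets = { 0, 2, 2, 6 }; // row 0: {1,2}, row 1: {}, row 2: {3,4,5,6}

    // Row i lives at items[offsets[i] .. offsets[i + 1]); reading it
    // allocates nothing.
    ReadOnlySpan<int> Row(int i)
        => items.AsSpan(offsets[i], offsets[i + 1] - offsets[i]);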
>Yep, Bob uses i++ instead of ++i everywhere. Meh, what’s the difference.
I know what the difference is, but if you're just using it on its own line or in a for loop, why would you use `++i` instead of `i++`?
Roughly, they desugar like this:

    // i++: yield the old value, then increment
    var temp = i;
    i = i + 1;   // the expression's value is temp
    // ++i: increment, then yield the new value
    i = i + 1;   // the expression's value is i
See an example of actually implementing them:
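Since C# is the language at hand, a hedged sketch (Counter is an invented type). C# only lets you define a single ++ operator; the compiler synthesizes both call forms from it:

    struct Counter
    {
        public int Value;

        // One definition; the compiler generates both the prefix and
        // postfix uses from it.
        public static Counter operator ++(Counter c)
            => new Counter { Value = c.Value + 1 };
    }

    // var a = c++;  // a gets the old c, then c becomes operator++(c)
    // var b = ++c;  // c becomes operator++(c) first; b gets the new value

In C++, by contrast, you implement the two forms separately, and the canonical postfix form has to copy the old value into a temporary before incrementing - the usual "objective reason" to default to ++i there.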
For some inexplicable reason the postfix operators are usually taught and learnt first, and thus considered more natural, even though there are more objective reasons for the opposite, at least in C++ world.
The (unary) postfix operators might have semantics that are more surprising than the prefix variants, but the postfix syntax itself feels a bit more natural and fitting with other syntax of the Algol language family, IMHO, when compared to the prefix syntax. I think that's what leads people to introduce it first.
You can sort of read `i++` as an "abbreviation" of `i += 1`—in both cases, you've got an operator that follows an lvalue variable, and which both reads from and mutates the memory referred to by that variable. The only difference is that one has an additional "argument"—the amount to mutate the lvalue by—while the other has a "default" argument of +1.
`++i`, on the other hand, is a surprising bit of syntax, the first time you see it. It takes (and mutates) an lvalue... on its right! None of the (few) other unary prefix operators in Algol-like languages take lvalues, so it's kind of a shock.
(Also, for newcomers to programming, unary - [numeric negation] is commonly interpreted as part of the grammar of a number literal rather than being an expression; and unary ! is commonly seen as just "part of the language" (i.e. less like & and |; more like && and ||). Unary ~ is the only one I'd expect it would occur to any new programmer as even being a "unary prefix operator" rather than just "the way the language is." And it's a pure-functional transformation of its input.)
But postfix operators have a simpler, more intuitive presentation.
++i and --i resemble + +i and - -i,
Whereas i++ and i-- resemble abbreviations of i += 1 and i -= 1
That's my question. What are those reasons?
C does not even have a standard (dynamically-sized) list concept built in, because that would amount to committing oneself to a default heap allocation/deallocation policy. All you can get is a contiguous range of bytes with some vague, default interpretation as to what it is supposed to be, through the use of malloc/free (possibly hacking it through realloc). That is why C is considered low level.
Still, in a true low-level solution, you would use the sbrk system call directly. So, in a sense, C may already "add too much value" to be considered truly low level.
For example, here, complete access to the linux/bsd/osx system call interface, including sbrk, from lua: https://github.com/justincormack/ljsyscall
Just load libffi (https://sourceware.org/libffi) or a similar module in any arbitrary language, and off you go!
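The .NET analogue is P/Invoke rather than libffi, but the point stands. A hedged sketch calling sbrk straight from C# on Linux (sbrk is deprecated but still exported by glibc; sbrk(0) just reports the current program break):

    using System;
    using System.Runtime.InteropServices;

    class Brk
    {
        // Straight into libc: sbrk(0) returns the current program break
        // without moving it.
        [DllImport("libc")]
        static extern IntPtr sbrk(IntPtr increment);

        static void Main()
            => Console.WriteLine($"program break: 0x{(long)sbrk(IntPtr.Zero):X}");
    }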
C isn't a low-level language
Astrobe is still in business.