I am writing a C to Common Lisp translator right now (https://github.com/vsedach/Vacietis). This is surprisingly easy, because C is largely a small subset of Common Lisp. Pointers are trivial to implement with closures (Oleg explains how: http://okmij.org/ftp/Scheme/pointer-as-closure.txt, but I discovered the technique independently around 2004). The only problem is how to deal with casting arrays of integers (or whatever) to arrays of bytes, but that's a problem for portable C software anyway. I think I'll also need a little source-fudging magic for setjmp/longjmp. Otherwise, the project is now at the point where you can compile-file/load a C file just like you do a Lisp file, by setting the readtable. There are a few things I need to finish with #includes, enums, stdlib, and the variable-length struct hack, but that should be done in the next few weeks.
What I can't wrap my head around is how one would implement pointer arithmetic with these closures. C pointers are not just references to cells: those cells are guaranteed to be contiguous (up to a certain limit, be it a VM allocation unit, say a "page", or all available memory on VM-less systems).
That is to say, C pointers are not like ML references. Along with SET and REF, they also allow addition, subtraction, scaling, etc.
For closures to model C pointers, wouldn't they need to order the allocation of cells in some manner (say, a big array)? And if so, this could get expensive very quickly (the worst case being "modeling" of the entire memory, i.e. emulation) without a certifying compiler or at least some exhaustive pointer analysis.
As you can see, I don't do anything with type declarations, so arithmetic performance is going to be terrible. Multiple levels of pointer indirection work correctly with pointer arithmetic, since all pointers that can legally be added/subtracted are first-class objects.
The C standard basically only guarantees that pointer arithmetic works when the pointers involved all point to the same array object (it also allows for a pointer just off the end of an array object). Other pointer arithmetic or comparison is undefined behavior and an implementation can do whatever it pleases.
> interesting Real World C programs make use of undefined behavior
I assume you've worked with a significant amount of real world C programs? Because they surprisingly often do. The difficulty in porting many programs to 64-bit, for instance, comes from relying on implementation-defined behavior.
> I assume you've worked with a significant amount of real world C programs?
Only embedded systems (AVR & PIC24). I have much more experience in C++, which I've used for both Desktop apps and telco server components.
The beauty of C (over C++) is that the standard is actually readable. C++ especially is a quagmire of undefined behavior. The scary thing about C/C++ is that it's easy to hit undefined (or, at least, as you state, implementation-defined) behavior and not even realize it. Often the code looks valid, does what it looks like it does, yet is actually undefined or implementation-defined and will break elsewhere.
With that said, while I don't expect everyone to have memorized the standard, I do hope most would have at least enough familiarity to avoid most cases of undefined behavior.
Before I give you an example, I will define the term undefined behavior by quoting the standard. The wording below is §1.3.12 of the C++ standard; the C standard's definition (§3.4.3 in C99) is essentially identical, and the C standard has the virtue of being far shorter and more readable than the horribly ginormous C++ one:
> behaviour, such as might arise upon use of an erroneous program construct or erroneous data, for which this International Standard imposes no requirements. Undefined behaviour may also be expected when this International Standard omits the description of any explicit definition of behaviour.
The reason undefined behavior is dangerous is that the standard does not guarantee any particular behavior; the implementation is free to do whatever it wants: ignore it, give an error message, delete everything on your hard drive... whatever.
The most commonly cited piece of undefined behavior is modifying a variable twice between consecutive sequence points. The standard says:
> Between the previous and next sequence point a scalar object shall have its stored value modified at most once by the evaluation of an expression.
Emscripten in no way emulates LLVM (as far as I know, it's the only C-to-JS compiler that uses LLVM, and hence must be what you're referring to!). It compiles LLVM bitcode to JS, with no emulation, and there's currently work going on to re-implement it as an LLVM backend (it is currently an entirely separate codebase that takes LLVM bitcode in and spits JS out).
> The only problem is how to deal with casting arrays of integers (or whatever) to arrays of bytes. But that's a problem for portable C software anyway.
Byte-wise access to objects is legal and completely portable between conforming implementations. What isn't portable is arbitrary type conversion through pointer casts, as these violate the effective-typing rules. Such casts may break in practice due to misalignment, or because of aliasing behind the optimizer's back.
In a way, C is a strongly typed language - the type system is just really unsound.
> Byte-wise access to objects is legal and completely portable between conforming implementations.
That makes absolutely no sense. Just think about endianness, for example. Type conversions in C are extremely tricky, and in many cases are not guaranteed to be portable across different compilers, even on the same architecture. There is a good explanation of what you can and cannot count on in chapter 6 of Harbison and Steele's "C: A Reference Manual".