The first C "interpreters" I know of were for Lisp machines: Symbolics' C compiler (http://www.bitsavers.org/pdf/symbolics/software/genera_8/Use...) and Scott Burson's (HN user ScottBurson) ZetaC for TI Explorers/LMIs and Symbolics 3600s (now in the public domain: http://www.bitsavers.org/bits/TI/Explorer/zeta-c/). Neither of them is actually an interpreter; they are "interactive" compilers, just like Lisp compilers are.
I am writing a C to Common Lisp translator right now (https://github.com/vsedach/Vacietis). This is surprisingly easy because C is largely a subset of Common Lisp. Pointers are trivial to implement with closures (Oleg explains how: http://okmij.org/ftp/Scheme/pointer-as-closure.txt but I discovered the technique independently around 2004). The only problem is how to deal with casting arrays of integers (or whatever) to arrays of bytes. But that's a problem for portable C software anyway. I think I'll also need a little source-fudging magic for setjmp/longjmp. Otherwise, the project is now at the point where you can compile-file/load a C file just like you do a Lisp file, by setting the readtable. There are a few things I need to finish with #includes, enums, stdlib, and the variable-length struct hack, but that should be done in the next few weeks.
As to how to do this for C++, don't ask me. According to the CERN people, CINT has "slightly less than 400,000 lines of code." (http://root.cern.ch/drupal/content/cint). What a joke.
What I can't wrap my head around is how one would implement pointer arithmetic with these closures. C pointers are not just references to cells: those cells are guaranteed to be contiguous (up to a certain limit, be it a VM allocation unit, say a "page", or all available memory on VM-less systems).
That is to say, C pointers are not like ML references. Along with SET and REF they also allow addition, subtraction, scaling, etc.
For closures to model C pointers, wouldn't they need to order the allocation of cells in some manner, say in a big array? And if so, this could get expensive very quickly (the worst case being "modeling" of the entire memory, i.e. emulation) without a certifying compiler or at least some exhaustive pointer analysis.
Hope I'm wrong on this.
  (defun allocate-memory (size) ;; shared by malloc and static allocation
    (make-memptr :mem (make-array size :adjustable t :initial-element 0)))

  (defmacro vacietis.c:mkptr& (place) ;; need to deal w/function pointers
    (let ((new-value (gensym))
          (suppliedp (gensym)))
      `(make-place-ptr :closure (lambda (&optional (,new-value nil ,suppliedp))
                                  (if ,suppliedp
                                      (setf ,place ,new-value)
                                      ,place)))))

  (defun vacietis.c:deref* (ptr)
    (etypecase ptr
      (memptr    (aref (memptr-mem ptr) (memptr-ptr ptr)))
      (place-ptr (funcall (place-ptr-closure ptr)))))

  (defun (setf vacietis.c:deref*) (new-value ptr)
    (etypecase ptr
      (memptr    (setf (aref (memptr-mem ptr) (memptr-ptr ptr)) new-value))
      (place-ptr (funcall (place-ptr-closure ptr) new-value))))
  (defmethod vacietis.c:+ ((x number) (y number))
    (+ x y))

  (defmethod vacietis.c:+ ((ptr memptr) (x integer))
    (make-memptr :mem (memptr-mem ptr) :ptr (+ x (memptr-ptr ptr))))

  (defmethod vacietis.c:- ((ptr1 memptr) (ptr2 memptr))
    (assert (eq (memptr-mem ptr1) (memptr-mem ptr2)) ()
            "Trying to subtract pointers from two different memory segments")
    (make-memptr :mem (memptr-mem ptr1) :ptr (- (memptr-ptr ptr1) (memptr-ptr ptr2))))
This leads me to the following statement, based on what you said: interesting real-world C programs make use of undefined behavior.
I assume you've worked with a significant amount of real-world C code? Because surprisingly often, they do. The difficulty in porting many programs to 64-bit, for instance, is due to reliance on implementation-defined behavior.
Only embedded systems (AVR & PIC24). I have much more experience in C++, which I've used for both Desktop apps and telco server components.
The beauty of C (over C++) is that the standard is actually readable. C++ especially is a quagmire of undefined behavior. The scary thing about C and C++ is that it's easy to hit undefined (or, at least, as you state, implementation-defined) behavior and not even realize it. Often the code looks valid and does what it looks like it does, yet is actually undefined or implementation-defined and will break elsewhere.
With that said, while I don't expect everyone to have memorized the standard, I do hope most would have at least enough familiarity to avoid most cases of undefined behavior.
Could you give an example of a case where you hit undefined behavior? I can hardly recall a case where that bit me in the past. (I'm mostly working on embedded systems (PPC & ARM).)
The cases that come to my mind for C++ all involve initialization...
behaviour, such as might arise upon use of an erroneous program construct or erroneous data, for which this International Standard imposes no requirements. Undefined behaviour may also be expected when this International Standard omits the description of any explicit definition of behaviour.
The most commonly cited piece of undefined behavior is modifying a variable twice between consecutive sequence points. The standard says:
Between the previous and next sequence point a scalar object shall have its stored value modified at most once by the evaluation of an expression.
a = b++ * ++b;
For more real world examples of where undefined behavior may bite you in the ass in C++, take a look at Washu's simple C++ quiz. It's only four questions: http://www.scapecode.com/2011/05/a-simple-c-quiz/
Take a moment to answer the questions before looking at the answers.
Once you've done that, here are three more quizzes by the same guy - these ones are about OOP in C++, so may be much more relevant to your question: http://www.scapecode.com/2011/05/c-quiz-2/ and http://www.scapecode.com/2011/05/c-quiz-3/ and http://www.scapecode.com/2011/05/c-quiz-4/
Byte-wise access to objects is legal and completely portable between conforming implementations. What isn't portable are arbitrary type conversions through pointer casts, as these violate the effective typing rules. Such casts may break in practice due to misalignment, or because of aliasing behind the optimizer's back.
In a way, C is a strongly typed language - the type system is just really unsound.
That makes absolutely no sense. Just think about endianness, for example. Type conversions in C are extremely tricky, and in many cases are not guaranteed to be portable across different compilers even on the same architecture. There is a good explanation of what you can and cannot count on in chapter 6 of Harbison and Steele's C: A Reference Manual.
While the values of the bytes are not specified, the ability to get at them is, and a conforming implementation needs to provide this ability.