I can certainly implement Scheme using Pascal. That is, I can write a Pascal program that, when given a program written in Scheme, executes the Scheme program according to the Scheme spec.
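A hosted interpreter of that kind is easy to sketch. Here is a toy one in Python rather than Pascal, for brevity; the supported subset (integers, symbols, `+`, `*`, `if`, `lambda`) and all the function names are invented for illustration, not taken from any real implementation:

```python
# A toy evaluator for a tiny Scheme-like subset: numbers, symbols,
# (+ ...), (* ...), (if c t e), and (lambda (args) body).
def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # drop the closing ")"
        return expr
    try:
        return int(tok)
    except ValueError:
        return tok  # a symbol

def evaluate(expr, env):
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):
        return env[expr]  # symbol lookup
    head, *args = expr
    if head == "if":
        cond, then, alt = args
        return evaluate(then if evaluate(cond, env) else alt, env)
    if head == "lambda":
        params, body = args
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    fn = evaluate(head, env)
    return fn(*[evaluate(a, env) for a in args])

env = {"+": lambda *xs: sum(xs), "*": lambda a, b: a * b}
program = "((lambda (x) (+ x (* x 2))) 5)"
print(evaluate(parse(tokenize(program)), env))  # prints 15
```

The host program "executes the Scheme program", but note that every operation goes through `evaluate` at runtime, which is exactly the cost the rest of this thread argues about.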
You can implement an interpreter or a compiler. But it won't turn your Pascal into Scheme.
> That is, I can write a Pascal program that, when given a program written in Scheme, executes the Scheme program according to the Scheme spec.
It's irrelevant. Can you mix a bit of Scheme code into a definition of a Pascal procedure, with all the local identifiers transparently available? No. But it's trivial the other way around.
A simple experiment for you: imagine you've got a system scriptable in Pascal, and you want to write your scripts in Scheme instead. Your actions? Implement a slo-o-ow and broken Scheme interpreter in that Pascal, right? And then think hard about how to do interop between Scheme and all of the stuff already available for that Pascal. Funny and stupid.
Now, the other way around: you've got an embedded Scheme, but you hate all those parentheses and want to just code in Pascal. Fine. Write a little macro that transparently translates your Pascal into Scheme. No runtime cost whatsoever, and all the interop done for free. See the difference now?
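The expansion step of such a translating macro can be sketched outside Scheme too. This toy Python function (the grammar, covering only `+`, `*`, identifiers, integers, and parentheses, is invented for illustration) rewrites infix surface syntax into the prefix form a Scheme host would consume; the rewriting happens once, before execution, which is where the "no runtime cost" claim comes from:

```python
import re

# Toy sketch of a translation pass: rewrite infix "a + b * 2"
# into the prefix "(+ a (* b 2))" a Scheme host would consume.
def to_prefix(src):
    tokens = re.findall(r"\d+|\w+|[+*()]", src)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def take():
        nonlocal pos
        pos += 1
        return tokens[pos - 1]

    def atom():
        if peek() == "(":
            take()           # consume "("
            e = add()
            take()           # consume ")"
            return e
        return take()        # identifier or number

    def mul():               # "*" binds tighter than "+"
        e = atom()
        while peek() == "*":
            take()
            e = f"(* {e} {atom()})"
        return e

    def add():
        e = mul()
        while peek() == "+":
            take()
            e = f"(+ {e} {mul()})"
        return e

    return add()

print(to_prefix("a + b * 2"))    # (+ a (* b 2))
print(to_prefix("(a + b) * 2"))  # (* (+ a b) 2)
```

A real macro would emit host syntax objects rather than strings, so identifiers like `a` and `b` would resolve against the surrounding Scheme scope, which is the "interop for free" part of the argument.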
It's possible, for instance, to write a program in C which compiles Scheme-formatted strings according to the Scheme spec. You probably use one: whichever Scheme compiler you use is, by this logic, just such a program (thereby making your Scheme programs a "subset" of a C program). What's more, any Scheme program can be replaced by one written in C that runs faster and with less overhead.
Does this mean that C is "better by definition" than Scheme? No, both have their places. But any Turing-complete language with string handling is a "subset" of any other; that is, an interpreter or compiler for it can be written in any other language, so long as that language is Turing-complete and can parse strings. If we look at which languages are "supersets" in practice, then by your theory C and C++ are the best languages ever, since many, many compilers and interpreters are written in them, while Scheme is a terrible language, because compilers and interpreters are almost never written in it, even in most implementations of Scheme itself.
Your poor understanding of these pretty basic theoretical concepts and your silly fanboyism are why you're being downvoted.
Again... A compiler or an interpreter written in C is not the same as implementing a language on top of C, fusing it into all the existing infrastructure.
You did not understand what static metaprogramming is or what an extensible language is, and yet you're talking about my poor understanding.
And, by the way, I fixed an inferior C. Any inferior language can be made superior by adding a tiny bit of compile-time Turing-completeness and a bit of compile-time reflection: https://github.com/combinatorylogic/clike
Made it as powerful as Scheme, Forth, Nemerle, TH, C++ and the other proper meta-languages.
Not very - Pascal has records, pointers, arrays, etc. You can do it, but you're writing a Pascal compiler and runtime system as Scheme macros. I don't find that a practical application of macros - even though macros are powerful and let you do things in a Lisp that are painful in other languages.
I bowed out of this conversation because the poster was talking in absolutes, and if you're willing to go to the extreme of implementing a Pascal compiler and runtime system in Scheme macros, then I think other extremes are on the table that can give similar functionality. But I didn't see such nuance getting across.
Pascal is an extreme indeed: it is a large language and, as such, impractical. What is practical is embedding an imperative language with pointers and unmanaged memory. And the practical value of such a language inside a meta-host is mostly as a building block for DSLs, not as something that end users would face directly.
It's really hard to convey the gigantic difference in expressive power between the meta-languages and the primitive ones to those who have never even been exposed to the higher-level methodologies, who have never built elaborate DSLs.
And no, you're wrong in assuming that there are "other extremes" providing a comparable expressive power at no runtime costs. There are none, provably. You mentioned implementing compilers or interpreters in any Turing-complete language - but it won't solve the interoperability issue between the host, the implemented new language and any languages you'd build on top, which may need to borrow semantic building blocks from your new language. As soon as you start to address these concerns, you'll end up turning your host into a proper meta-language (as I did with C, for example).
Pascal was your original example upthread, tho:
"Now, the other way around: you've got an embedded Scheme, but you hate all those parentheses and want to just code in Pascal. Fine. Write a little macro that transparently translates your Pascal into Scheme. No runtime cost whatsoever, and all the interop done for free. See the difference now?"
Yes, it was my example - exactly because there are multiple implementations of this very thing; this thread is littered with links to them. My last remark was that full, 100% standard-compliant Pascal, although impressive, is not very practical, and a lower-level language is what is usually designed for this purpose.
OK, so I'm looking at https://github.com/soegaard/minipascal right now, and it looks like there are a couple thousand lines of Racket code that translate a very limited (no records, no reals, and apparently no pointers) Pascal-like language.
So presumably, for this to be exciting, it would have to be executing in a larger environment where it's surrounded by Racket code doing its own thing. But now I don't see what's so special about that, and why that's different from, say, a Python script that takes a string of fake-Pascal, turns it into Python through an internal processor, and then calls eval.
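That Python scenario is easy to make concrete. Here is a toy sketch of exactly it (the fake-Pascal syntax handled, and the name `pascal_to_host`, are invented for illustration): a string of `begin ... end` with `:=` assignments is translated into Python source once and then executed against the host's own bindings:

```python
# Toy "fake-Pascal to Python" processor: translate the string once,
# then run the result as ordinary host code in a shared namespace.
def pascal_to_host(src):
    body = src.strip()
    assert body.startswith("begin") and body.endswith("end")
    stmts = body[len("begin"):-len("end")].split(";")
    lines = []
    for stmt in stmts:
        stmt = stmt.strip()
        if not stmt:
            continue
        lhs, rhs = stmt.split(":=")          # "x := e"  ->  "x = e"
        lines.append(f"{lhs.strip()} = {rhs.strip()}")
    return "\n".join(lines)

ns = {"offset": 100}                          # existing host bindings
code = pascal_to_host("begin x := offset + 1; y := x * 2 end")
exec(code, ns)                                # runs at full host speed
print(ns["x"], ns["y"])                       # prints: 101 202
```

Note that the translated code sees the host binding `offset` directly and runs at host speed, which is the interop and zero-runtime-cost property the macro argument upthread turns on; the open question in the thread is whether doing this with strings and `eval` is equivalent to doing it hygienically with macros.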
The value is in mixing the semantic properties of many (even dozens of) languages in a single environment. Building the first fundamental blocks may take time, for they are fundamental for a reason, but then adding new things on top is trivial.
So, this is essential for building rich DSLs. See my framework as an extreme example of such an approach.