From http://www.paulgraham.com/icad.html: "There is no real distinction between read-time, compile-time, and runtime. You can compile or run code while reading, read or run code while compiling, and read or compile code at runtime."
There is a distinction. I can compile code while reading, but while I'm compiling, I'm at compile time.
In a compiled implementation, code needs to be compiled before it gets run. When I want it to be compiled, I have to call functions like COMPILE or COMPILE-FILE.
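For example, in Common Lisp (a sketch; the function and file names here are made up):

```lisp
;; SQUARE may initially be interpreted, depending on the implementation
(defun square (x) (* x x))

;; ask for compilation of this one function, right now
(compile 'square)

;; or compile a whole file ahead of time (hypothetical file name),
;; producing a fasl file to LOAD later
(compile-file "utils.lisp")
```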
There is a conceptual distinction between read time, compile time, and run time, but it's only conceptual: all these things happen in a single process in an interleaved manner. When you type an expression into the REPL, first it gets read, then it gets compiled to an extent that depends on the implementation, then it gets run. Some implementations run every expression you type through the full native-code compiler, but others just interpret the expression, expanding any macros as they encounter them. In the latter case, the system is switching back and forth between "run time" and "compile time" potentially many times during the interpretation of a single expression.
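You can watch this interleaving happen with a macro whose expander has a side effect (a toy sketch; the macro name is made up):

```lisp
;; The FORMAT call runs at macro-expansion time, i.e. while the
;; implementation is compiling (or interpreting) the enclosing form.
(defmacro noisy-1+ (x)
  (format t "~&expanding NOISY-1+~%")
  `(1+ ,x))

(noisy-1+ 41)
;; "expanding NOISY-1+" prints while the form is being processed;
;; then the expansion (1+ 41) runs at run time and returns 42
```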
It is certainly true that macros are a different kind of code from most ordinary code, because what they're doing is different. But importantly, they are written in the same Turing-complete language. I once wrote a program (a C implementation for Lisp Machines) that allowed the user to interactively execute C expressions. Obviously, it had to parse the C code according to the syntax of that language, but the parser produced list structure that consisted entirely of macro calls. The guts of the compiler were implemented as a collection of macros that translated the output of the parser into Lisp Machine Lisp. These macros did things like accessing and updating a symbol table — far beyond what you could do in a C macro.
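A hypothetical miniature of that idea (not the original program): the parser emits forms like (C-DECL INT X) and (C-ASSIGN X 3), and macros translate them to Lisp, maintaining a symbol table at expansion time:

```lisp
;; Compile-time symbol table, consulted and updated by the macros below
(defvar *c-symtab* (make-hash-table))

(defmacro c-decl (type name)
  (setf (gethash name *c-symtab*) type)   ; record the declaration at expansion time
  `(defvar ,name 0))

(defmacro c-assign (name value)
  (unless (gethash name *c-symtab*)       ; reject undeclared names at expansion time
    (error "undeclared C variable: ~S" name))
  `(setf ,name ,value))
```

This is exactly the kind of thing a C preprocessor macro cannot do: the expander is ordinary Lisp code with access to arbitrary state.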
I can compile a file to machine code and load it into another process. The file compiler provides the 'compile-time'. The other process then provides 'load-time' and 'run-time'.
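Sketched with hypothetical file and function names:

```lisp
;; Process A (compile time): the file compiler writes a fasl file
(compile-file "mylib.lisp")   ; => mylib.fasl (the extension varies by implementation)

;; Process B (load time, then run time):
(load "mylib.fasl")           ; top-level forms execute at load time
(my-entry-point)              ; ordinary run time
```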
If anyone's still reading, I can't resist adding that I was doing type checking by macro expansion — a technique that was the subject of a 2017 POPL paper [0] — in 1983.
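A sketch of the flavor of that technique (not the 1983 code): a macro that rejects ill-typed arguments while expanding, so the error surfaces at compile time rather than run time:

```lisp
(defmacro checked-+ (a b)
  ;; Reject literal non-number arguments (e.g. strings) at expansion
  ;; time; variables and subexpressions pass through unchecked in this toy.
  (dolist (arg (list a b))
    (when (and (atom arg) (not (symbolp arg)) (not (numberp arg)))
      (error "type error at expansion time: ~S is not a number" arg)))
  `(+ ,a ,b))

;; (checked-+ 1 "two") signals the error while being compiled,
;; before any of the expression has run
```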
There are also systems like Forths, where the compiler and interpreter really are the same system, in two different modes of operation. In these systems, there is only runtime; compile time is a specific style of runtime.
In Forth, the interpreter reads tokens. By default, each token is read and then executed. There is a token that switches the interpreter into "compiling" mode. When in compiling mode, each token is read and then used to instruct the compiler to emit code representing the token. There is a traditional and beautiful interplay [0] between the Forth interpreter and its initial stream of tokens, as the tokens customize the interpreter and compiler by augmenting their behaviors.
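A minimal illustration in standard Forth (the words used here are all standard):

```forth
: SQUARE  DUP * ;        \ ':' switches to compiling mode; ';' switches back
25 SQUARE .              \ interpreted token by token; prints 625

\ An IMMEDIATE word executes even while the system is compiling,
\ which is how programs extend the compiler itself:
: SHOUT  ." compiling FOO now " ; IMMEDIATE
: FOO  SHOUT  1 2 + . ;  \ SHOUT runs here, during the compilation of FOO
FOO                      \ the compiled body runs later; prints 3
```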
The main contrast that I would draw between Forths and Lisps is syntax. Forths don't really have syntax; they have token-parsing streams. Lisps are extremely tree-oriented, but Forths are stack-oriented.
Forth has two singleton stacks (the data stack and the return stack), and each has its own small DSL of manipulation words. Sometimes I wonder what Forth would be like with stacks as a first-class type. Maybe actors each with their own stack …
That is from one of the founders of Y Combinator, the author of http://www.paulgraham.com/onlisptext.html and http://www.paulgraham.com/acl.html.