(And yes, its name is truly unfortunate. It's hard to have a conversation about the language, especially in person, without giggling over the name. Pronouncing it "coke" helps, but it's still the elephant in the room.)
(Also, I'm not sure whether the fact that this PDF is titled "coqasm" is a pun or an amusing coincidence.)
It truly is a shame what a horrid reputation object-oriented programming has received, even amongst a lot of people who supposedly should know better.
OO and functional programming are not opposites in any way, nor should they be.
I don't know through exactly what circumstances we ultimately got the quasi-procedural Nygaard-style OO of today, which is indeed clunky and arcane, as opposed to the style of OO rooted in message passing that Alan Kay and the Smalltalk family espoused.
Huge advances in JIT compilation and language VMs can be traced to the work of the Self language designers (Self itself being a descendant of Smalltalk). The sheer elegance and minimalism of Smalltalk syntax is likely unmatched, with the plausible exception of Forth. The power and dynamic introspection capabilities of the image-based Smalltalk environments are, to this day, the subject of many poor reimplementations of various subsets. One could even make the case that languages like Erlang are object-oriented to a degree.
Yet it seems OO research has stagnated, and FP has instead become the master paradigm everyone wants to focus on. There's also a lot of talk in these circles about "programming language as operating system", but no one has yet come closer to replicating it than the Smalltalk environments.
Furthermore, the types required are a bit more general than OO, so once you've introduced the former it doesn't make sense to constrain your landscape of thought to the latter.
What assumptions are you referring to?
Depends what you mean by "OO".
If you mean mutable + stateful programming with objects as an abstraction mechanism (which is what people usually mean by "OO"), then yes, it is in some ways opposite to pure FP.
OO to FP is kind of like Turing Machines to the Lambda Calculus. They both get the job done, but one is kind of a hack and one is very elegant.
IIRC I've heard from knowledgeable sources that it's quite intentional, a joke at the expense of les anglo-saxons.
Judging by the logo, it's quite intentional.
language -> Coq -> prove properties
As another example of this, Sandrine Blazy and Xavier Leroy have a paper that formalizes a subset of C in Coq:
There's also some discussion on StackOverflow about the definition of what a certified program is:
In any case, formalizing a subset of the x86 architecture in Coq means that we can prove properties about a program written in assembler using Coq, which is both interesting and impressive.
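For intuition about what "formalizing an architecture" buys you, here is a deliberately tiny sketch in Python rather than Coq (all instruction and register names simplified; this is not the paper's model): give a few x86-ish instructions a formal semantics as a function over machine states, then state a property of a small program. Coq can prove such a property for all 2^32 inputs; this toy can only check it over a finite range.

```python
# A Python sketch (not Coq!): instructions get a semantics as a state
# transformer, and a "theorem" about a program becomes a claim about run().

def step(state, instr):
    """Execute one instruction against a register file (a plain dict)."""
    op = instr[0]
    if op == "MOV":                                  # MOV dst, imm
        state[instr[1]] = instr[2]
    elif op == "ADD":                                # ADD dst, src (32-bit wrap)
        state[instr[1]] = (state[instr[1]] + state[instr[2]]) % 2**32
    elif op == "INC":                                # INC dst
        state[instr[1]] = (state[instr[1]] + 1) % 2**32
    return state

def run(program, state):
    for instr in program:
        state = step(state, instr)
    return state

# A two-instruction program: eax := 5; eax := eax + ebx
prog = [("MOV", "eax", 5), ("ADD", "eax", "ebx")]

# The "theorem": afterwards, eax = ebx + 5 (mod 2^32). Checked, not proved:
assert all(run(prog, {"ebx": n})["eax"] == (n + 5) % 2**32 for n in range(1000))
```

The gap between the `assert all(... range(1000))` here and a universally quantified Coq theorem is exactly the gap the paper closes.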
Hell, it makes me want to open the IDE and prove something right now.
It makes me want to do a little bootloader development too... in Coq!
It might be fun to make a tiny 16-bit real-mode OS with this, too. You could head towards your own miniature version of seL4 or something.
Oops! I forgot that it couldn't produce 16-bit machine code. Still, build a tiny bootloader that grabs the next sectors from the boot medium and then jumps into protected mode (pretty easy, really), and you're good to go!
This is a really interesting and thought-provoking idea. In writing httpdito, and in particular in putting a sort of tiny peephole optimizer into it as gas macros, it occurred to me that (some) Forth systems are awfully close to being macro assemblers, and that really the only reason you’d use a higher-level language rather than a macro assembler is that error handling in macro assemblers, and in Forth, is terrible verging on nonexistent. Dynamically-typed languages give you high assurance that your program won’t crash, through run-time checks (writing the Ur-Scheme compiler very rarely required me to debug generated machine code), while strongly-statically-typed languages give you similar assurances through compile-time checks; but it seems clear that the programmer also needs to be able to verify that their program is free of bugs like memory leaks, infinite loops, SQL injection, XSS, and CSRF, or for that matter “billion-laughs”-like denial-of-service vulnerabilities, to say nothing of more prosaic problems like the Flexcoin bankruptcy due to using a non-transactional data store.
Against this background, we find that a huge fraction of our day-to-day software (X11, Firefox, Emacs, Apache, the kernel, and notoriously OpenSSL) is written in languages like C and C++ that lack even these rudimentary memory-safety properties, let alone safety against the trickier bogeymen mentioned above. The benefit of C++ is that it allows us to factor considerations like XSS and loops into finite, checkable modules; the benefit of C is that we can tell pretty much what the machine is going to do. But, as compiler optimizations exploit increasingly recondite properties of the programming language definition, we find ourselves having to program as if the compiler were our ex-wife’s or ex-husband’s divorce lawyer, lest it introduce security bugs into our kernels, as happened with FreeBSD a couple of years back with a function erroneously annotated as noreturn, and as is happening now with bounds checks depending on signed overflow behavior.
We could characterize the first of these two approaches as “performing on the trapeze with a net”: successful execution is pleasant, if unexpected, but we expect that even a failure will not crash our program with a segfault — bankrupt Flexcoin, perhaps, and permit CSRF and XSS, but at least we don’t have to debug a core dump! The second approach involves performing without a net, and so the tricks attempted are necessarily less ambitious; we rely on compiler warnings and errors, careful program inspection, and testing to uncover any defects, with results such as Heartbleed, the Toyota accelerator failure, the THERAC-25 fatalities, the destruction of Ariane 5, and the Mars Pathfinder priority-inversion lockups.
So onto this dismal scene sweep Kennedy, Benton, Jensen, and Dagand with a novel approach: rather than merely factoring our loops and HTML-escaping into algorithm templates and type-conversion operators, thus giving us the opportunity to get them right once and for all (but no way to tell whether we have done so), we can instead program in a language rich enough to express our correctness constraints, our compiler optimizations (§4.1), and the proofs of correctness of those optimizations. Instead of merely packaging up algorithms (which we believe to be correct after careful inspection) into libraries, we can package up correctness criteria and proof tactics into libraries.
This is a really interesting and novel alternative to the with-a-net and the without-a-net approaches described earlier; it goes far beyond previous efforts to prove programs correct; rather than attempting to prove your program correct before you compile it, or looking for bugs in the object code emitted, it attempts, Forth-like, to replace the ecosystem of compilers and interpreters with a set of libraries for your proof assistant — libraries which could eventually allow you to program at as high a level as you wish, but grounded in machine code and machine-checkable proofs. Will it turn out to be practical? I don’t know.
(End quote from lobste.rs comment.)
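The tiny peephole optimizer the quoted comment mentions can be sketched in a few lines of Python (illustrative only; httpdito's real optimizer is written as gas macros, and the opcode names here are generic, not httpdito's): scan the instruction list through a two-instruction window, rewriting known-redundant patterns until no rule fires.

```python
def peephole(insns):
    """Repeatedly delete redundant patterns until a fixpoint is reached.
    Instructions are tuples like ("push", "eax") or ("mov", dst, src)."""
    changed = True
    while changed:
        changed = False
        out, i = [], 0
        while i < len(insns):
            w = insns[i:i + 2]
            # push X ; pop X  =>  drop both (value round-trips the stack)
            if len(w) == 2 and w[0][0] == "push" and w[1] == ("pop", w[0][1]):
                i += 2
                changed = True
                continue
            # mov R, R  =>  drop (no-op move)
            if w[0][0] == "mov" and w[0][1] == w[0][2]:
                i += 1
                changed = True
                continue
            out.append(w[0])
            i += 1
        insns = out
    return insns

assert peephole([("push", "eax"), ("pop", "eax"),
                 ("mov", "ebx", "ebx"),
                 ("mov", "eax", "ecx")]) == [("mov", "eax", "ecx")]
```

Note what's missing: nothing here checks that a rewrite preserves meaning. Shipping each rule together with a machine-checked proof that it does is precisely what the paper's approach would allow.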
Also, WHAT IN THE LIVING FUCK IS WRONG WITH YOU PEOPLE THAT THE ONLY THING YOU HAVE TO SAY ABOUT THIS PAPER IS ABOUT WHETHER YOU LIKE COCK. HAVE YOU EVEN READ THE FUCKING PAPER. JESUS FUCKING CHRIST WITH A SHOTGUN IN A SOMBRERO, PEOPLE. FUCK YOU, YOU DROOLING MONSTERS. MAKE YOUR COCK JOKES BUT SAY SOMETHING SUBSTANTIVE.
Infinite loops, on the other hand...
I just think the only real motivation people have taken up for total languages is their proof-theoretic properties, which drive you quickly to dependent types.
Another reason to use a high-level language is that macro assembly code and Forth programs aren't very portable across machines. I guess you could model a virtual machine in the same way the authors model x86, target the virtual machine instead, then have a verified translation from VM operations to x86 instructions, ARM instructions, etc. I wonder if you could get similar performance to a native compiler with this approach.
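That layering can be made concrete with a toy (Python, every name invented; a verified compiler would carry a once-and-for-all Coq proof of semantic preservation, not the spot checks shown here): run a program on a tiny stack VM, translate each VM op into "x86-like" instructions, and check that both give the same answer.

```python
def run_vm(prog, arg):
    """Stack machine with four ops: PUSH n, ARG, ADD, MUL."""
    stack = []
    for op in prog:
        if op[0] == "PUSH": stack.append(op[1])
        elif op[0] == "ARG": stack.append(arg)
        elif op[0] == "ADD": b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif op[0] == "MUL": b, a = stack.pop(), stack.pop(); stack.append(a * b)
    return stack[-1]

def translate(prog):
    """Map each VM op to x86-style push/pop/arith on eax/ebx."""
    out = []
    for op in prog:
        if op[0] == "PUSH": out += [("push_imm", op[1])]
        elif op[0] == "ARG": out += [("push_arg",)]
        elif op[0] == "ADD": out += [("pop", "ebx"), ("pop", "eax"),
                                     ("add", "eax", "ebx"), ("push_reg", "eax")]
        elif op[0] == "MUL": out += [("pop", "ebx"), ("pop", "eax"),
                                     ("mul", "eax", "ebx"), ("push_reg", "eax")]
    return out

def run_x86(insns, arg):
    """Interpreter for the 'x86-like' target (the second formal model)."""
    regs, stack = {"eax": 0, "ebx": 0}, []
    for i in insns:
        if i[0] == "push_imm": stack.append(i[1])
        elif i[0] == "push_arg": stack.append(arg)
        elif i[0] == "push_reg": stack.append(regs[i[1]])
        elif i[0] == "pop": regs[i[1]] = stack.pop()
        elif i[0] == "add": regs[i[1]] += regs[i[2]]
        elif i[0] == "mul": regs[i[1]] *= regs[i[2]]
    return stack[-1]

# (arg + 1) * 2, checked for agreement across many inputs:
prog = [("ARG",), ("PUSH", 1), ("ADD",), ("PUSH", 2), ("MUL",)]
assert all(run_vm(prog, n) == run_x86(translate(prog), n) for n in range(100))
```

Retargeting then means writing one new `translate` per architecture and re-establishing the same preservation property, while application-level proofs stay at the VM layer.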
Macro assembly code can be portable across machines; that's how e.g. SNOBOL4 was ported to 50 different machines starting in the late 1960s, including even the IBM PC; see http://www.snobol4.org/history.html and https://news.ycombinator.com/item?id=3108946 (a thread here that I posted in, before HN was apparently taken over by drooling cock-obsessed chimpanzees who can't take the time to read EVEN THE ABSTRACT of linked papers, by which I do not mean you, panic) for more details.
In effect, the coqasm approach moves portability, just like safety, from the language into a library. Normal macro assemblers (and, for that matter, C) make it difficult to tell if your application code has avoided processor dependencies. It seems like Coq might actually make it possible to prove that you've done that.
They have some interesting material there on building models of processors' memory-concurrency behavior. Unfortunately, reading the vendors' own manuals is not a good way to find out about that, even if you excuse the verbosity. Memory-consistency behavior is often very vaguely or indirectly specified in the architecture manuals, and worse, behavior of the actual parts does not always reliably agree with the manuals. A good paper: http://www.cl.cam.ac.uk/~pes20/weakmemory/cacm.pdf
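The sort of behavior such models must capture shows up already in the classic store-buffering litmus test. A Python sketch (nothing like the formalisms in the paper above, just an exhaustive enumeration): under sequential consistency the outcome r0 == r1 == 0 never appears, yet real x86 hardware, with its store buffers, can and does produce it, which is exactly the kind of fact the vendor manuals state only vaguely.

```python
from itertools import permutations

# Thread 0: x = 1; r0 = y      Thread 1: y = 1; r1 = x   (x, y start at 0)
T0 = [("store", "x"), ("load", "y", "r0")]
T1 = [("store", "y"), ("load", "x", "r1")]

def outcomes_sc():
    """All (r0, r1) results over every sequentially consistent interleaving."""
    seen = set()
    # Each distinct ordering of the labels 0,0,1,1 is one interleaving that
    # preserves both threads' program order.
    for order in set(permutations([0, 0, 1, 1])):
        mem, regs, pc = {"x": 0, "y": 0}, {}, [0, 0]
        for t in order:
            op = (T0, T1)[t][pc[t]]
            pc[t] += 1
            if op[0] == "store":
                mem[op[1]] = 1
            else:
                regs[op[2]] = mem[op[1]]
        seen.add((regs["r0"], regs["r1"]))
    return seen

# Forbidden under sequential consistency, observable on actual x86 (TSO):
assert (0, 0) not in outcomes_sc()
```

A formal x86-TSO model of the kind the CACM paper describes adds per-CPU store buffers to this picture, after which (0, 0) becomes derivable.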
For most applications C would be low level enough, and possibly a better option.
"Calculus of Inductive Constructions" (https://coq.inria.fr/about-coq)
> Coq is the result of more than 20 years of research. It started in 1984 from an implementation of the Calculus of Constructions at INRIA-Rocquencourt by Thierry Coquand and Gérard Huet. In 1991, Christine Paulin extended it to the Calculus of Inductive Constructions.
Does coq have the same connotation in French, though?
I mean, if someone created a programming language called vagina, labia, penis, or sphincter, it would be ridiculed, because that's not an appropriate name for a language.