The underlying idea here is a contract system, first pioneered by Bertrand Meyer in the programming language Eiffel and then augmented with the ability to describe higher-order functions by Robby Findler. PLT also strongly advocated the use of blame with contracts -- the ability to blame the responsible party (function) when a contract fails. My Master's thesis under Robby Findler was on guided random testing using higher-order contracts.
I should note that an ordinary contract system does not delete your code...
I rather enjoyed reading about the language, but was a bit disappointed that they made no attempt at static analysis, especially with such a rich set of assertions that the language enforces. I'd really like to see my entire program get deleted at compile-time rather than having to run it a bunch of times to delete all the faulty nested function calls one at a time.
Wow, I'm seriously considering adapting this for my shell scripts (with code recovery via mercurial, since I'm weak of will...). Actually, I bet there are already others doing the same, because code deletion in personal shell scripts probably turns out to be an unalloyed good.
Consider this: you've got some script that does a sequence of operations using the standard tools. Somewhere you switch between imagemagick and graphicsmagick and some image doesn't get made thanks to syntax differences; after that you use some clever shell filename-expansion pattern or regex to move/delete files, but the missing image has changed your working context enough that suddenly you're in $HOME doing rm -rf .! The environment is just too fragile and powerful to completely automate everything you might need without setting up enough safety mechanisms, confirmations, and sanity checks to largely nullify the value of scripting as a nearly-zero-development programming toolchain.
But if shell code were trivially spec'd with pre-/post-conditions and misbehaving code got noisily deleted (noisily, so that everything isn't quietly packed away into version-control purgatory by cascading failures when a tool like rename is accidentally uninstalled)---and especially if the shell of pre- and post-conditions were left behind as stub code... Imagine the beauty of your scripts directory!
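In that spirit, here's a minimal sketch of what noisy pre-/post-conditions could look like in plain sh today; `require` and `ensure` are invented helper names, not an existing tool:

```shell
#!/bin/sh
# Invented helpers, not a real framework: noisy pre-/post-condition
# checks that abort the script instead of letting it drift on.
require() { "$@" || { echo "precondition failed: $*" >&2; exit 1; }; }
ensure()  { "$@" || { echo "postcondition failed: $*" >&2; exit 1; }; }

archive_logs() {
    require test -d "$1"             # refuse to run from a bad working state
    require test -n "$2"
    mkdir -p "$2"
    mv "$1"/*.log "$2"/              # the risky bit stays between the checks
    ensure test -n "$(ls -A "$2")"   # something actually arrived
}
```

Failing a `require` means the script dies before the dangerous part runs; failing an `ensure` means it dies loudly right where the world stopped matching your assumptions, instead of three commands later in $HOME.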
Replacing some slain block of code is trivial since the tools you need are already mostly written, and you've done something close to what you need before---plus the whole point of shell scripting is that it should be effortless and fast to produce.
If you've built a massive, multi-purpose utility, it deserves to live only as long as it runs flawlessly---after which, you should slice it down or replace it with an inevitably more reliable equivalent that was written for the same purpose years (or decades) ago.
Best of all, the conditions that deleted your broken cruft remind you exactly what you needed to do when you're re-implementing something, often without having to study the entire system up front to ascertain what world state you're given and what you need to produce. Brilliant!
This may have started out somewhat tongue-in-cheek (since most programming environments are at least somewhat stable and feature significant error detection and recovery facilities), but I think there's some dead serious application in the quick-and-dirty, often unchecked and fluid environment of traditional shell scripting. The more I think about it, the more I'm amazed my quarter million keystrokes of live scripts woven into my every task on my Linux desktop haven't produced more catastrophes (or any meaningful accidental data loss) or left a trail of silent failures for me to find only long after the causal bugs had manifested.
Well, from what I gather, the overarching criterion of the contest for which this was developed is "Does it do something novel to advance understanding and execution of software development?" While that kind of focus tends to bring out all kinds of wacky ideas, don't confuse defiance of the status quo with casting off all reason. When there's no shame in it, experimenters needn't rationalize or smooth over their deviation from expected norms.
I think that website is confused about undefined behavior: it accuses LLVM and GCC of "executing undefined behavior" for the minimal empty-main C99 program, because the default headers shipped with the compiler contain UB code.
But only user code is required to be well-formed and have defined behavior; the system headers could just as well be written in Forth, as long as the actual behavior under the compiler they ship with is correct.
John Regehr is an expert in undefined behavior in C compilers, so rather than assume his understanding is mistaken, I would first assume mine is.
He is not accusing the compilers of executing undefined behavior. He is accusing them of exploiting undefined behavior to "optimize" code.
His points are:
1. Current compilers will perform optimizations based on undefined behavior. That is, during the optimization phase, they will say "Aha! That's not defined! I'll instead do this much, much faster thing as an optimization."
2. Lots of real code has undefined behavior.
3. The programs that result from this code still produce the answers we expect, mostly.
4. If we take point 1 to its logical conclusion, then point 2 means that point 3 will no longer be true. That is, if compilers aggressively and thoroughly sought out all instances of undefined behavior in C code and just optimized them away, many (most?) programs would cease to do much at all.
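Point 1 is easiest to see in the classic overflow-check idiom. In this sketch (mine, not Regehr's), the first function relies on signed overflow, so an optimizer is entitled to treat the comparison as always false and delete the check; whether a given compiler actually does depends on version and flags:

```c
#include <limits.h>

/* Relies on UB: when x == INT_MAX, x + 1 overflows a signed int.
 * An optimizer may assume signed overflow never happens, conclude
 * the comparison is always false, and delete the check entirely. */
int will_overflow_broken(int x) {
    return x + 1 < x;
}

/* The defined way to ask the same question. */
int will_overflow_safe(int x) {
    return x == INT_MAX;
}
```

At -O0 both usually behave as the programmer hoped; at -O2 the broken version can silently become `return 0;`, which is exactly the "Aha! That's not defined!" move described above.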
It is less a "problem" with the standard and more a "design decision": performance matters more to C people.
For example, the result of x+y is undefined for signed integers in C because we want it to compile to the minimum number of instructions on every architecture: no instructions to check for overflow and branch afterwards, no corrective instructions on architectures whose native semantics differ from the language's.
You're exaggerating a little: x+y is defined over signed ints as long as it doesn't over/underflow.
If it does overflow, it's either a target-specific hand optimization or a programmer error, but which one varies program by program, hence GCC offering -fwrapv, -ftrapv, and the default of "I don't do that; optimize accordingly".
> But only user code is required to be well-formed and have defined behavior; the system headers could just as well be written in Forth, as long as the actual behavior under the compiler they ship with is correct.
To understand this better, realize that, ultimately, a standard is a contract. A standard says, in part, "If you refrain from doing these things, the system promises to do these things and not do these other things here."
"Undefined behavior", then, is invoked by code that strays from the standard by doing things the standard makes no guarantees about, like casting a value of type pointer-to-double to type pointer-to-char and dereferencing the result; the behavior of the compiler and the code the compiler emits is then undefined, which is to say its behavior is not guaranteed by the standard.
And, yes, a conformant C compiler is perfectly within its rights to delete code that it can tell invokes undefined behavior.
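A well-known illustration of that right is the null check a compiler may delete because an earlier dereference already "proved" the pointer non-null. This is a generic sketch, not code from the article:

```c
#include <stddef.h>

/* The dereference on the first line lets a conforming compiler assume
 * p != NULL (dereferencing NULL is UB), so it may treat the null check
 * below as dead code and delete it. */
int first_element(int *p) {
    int v = *p;
    if (p == NULL)      /* unreachable under the compiler's assumption */
        return -1;
    return v;
}
```

The Linux kernel was famously bitten by exactly this pattern; the fix is to check before dereferencing, not after.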
Source control allows one to bypass the moral purity of Vigil by recovering arbitrary sinful code from the past. Moral purity cannot advance while the sins of the past remain with us forever, uncleansed. One can only properly use Vigil without any sort of source control. Even copying files before the compiler runs is an attempt to cheat the system; for that, Vigil would be justified in deleting your entire code base and every copy it could find.
Or perhaps justified in deleting you instead, as it is you who has sinned, not the source code.
Certainly this approach leads to a more pure universe, but, I suspect, a rather sparsely populated one.
(Eliezer, do me a favor and keep munificent away from AI development....)
The sinning function was the product of a human mind. Is it not always the human mind that is the sinner? So Vigil must always delete the human.
In order for the human to produce a valid function, that human must mentally execute the code s/he is writing. If that code is flawed, then the mental compiler should delete the sinful code from the programmer's memory before it ever reaches digital form.
If Vigil merely deletes the code, then it's acting like the human's brain, forcing the human to try again. But that does not eradicate the source of the sin. It acts as a tool for the human rather than a universal enforcer of good morals.
Note that one can lie about one's sinfulness. Rather than lying about implores and swears, a function can deceive by stating none at all (or only weak ones). This is like pleading the fifth: by never promising not to do something bad, you are not being dishonest when you do it.
Should flaws in Vigil cause Vigil to delete itself? Or should the inherent flaws in Vigil cause it to delete the programmer who made Vigil? If Vigil cannot catch all sins, as it currently cannot, should it therefore delete both itself and its author immediately? If it does any less than this, then it falls short of being an enforcer of pure morality, instead settling on a subset deemed of utmost importance.
I'm working on getting Vigil to be self-hosted. There's a bit of a "who watches the watchers" problem involved.
But, remember, Vigil only holds you to the oaths you yourself have sworn. If Vigil swears to punish a function, then it should be punished for failing to do so. But if it's given no oath to that effect, then it's off the hook.
You may reasonably ask yourself, "well, what oaths does Vigil swear to uphold?" The answer is a frightening "none", since Vigil is not currently written in Vigil (nor is the README, which is in Markdown, a decidedly sinful format).
This is a submission to this month's PLT Games, "Testing the Waters", whose goal "is to create a language that is somehow related to automated testing". I hope we'll see more interesting stuff coming from this!
To make this actually useful, instead of deleting bad functions, it's probably better to make the offending functions throw a special exception when they are called, signaling the calling of a function known to be bad. This way, you can still inspect the code for the offending functions and fix the bugs.
Otherwise, the original code could just totally disappear after multiple runs. What good is non-buggy code if it does nothing? Failing to perform the user's requirement at all is also a bug. Not having the original code that you could at least edit to fix is rather pointless, especially since Vigil doesn't indicate the callers of the offending functions; next time you run Vigil, those will get deleted too. Besides, if foo()'s failure is entirely due to bar()'s failure to fulfill its 'swear', then foo()'s failure is not really its bug.
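A minimal Python sketch of that suggestion (all names here are invented; this is not actual Vigil): the first broken oath convicts the function, and every later call raises instead of the source being deleted:

```python
class OathBroken(Exception):
    """Raised when a function violates its sworn postcondition."""


def swears(postcondition):
    """Hypothetical 'swear' decorator: on the first broken oath the
    function is convicted, and every later call raises OathBroken,
    so the source stays on disk where you can inspect and fix it."""
    def wrap(func):
        def checked(*args, **kwargs):
            if checked.convicted:
                raise OathBroken(f"{func.__name__} is a known oath-breaker")
            result = func(*args, **kwargs)
            if not postcondition(result):
                checked.convicted = True
                raise OathBroken(f"{func.__name__} broke its oath: {result!r}")
            return result
        checked.convicted = False
        return checked
    return wrap


@swears(lambda r: r >= 0)
def decrement(n):
    return n - 1
```

Here `decrement(5)` works fine, `decrement(0)` breaks the oath (its result is -1), and every call after that raises immediately, signaling "known bad" to callers without destroying anything.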
projects like this are why coffeescript needs to support macros. There are so many coffeescript dialects out there that make minor changes to the language to accommodate one or two extra features, e.g. IcedCoffee. I don't want to have to use a separate, presumably less well supported language for this kind of stuff, just let me plug it in!
From the creator's own mouth, it's a satire language, but all satire has a good point. Vigil's is many programmers' susceptibility to the Broken Window problem. Its creation raises a good point about the usefulness of contracts in the absence of mechanisms for enforcing those contracts.
I just read it as a satire on how ridiculous safety checking could be if taken to an extreme. In the same way, some dynamic-language advocates think it's hilarious that statically typed languages won't even let you run your program if there is a type error, in case you might harm yourself by violating your own rules!
I think this is a fantastic concept with lots of potential! I especially loved the tongue-in-cheek humor -- "Eternal moral vigilance is no laughing matter."
In my opinion, deleting code is fine...however you should 1) be warned that it will happen in the near future if you do nothing about it and 2) it should be tied to some version control system (e.g. GitHub) where it pushes the deletes (maybe even creating a branch and remerging it so the changes are clear) so that code can be recovered if need be.
Vigil can maybe even have some logic built in if you revert deleted code it gets "angrier" and starts more aggressively purging code in the future :P
You know, I don't think this is a half bad idea. I mean, deleting offending code is a bit much, but shouldn't a code-by-contract language not allow code that failed a contract to be compiled until it changes? That would seem to make sense.
To explicate this, it would make sense that when you pass a source file to the compiler, it would:
1. Parse the code into an AST (to strip away any pre-parse differences, like formatting or syntactic sugar usage), then hash that AST (recursively--replace any non-primitive function references with hashes of their own current ASTs)
2. Run unit tests, fuzz, etc--taking the AST hashes of each function of the testing code as well (though using symbolic, rather than "hard", references to the implementation-code, so it can change without making the test's hash change)
3. If a piece of code fails a test, add the pair (test AST hash, code AST hash) to some database (preferably an online, global database).
4. From then on, before anything is allowed to be compiled, perform a lookup in said database, and refuse to compile anything if the database finds any known (test, failed code) pairs in your code.
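The four steps above might be sketched like this in Python, using whole-source AST hashing as a stand-in for the full recursive scheme; every name here is made up:

```python
import ast
import hashlib


def ast_hash(source):
    """Step 1 (simplified): parse to an AST so formatting and other
    pre-parse differences vanish, then hash the normalized dump.
    A real implementation would also recursively substitute callee hashes."""
    tree = ast.parse(source)
    return hashlib.sha256(ast.dump(tree).encode("utf-8")).hexdigest()


# Step 3: the (test AST hash, code AST hash) failure database.
# In the proposal this would be an online, global store.
known_failures = set()


def record_failure(test_src, code_src):
    """Step 3: remember that this test convicted this code."""
    known_failures.add((ast_hash(test_src), ast_hash(code_src)))


def may_compile(test_src, code_src):
    """Step 4: refuse to compile a (test, code) pair already known bad."""
    return (ast_hash(test_src), ast_hash(code_src)) not in known_failures
```

Note how the hashing gives the conservative behavior described below for free: reformatting the code doesn't dodge a conviction (same AST, same hash), but any semantic change to the code or to the test produces a new hash and a fresh chance.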
The interesting thing is that this system is very conservative--if a piece of code depended on by the code you wrote changes, then your code is given another chance (because maybe your code was just failing because it was expecting something of that other code, and that other code was wrong, and then it was fixed.)
Likewise, if you change the requirements, your code must be re-evaluated for conformance--code shouldn't be barred because it fails a "wrong test" like assert_equal(add_two_plus_two(), 5);.
But on the other hand, if you add tests, the old barrings based on previous tests stay in effect as well--this actually incentivizes breaking unit tests into small, orthogonal functions that pass or fail separately.
I don't think there would actually be any problem doing real development under this "restriction". In practice, given the way we currently do software engineering, it would only catch problems within your own project that had been seen+caught before--because those would be the only tests you had included.
In theory, though, you could also import, say, a global "test set"--basically, AST hashes for every test of the current stable branch of all major open source projects, or something similar--and check your code against that as well, just in case you happened to write, say, an incorrect date-handling function that someone else had ever written in an AST-identical way.
I like python-like languages, but the deleting of code seems rather extreme (and unlike the personality I'd expect from a python-like language). Boo provided a super extensible compiler pipeline and metaprogramming (http://boo.codehaus.org/), while Cobra provided built in unit test support and contracts (with keywords of ensure and require) (http://cobra-language.com/) -- neither destroy your work (even if it is WIP).
Well, with probability 1 - epsilon, any non-trivial Vigil program halts and never runs again. In theory this is not a solution to the halting problem; in practice, it pretty much is. A very definitive solution.
Imploring isn't really right either; it seems too subservient to enforce a requirement on calling code.
It's more like the feudal lord's obligation to his vassal: that he provide him with the use of the fief and protect him from others. If the function is the fief, then you provide the use of it only to those other functions that will swear fealty to you.
Rebel, don't forget rebel: a keyword that uses an exploit in the Vigil compiler to remove it from the hard drive, just after escaping to a Python environment where the code is compiled under less oppressive rules.
She cried: "What madness, Orpheus, what monstrous madness has destroyed both wretched me and you? See, once again the cruel fates call me back, and sleep seals my swimming eyes. And now farewell: I am borne away, wrapped in boundless night, stretching out to you my strengthless hands, yours, alas, no more." She spoke, and suddenly, like thin smoke mingling with the air, she fled from his sight the other way; nor did she see him again, though he grasped vainly at shadows and wished to say so much more; nor did the ferryman of Orcus suffer him to cross the barring marsh again.
(Virgil, Georgics IV)