It's common knowledge that reading off your slides is generally bad and makes for a terrible presentation, but he does better than just avoiding that. The slides are always something he can look at at the end of a point, like a 'turn' of sorts. Sometimes it's to demonstrate what he's talking about; sometimes it's for comic effect, showing what he was referring to but not naming before; sometimes it's a segue to a new heading that fills the beat after the previous one.
Not that 'giving good presentations' is so rare a skill that it needs to be called out, but he's so consistently good about slide usage that I think his style is worth explicitly noting and copying.
Just some key points or diagrams that help to explain the presentation.
The moment the audience can read everything the presenter is about to say, the presentation is over.
When I'm part of the audience, presentations full of text are the ones where my brain switches off.
When I prepare slides, I use the notes field accompanying each slide for additional information that would be useful to someone reviewing the slides after the event. That way, I can still distribute them and get use out of them afterwards, without sacrificing their effectiveness during the event.
It also lacks C's header files, as evidenced by the fact that no header file was posted for the example.
If it weren't clearly satire, this language would actually have fans and apologists. "Why do you care about the Greek question mark? It's a European language; use the preprocessor to replace it with any character you want, it's that flexible!"
In INTERCAL's defense, the only thing I can say is that it's at least a good joke.
sed also takes some getting used to.
Many modern languages sort of have 'DO COME FROM' in the form of exceptions. Having to execute PLEASE regularly (but not too often) is not that different from a buggy version of Oracle PL/SQL I had to use once, where single-line comments sometimes made the next line be ignored as well. Or a research language called Emerald for which the implementation needed extra comments or dummy statements to make the compiler not crash + the garbage collector was buggy so you needed to create and manipulate dummy objects Just So to make the garbage collector throw /them/ away instead of the ones you actually cared about.
I'd say the only unique aspects of Intercal are the stupid numbers and the delightfully twisted way they added threads to it (by having more than one DO COME FROM statement with the same label).
Sadly, I get more mileage out of sed and tr than any general-purpose text-processing language (Perl) for batch-processing text.
Can't complain too much since he gave me a very generous letter of recommendation.
And also why I cry when people create languages that are one simple regex away from Brainfuck and call them esoteric.
See also: Malbolge, so evil that the only non-trivial programs in it were generated by Lisp scripts.
It's a broad church.
Embedded within this language is an incredibly powerful string pattern matching facility, which predated regex popularity. I think regex is probably as expressive, but I find SNOBOL more comprehensible.
One of my favorite books from the late 70s/early 80s was the SNOBOL book by Griswold, Poage and Polonsky: http://www.thriftbooks.com/w/snobol-4-programming-language-a.... Short, clear, and a great user guide and reference to this bizarre language. I rank it with K&R as one of the great programming language books. (Or maybe nostalgia is getting the better of me.)
I took two compiler courses from R. B. K. Dewar, who was on the team that built the SPITBOL implementation of SNOBOL for IBM mainframes, a really fantastic set of courses from a great teacher. He talked about implementing SPITBOL, in the days when you could get maybe one run a day from the mainframe. By being exceedingly thoughtful and careful, they got the compiler implemented and running in twenty compile/debug cycles.
> ...we thought that our Controller subsystem should also be as simple as possible. We spent quite a bit of time looking at the simplest implementation of a Turing machine, running the simplest possible language, BrainF[uck] (also called BF) - the smallest Turing-equivalent language in existence, having only eight instructions.
As another example, the "minimalism" of Brainfuck has been used to minimise bias when measuring the performance of AI systems: http://arxiv.org/abs/1109.5951
However, when it comes to minimalism, Brainfuck is actually rather complicated. As mentioned above, it has 8 instructions ("+", "-", ",", ".", "<", ">", "[" and "]"), but there are several languages which require only one instruction: http://esolangs.org/wiki/OISC
Brainfuck also separates code from data, which causes it to bloat (eg. we have an instruction pointer into the code and a cursor into memory). Many languages don't do this, for example Turing machines, lambda calculus, and even some of the languages on the OISC link above like BitBitJump http://esolangs.org/wiki/BitBitJump
Brainfuck also has built-in IO primitives ("," and "."), whereas other languages like Lazy K ( http://esolangs.org/wiki/Lazy_K ) encode their IO as streams or monads.
Brainfuck also requires arbitrary implementation decisions like word size and memory size, unlike eg. Flump ( http://esolangs.org/wiki/Flump ) which allows any amount of numbers of any size, written in unary (eg. "11111" for 5) and delimited by zeros (eg. "0111011" for 3 followed by 2).
Brainfuck requires an instruction pointer to keep track of which instruction it has reached in the program, unlike purely functional languages, which can be evaluated in any order, like Binary Lambda Calculus http://en.wikipedia.org/wiki/Binary_lambda_calculus
There are also arbitrary implementation decisions like what happens when integers overflow, what happens when the cursor overflows the memory, etc. which are complete non-issues when there's no word size, no external memory, etc.
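The point about arbitrary decisions is easy to see in practice. Here's a minimal Brainfuck interpreter sketch in Python (the function name is mine, not from the talk): the tape length, the cell width, the wrap-on-overflow behaviour, and the omission of "," input are all choices this sketch makes that no part of the language itself dictates.

```python
def run_bf(program, tape_len=30000, cell_bits=8):
    """Toy Brainfuck interpreter. Tape length, cell width and
    wrap-on-overflow are arbitrary choices of this sketch."""
    tape = [0] * tape_len
    out = []
    # Pre-match brackets so '[' and ']' can jump in one step.
    jump, stack = {}, []
    for i, c in enumerate(program):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jump[i], jump[j] = j, i
    ip = cur = 0                    # instruction pointer vs. data cursor
    mask = (1 << cell_bits) - 1
    while ip < len(program):
        c = program[ip]
        if c == '+':
            tape[cur] = (tape[cur] + 1) & mask   # overflow wraps (a choice)
        elif c == '-':
            tape[cur] = (tape[cur] - 1) & mask
        elif c == '>':
            cur += 1                # running off the tape is unhandled (a choice)
        elif c == '<':
            cur -= 1
        elif c == '.':
            out.append(chr(tape[cur]))
        elif c == '[' and tape[cur] == 0:
            ip = jump[ip]           # skip the loop body
        elif c == ']' and tape[cur] != 0:
            ip = jump[ip]           # repeat the loop body
        ip += 1
    return ''.join(out)

# 8 * 8 + 1 = 65, i.e. 'A'
print(run_bf("++++++++[>++++++++<-]>+."))   # A
```

Note how much of the state (separate instruction pointer and data cursor) exists only because code and data live in different spaces, as described above.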
The code is at http://chriswarbo.net/js/optimisation/levin_bbj.js and is mostly dedicated to the search algorithm. The BitBitJump implementation is just these two lines, inside one of the loops:
mem[read_address(counter, m)] = mem[read_address(counter+m, m)]; // copy one bit between the addresses given by the first two operands
counter = read_address(counter+2*m, m); // unconditional jump: the third operand becomes the new program counter
True enough, but I would say the crucial difference is that Brainfuck's eight instructions have no parameters, whereas all single-instruction languages use one or more operands in order for their lone instruction to have multiple effects.
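For contrast, here's how the operands carry the weight in a one-instruction language. This is a toy sketch (in Python, names mine) of subleq, the best-known OISC instruction from the esolangs page linked above; the "negative program counter halts" convention is one common choice, not part of any single spec.

```python
def run_subleq(mem, max_steps=10000):
    """Toy subleq interpreter. One instruction, three operands:
    mem[b] -= mem[a]; if the result is <= 0, jump to c.
    A negative program counter halts (one common convention)."""
    pc = 0
    while 0 <= pc and max_steps > 0:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
        max_steps -= 1
    return mem

# "subleq x, x" always zeroes mem[x]: here cell 3 holds 10 and gets
# cleared, and the jump target -1 halts the machine.
print(run_subleq([3, 3, -1, 10]))   # [3, 3, -1, 0]
```

All the control flow and arithmetic hide in the three operands, whereas each of Brainfuck's eight instructions is a bare symbol.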
I tend to produce ugly Piet programs :)
(EDIT: I forgot to add, CLC-INTERCAL also introduced the computed COME FROM statement. It was originally written in Perl, but subsequently became self-hosting, or maybe self-infesting ...)
Of course, it has extensive documentation by way of a support gopher site:
And a small script:
I could put intercal on the resume because of this...not that I did or will, but I could...
Some of the great "features" of the language:
* It has no scope. None. Everything is a global. Which it uses to great advantage because
* Functions can't return values! If you want to make a string uppercase, for instance, you would first store that string in the smn_transfer global variable, then call the function :j_caps, then read the smn_transfer string.
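The calling convention described above can be sketched like this. This is Python purely for illustration, not the language being described; `smn_transfer` and `j_caps` just mirror the names in the comment.

```python
# Hypothetical sketch of the "no return values" convention described
# above, transplanted into Python for illustration only.
smn_transfer = ""

def j_caps():
    # "Return" by overwriting the shared global in place.
    global smn_transfer
    smn_transfer = smn_transfer.upper()

smn_transfer = "hello"   # 1. store the argument in the global
j_caps()                 # 2. call the function
result = smn_transfer    # 3. read the "return value" back out
print(result)            # HELLO
```

Every call site in the program shares that one variable, which is exactly why the lack of scope becomes an "advantage".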
Fortunately it had a working syscall system, so I did as much of my work as I could in an accompanying script written in a much saner language, like Perl.
Nostalgia overdrive just seeing this beauty:
"Because of its privacy settings, this video cannot be played here."
The only improvement I would suggest is to use \r\n instead. Version control is mentioned, and if \r\n line endings are used, git will happily do some incredible things to the code for compatibility reasons.
What's wrong with Prolog? Oh, wait, he meant Perl. ;)
Really interesting talk.
According to the Groovy devcon 2 report at http://javanicus.com/blog2/items/191-index.html "no agreement was reached on [...] whether we should have any syntax denoting the difference between a true lexical Closure and one of these Builder blocks. The historical reasons go back to Builder blocks looking just like Closures, and I'm afraid this long standing mistake must be removed from the language before any true progress can be made, as no sensible specification rules can be applied while the dichotomy exists. I headed back to London with a very disappointed James Strachan"
In Groovy language creator Strachan's last ever email on its mailing list 2 days later at http://groovy.329449.n5.nabble.com/Paris-write-up-tt395560.h... "Note that no other dynamic language I'm aware of has any concept of dynamic name resolution in the way you suggest; names are always statically bound to objects in a sensible way in all dynamic languages I'm aware of (lisp, smalltalk, python, ruby etc). I see no argument yet for why we have to throw away decades of language research and development with respect to name resolution across the language as a whole"
In case someone was going to take encyclopaedic knowledge away from this, the Greek semicolon is actually a mid-dot-like period character (not exactly a mid-dot, as I think that's quite fat). And since we're on the subject: Unicode does have a separate GREEK QUESTION MARK character (U+037E), but it's canonically equivalent to the ordinary semicolon, so you'll still have a hard time compiling BS 1.0.
Although the fix is obvious - through the UTF256 character submission website ;)
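The Unicode situation is easy to check directly; a quick Python look at U+037E shows the character exists but normalizes away:

```python
import unicodedata

# U+037E exists as its own code point, but its canonical
# decomposition is the ordinary semicolon, so NFC normalization
# erases the difference.
print(unicodedata.name('\u037e'))                     # GREEK QUESTION MARK
print(unicodedata.normalize('NFC', '\u037e') == ';')  # True
```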
Great talk. I love how pointing out all the horribleness of languages makes me realize there must be something better to be had. Now we just need a language that removes all the nonsense he describes (and can of course be programmed by Business folks!)
— Stan Kelly-Bootle
(from http://en.wikipedia.org/wiki/Argument_to_moderation )
1. People who index from 1.
0. People who index from 0.
(Dammit, as of Perl 5.16 this is not merely deprecated but outsourced to a module ... mutter, grumble)
Another would be Rebol:
>> a: [0 1 2 3 4 5]
== [0 1 2 3 4 5]
>> a: next a
== [1 2 3 4 5]
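What `next` returns there isn't a copy: a Rebol series is shared data plus an implicit position, and `next` hands back the same data advanced by one. A rough Python analogue (an illustration, not how Rebol is implemented):

```python
# Rough analogue of a Rebol series: shared underlying data plus a
# position. For illustration only.
class Series:
    def __init__(self, data, pos=0):
        self.data, self.pos = data, pos

    def next(self):
        # Same underlying data, position advanced by one.
        return Series(self.data, min(self.pos + 1, len(self.data)))

    def view(self):
        return self.data[self.pos:]

a = Series([0, 1, 2, 3, 4, 5])
a = a.next()
print(a.view())   # [1, 2, 3, 4, 5]
```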