It's common knowledge that reading off your slides is generally bad and makes for a terrible presentation, but he does better than just avoiding that. The slides are always something he can turn to at the end of a point, like a 'turn' of sorts. Sometimes it's to demonstrate what he's talking about; sometimes it's for comic effect, showing the thing he'd been referring to but not naming; sometimes it's a segue to a new heading that fills the beat after the previous one.
Not that 'giving good presentations' is so rare a skill that it needs to be called out, but he's so consistently good about slide usage that I think his style is worth explicitly noting and copying.
This is why I often get annoyed when people post slide decks on HN. The presentation itself was often really good, and so the slides (being there only to supplement it) make no sense out of context.
This is by design. Slides are supposed to be visual aids only. They are an accessory of the speaker, not vice versa.
When I prepare slides, I use the notes field accompanying each slide for additional information that would be useful to someone reviewing the slides after the event. That way, I can still distribute them and get use out of them afterwards, without sacrificing their effectiveness during the event.
When giving talks, I usually write a script accompanying each slide and include it in the "Notes" box. I don't read from the script while giving the presentation, but it's always there for reference after the talk.
It's actually more reasonable than PHP: it's case-sensitive only in the first character of each identifier, whereas PHP is case-sensitive for variables but not for functions.
It also lacks the header files of C, as evidenced by the fact that no header file was posted for the example.
If it weren't clearly satire, this language would actually have fans and apologists. "Why do you care about the Greek question mark? It's a European language; use the preprocessor to replace it with any character you want, it's that flexible!"
Many modern languages sort of have 'DO COME FROM' in the form of exceptions. Having to execute PLEASE regularly (but not too often) is not that different from a buggy version of Oracle PL/SQL I had to use once, where single-line comments sometimes made the next line be ignored as well. Or a research language called Emerald, whose implementation needed extra comments or dummy statements to keep the compiler from crashing; its garbage collector was buggy too, so you had to create and manipulate dummy objects just so, to make the collector throw /them/ away instead of the objects you actually cared about.
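To make the exceptions-as-COME-FROM point concrete, here's a small sketch (all names are illustrative, not from the talk): the raise site never names its destination, and control just materialises in whichever handler happens to be installed further up the stack.

```python
# Exceptions as an accidental "DO COME FROM": the raise below doesn't say
# where control goes next -- the handler reaches back and grabs it.
def process(tokens):
    for t in tokens:
        if not t.isalpha():
            raise ValueError(f"bad token: {t!r}")  # jumps... somewhere
        print("ok:", t)

try:
    process(["abc", "12", "def"])
except ValueError as e:        # ...and this is where it COMEs FROM to
    print("caught:", e)
```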
I'd say the only unique aspects of Intercal are the stupid numbers and the delightfully twisted way they added threads to it (by having more than one DO COME FROM statement with the same label).
Sed is like the predecessor to Brainfuck. Every keyword worth using is a single letter, and there are the pattern space and hold space to keep track of. Plus, if you accidentally forget a -r at the beginning, regexes that would work in any other program may not work in sed.
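Both quirks are easy to demonstrate (this assumes GNU sed, where the extended-regex flag is spelled -r; BSD sed spells it -E):

```shell
# Without -r, "+" is a literal character, not a quantifier,
# so this substitution silently matches nothing:
echo "aaa" | sed 's/a+/X/'      # prints "aaa"
echo "aaa" | sed -r 's/a+/X/'   # prints "X"

# The hold space: "h" copies the pattern space into it,
# "G" appends it back, so every line comes out twice.
printf 'one\n' | sed 'h;G'      # prints "one" then "one"
```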
Sadly, I get more mileage out of sed and tr than out of any general-purpose text processing language (Perl) for batch-processing text.
I haven't thought of that for years. When I was doing my master's I actually TAed a class (CS for non-majors) for one of the creators of MUMPS and used that language for the labs. It was sort of hilarious learning it and trying to help newbies use it.
Can't complain too much since he gave me a very generous letter of recommendation.
To be fair, the point of Brainfuck was interpreter size: the idea was to create an interpreter that could fit in 256 bytes or less of 68000 assembly language. Brainfuck and its ilk (and I'd include Malbolge/Dis in that) are simply a branch of esoteric language: the minimalist languages. Intercal is in the weird family. Then you've the likes of Befunge, Q-BAL, and *W, and most of the stuff Chris Pressey has done over the years, which are in the experimental family.
My favorite weird language is SNOBOL. It is a dynamic language with utterly alien syntax. In some ways, the syntax is even more minimal than that of Lisp. It has very unusual control flow: each statement can succeed or fail, and at the end of a line you may specify a label to goto on success, on failure, both, or neither.
Embedded within this language is an incredibly powerful string pattern matching facility, which predated regex popularity. I think regex is probably as expressive, but I find SNOBOL more comprehensible.
One of my favorite books from the late 70s/early 80s was the SNOBOL book by Griswold, Poage and Polonsky: http://www.thriftbooks.com/w/snobol-4-programming-language-a.... Short, clear, and a great user guide and reference to this bizarre language. I rank it with K&R as one of the great programming language books. (Or maybe nostalgia is getting the better of me.)
I took two compiler courses from R. B. K. Dewar, who was on the team that built the SPITBOL implementation of SNOBOL for IBM mainframes, a really fantastic set of courses from a great teacher. He talked about implementing SPITBOL, in the days when you could get maybe one run a day from the mainframe. By being exceedingly thoughtful and careful, they got the compiler implemented and running in twenty compile/debug cycles.
I appreciate Brainfuck for its originality, but I agree with the grandparent that its influence on "esoteric languages" is too strong. Not only are many "esoteric languages" just Brainfuck+something (eg. http://esolangs.org/wiki/Category:Brainfuck_derivatives ), but having Brainfuck as the default "minimalist language" is also unfortunate. For example, a NASA study on self-replicating machines ( http://www.niac.usra.edu/files/studies/final_report/883Toth-... ) mentions it on page 43:
> ...we thought that our Controller subsystem should also be as simple as possible. We spent quite a bit of time looking at the simplest implementation of a Turing machine, running the simplest possible language, BrainF[uck] (also called BF) - the smallest Turing-equivalent language in existence, having only eight instructions.
As another example, the "minimalism" of Brainfuck has been used to minimise bias when measuring the performance of AI systems: http://arxiv.org/abs/1109.5951
However, when it comes to minimalism, Brainfuck is actually rather complicated. As mentioned above, it has 8 instructions ("+", "-", ",", ".", "<", ">", "[" and "]"), but there are several languages which require only one instruction: http://esolangs.org/wiki/OISC
Brainfuck also separates code from data, which causes it to bloat (eg. we have an instruction pointer into the code and a cursor into memory). Many languages don't do this, for example Turing machines, lambda calculus, and even some of the languages on the OISC link above like BitBitJump http://esolangs.org/wiki/BitBitJump
Brainfuck also has built-in IO primitives ("," and "."), whereas other languages like Lazy K ( http://esolangs.org/wiki/Lazy_K ) encode their IO as streams or monads.
Brainfuck also requires arbitrary implementation decisions like word size and memory size, unlike eg. Flump ( http://esolangs.org/wiki/Flump ) which allows any amount of numbers of any size, written in unary (eg. "11111" for 5) and delimited by zeros (eg. "0111011" for 3 followed by 2).
Brainfuck requires an instruction pointer to keep track of which instruction it's up to in the program, unlike purely functional languages which can be evaluated in any order, like Binary Lambda Calculus http://en.wikipedia.org/wiki/Binary_lambda_calculus
There are also arbitrary implementation decisions like what happens when integers overflow, what happens when the cursor overflows the memory, etc. which are complete non-issues when there's no word size, no external memory, etc.
These other languages can also be easier to implement than Brainfuck. For example, I wanted a simple language to embed in a Javascript search algorithm ( http://chriswarbo.net/essays/optimisation/levin.html ) and chose BitBitJump.
`m` is the word size (constant during a program execution), `counter` starts at 0, `mem` is an array of bits and `read_address(x, y)` converts those bits into a Javascript int (reading `y` bits, starting at index `x`).
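Those names suggest an interpreter core along these lines. This is my own sketch built from the description above, not the essay's actual code: I'm assuming MSB-first bit order and the usual BitBitJump instruction layout (three m-bit operands a, b, c: copy the bit at address a to address b, then jump to c).

```javascript
var m = 4;                         // word size, fixed for the whole run
var mem = [];                      // the program *is* the memory: an array of bits
var counter = 0;                   // bit address of the current instruction

function read_address(x, y) {      // read y bits starting at index x, MSB first
  var n = 0;
  for (var i = 0; i < y; i++) n = 2 * n + mem[x + i];
  return n;
}

function step() {                  // execute one BitBitJump instruction
  var a = read_address(counter, m);
  var b = read_address(counter + m, m);
  var c = read_address(counter + 2 * m, m);
  mem[b] = mem[a];                 // copy one bit...
  counter = c;                     // ...and jump unconditionally
}
```

With m = 4 an instruction is 12 bits, so the 16-bit memory [1,1,0,0, 1,1,0,1, 0,0,0,0, 1,0,0,0] copies bit 12 into bit 13 and jumps back to address 0.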
> Brainfuck is actually rather complicated. As mentioned above, it has 8 instructions [...]
> but there are several languages which only require one instruction.
True enough, but I would say the crucial difference is that Brainfuck's eight instructions have no parameters, whereas all single-instruction languages use one or more operands in order for their lone instruction to have multiple effects.
Insofar as most Brainfuck derivatives are boring and it's had an excessive impact on the field, I agree with you entirely. Believe me, I'm quite aware of all this stuff: I used to be pretty active on Panu's esolang list back in the day; you're preaching to the choir here.
It's gotta be similar to the difference between esoteric art in a large museum and esoteric art in a small gallery the size of a closet. Sometimes you get so far out in your studies that you can't even realize the people around you don't have the faintest clue of even how to get to your headspace, let alone understand it. There are esoteric languages that look different but still follow some standard conventions of language, and then there's the esoteric that questions every standard convention through the realization of the absurd.
For added lulz, you might want to look into CLC-INTERCAL, a dialect that introduces its own unique take on object orientation, internationalization, and quantum operations.
(EDIT: I forgot to add, CLC-INTERCAL also introduced the computed COME FROM statement. It was originally written in Perl, but subsequently became self-hosting, or maybe self-infesting ...)
Of course, it has extensive documentation by way of a support gopher site:
Speaking as the original author of both of those documents (and a significant fraction of all INTERCAL code), I can affirm that I have INTERCAL on my resume. It's only come up once in an interview, sadly. Someday someone will ask me to write INTERCAL on the whiteboard....
For years I developed in a language called Sigmac, for the Arris CAD system. I love esoteric languages, but Sigmac was downright hateful. It made you feel like the company was punishing you for daring to use it.
Some of the great "features" of the language:
* It has no scope. None. Everything is a global. Which it uses to great advantage because
* Functions can't return values! If you want to make a string uppercase, for instance, you would first store that string in the smn_transfer global variable, then call the function :j_caps, then read the smn_transfer string.
Fortunately it had a working syscall system, so I did as much of my work as I could in an accompanying script written in a much saner language, like Perl.
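Mimicked in C, the convention from that list looks something like this. smn_transfer and :j_caps are the names from the comment above; the C rendering (buffer size included) is my own sketch:

```c
#include <ctype.h>
#include <string.h>

/* The one global "argument/return" slot everything funnels through. */
char smn_transfer[256];

/* No parameters, no return value: read and write the global instead. */
void j_caps(void) {
    for (char *p = smn_transfer; *p; p++)
        *p = (char)toupper((unsigned char)*p);
}
```

To uppercase a string you'd write `strcpy(smn_transfer, "hello"); j_caps();` and then read the result back out of `smn_transfer`. With no scoping, any function called in between is free to clobber it.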
JavaScript has function scope, allows return values and doesn't normally allow syscalls. So the languages are nothing alike on any of the points in the comment you're replying to.
You mean just like "the worst programming language ever"? Because I actually think the title should be "BS: The Worst Programming Language Ever", since it isn't really about Intercal but about his made-up language BS.
I like the compiler hitting the website to download the registered characters and, if no internet access is available, "making it appear to be your fault".
Spoiler: I'm waiting for the release where block comments are added, which is just anything between two lines consisting of 5 spaces. That has to be one of the most psychotically evil ideas I've seen since #define TRUE FALSE.
The only improvement I would suggest is to use \r\n instead. Version control is mentioned, and if this is used, git will happily do some incredible things to the code for compatibility reasons.
This is really a good way of approaching the computer-science theory of programming languages. For example, I remembered there was something really nasty with JavaScript closures and was surprised he didn't mention it. Then I went and looked at what it was that sometimes bugged me, and realized it was just heavy closure use combined with function-level variable scope (rather than block-level), which sometimes makes you wrap the body of a for loop in an anonymous function just to capture the current value of the loop variable.
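That pitfall, and the anonymous-function workaround, in miniature (the variable names are mine):

```javascript
// With var, i is function-scoped: all three closures share the same i,
// which has already reached 3 by the time any of them runs.
var fns = [];
for (var i = 0; i < 3; i++) {
  fns.push(function () { return i; });
}
// fns[0]() === 3, fns[1]() === 3, fns[2]() === 3

// Wrapping the loop body in an immediately-invoked function gives each
// iteration its own j, capturing the value of i at that moment.
var fixed = [];
for (var i = 0; i < 3; i++) {
  (function (j) {
    fixed.push(function () { return j; });
  })(i);
}
// fixed[0]() === 0, fixed[1]() === 1, fixed[2]() === 2
```

(ES2015's block-scoped `let` makes the wrapper unnecessary, which is exactly the function-vs-block scoping point above.)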
There are problems not just with JavaScript closures but also with Groovy closures.
According to the Groovy devcon 2 report at http://javanicus.com/blog2/items/191-index.html "no agreement was reached on [...] whether we should have any syntax denoting the difference between a true lexical Closure and one of these Builder blocks. The historical reasons go back to Builder blocks looking just like Closures, and I'm afraid this long standing mistake must be removed from the language before any true progress can be made, as no sensible specification rules can be applied while the dichotomy exists. I headed back to London with a very disappointed James Strachan"
In Groovy language creator Strachan's last ever email on its mailing list 2 days later at http://groovy.329449.n5.nabble.com/Paris-write-up-tt395560.h... "Note that no other dynamic language I'm aware of has any concept of dynamic name resolution in the way you suggest; names are always statically bound to objects in a sensible way in all dynamic languages I'm aware of (lisp, smalltalk, python, ruby etc). I see no argument yet for why we have to throw away decades of language research and development with respect to name resolution across the language as a whole"
Very nice video, great use of slides as mentioned elsewhere.
In case someone was going to take encyclopaedic knowledge away from this: the Greek semicolon is actually a mid-dot-like period character (not exactly a mid-dot, as I think that's quite fat). And since we're on the subject, the Unicode "Greek question mark" (U+037E) is canonically equivalent to the ordinary semicolon, so normalization folds it into a regular ";" and you'll have a hard time compiling BS 1.0.
Although the fix is obvious - through the UTF256 character submission website ;)
Great talk. I love how pointing out all the horribleness of languages makes me realize there must be something better to be had. Now we just need a language that removes all the nonsense he describes (and can of course be programmed by Business folks!)
Using goto as the only loop construct is too easy; you can statically know where your code is going all the time. It would be more fun to use setjmp/longjmp from C, where you can dynamically assign where a label points, and it works across scopes.
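For anyone who hasn't suffered this: a minimal sketch of that "dynamic label" (all names are mine). Where control lands is decided at run time by whoever last called setjmp, and longjmp happily unwinds across function boundaries:

```c
#include <setjmp.h>

static jmp_buf target;                 /* the run-time-assigned "label" */

static void deep_in_the_call_stack(void) {
    longjmp(target, 42);               /* jump back through however many frames */
}

static int run(void) {
    int v = setjmp(target);            /* returns 0 the first time, 42 after longjmp */
    if (v == 0)
        deep_in_the_call_stack();      /* this call never returns normally */
    return v;                          /* 42 */
}
```

Because any function can re-point `target` with another setjmp, the destination of that longjmp genuinely changes at run time, which is the part goto can't do.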
Awesome talk (I've seen only the first 15 minutes; gotta get back to work). BTW, although it's not a joke language and people actually use it (and some apparently like it), K would have been worth mentioning right after APL.
BS could take from Haskell (libraries) and go a bit further by requiring every function to be an infix operator. Because being pronounceable and Googleable is overrated.
His one complaint about Python (the whitespace problem) is fixed in Python 3, which will raise a TabError when the module is loaded if you mix tabs and spaces.