Programming paradigms that change how you think about coding (brikis98.blogspot.com)
523 points by niels on April 10, 2014 | 199 comments



The aurora language seems very interesting. Too bad there is already another language called Aurora...

It makes me think of Elm [1] and (functional) reactive programming [2]. Reactive programming is fantastic. It's kind of like how a spreadsheet program works: if a variable changes, all variables that depend on it change as well. Given "a = b + c", if c increments by 1, so does a.

It has many advantages over event-based systems like those you typically build in JavaScript. Reactive programs don't need callbacks: the changing values propagate the "event" through the system.

I'd love to hear what you guys think about this direction of programming. It seems very natural to me.

Edit: I also see reactive programming as the golden path to changing state in functional languages. Functional languages have no problem with data or state; they have a problem with change of state. The reactive paradigm solves that problem: all change is implicit, and the code can be exactly as functional as before.

[1] http://elm-lang.org/

[2] http://en.wikipedia.org/wiki/Reactive_programming
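
To make the spreadsheet analogy concrete, here's a deliberately naive sketch in plain Haskell (no FRP library; real implementations don't work this way internally, but the dependency idea is the same): model a time-varying value as the stream of its samples, and define derived values as functions of those streams.

    -- Toy model: a time-varying value is just its successive samples.
    type Signal a = [a]

    -- "a = b + c" as a definition: a tracks b and c automatically.
    sumSignal :: Num a => Signal a -> Signal a -> Signal a
    sumSignal = zipWith (+)

    main :: IO ()
    main = do
      let b = [1, 1, 1, 1]
          c = [3, 4, 5, 6]         -- c keeps incrementing...
          a = sumSignal b c        -- ...and a follows, with no callbacks
      print a                      -- [4,5,6,7]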


we're actually in the process of changing the name :)

EDIT: re FRP, you might find this Lambda The Ultimate post insightful: http://lambda-the-ultimate.org/node/4900

FRP has issues with openness and isn't really great at dealing with collections. It also forces you to express things kind of unnaturally (e.g. instead of "click this and increment x", you say "the counter is the count of all click events"). There are other methods of managing time, like Glitch [1] and Bloom [2], that seem more promising :)

[1]: http://lambda-the-ultimate.org/node/4910 [2]: http://boom.cs.berkeley.edu/


"It also forces you to express things kind of unnaturally (e.g. instead of 'click this and increment x', you say 'the counter is the count of all click events')"

I suppose it's personal preference, but I actually think that the latter of the two statements is much more understandable. This is why I like functional programming in general; a single value can be expressed as a declarative transformation of other pieces of data, making it clear how the value is derived. Using the imperative model, it's up to variable names and documentation to make it clear what 'x' is. It's like looking at one of those pictures that contains multiple shapes simultaneously; suddenly you see the one you haven't been seeing, and then you wonder how you didn't see it all along.
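
For what it's worth, the declarative reading can be made very small. A toy Haskell sketch, with the event stream modeled as a plain list of occurrences (just for illustration, not how any FRP library actually represents events):

    data Click = Click

    -- "The counter is the count of all click events": a running fold over the
    -- stream, rather than a mutable x that handlers increment.
    counter :: [Click] -> [Int]
    counter = scanl (\n _ -> n + 1) 0

    main :: IO ()
    main = print (counter [Click, Click, Click])   -- [0,1,2,3]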


Explicitly manipulating events as a stream has performance problems; mainly time leaks. Another issue is that it makes debugging really difficult, as you've now converted much of your control flow (that you could step through) into data flow (that you cannot), while there is very little progress on building good data flow debuggers.

Finally, events are often manipulated in very tricky ways. Take the problem of implementing a drag adapter: you have to remember to capture the mouse on the down event and release your capture on the up event, but then you also need to capture widget and mouse positions on the down event so you can compute deltas that avoid sampling-resolution problems on the move events. Rx always gets these two points wrong, which is very annoying, but they have to fudge it, otherwise event stream processing can't win in elegance.


I haven't done any serious FRP, but thinking about your example of the drag adapter, wouldn't it be possible to do something like the following?

  - Declare a stream consisting of the composite of two streams: a mouse down and a mouse up
  - Map over the mouse down portion, transforming it into an (x, y) coordinate
  - Map over the mouse up portion, transforming it into an (x, y) coordinate
  - Produce a tuple of the two values
  - Use the resulting signal to determine what to do with the drag
In Bacon.js, I think it would look something like this (haven't tested it):

  var makeCoordinate = function(v) { return {x: v.clientX, y: v.clientY}; }
  var mergeStreams = function(v1, v2) { return {down: makeCoordinate(v1), up: makeCoordinate(v2)}; };

  var $html = $('html');
  var mousedownStream = $html.asEventStream('mousedown');
  var mouseupStream = $html.asEventStream('mouseup');
  var dragStream = mousedownStream.flatMap(function() {
    // Ensure that we only sample a single mousedown/mouseup pair.
    return Bacon.zipWith([mousedownStream, mouseupStream], mergeStreams).
        takeUntil(mouseupStream);
  });
I don't mean to be pedantic - your point is well taken. This was definitely a mental exercise to write, and I have no experience debugging (though I imagine it would be difficult).

EDIT: For this particular example I actually made it more complicated than it needs to be. Example JS Fiddle here: http://jsfiddle.net/w6mCK/


Compare with the control flow heavy managed time [1] version:

  on widget.Mouse.Down:
  | var pw = widget.position
  | var pm = widget.Mouse.Position
  after:
  | widget.Mouse.Capture()
  | widget.position = pw + (widget.Mouse.Position - pm)
  | on widget.Mouse.Up:
  | | widget.position = widget.position # freeze widget position
  | | break                             # stop the after block so capture/dragging stops
[1] http://research.microsoft.com/pubs/211297/managedtime.pdf


Well, I'm not a fan of the "use folds over time" concept either. FRP is too complex, but there's some great ideas there.

Would you say Aurora and these other languages are "reactive"? Or do you have a better term for it?

BTW: I'm reading "Toward a better programming" right now :)


Our current design takes a lot of ideas from Functional Relational Programming (http://shaffner.us/cs/papers/tarpit.pdf) and Bloom (http://bloom-lang.net/) as well as building on the ideas from Light Table.


This is a substantial piece of writing with information many here would find interesting; putting it behind a Buzzfeed-style list headline does it a disservice. One step short of calling it "Six weird programming paradigms that will blow your mind".


I thought it was a tiny but deliberate master-stroke of tongue-in-cheek irony. Or maybe the author was really after that Buzzfeed crowd ;) In any case, I bet I wasn't the only one who thought "well, HN is going downhill fast" first, and came out glad my prejudices were tested.


Just because Buzzfeed uses list style headlines and has vacuous content does not mean that list style headlines are an indicator of vacuous content. If you read this and you liked it, does the title really matter?


Exactly. Plus, while the content might be vacuous it's often quite entertaining and the editorial style is genius, in my opinion. I think writers can probably learn 41 things from Buzzfeed.


Changing paradigms has to change the way you think about things, by definition. By the scientific definition of "paradigm", at least, but I think it's no less true for programming paradigms.


I think the idea is that we usually don't change paradigms. However, by broadening our perspective we can gain insights into our existing code. An example of a paradigm that won't change the way you think about code is a cyclic tag system. You might find it interesting to know about, but I doubt that knowledge would change the way you write code.

As a concrete example, learning pure functional programming gives me another perspective on the OO code I write; i.e. that methods are functions where the first argument is the implicit "this" and the body of the function contains a top-level pattern-match on the class tag. This perspective has helped me avoid over-complicating my code at times.
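
A quick Haskell sketch of that perspective (the Shape type is made up, purely to illustrate):

    -- A "class hierarchy" as a sum type: the constructor is the class tag.
    data Shape = Circle Double | Rect Double Double

    -- The OO method shape.area() seen as a function whose first argument is
    -- the implicit "this", with a top-level pattern match on the tag.
    area :: Shape -> Double
    area this = case this of
      Circle r -> pi * r * r
      Rect w h -> w * h

    main :: IO ()
    main = mapM_ (print . area) [Circle 1, Rect 2 3]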


My first thought was "How this 36-year-old mom learned to code with one neat trick!"


It also includes quite a bit of information that isn't exactly right.


In the "concurrent by default" section I would add the hardware description languages like VHDL and Verilog.

Learning Verilog was an eye opening experience for me. It reminded me of the time I switched from unstructured BASIC to C when I was a kid. At first it seems complex and weird then suddenly it clicks and it all starts making sense.


... and then suddenly you realize what a horrible, horrible language it is. I'm not exaggerating: it isn't even well-suited for the domain it is mainly used for (i.e., designing digital hardware circuits). For example:

1) Synthesis/simulation mismatch: Your design might work in simulation but not in hardware, and vice versa. Often, this is due to X-value (representing unknown/invalid values) problems.

2) Signed datatypes: If you mix signed and unsigned values in an expression, the result is unsigned. So, if a is unsigned and has the value 4, b is signed and has the value -3, then

  (a * b) < 5
will return 0 (false), while

  ($signed(a) * b) < 5
will correctly return 1 (true). That's because signed datatypes are just an afterthought and weren't officially introduced until 2001, more than 15 years after the language's inception.

3) If an undeclared net is used in a module instantiation, the compiler silently creates a 1-bit net with that name. No error message, no warning. You can turn off that stupid behavior with a compiler directive, but you have to turn it back on at the end of your file, because otherwise its effects are active in all files compiled afterwards, and many third-party sources don't work correctly with this setting.

VHDL is also a bad language, but for very different reasons. And yet, alternative HDLs have a hard time getting traction, for various reasons and non-reasons.


Oh I agree; while writing Verilog I was amazed people used it to develop something as critical as ASICs. The language is way too forgiving. And it's not like you can release a patch once the thing is on the silicon, unless you're working with FPGAs. It was still very interesting to see how hardware was designed, and definitely an enlightening experience as far as I'm concerned.

I found Verilog very similar to C in a way: if you're not familiar with all the quirks of the language, it's very easy to write code that looks completely benign and straightforward and yet ends up blowing up in your face because of an implicit cast or undefined behaviour...

On the other hand there's not much of a "hacker scene" for driving up the innovation for HDLs, unfortunately. And I don't expect the big companies making ASICs and FPGAs to come up with a radical new solution any time soon.

So basically, if one of you HNers wants to try their hand at writing a new programming language, consider designing a modern Verilog instead of yet another JavaScript transpiler :)


This seems true for other areas of EE. I'm continually surprised that our FPGA and board designers use some ad hoc, horrible, homegrown revision-control scheme. When I ask them about tools of the trade, they usually complain that the solutions are proprietary and don't work any better than making numbered backups.

The same is true, maybe even worse, for our MEs.


I really enjoyed writing VHDL. I don't remember having problems with unknown values: we used only '0' and '1' for anything that's supposed to be synthesized.

(For testbenches, everything goes..)

     Value set is ('U','X','0','1','Z','W','L','H','-')
     TYPE std_logic IS ( 'U',  -- Uninitialized
                         'X',  -- Forcing unknown
                         '0',  -- Forcing 0
                         '1',  -- Forcing 1
                         'Z',  -- High impedance
                         'W',  -- Weak unknown
                         'L',  -- Weak 0
                         'H',  -- Weak 1
                         '-'); -- Don't care

     SIGNAL stdLogicName : STD_LOGIC := '0';
     stdLogicName <= 'Z';


VHDL has the same problem:

  signal x, y: std_logic;
  ...
  x <= 'X';
  ...
  y <= '0' when x = '0' else '1';
will result in y getting the value '1' in simulation, but you don't know what it'll be for synthesis. So if you're unlucky, it'll simulate correctly but fail in hardware. Which is exactly what simulation should protect you from.

And even if you assign only '0' and '1', you might get an 'X' from third-party IP cores.


Yep, I agree about IP cores, never really used something like that.


I wholeheartedly agree.

I'm a CSE major and I took a hardware design class last fall, where we programmed a Xilinx FPGA using VHDL.

It was a huge change from everything I had learned before; I had no idea what a latch was or why it was bad to imply one. The thing that bothered me the most was the simulation/synthesis disconnect: the only efficient way to debug a program is with the simulator, because debugging it on the board takes a very long time when changing code. But even after you've debugged it in simulation, it might not even synthesize, which is an even harder "bug" to debug.


> It was a huge change from everything I had learned before; I had no idea what a latch was or why it was bad to imply one.

Exactly. VHDL and Verilog make hardware design twice as hard as necessary: first you have to think about what structures you need, and then you have to figure out how to describe them with the inappropriate tools your HDL gives you (in such a way that the synthesis software correctly recognizes your intent).


Not knowing what a latch is or why it is bad seems like more a failure of the class than an inherent problem with Verilog. There are times when a latch is useful and desired (though most often not and most often a normal flop structure is better).

With hardware design, you have to keep in mind that you are modeling hardware. A language that did it all for you might be nice, but you still have to know what framework you are working in. The simulation/synthesis disconnect is inherent to that. You can think of a ton of structures that seem fine at a code level but just don't have a workable analog in silicon. Synthesis in that sense is just another debug tool.


> Not knowing what a latch is or why it is bad seems like more a failure of the class than an inherent problem with Verilog.

You misunderstood the problem: It's not a problem that you can describe a latch, but that it's so easy to make the mistake of describing a latch instead of combinational logic. It's a common mistake and clearly a failure of the language, because two very different intents are described by very similar code.

> The simulation/synthesis disconnect is inherent to that. [...]

No, you're confusing simulation/synthesis mismatch with non-synthesizable code. The former means that simulation gives you different results than the hardware at runtime, which is a very bad thing and should be avoided wherever possible (that's the point of simulation, after all). The latter is unavoidable to a certain degree if the HDL should be usable for simulation.


On latches: from experience, I've never had an issue accidentally defining a latch. Simple guidelines (fully describe any case statements or always have an else in if branches; use * from Verilog 2005 rather than trying to list all relevant signals) make it more an issue of typos than true unintentional latches, and a basic lint flow will identify them quickly.

While I'd accept that perhaps exposing this so easily might be an annoyance in the language, it is hardly a failing.

Synthesis: again, from experience, this is an issue with your synthesis flow, not the HDL. Synthesis is absurdly complicated, to the point where any team I've been on has at least one guy whose entire job that is. If the generated netlist breaks simulation when your HDL made it all the way through all other flows, more than likely it's an issue with synthesis.


The problems you mentioned are all easily bypassed by design guidelines.

1) (At least on the X's point) Don't build designs that rely on X behavior or could result in X behavior. Where I work, X's are completely unacceptable and a sign that something is broken. In general, if writing Verilog to model hardware, don't write non-synthesizable constructs.

2) Signed datatypes don't make any sense when designing hardware. Verilog treats everything as bit vectors. While 2's complement can implement signed behavior, hardware doesn't know or care about signed types. Again, if modeling hardware, write code that models hardware.

3) This is annoying, but another issue that is stepped around with design practices and not a major problem.

Overall, when talking about Verilog, you have to keep in mind it's a language built to model hardware. Most features of modern languages don't make much sense in that context.


1) > Don't build designs that rely on X behavior or could result in X behavior.

Although I agree, you can't completely avoid 'X' values, because third-party cores might return them in output signals.

2) > Signed datatypes don't make any sense when designing hardware.

Now that is just plain and utterly wrong. Of course signed datatypes make sense when designing hardware: You can (and have to) perform arithmetic in hardware, and signed arithmetic is a part of arithmetic.

> Verilog treats everything as bit vectors.

By your argumentation, unsigned datatypes also wouldn't make sense when designing hardware. And yet you need them all the time.

> hardware doesn't know or care about signed types

But the language should care about it. "It's all 0s and 1s in the end" is not an argument for not providing proper abstractions.

3) > This is annoying, but another issue that is stepped around with design practices and not a major problem.

That's like saying "you can avoid bugs if you're really careful". Great. A "feature" which doesn't offer any advantage (because you're supposed to avoid it anyway) and which makes it so easy to make mistakes which won't be seen until simulation, should have never made it into the language.

> Overall, when talking about Verilog, you have to keep in mind it's a language built to model hardware.

Exactly. Verilog and VHDL were intended to model hardware, not to design it. Nowadays, they are mostly used to design it.

I don't mean to attack you, but attitudes like yours are the main reason why we're stuck with such crappy, ill-designed, inappropriate languages for hardware design. You're so used to the problems that you don't even see them anymore.


Well, #1 is a problem of the tools, not the language.


No, it's a problem of the language. If the tools wanted to avoid this problem, they'd be non-compliant.


It also kind of reminded me of Pi calculus (http://en.wikipedia.org/wiki/Pi_calculus). It's pretty fun to play around with and try to understand and replicate those examples on the page.


A thought on dependent types:

Can a dependent type system catch all type errors at compile time? For example, suppose I write the following (in pseudo-code):

  // Variable x is an integer greater than or equal to 0 and less than 256.
  int x (>=0, <256) 

  x = 128 // This is valid.
  x = x * 3 // This violates the type.
I can imagine how a compiler could catch that kind of error. But that's trivial. What happens in programs like this:

  int x (>= 0, <=10)
  x = parseInt(getKeyboardInput)
Now the compiler can't know for sure whether the type has been violated, because the value of getKeyboardInput could be anything. To take a page from Haskell, you could do something like this (which is still pseudocode, not valid Haskell):

  // x is a value that is either 1) an int from 0 to 10, or 2) nothing at all.
  maybe (int (>= 0, <=10)) x

  // applyConstraint recognizes that parseInt may return a value violating x's constraints.
  // Thus it transforms the return type of parseInt from int to maybe (int (>= 0, <=10)).
  x = applyConstraint(parseInt(getKeyboardInput))
Or perhaps applyConstraint wouldn't have to be called explicitly, but would be implicitly added by the compiler as needed. I'm not sure which is better stylistically.

Either way, applyConstraint would be required any time a computation could return an invalid value. That would get tricky, because the compiler would have to track the constraints on every variable, even where those constraints aren't declared. For example:

  int w (>= 0, <= 10)
  int x (>= 0, <= 2)
  int y
  int z (>= 0, <= 20)

  y = w * x
  z = y
Here, the compiler would have to infer from the assignment "y = w * x" that y is always between 0 and 20.

Do any languages currently take the idea this far (or farther)?


Your idea about `maybe (int (>= 0, <= 10))` is exactly correct, but there's an important point to emphasize around it:

In a DT language, when you express such a type, the compiler will demand that you use a type like Maybe to capture anything that can fail at runtime, and demand that you handle that failure.

To the compiler, `int` and `int (>= 0, <256)` are different types which require an explicit conversion (though the particular choice of conversion may be implicitly defined, viz. Idris' Coerce typeclass). In order to do multiplication like you suggested you would have to provide enough information to the compiler (either at compile time via constants or via a possibly failing mechanism which generates proof requirements at runtime) that your multiplication will not violate the constraints of the type.

To your final point about tracking constraints, even those defined implicitly, I think the answer is no-ish. Most frequently, DT languages have such a richness of types that there's no way to infer them—you're forced to write out all of the top-level types. But the idea of passing around constraints like you're suggesting here is not impossible.

You might imagine a pair type like (in no particular notation)

    (x, x => [Constraint])
where the first is a number and the second indicates a type-level function taking that particular number to a set of proofs that certain constraints are satisfied. We could indicate all of this at the type level and demand the compiler ensure that constraints are not violated (i.e. we end up coercing between constrained types like this, and only those coercions that coerce to supersets of the current constraints are valid), and we could also define a loose kind of (+), (*), (-) etc. on this pair type such that the result passes along the proper constraints.


Agreed in general, but it is possible to enable this style of constraint propagation by being restrictive about what kind of constraints you can write. In particular, the "LIQUID" family of languages that have been developed at UCSD allows properties which are conjunctions of LInear IneQUalities, and can infer them automatically. The programmer would annotate one declaration of a variable with a constraint like (0 <= x < 10), and the system tries to infer suitable constraints for all other variables (calling out to an SMT solver, and using invariant inference techniques from the program analysis community).

Their latest research language is "Liquid Haskell"; I think it could deal with jarrett's example out of the box (with the caveat that yes, you still need to write the test explicitly). They used to have an online demo where you could just type programs into a web form, but that doesn't seem to work right now...
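
From what I remember of the annotation syntax (treat the details as approximate), jarrett's bounded-int example would look roughly like this. GHC ignores the {-@ ... @-} comments; the Liquid checker reads them and discharges the range obligations with an SMT solver:

    {-@ type Bounded10 = {v:Int | 0 <= v && v <= 10} @-}
    {-@ type Bounded20 = {v:Int | 0 <= v && v <= 20} @-}

    -- The checker verifies that doubling a 0..10 value stays within 0..20.
    {-@ double :: Bounded10 -> Bounded20 @-}
    double :: Int -> Int
    double x = x + x

    main :: IO ()
    main = print (double 5)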



Thanks!


Yeah, less power means greater inference. You can model liquid types in a DT language I believe, but you'll never have the same convenience.


"Dependent types" is a quite broad term. The Liquid types people certainly call their work dependent types, even though it has a different flavour than e.g. Coq or Agda.


Yes, that's true—I lapsed in terminology. Maybe calling it ITT would be better? There's a lot of ambiguity there, of course. And less marketing.


The compiler doesn't necessarily have to have that level of inference; it can simply require the code to prove the constraints. For instance, parseInt will return an unconstrained int, so you'll get a type error trying to assign that to a range-constrained int. For your first and last examples, the multiplication operator on range-constrained ints can easily propagate the range constraints. In your last example, `z = w * x` would work, but assigning through the unconstrained int y may simply cause the last assignment z=y to produce a type error, forcing the code to re-establish the constraint explicitly.


As others have noted, it's less about the compiler running code and making determinations as it is using extremely robust, information-rich types. For instance, the Vector type in Idris has two type parameters (not sure that's the correct term to use in this case): the first is the type of the element contained by the vector, and the second is the length of the vector. Operations that increase the size of the vector are guaranteed to return a non-empty vector. They might look something like this:

    (::) : a -> Vector a n -> Vector a (S n)
This says that consing an element onto a vector of any size results in a vector that is guaranteed to be non-empty. This means that you could safely write a function to retrieve the head of a list by restricting the type to lists that are guaranteed to be non-empty.

    head : Vector a (S n) -> a
Something like the following pseudo-code in a REPL (I don't actually know a thing about Idris):

    let nums : Vector Int n = []
    head nums
...would fail to compile, since the second type param to nums is simply `n` rather than `(S n)`, as required by our `head` function. All of the things you would expect from Haskell like the power of ADTs and pattern-matching apply here, so you get even more compile time type-soundness.


> For instance, the Vector type in Idris has two type parameters (not sure that's the correct term to use in this case): the first is the type of the element contained by the vector, and the second is the length of the vector.

In case anyone's wondering, only the first of these (the element type) is a type parameter, since it has a constant value in all sub-expressions (ie. the cons cells all contain elements of the same type).

The second is a 'type index', which is a more general term (type parameters are a special case of type indices), since its value changes in the sub-expressions (the cons cells have different lengths).

The difference is that we don't need to pattern-match parameters on every recursive call, since they're guaranteed to be constant (although we can if we like).

I didn't know this until I asked http://cs.stackexchange.com/questions/20100/what-are-the-dif...


Makes sense; thanks for the clarification!


Without really addressing your question, notice that the power of dependently typed languages is not in the intelligence of the compiler, but in the expressivity of the type system. That is, the question becomes not "can the compiler catch this error on its own?", but rather "can the compiler verify my proof that I have not made this error?" A language with a very powerful type system can still have a very dumb compiler (although it will generate extremely slow code).

EDIT: It occurs to me that this is well summarised by the apparently facetious remark "Ask not what your compiler can do for you; ask what you can do for your compiler."


In Ada you have subtype, so you would write

  subtype My_Type is Integer range 0 .. 10;
  X : My_Type := 9;
  Y : My_Type := 10;
  A : My_Type := X + Y;

So, obviously there is enough information to deduce at compile time that the sum operation yields an out-of-range value. In practice, as many of the replies point out, you can determine no such thing (X : My_Type := User_Input;).

In that case Ada will throw a runtime exception. All this checking is expensive, so it ends up being much like C/C++: you have debug and release builds, and the release builds do not do the checking.


Yes

  int x (>= 0, <=10)
  x = parseInt(getKeyboardInput)
Dependent type systems don't have to know what the possible keyboard input is. They have to know what type the getKeyboardInput function returns (and what parseInt returns), and whether it's possible for that type to be out of the range of your constraint.

If that's so, it throws an error. At that point you simply have to compose your types and functions so they match the constraint, and you know you are safe.


> if it's possible for that type to be out of the range of your constraint....If that's so, it throws an error.

If I'm understanding you correctly, that would seem to take a lot of the power out of dependent type systems. Almost all values will eventually depend on input from the outside world, so it seems you'd rarely have a chance to use your dependent type system.

Haskell (which doesn't have dependent types baked in) has a good approach to possibly-invalid data via the Maybe type. E.g. Maybe Int means "either an integer, or nothing." So it forces you to explicitly handle the possible error cases.[1]

To spell it out in Haskell terms: in the example of getting an integer from the keyboard, x is a Maybe Int. If the input is invalid, the value is Nothing, which is a unique constant. If it's valid, the value is a Just wrapping the Int.
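
Spelled out as actual Haskell (the Small type and mkSmall are invented here; the point is just that the only way to obtain a Small is to go through the check):

    import Text.Read (readMaybe)

    -- A bounded int whose smart constructor is the only way in.
    newtype Small = Small Int deriving Show

    mkSmall :: Int -> Maybe Small
    mkSmall n
      | n >= 0 && n <= 10 = Just (Small n)
      | otherwise         = Nothing

    main :: IO ()
    main = do
      line <- getLine
      case readMaybe line >>= mkSmall of
        Just s  -> putStrLn ("valid: " ++ show s)
        Nothing -> putStrLn "invalid input"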

> At that point you simply have to compose your types and functions so they match the constraint, and you know you are safe.

But how do you get from untrusted, potentially invalid data into the world of trusted, guaranteed-valid data? How do you bridge the gap? I was suggesting some combination of an applyConstraint function and a Maybe type to achieve that. Are there other ways?

[1] Actually, it will let you bypass the usual compile-time protections with the fromJust function. But if you want compile-time safety, either refrain from using fromJust, or use it only where you're absolutely sure the value won't be Nothing.


> But how do you get from untrusted, potentially invalid data into the world of trusted, guaranteed-valid data?

The main point is that you can write provably correct validation functions. Then your program can branch on the result of that function, and in the "success" branch you can process data further, as usual, assuming (justifiably) the validity of the data. It doesn't make any difference whether some particular piece of data was validated at runtime or at compile time, as long as the validation does what you want it to do and the types reflect the result in sufficient detail. Verified programming is mostly not about trying to check everything at compile time, but rather about checking that when our program eventually runs, it does the right thing. Take the following pseudo-Idris-Agda function:

    boundsCheck : Nat -> (bound : Nat) -> Maybe (n : Nat ** n < bound)
    boundsCheck n b with n <? b 
        | yes proof    = Just (n , proof)
        | no  disproof = Nothing
This either returns the input natural number (if the input is within bounds) along with a proof of being in bound, or returns failure. The type of the function makes it impossible to write a function body such that the returned number is not within the supplied bound. Any subsequent code in the program may work with the output of this function and be utterly sure that the "n" in "Just (n, p)" observes the bound, and it may also safely omit any further checks for the same bound or greater bounds. Or here's a standard Idris function:

    filter : (a -> Bool) -> Vect n a -> (n : Nat ** Vect n a)
    filter p [] = (_ ** [])
    filter p (x :: xs) with (filter p xs)
        | (_ ** xs') = if p x then (_ ** x :: xs') else (_ ** xs')
This is a plain old filter, except that it works on a length-indexed list type "Vect". We have no idea (at compile time) how long the resulting filtered Vect will be. So we return a dependent pair containing some Nat and a Vect such that its length is equal to that Nat.

You might think that this is rather boring, since we could have used a list type not indexed by its length, and then just computed the length whenever we needed it at runtime. Except there is a big difference. It is impossible to return a value of type

    (n : Nat ** Vect n a)
from any function, such that in the value the Nat index does not equal the length of the vector! You could slap together a plain list and a Nat in a tuple, and then say: "This is a list coupled with its length. Let's try to preserve this invariant by writing code carefully." You could also try hiding the possibly unsafe internals behind some OOP class firewall, but even that doesn't protect you from writing the internals incorrectly. Dependent types can protect you even then; they only fail to prevent you from writing the wrong types in the first place.


As a sort of side note, why is it

    filter p (x :: xs) with (filter p xs)
        | (_ ** xs') = if p x then (_ ** x :: xs') else (_ ** xs')
rather than

    filter p (x :: xs) = if p x then (_ ** x :: snd (filter p xs)) else (_ ** snd (filter p xs))

?


I think a better example is the partial functions in Haskell like head or tail. They throw an exception on empty lists. You could make them return Maybe [a] or a default value, but a common usage of head might be where you know yourself that the list is non-empty. Like:

head . sort . f

where f is a function that you know returns a non-empty list, perhaps because that's just the way it operates or because you checked it further up. You know that head won't crash since any reasonable implementation of sort returns a list which is just as long as the one that was given to it. Similarly:

tail . sort . f

won't crash. Moreover, the list will still be sorted, since you just removed the head of the list.

This is all known to you, but you can't necessarily easily show it to the compiler.
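
In Haskell you can at least show the compiler the non-emptiness part by changing f's return type (a sketch; this f is invented):

    import Data.List.NonEmpty (NonEmpty (..))
    import qualified Data.List.NonEmpty as NE

    -- With a NonEmpty return type, "the list is non-empty" is a fact the
    -- compiler knows, and NE.head is total rather than partial.
    f :: Int -> NonEmpty Int
    f n = n :| [n + 2, n - 2]

    smallest :: Int -> Int
    smallest = NE.head . NE.sort . f

    main :: IO ()
    main = print (smallest 10)   -- 8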

I don't know the motivation for wanting to know the input before it is given. Putting it in a Maybe is the only sensible option. Some things are just not known/unknowable before running a program. You wouldn't even need to run the program if you could prove absolutely anything about the properties of a program.

Will the user input bleed through to all corners of the program? Well, why would it? You could have a sanitizing loop that refuses to pass "wrong" data to the pure (and dependently typed!) parts of the program, just looping over and over until the correct input is given.


"You wouldn't even need to run the program if you could prove absolutely anything about the properties of a program."

That's not really true.


> Can a dependent type system catch all type errors at compile time?

I'm probably not clever enough, but someone could probably prove that'd be equivalent to solving the halting problem. It seems impossible.

> because the value of getKeyboardInput could be anything

That makes my brain hurt. I think implicitly narrowing the type is stylistically better. Maybe just adding a dependent type declaration:

    x = (parseInt(getKey) :: Int(>=0, <=10))
Then you type-annotate everything you can, marking every non-annotated expression as wild:

    w = ... :: Int(>=0, <=10)
    x = ... :: Int(>=0, <=2)
    z :: Int(>=0, <=20)
    y = w * x                 -- :: Wild
    z = y                     -- compiler should Scream!
Asking the compiler to infer arbitrary type declarations seems hard. I think y probably shouldn't be inferred as :: Int(0<=..<=20) unless there is a rule somewhere that Int(a<=..<=b) * Int(c<=..<=d) = .. :: Int(f<=..<=g).

The work still needs to be done (no magic bullet here), but we should be able to use the computer to generate well-defined type combinations.
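
The rule for (*) is essentially interval arithmetic, so at least that piece is mechanical. A rough sketch of what a compiler could compute for the w/x/y example (the Range representation is invented for illustration):

    -- A constraint like Int(a <= .. <= b), as plain data.
    data Range = Range { lo :: Integer, hi :: Integer } deriving Show

    -- Combining constraints under multiplication: take the extremes of the
    -- products of the endpoints (this also covers negative bounds).
    mulRange :: Range -> Range -> Range
    mulRange (Range a b) (Range c d) = Range (minimum ps) (maximum ps)
      where ps = [a * c, a * d, b * c, b * d]

    main :: IO ()
    main = print (mulRange (Range 0 10) (Range 0 2))   -- Range {lo = 0, hi = 20}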


> I'm probably not clever enough, but someone could probably prove that'd be equivalent to solving the halting problem. It seems impossible.

IIRC dependently-typed languages dodge this bullet by not being completely Turing-complete (as 'twere).


Not quite. Many dependently-typed languages aren't Turing-complete, but that's not how they 'dodge this bullet'.

Think about the following Java code:

    public int getMyInt() {
      return getSomeOtherInt();
    }
How hard does Java have to work to figure out whether getMyInt is well-typed? Does it have to solve the halting problem? No. It just checks the information that you have given it. If you wrote that getSomeOtherInt has return type "int" then getMyInt will type-check. If you gave it some other type, it won't type-check. If you didn't give it a type, it will complain about a syntax error. At no point will Java try to write your program for you (which would hit against the halting problem). The same is true in dependently-typed languages, except the types happen to be much stronger. You still have to write them down, write down values of those types, write conversion functions when you need to combine different types, etc.

Incidentally, if a language is Turing-complete, it actually becomes really easy to get a program to type-check; we just write infinite loops everywhere, which leads to logical fallacies ;) That's why many dependently-typed languages aren't Turing-complete (although many are; e.g. those which distinguish compile-time terms from run-time terms, like ATS).
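
In Haskell-ish terms the trick is one line: with general recursion every type is inhabited by a non-terminating term, so read as a proposition, every type becomes "provable".

    -- A "proof" of absolutely anything, if non-termination is allowed:
    bogus :: a
    bogus = bogus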


You can compromise on Turing-completeness, too. Idris, for example, lets the programmer decide whether a given definition should be required to provably terminate or not.


The real way dependently typed languages dodge the undecidability problem is that they "give up" on some valid programs. The output of the typechecker is either "proved the program is correct", "proved the program is incorrect" or "inconclusive". If the result is "inconclusive" then it means that the programmer needs to manually provide a proof of correctness.


> I'm probably not clever enough, but someone could probably prove that'd be equivalent to solving the halting problem. It seems impossible.

Guessing you're plenty clever...just didn't think about it enough :)

Simply transform a program into one which halts on a type error rather than whatever else it might be doing. Done.


Consider the case of a simple type system, where any statement in the language is of either type "true" or type "false"....


This proof is missing a few steps, at least if it's meant to convince slow thinkers like myself. b^)


"Can a dependent type system catch all type errors at compile time?"

Provably not, if you want a type checker guaranteed to finish.


Could you give a brief overview of why? Does this come down to the halting problem?


Basically, yes. If you want your type checker to be decidable, you're going to have to restrict the type language to be much smaller. In general, dependent types are undecidable to type check.


Note though that undecidability is not really an issue in practice. You can always just refuse to run any function compile time that has not yet passed termination check. In fact Idris does exactly this. This already puts Idris ahead of C++ in terms of compilation termination, and it's not even like people are frequently disgruntled about diverging C++ templates.


"You can always just refuse to run any function compile time that has not yet passed termination check."

I don't understand this sentence.


Both Idris and Agda check whether functions provably terminate. Halting problem notwithstanding, it is possible to have conservative checks that never return false positives, so that an OK from the checker implies termination, but a FAIL doesn't imply non-termination.

Idris is not total by default, so it is not required that all functions pass the termination check. However, when Idris does the type checking it has to occasionally run Idris functions (that's what makes it dependent). This is in fact the only source of potential non-termination in the checker. So, whenever Idris needs to run a function at compile time, and it cannot prove termination via the checker, it just throws an error.

(Side note: it is never required to run a function in order to typecheck the very same function. All major dependent type checkers have the property that they never have to run a function that is not already checked.)


So that was "You can always just refuse to run any function [at] compile time that has not yet passed termination check."?

All of this is certainly the case, and useful, and interesting. It doesn't contradict the point that 1) guaranteeing termination and 2) guaranteeing you return "no" on every incorrectly typed program are incompatible, which was approximately the original question.


Maybe I'm misunderstanding, but Kutta's description seems to handle 1) and 2) just fine. An additional requirement of "guaranteeing 'yes' on every correctly typed program" would lead to contradiction, but the point is that you don't need that requirement in practice.


You're not misunderstanding. In short, we should rightfully expect our dependent type checker to be sound, if not complete. Weaker, non-dependent type systems can have complete checkers though, see for example

http://www.mpi-sws.org/~neelk/bidir.pdf


Hmm. Conceivably. I will have to revisit when I have more bandwidth.


"1) guaranteeing termination and 2) guaranteeing you return "no" on every incorrectly typed program are incompatible"

You do need more bandwidth. Trivial counterexample (for any reasonable inference of the semantics of the source code of this made-up language):

  define isCorrectlyTyped( program P)
  as
    return "no".
Of course, a practical version would have to have the function return "yes" on more valid programs :-)

By the way: compilers for languages such as Java and C# do something similar: they reject all programs that may use a variable before a value is assigned to it, at the price of rejecting some programs that properly assign a value to each variable before use on the grounds that the compiler cannot prove it. Typically, the language specification describes exactly how smart compilers can be (or rather, how stupid compilers must be) when checking this, to ensure that all compilers agree about what programs are valid.


Fair point! I was wrong above - "Can a dependent type system catch all type errors at compile time?" Yes, so long as you're okay with rejecting some correctly typed programs! If you're not okay with that, then your type checker might not halt. Shame it's too late to add an addendum above...


I can write a compiler in 5 minutes that handles both #1 and #2. It returns no on every program. Therefore it always terminates and always returns "no" on every incorrectly typed program.



>Basically, yes. If you want your type checker to be decidable, you're going to have to restrict the type language to be much smaller. In general, dependent types are undecidable to type check.

In dependently typed programming languages like Agda and Idris, typechecking is decidable. You are wrong here.


You're right. I was being handwavey, and I apologize for that.

Let me clarify my position: In theory yes, you're right, it is decidable. In practice Agda and Coq force you to prove pretty much everything about your program using expensive annotations, in order to satisfy the typechecker. It's like pulling teeth. The type checker must be taught that your program terminates, and it must be taught that you maintain even the most simple of invariants.

There's a reason dependent types haven't caught on in industry, and even the "fancy" types that are cited by proponents as being used in practice (like ML's Hindley-Milner type system) are MUCH simpler than full blown dependent type systems. Because the burden is shifted to the programmer, and that makes it super unusable.

I'm not a dependent types hater. (I have dependent type friends :) ). But the best way to get a usable type system is to restrict its expressivity to make automation easier, and the annotation burden much smaller. Sure, we're giving up something there. But it's better than having a system that can prove any theorem in the Calculus of Constructions, but is used by about thirty people.


>Let me clarify my position: In theory yes, you're right, it is decidable. In practice Agda and Coq force you to prove pretty much everything about your program using expensive annotations, in order to satisfy the typechecker. It's like pulling teeth. The type checker must be taught that your program terminates, and it must be taught that you maintain even the most simple of invariants.

That's why we need gradual typing. Some critical parts should be written with full dependent types, with a termination checker, etc., but most of the code should just be type checked.

P.S. I have some dependent types experience, including applying Coq in commercial projects.


You're right with a strict reading. "In general, dependent types ..." should be a statement about all instances of "dependent types" or (weakening to one common English usage) a statement about typical instances of "dependent types". In neither case are they undecidable.

A loose reading (probably overly loose, but possibly what was intended rather than what was expressed) would permit something closer to "Dependent types, in full generality, are undecidable to type check." which I believe to be the case, where actual implementations are more constrained (though decreasingly so!).


Yes, dependently typed languages can. They are full-blown theorem provers...


Gödel guarantees that full-blown theorem provers, even with human guidance, cannot prove all true statements about sufficiently rich systems. 'Sufficiently rich' here just means 'includes Peano arithmetic', and you can find tons of tutorials for doing just that (Church encoding) in even the simplest type systems.

(This bit of pure-mathematics wonkery ignores the substance of jarrett's question, though; one doesn't need the full Gödelian power to observe that a priori theorems that must be verified at compile time can never account for data that are provided only after compilation.)


When I write a program, I don't want it to be just correct (true); I want it to be _provably_ correct; I want to be able to be convinced that it is correct, at least in principle, given enough time and a whole specification of the system. Programs which are correct, but not provably so, should not pass code review and might as well be lumped together with those which are wrong. It doesn't matter if you are using a full-blown theorem prover or thinking about the code in your head; Gödel's theorems are not really relevant to programming, even when the code uses deep mathematics.

I'm not sure I agree with "theorems that must be verified at compile time can never account for data that are provided only after compilation". At compile time, you prove the assertion "for every x, the program outputs correct answer for x". Now, you don't know that the user will enter say x=5 at runtime, but since you proved a theorem about _every_ x, this includes x=5. You cannot predict the future (the path that will be taken by the program), but once you prepare for all possible futures, you're safe.


The problem with theorem provers like Agda and Coq is that they are brittle to refactoring and thus it's hard to make changes to an app. Something seemingly small can ripple through the whole system.

I hear Idris is more reasonable in this regard and has better general purpose programming properties.


>The problem with theorem provers like Agda and Coq is that they are brittle to refactoring and thus it's hard to make changes to an app. Something seemingly small can ripple through the whole system. I hear Idris is more reasonable in this regard and has better general purpose programming properties.

You can say the same for any strongly typed language, but in reality it's the opposite. If your program has a good design, it's easy to refactor.



> can never account for data that are provided only after compilation

It seems to me they could. Suppose my program depends on external input D. That input could be a file it loads, keyboard input, network input, or anything else. Could I declare that I expect D to have properties P? And then, could the language force me to implement a code path for cases where D does not satisfy P? An example of such a code path would be to print an error message and then terminate the program.


Yes, this is a good point. I hope that what I meant was reasonably clear, but a more precise statement would have been something like "cannot prove certain assertions about data that are provided only after compilation".


>Gödel guarantees that full-blown theorem provers, even with human guidance, cannot prove all true statements about sufficiently rich systems. 'Sufficiently rich' here just means 'includes Peano arithmetic', and you can find tons of tutorials for doing just that (Church encoding) in even the simplest type systems.

They don't prove theorems. You do this by writing a program.


Sure, but, regardless of agency, human or automated, it is still not possible to prove all true theorems. (The human / automated distinction comes in more sharply in the halting problem, where any potential halting decider will choke on some program that provably terminates.)


I wonder; have mathematicians found something that they know with every fibre of their being is true, but that they can't prove? And do they then try to prove it in another formal system instead?

Or do these unprovable-but-true facts mostly elude them, since they are operating in a formal system that 'hides' them?


There are a couple of statements that have been proven to be unprovable in the standard axiom set (ZF). The provability of the unprovability of unprovable statements is not guaranteed, but happens to be true for many of the more useful axioms.

The most common is the Axiom of Choice, which if true is incredibly convenient. For example, the Axiom of Choice implies that all vector spaces have a basis, which makes linear algebra tractable. Mathematicians generally accept it as true because it's useful and intuitively true, but it does have some consequences that are intuitively false, like the Banach-Tarski paradox. We even have a saying: "The Axiom of Choice is obviously true, the Well-Ordering Principle is obviously false, and Zorn's Lemma no one even knows." The implicit punchline being that these three things are logically equivalent, because each implies the others.

Other axioms that are independent of ZF include the Continuum Hypothesis and the Axiom of Foundation. The Continuum Hypothesis says that there is no type of infinity that is larger than the cardinality of the Rationals and also smaller than the cardinality of the Reals. It doesn't make a damn lick of sense for such a type of infinity to exist, but it's not technically possible to disprove it. The Axiom of Foundation, in very simple terms, disallows objects like "the set that contains itself." Such sets are not constructible from the other axioms, but without Foundation they are not technically disallowed either.

Using "stronger" axiom sets to investigate the properties and consequences of these unprovable statements is common. Additionally, many automated theorem-provers include several of the more convenient axioms that aren't included in "regular" mathematics.

More reading: http://en.wikipedia.org/wiki/List_of_statements_undecidable_...

(Note: ZFC is ZF plus the Axiom of Choice; Choice is so damn convenient that the mathematics establishment usually accepts it by default, although proving that you don't need Choice to prove something is often seen as a worthwhile accomplishment.)


Mathematicians say that formula F is true in system S iff one can find a proof of F in the context of S. So truth is always considered with respect to some system. Thus, there is no formula which is "true" but that one can't prove, by definition. (That's the kind of smartassery mathematicians like.)

But Gödel's incompleteness theorem states that for every system that includes (Peano) arithmetic, one can find a formula in this system that cannot be proved and whose negation cannot be proved either. And to answer your question, Gödel's proof explicitly exhibits such a formula.


Words in mathematics mean what they are defined to mean, so there's no point arguing about the correctness of definitions. Yours is essentially the constructivist perspective on truth; but it is not the only one. It will declare that certain statements are neither true nor false, which some find distasteful.

A more model-theoretic approach says that a formula is true in a theory (not a system, although the word choice doesn't matter) if it is true in all models of that theory—a statement that can (in principle) be concretely verified simply by testing in each such model. Then it may (and will) be true that a statement is true in "all possible worlds", but that there is no way of proving it from the rules that constrain 'possibility'.


  int x (>= 0, <=10)
  x = parseInt(getKeyboardInput)

This will be a compile-time error, because parseInt(getKeyboardInput) returns integers that are between 0 and 256(?), not 0 to 10 like 'x' wants.


Exactly. That's why I proposed a Haskell-ish Maybe construct to represent the uncertainty of the input's validity. The code snippet:

  int x (>= 0, <=10)
  x = parseInt(getKeyboardInput)
...was meant as an example of what won't work.


I think an underrated non-standard approach to programming is graphical programming. Though this approach doesn't seem to have received significant uptake amongst professional programmers, there is an application called Max [0] that is popular amongst musicians and artists and is quite surprisingly powerful and effective.

There's an interesting article [1] on how Jonny Greenwood of Radiohead uses it extensively; in it you can see some examples of how it works - modules wired together visually.

I think there is a lot of potential for a really nice mix between text-based programming and graphical programming to work for general programming too.

[0]:http://cycling74.com/products/max/ [1]:http://thekingofgear.com/post/25443456600/max-msp


I've used graphical programming in the past. It sounds amazing in the abstract, but ends up being a mess when it is implemented. It basically requires a "sufficiently smart editor", which, even if implemented perfectly, would not leave a lot of room for a third-party ecosystem to be built around the language.

There are many solved problems in text-based programming that would need to be solved all over again in order for a graphical programming language to be as useful.

How would one post a "snippet" to StackOverflow? How would diffs work? Consequently, how would source code management work?


The solution that comes to mind is that the graphical elements are backed by text for these purposes. Then, the graphical interface amounts to an editor for this meta-language.
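
A minimal sketch of what "backed by text" could mean (the types are invented for illustration): the patch is ordinary data with a textual form, and the snippet/diff/source-control questions above then apply to that text rather than to pixels.

    -- A wired-boxes "patch" as plain, diffable data with a textual round trip.
    data Node  = Node  { nodeId :: Int, kind :: String }    deriving (Show, Read)
    data Wire  = Wire  { from :: Int, to :: Int }           deriving (Show, Read)
    data Patch = Patch { nodes :: [Node], wires :: [Wire] } deriving (Show, Read)

    main :: IO ()
    main = print (Patch [Node 1 "osc~", Node 2 "dac~"] [Wire 1 2])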


Inevitably, that is what is attempted. But it doesn't solve the use cases I mentioned.

To take a diff, for example: not only do you need to illustrate the diff version of (I drew a new line from here to here and made this other box green), you would also need to illustrate patches to textual details and how they relate to the graphical code.

Maybe I can imagine a solution that could do that (at great expense). I cannot imagine the language being friendly enough that third-party compilers, IDEs, or static-analysis tools would be feasible.


The digital media department at my alma mater used Max/MSP to teach programming to art students with a lot of success. It's really fascinating to watch the programs as they run, and it makes for a great way to visualize abstraction and separation of concerns in a concrete way.


My degree included a "Computer Music" option, which were a few classes that were half music students and half computer science ones. We used Max/MSP quite a bit.

For anyone that wants to play around with a very similar language, PureData is an open-source application that is a relative of Max/MSP (they share a creator, and are quite similar). Not wanting to come in to do the homework or purchase the software myself, I used it quite a bit.

http://puredata.info/


Graphical programming is also popular for animation/cgi/special FX, e.g. XPresso in Cinema4D.


Some notes:

- parallel and concurrent are 2 different things

- the 'symbolic languages' definition seems off. Wikipedia puts it right:

> symbolic programming is computer programming in which the program can manipulate formulas and program components as data

So it's not "using graphs & such to program"
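
A tiny illustration of "manipulating formulas as data", nothing Mathematica-specific (the Expr type and the rewrite rules are made up):

    -- A symbolic expression type plus a couple of rewrite rules.
    data Expr = Num Double | Sym String | Add Expr Expr | Mul Expr Expr
      deriving Show

    -- x*0 -> 0, x*1 -> x, x+0 -> x, applied bottom-up.
    simplify :: Expr -> Expr
    simplify (Add a b) = case (simplify a, simplify b) of
      (Num 0, b') -> b'
      (a', Num 0) -> a'
      (a', b')    -> Add a' b'
    simplify (Mul a b) = case (simplify a, simplify b) of
      (Num 0, _)  -> Num 0
      (_, Num 0)  -> Num 0
      (Num 1, b') -> b'
      (a', Num 1) -> a'
      (a', b')    -> Mul a' b'
    simplify e = e

    main :: IO ()
    main = print (simplify (Add (Mul (Sym "x") (Num 0)) (Sym "y")))   -- Sym "y"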


Take this[1] for example. That's a regular Mathematica notebook. You can insert images (like drag & drop them from your desktop) directly into notebooks. It's actually very convenient for image processing. The same thing holds true for things like the graphical output of plotting functions, which retain their underlying vector data.

The symbolic paradigm isn't necessary for this, but it makes it easier. Implementation-wise these things require less special-casing than you would think, because the symbolic paradigm in essence encourages your representations to be naked. Thus images, plots, etc. are not "black box" items. They're structured data represented in the same language, but which the frontend happens to display in visual forms.

(Code itself is also data, except that if a symbol in the data has an associated transformation rule, that transformation rule is applied. This mechanism is how you make data do 'variables' and 'functions'. These variables and functions work more or less the same as they would in a common programming language, but the underlying mechanism is nothing but data transformations.)

[1] http://reference.wolfram.com/mathematica/ref/EdgeDetect.html


Mathematica is a good example of a symbolic language (can't get more symbolic than term rewriting), but he calls that....knowledge oriented or some other nonsense.


Mathematica[0] and The Wolfram Language[1] are two different things. The Wolfram Language is a sort of extension of their Wolfram Alpha service, which Wolfram itself describes as "Knowledge Based." Mathematica is still just Mathematica.

[0]http://www.wolfram.com/mathematica/

[1]https://www.wolfram.com/language/


No.

Mathematica is a commercial piece of desktop software that uses the Wolfram Language. Just like how RStudio uses R.

There are other product platforms (coming soon) that employ the Wolfram Language, both in cloud and desktop incarnations: http://www.wolframcloud.com/


I'm not great at following markitecture, but didn't Mathematica exist and have a language before this language was rebranded as the Wolfram Language? There is nothing really wrong with this, but it's understandable that people might be confused about it right now.


That's right, and what seems to be confusing people is that the language underlying Mathematica-the-product was always, implicitly, "Mathematica".

We were a one-product company, and it didn't make sense to distinguish Mathematica-the-product and the language it ran.

The last time we'd tried to branch the underlying language off into its own thing was in the early nineties, when a certain intern by the name of Brin was in the middle of refactoring the code before he went off to do other things :).

My prediction is that the confusion will pass soon once we have these concrete products actually out in the world.

A new generation of people will be using and seeing the language for the first time. And they'll be doing stuff that old-school Mathematica-the-product (and most of its user base) would find quite alien.

Things like trying code out in online sandboxes, coding things in the cloud IDE (or locally, via Eclipse or "Wolfram Desktop"), deploying public APIs for other people to use, building websites totally within the language, doing internet-of-things type stuff, embedding chunks of WL code in Java or Python or whatever, writing natural language parsers on top of Alpha's technology, creating data science workflows, making private clouds, designing interactive visualizations, tweeting code at our executor-bot... and on and on...



If you want concurrent by default, try an HDL. :)


I had the same thought. I find it odd he didn't mention them.


>parallel and concurrent are 2 different things

That should be an implementation detail as far as the programmer is concerned. And languages should strive for parallelism anyway, not merely concurrency.


The idea that parallel code is executed through a concurrency mechanism should be hidden, but it doesn't mean that the distinction between the two should be.

http://existentialtype.wordpress.com/2011/03/17/parallelism-... http://existentialtype.wordpress.com/2014/04/09/parallelism-...


Thanks for the feedback. I updated the post with notes that mention these corrections.


My first and only encounter with "Concatenative Languages" was programming the HP48GX[1] graphing calculator in high school. Thinking back to it, I'm amazed by what you could do with it. It was very powerful even by today's standards. Whereas other kids had Gameboys, I had an "HP". I even got in trouble playing Tetris on it during my German language class. My calculus teacher never knew that you could do symbolic integrals and derivatives with it (using a free computer algebra library). Sadly, the only program of note that I wrote for it was an implementation of The Game of Life [2].

[1] http://en.wikipedia.org/wiki/HP-48_series [2] http://en.wikipedia.org/wiki/Conway%27s_Game_of_Life


I was a little disappointed that Factor[1] didn't get a mention in the 'Concatenative' section. Its stack effect checker takes care of a lot of the problems he mentions, IMO.

1: factorcode.org


Postscript is another excellent example of the type.


“Stack checking” is definitely the ticket to maintainable stack programs. I have been working on a statically typed concatenative language called Kitten[1] off and on for a while, and while the error messages leave something to be desired, they can at least help you understand how data is flowing through a program, and of course rule out some bugs.

[1]: http://github.com/evincarofautumn/kitten


very cool, I will definitely take a look at this


QML [1] is an interesting example of declarative programming. It allows constraints and relationships to be defined and the runtime will do the rest. Perhaps it's not as powerful as other languages but in its domain it does very well.

[1] https://en.wikipedia.org/wiki/Qt_Modeling_Language


QML is simply brilliant for developing UIs.


LabVIEW is concurrent by default; control flow is done by linking the outputs of one function to the inputs of another. This makes writing concurrent loops ridiculously easy: just put two loops next to each other.

I rarely use it because organization is such a pain, but its "data-flow" paradigm does simplify a lot of logic.


Happy to see another LV user here.

Organization gets much, much easier once you get the big architecture concepts - producer/consumer patterns, messenger queues, event structures, the actor framework. Yes, it can be painful, but the newer functionalities (clean-up, ridiculously easy sub-VI creation, ...) subtly improve it.

Btw: If you are using LabVIEW and have never used Quickdrop, try it. It is the most underutilized and amazing feature.


Another example would be Bloom language. Here is a great talk explaining motivation for using "disorderly" languages in distributed systems: http://vimeo.com/53904989


Yeah, there's a pretty large class of data processing tasks that are made pretty trivial by LabVIEW-like visual dataflow languages. One I've seen for XML work that was pretty good was called "SDE":

http://www.futurepointsystems.com/?page=products


I wish APL/J/K were on here, but I guess that doesn't really change the way I think about coding... it just blows my mind.


I'm a huge fan of the declarative programming paradigm, but outside of Regexp and SQL and a handful of other DSLs, it's dead. Its death should be a case study in Open Source strategy: It died because it became boring before it became useful. SQL and Regexp have stuck around because they did something useful immediately.

I think that any future that the Declarative paradigm has within general purpose languages is the kind applied by compilers. For example, x = (a + b) - a can be reduced to x = b, or even eliminated altogether with subsequent in-scope references to x being replaced with b. Another example is dead code elimination. These forms of declarative programming let you use an imperative or functional language immediately but gently introduce you to declarative benefits without having to deal with all the mind bending that is necessary to optimize pure declarative code.
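To make that concrete, here's a toy sketch (mine, not from the article) of the kind of algebraic rewrite pass a compiler might apply, written in Haskell:

    -- A tiny expression language and one rewrite pass, just to make the
    -- "(a + b) - a  ==>  b" example above concrete.
    data Expr = Var String | Lit Int | Add Expr Expr | Sub Expr Expr
      deriving (Eq, Show)

    simplify :: Expr -> Expr
    simplify (Sub (Add a b) a') | a == a' = simplify b   -- (a + b) - a  ==>  b
    simplify (Add (Lit 0) e) = simplify e                -- 0 + e        ==>  e
    simplify (Add a b) = Add (simplify a) (simplify b)
    simplify (Sub a b) = Sub (simplify a) (simplify b)
    simplify e = e

    -- simplify (Sub (Add (Var "a") (Var "b")) (Var "a"))  ==  Var "b"

Dead-code elimination is the same idea one level up: prove a binding unused (or an expression equal to a simpler one) and rewrite the program without it.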


You are very mistaken. Prolog is still very much alive. You can find us on ##prolog on Freenode. You can build a web application with Prolog. When the 2048 madness was going on, I implemented it in Prolog in about 200 lines in 2 hours, and I had only about 3 weeks of Prolog under my belt at the time. It's a very powerful concept. I didn't have to figure out how to implement it; I just broke 2048 down into rules, declared it, and bam, I had a game.


Prolog is an interesting exercise and occasionally useful as a tool, but most implementations leave a lot to be desired as a general purpose programming language. An example of this is SWIPL's 3 string types and their crazy behavior. You are also stuck in a first order language that is essentially pure, and have no abstractions for common patterns except meta programming which makes code even more difficult to reason about.

That being said I love having logical variables and I would love having them in other situations.


How/where do you recommend to learn prolog? The concept looks really interesting and useful, at least for me.


There are a lot of good Prolog resources on the web. I gathered up a little list some time ago. You can find it here:

http://fogbeam.blogspot.com/2013/05/prolog-im-going-to-learn...


I never said it wasn't useful, I said that it was dead... much in the same sense as COBOL. I use Datalog and constraint programming extensively, but I'm a black sheep.


It's not dead until no one is using it. You can say APL is dead and I might believe you. So long as there are still people using Prolog or some other form of logic programming, it's not dead; all it needs is one killer application and everyone will jump on it. Erlang's popularity has surged since Whatsapp got acquired. Lots of people are learning about Erlang's root from Prolog and likewise getting somewhat curious about Prolog.

I might be wrong, but I think where the logic community failed is in taking advantage of multiple processors. Goals can be broken down and shared across multiple cores/processors, so without changing code the same program can be parallelized. It just seems no one has implemented it yet.


APL is far from dead, there is a lot of living legacy code and old timers with APL skills still actively using it commercially.

Prolog on the other hand...I'm not aware of any commercial projects actively using it, but it is still alive as a hobbyist/learning language.


Well, if you're only looking for logic languages instead of Prolog itself, then there's http://www.princexml.com/ written in Mercury.


Of course. Datalog also gets a lot of use in industry.


Since 'functional' is not mentioned, I will assume that it is mainstream now!


According to Wikipedia, functional programming is an example of declarative programming [1], which is mentioned in the article.

This is not immediately obvious to me, here's why (from the same wikipedia article [1]):

While functional languages typically do appear to specify "how", a compiler for a purely functional programming language is free to extensively rewrite the operational behavior of a function, so long as the same result is returned for the same inputs. This can be used to, for example, make a function compute its result in parallel, or to perform substantial optimizations (such as deforestation) that a compiler may not be able to safely apply to a language with side effects.

[1] http://en.wikipedia.org/wiki/Declarative_programming
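As a concrete instance of the rewriting freedom described in that quote: GHC's base library carries rewrite rules that, in effect, fuse consecutive list traversals, which is only safe because there are no side effects whose order could be observed. A minimal sketch (the function names are made up):

    -- The compiler may rewrite the first definition into the second
    -- (roughly the "map/map" rule: map f (map g xs) ==> map (f . g) xs),
    -- eliminating the intermediate list - a small case of deforestation.
    doubleThenShow, doubleThenShow' :: [Int] -> [String]
    doubleThenShow  xs = map show (map (* 2) xs)
    doubleThenShow' xs = map (show . (* 2)) xs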


Seems pretty obvious to me, since the above description could also apply to your typical declarative languages such as SQL, where you also rely on the database to do optimizations based on a description of the result you're looking for, rather than procedures for calculating it.


Just to clarify, it wasn't obvious to me before I read the part of the article that I have quoted.

But I still think that functional programming goes beyond simply specifying "what you want". Take the problem of sorting an array for example. A declarative specification of the sorting problem would be: Given an array T, compute a permutation of T such that for all i, it's true that T[i] < T[i+1] (some languages support this type of specification e.g. [1]).

Clearly here, you don't describe "how to do it". In functional programming you can't do this, you have to explain "how", implementing quicksort, for example.

[1] http://www.minizinc.org/
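A rough Haskell rendering of that contrast (my sketch, not from the linked site; it uses <= so duplicates are allowed, and the "spec" version is hopelessly slow, which is rather the point):

    import Data.List (permutations)

    -- "What": any permutation of the input that is ordered.
    sortSpec :: Ord a => [a] -> [a]
    sortSpec xs = head [ p | p <- permutations xs, isSorted p ]
      where isSorted ys = and (zipWith (<=) ys (drop 1 ys))

    -- "How": quicksort, an explicit recipe for computing that permutation.
    quicksort :: Ord a => [a] -> [a]
    quicksort []     = []
    quicksort (p:ps) =
      quicksort [x | x <- ps, x < p] ++ [p] ++ quicksort [x | x <- ps, x >= p]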


IMO the way that functional programming is declarative is that you can spell functions out like definitions. Like "fac(x) is

¤ 1 if x == 1

¤ x * fac(x-1) otherwise"
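In Haskell this reads almost verbatim as two equations (a direct transcription of the definition above):

    fac :: Integer -> Integer
    fac 1 = 1
    fac x = x * fac (x - 1)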

This is at least more declarative than describing the function with a loop. With an imperative program you kind of have to describe it as a series of steps, because the order matters so much.

Personally I prefer this kind of declarative programming over something like Prolog, since I get a simple way of describing what/how the program is while still feeling declarative. With Prolog I still have to learn how it works, and the way that it works is more removed from imperative and functional programming (while functional programming can be done, maybe with some friction but in principle, in most imperative languages as long as you restrict your use of certain features). Plus Prolog has un-declarative things like the cut operator. I don't really have much experience with declarative programming outside of these two.

Something more declarative might indeed just be to give invariants and let the program find an implementation for it.


> Something more declarative might indeed just be to give invariants and let the program find an implementation for it.

EDIT: turns out that there kind of is:

http://nautilus.cs.miyazaki-u.ac.jp/cgi-bin/MagicHaskeller.c...


The second paragraph:

> This is not your grandma's "functional programming will change the world!" blog post

Presumably because a lot of people write blog posts about how FP will change your approach. That's true but, while I wouldn't say FP is mainstream, it's well-known in circles such as ours; whereas the topics he mentions are much less run-of-the-mill. It makes for a more interesting and original read.

I notice he didn't include array-based languages, like APL and J. They were flavour of the week here a couple of weeks ago, but you still don't get much written about them and they would fit nicely in this article.


I think your perception of what is "well-known" strongly depends on your age. For example neither concatenative nor declarative programming are exotic to me and I think many other older programmers.

Forth (concatenative) used to be pretty big; back then software was simpler and you did not need a Qt wrapper, an XML parser, etc. Forth was very practical for the things it was originally used for. Like that embedded control software for a telescope where the "user interface" consisted of a few buttons, as in physical buttons - no screen, no mouse, no hard disk, no network connection to worry about. In that environment Forth made a lot of sense; on a modern computer, as a tool to write a modern GUI application, it makes no sense at all if you ask me.

And "declarative".. Prolog is old, back then it was usually called "logic-based programming", though.

Really, Forth and Prolog, every older guy who cares about programming knows those fellas.

And Prolog did not change the way I think about programming at all. I considered it breathtakingly clean and elegant (compared to C and Pascal).. for the few things it was good at (e.g. expert systems).. but utterly unsuitable as a general-purpose language / paradigm. I felt the same way about Standard ML (it was the dominant FP language before Haskell became big). "Wow, writing a simple input/output transformer (e.g. academic compiler) or doing some basic math is so clean and elegant with this! Everything else.. not so much."

In contrast, Forth actually changed my thinking. According to Charles Moore, Forth's creator, proper software development with Forth meant finding just the right set of words, the right vocabulary, for the problem domain. Once that was accomplished writing down the program logic became trivial. Note that in Forth functions are called "words", are usually very short, and indeed word-like. So you end up with code like "ARM 90 DEGREES ROTATE".

As I said my approach to programming was changed by the exposure to Forth. I aim to discover/define an efficient "vocabulary" for the problem domain in every language. That is not the same as designing a "domain-specific language" by the way. DSLs usually go much further, making up a syntax of their own. I do not like DSLs at all. Mostly because of the associated learning curve forced upon you by every new DSL and by the usually poor tool support.

EDIT: Removed needlessly trollish part


Good point. To be honest, I was somewhat surprised to see 'declarative programming' in the list, as well, given the ubiquity of SQL and regular expressions; plus, as others have said, FP's declarative nature.


I'm pretty happy hoping that it's because we're beginning to realize that "functional" is not a terrifically descriptive term.


Or maybe it just won't 'change the way you think about coding'


I think that depends a lot on what you mean by "functional". If "functional" means "everything is immutable" then I can guarantee that you'll learn a lot by programming that way. If you mean "first-class functions" then I cannot -- although you still might. You could probably program in Scheme (say) as if it were C with side effects all over the place, but that wouldn't teach you anything except a different syntax. OTOH, coding something non-trivial in Haskell would teach you a lot -- even if you don't end up using/liking it.

(Aside: I was somewhat disappointed by the blogger calling out the "Wolfram Language" as something special or to be admired. It's a ridiculous ad-hoc hodgepodge of a language. I stress the word language.)


If you don't know FP and you learn FP, then it will almost certainly change the way you think about coding.


One could say that it goes under declarative


And concurrent.


Other languages to add to this list would be Dijkstra's Guarded Command Language, and Promela. Promela is especially interesting because of the (nondeterministic) execution semantics, which provide an extremely interesting way to model parallelism. In a similar vein, TLA+ is worth a look.

Both Promela (Spin) and TLA+ have active communities and have found fairly wide use in industry. They are generally used for model checking, model extraction by guided abstraction, and development by refinement, but can be used in a much more ad hoc way to just experiment with parallel ideas.


10 Ways Buzzfeed-style Headlines Will Forever Be Annoying


Honestly, my facebook feed has trained me to simply never click on any article that begins with a number.


Where is Aspect Oriented Programming and all the other offspring of the Inversion of Control pattern (Dependency Injection, Dependency Inversion, ...)?

Is this line of evolution in languages considered dead?


Yes, thank heavens, yes!


What, you don't like computed non-local come-from's?


These are more tactical (i.e. low-level) techniques than big strategies that encompass entire languages.


Came here to say the same thing. Learning about Inversion of Control utterly changed how I program on a day-to-day basis.

It may not be a language feature, but the things in the article seem rather academic by comparison.


Pointed out elsewhere, but ANI appears to be dead according to its own tutorial[1]. However Funnel[2] by Martin Odersky/EPFL does a similar job, with a more explicit nod to Petri nets which are usually used as the basis for concurrent systems.

[1] https://code.google.com/p/anic/wiki/Tutorial [2] http://lampwww.epfl.ch/funnel/


Code Search is a better way to do Declarative Programming for non-optimized solutions. I've been obsessing over this topic for the last few months. Right now there are limited versions of it in a few places: Python's howdoi and Visual Studio's Code Search.

A true Code Search would work like this:

1. Type your search term into your code as a comment line, e.g. "Bubble sort StampArray by name".

2. Google/Bing/StackOverflow searches for your string, replacing your terms with generics: "Bubble sort [an array of objects] by [string variable]".

3. It takes the code results and shows them to you, replacing all instances of [string variable] with getName() and all instances of [Object[]] with StampArray.

4. You pick your favorite.

5. Your IDE adds the code to a "code search module" which you can edit.

6. Your edits get added to the search database.

The best part? You could even put your Declarative Programming engine -inside- of the Search just by populating initial search results. What about better code coming to exist in the future, you say? Well, you don't necessarily have to keep the same result forever. If it's been deprecated, you can re-do the search.


I've been thinking a lot about agent-oriented programming. I had a General Magic device back in the day and later thought that a Telescript-like language, applied more for code organization than code mobility, might be interesting. I guess APIs won, but I still think there is something there.


http://erights.org/ was not technically like Telescript, but aimed at a similar-enough vision you might find it interesting. http://erights.org/elib/capability/ode/index.html for that vision. The section on mobile almost-code at http://erights.org/elib/distrib/pipeline.html should help to relate it to Telescript.


Dependent types are a wonderful idea so long as I can do a couple of things with them:

1. Copy-paste them without complications, i.e. "non-null" requires no code that relies on specific variables unless there's a logical conflict (which variable?).

2. If it's a known failure, give me a warning. If it's an unknown failure, let me choose how to deal with it, again in a neutral fashion so that I could simply say @Notnull<skip> and it would just skip the code if the variable is null.


Declarative programming is a great one, almost a magical experience.

"It feels like I am sitting at the controls of a quantum computer. I've got all my qubits (terms) all wired together in some complicated expression and when power is applied every qubit will instantly collapse out of superposition and crystallize into a perfect answer."

(From something I've been working on, http://kmkeen.com/sat/ )


The concurrent by default paradigm looks like it could be really useful for some cases. Does anyone know of any more well-used languages that support it?


As I mentioned in my other comment Verilog and VHDL are "concurrent by default" since that's how hardware works anyway.

If you want to experiment with them you don't need an FPGA, you can just start with a simulator such as Icarus Verilog[1] and a waveform viewer like gtkwave[2] and get a feel of the language. There are a bunch of tutorials on the net.

[1] http://iverilog.icarus.com/ [2] http://gtkwave.sourceforge.net/


Verilog is Turing complete and can do standard I/O with $display, so you don't necessarily need the waveform viewer.


Yeah, but using waveforms is half the fun! :)

Also, debugging verilog using only $display doesn't sound very fun...


http://www.edaplayground.com/ is a great way to play with hardware description languages too. You don't need to install anything.


It's a "well-used language" in academic contexts, but probably not exactly an answer to your question ;) Anyways, in OZ[1] you can just add thread <x> end around code x, and it's concurrent. It will wait for all bindings to be resolved before continuing.

It's a pretty cool language with a lot of paradigms one can easily try out.

[1] http://en.wikipedia.org/wiki/Oz_%28programming_language%29#D...


Haskell's Par monad gives you a nice eDSL that lets you lay out your data dependencies and have things evaluated in parallel:

http://hackage.haskell.org/package/monad-par-0.3.4.6/docs/Co...
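A minimal sketch of what that looks like (my toy example, not taken from the linked docs):

    import Control.Monad.Par (runPar, spawn, get)

    -- Two independent sums evaluated in parallel; 'spawn' forks a task
    -- whose result lands in an IVar, and 'get' blocks until it's filled.
    parSums :: [Int] -> [Int] -> Int
    parSums xs ys = runPar $ do
      ix <- spawn (return (sum xs))
      iy <- spawn (return (sum ys))
      x  <- get ix
      y  <- get iy
      return (x + y)

Compile with -threaded (and run with +RTS -N) to actually get the parallelism.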


Is Labview concurrent by default?


Yes. I enjoyed watching LV code execute graphically.


Would be good to add Mozart/Oz.

Dealing with process coordination using the resolution of logical variables gave me a refreshing new perspective. The finite domain constraint system design in Oz is an awesome example of this in action.

An interesting tidbit - the Mozart/Oz team invented "pickling" before it caught on with Python.


I'm in the midst of something else and can't pull out my C++11 book now, but doesn't C++ have custom types that would let you declare something like "this must be a positive integer"?

I might be confusing this with custom literals.


Well, there's unsigned integers. This means that it can't be less than 0. But if you have a 32 bit unsigned integer, and you decrement 0, you get 0xFFFFFFFF. That may or may not be what you want or expect.

You also could create a class that wrapped an integer, that would have a range, and would maintain that range as a class invariant. It could either clip or throw an exception when an attempt was made to go out of the range. But that's just regular class stuff, so I don't think it's what you had in mind...
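For what it's worth, here's the same wrapper-with-an-invariant idea sketched in Haskell rather than C++, purely for illustration (a made-up Percent type; the C++ version would be a class whose constructor does the check):

    -- The raw constructor is kept "private" by not exporting it, so every
    -- Percent in the program satisfies 0 <= value <= 100.
    newtype Percent = Percent Int deriving (Show, Eq, Ord)

    mkPercent :: Int -> Maybe Percent   -- reject out-of-range values...
    mkPercent n
      | n >= 0 && n <= 100 = Just (Percent n)
      | otherwise          = Nothing

    clipPercent :: Int -> Percent       -- ...or clip, as suggested above
    clipPercent = Percent . max 0 . min 100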


Would be pretty cool to have concurrency by default and then a lock declaration I could do on a particular function to fix any concurrency issues. Would need a new style of debugger/code view though specifically for this purpose.


Isn't "Dependent types" just re-inventing how Fortran handles non allocatable array and character variables i.e. those who's length is declared at compile time using a parameter?


It's a bit more than that. Let's assume Vector is a typed list of known length

    [1,2,3] : Vector 3 Int
we can write `append` which combines Vectors

    append : (Vector n a, Vector m a) -> Vector (n + m) a
which works as you expect because the expression in the type (n+m) is written in the value language.

Here's another (slightly pathological) dependent type. Normally `if` statements require that the then and the else branches result in the same type, but in a DT language you can write

    if' : (cond : Bool) -> a -> b -> (if cond then a else b)
In other words, `if'` takes the type where you first provide it a boolean, we'll call it `cond`, and then two differently typed continuation parameters. The type of the result is an expression: "if cond is true then this is the type of the first continuation, otherwise this is the type of the second continuation".

Both of these examples would be proud to call themselves trivial. The DT rabbit hole goes very deep.
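For anyone without a dependently typed language handy, here's a rough Haskell approximation of the Vector/append example using GADTs and a type-level Nat (not real dependent types, since the length index lives only at the type level rather than being an ordinary runtime value, but enough to make the signatures concrete):

    {-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

    data Nat = Z | S Nat

    -- type-level addition, mirroring the (n + m) in the signature above
    type family Add (n :: Nat) (m :: Nat) :: Nat where
      Add 'Z     m = m
      Add ('S n) m = 'S (Add n m)

    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    append :: Vec n a -> Vec m a -> Vec (Add n m) a
    append VNil         ys = ys
    append (VCons x xs) ys = VCons x (append xs ys)

    -- bonus: a head that only accepts non-empty vectors, checked statically
    vhead :: Vec ('S n) a -> a
    vhead (VCons x _) = x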


The article unfortunately does quite a poor job of explaining what dependent types are and why they're valuable. They're a lot more complicated than just "storing the length of a vector in its type". Dependent types allow you to create types where a type in a signature 'depends' on the value of another argument in the same signature. This is very powerful, but unfortunately easily understandable examples are still somewhat hard to come by.


Short answer: No.

Longer answer: Sort of. An array has an element type and length associated with it (and in ATS at least, the type-level length isn't kept at run time), and that length is set when the array is created and modified if elements are added or removed. However, functions can require parameter types or provide return values of "array T n | n > 0 && n < m" --- the array cannot be empty and has to be smaller than some limit.

There's a good deal more to dependent types, but they are much more flexible than simply attaching the length of the array to its type.


Common Lisp also supports specialized type declarations like this. (In Common Lisp type declarations are optional.)


Common Lisp does not have dependent types. The example given is unfortunate because it often leads people to think they already have (and understand) dependent types, when that isn't the case.


That's just one use of dependent types (and the easiest one for an audience to understand).


But it is the only one described in the article.


If a programming paradigm does not change how you think about coding, it isn't a programming paradigm. Good article though.


I read it, and how I think about coding remains the same as before I read it.


I don't think the idea was that reading the article would change how you thought about coding. I believe the point was that trying out these unusual programming paradigms would change how you thought. For the cases on his list that I have tried (declarative programming and Forth/Joy) I have found that to be true.

I would add object oriented programming (eg: smalltalk), macro programming (eg: lisp) and functional programming (eg: Haskell) to the list of things that change your thinking, but since most of the article's prospective audience has already heard of those I was happy that he stuck with paradigms that are LESS well known.


.... did .... did you crosspost this from reddit ???

http://www.reddit.com/r/programming/comments/22nhb2/six_prog...


> If you've used SQL, you've done a form of declarative programming

This is so wrong I don't know where to begin.


Per Wikipedia, "Common declarative languages include those of database query languages (e.g., SQL, XQuery), regular expressions, logic programming, functional programming, and configuration management systems."

http://en.wikipedia.org/wiki/Declarative_programming

"Wikipedia is mistaken" is totally a defensible position, but it should be defended, so please find a place to begin.


Wikipedia is mistaken.

To me, in a declarative programming language you tell the computer what you want without exactly telling it how to achieve it. It's the compiler's job to figure that out. That classic example of a declarative programming language is Prolog.

In databases there are two traditional approaches to expressing queries: the relational algebra and the relational calculus. One of these (the relational calculus) is very clearly declarative: you essentially are saying "give me all students whose ID numbers are less than 1000 and who took a class from someone who is no longer a member of the faculty." The interpreter figures out what relational operations should be performed to do this.

The other option is the relational algebra, which to me is very obviously not declarative: you are telling the system exactly what to do, and giving it an implicit ordering (though just like in any procedural language it can change the ordering if it thinks it's a good idea). Thus in the relational algebra you'd say "get the table of students. Reduce it to those whose ID numbers are less than 1000. Join it with those students who took a class. Take the list of faculty. Reduce it to those who are no longer teaching. Join that with the previously generated students relation. Project out just the student names."

The primary language for the declarative relational calculus is Datalog.

The primary language for the (non-declarative) relational algebra is SQL. Though SQL has a few extra gizmos added to compensate for the fact that it's less expressive than the relational calculus.
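If it helps to see the two styles side by side outside of a database, here's a toy Haskell rendering of the students query (all data and names are made up): the list comprehension reads like the calculus ("describe the rows you want"), while the explicit pipeline of select/join/project steps reads like the algebra.

    data Student = Student { sid :: Int, sname :: String }
    data Took    = Took    { tsid :: Int, tprof :: String }

    students :: [Student]
    students = [Student 1 "Ada", Student 1001 "Bob"]

    took :: [Took]
    took = [Took 1 "Hamming"]

    formerFaculty :: [String]
    formerFaculty = ["Hamming"]

    -- Calculus flavour: state the conditions the answer must satisfy.
    query1 :: [String]
    query1 = [ sname s | s <- students, sid s < 1000
                       , t <- took, tsid t == sid s
                       , tprof t `elem` formerFaculty ]

    -- Algebra flavour: an explicit select -> join -> select -> project pipeline.
    query2 :: [String]
    query2 = map (sname . fst)
           . filter (\(s, t) -> tprof t `elem` formerFaculty)
           . concatMap (\s -> [ (s, t) | t <- took, tsid t == sid s ])
           . filter ((< 1000) . sid)
           $ students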


I would appreciate it if you'd at least add a section to the Wikipedia talk page with your concerns. I've not decided exactly how I feel on the matter, but they're certainly not entirely groundless.


Why do you think SQL doesn't qualify? it's certainly not the only example of declarative programming, and maybe not the most interesting, but it is one which most developers have come across.


In theory, SQL is a declarative programming language; in practice, past the simplest of examples, there are dozens, if not hundreds, of ways to ask the same question, and each one of them performs differently. The moment one starts to do things like add anonymous views, the declarative facade tends to disappear.

Let's also remember that many companies stuck with big RDBMs systems have entire teams of people whose job includes turning declarative statements into extremely procedural ones.

So in practice, writing a SQL statement has little to do with making a query readable and a lot to do with abusing knowledge of internals to make it work fast. So declarative, not so much.


SQL-the-language is declarative - it defines meaning and does not define execution strategy. Particular implementations have significant differences in implementation strategy depending on the way the SQL was structured, leading to crazy results when people try and control operational semantics. But that doesn't mean "SQL is not declarative" - it's just a weakness of declarative languages when we care about operational semantics.

Has anyone made an RDBMS that lets you write SQL and separately control how it is evaluated?


Sure, but there is a continuum here. It's not a binary decision.

For example, I consider the sublanguage in printf to be more declarative than the rest of C as well.

SQL as it stands is fairly declarative, especially in contrast to the existing models at the time it was introduced, such as CODASYL.

SQL is the one language we use that somewhat operates at the level of thought.



