Hacker News | pdexter's comments

Haskell doesn't require top-level type signatures.


True, though it does warn you when compiling with -Wall. I still think signatures are helpful.
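A minimal sketch of what that looks like in practice (my own example, not from the thread): the binding below compiles fine without a signature, but `ghc -Wall` emits a missing-signature warning for it.

```haskell
module Main where

-- No top-level signature here: GHC infers `double :: Num a => a -> a`,
-- and -Wall reports "Top-level binding with no type signature".
double x = x * 2

main :: IO ()
main = print (double 21)  -- prints 42
```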



How were the diagrams made?


With https://excalidraw.com/

Believe me: a whole new world will open once you've discovered it.


Pretty cool! I like the idea of edit a file -> save in your repo -> edit -> save. Export as needed.


Charles Schwab did that until about 4 or 5 years ago.


Random question, but how did they get the source code highlighting in that doc?


The Code Blocks add-on from the G Suite Marketplace (new doc -> Add-ons -> Install -> Code Blocks).


We're software developers after all.

I feel sad knowing that what I've trained for, and my passion, aren't helpful at all in fixing this crisis.


That sounds more like an implementation detail, in most cases. By this metric Haskell wouldn't be a functional language.


It's a very important implementation detail. Explicit loops tend to imply mutation, which is contrary to idiomatic functional programming. Recursive calls don't require mutation but do require TCO to achieve equivalent space complexity. Constant-factor optimizations are one thing, but failing to perform TCO turns constant-space algorithms into linear-space algorithms (or linear ones into quadratic, etc.). It's less a matter of "optimizing" the calls and more a matter of not wasting limited stack space on data which is clearly not required to execute the remainder of the program.

One might as well label the practice of freeing stack frames when a function returns "Function Return Optimization" (FRO) and consider it a mere "implementation detail". After all, wouldn't it be much simpler to grab new memory every time the program needs some storage space and never bother with cleaning it up? It would certainly make debugging easier, with all those old variables retained for the life of the program and not constantly overwritten by new data.

However, programs written for a language without guaranteed "FRO" would look very different from normal programs, much as programs designed to compensate for the lack of guaranteed TCO look very different from idiomatic functional programs.
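To make the tail-call point concrete, here's a minimal sketch in Haskell (my own example; GHC does perform this optimization). `go` calls itself in tail position, so it compiles to a constant-space loop; without that guarantee, the same definition would consume stack proportional to `n`.

```haskell
{-# LANGUAGE BangPatterns #-}
module Main where

-- Sum 1..n with a strict accumulator. The recursive call to `go`
-- is the last thing the function does, so no stack frame is retained.
sumTo :: Int -> Int
sumTo n = go 0 1
  where
    go !acc i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)  -- tail call: nothing left to do after

main :: IO ()
main = print (sumTo 1000000)  -- runs in constant space; prints 500000500000
```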

Haskell uses a different (data-centric, non-strict) evaluation model where recursive definitions don't result in recursive calls, so traditional TCO isn't as relevant. Recursion is used very heavily in Haskell—which has no first-class looping constructs—but the resulting programs generally do not require large stacks. It's not unusual to be able to run even large Haskell programs with a 1 KiB maximum stack size (+RTS -K1k). Space leaks are possible, of course, but they take place in the heap.
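A minimal sketch of that guarded recursion (my own example): the definition below is recursive, but each step immediately returns a cons cell, so walking the list is driven by demand rather than by a growing call stack.

```haskell
{-# LANGUAGE BangPatterns #-}
module Main where

-- Recursively defined, yet each call yields a constructor right away;
-- the consumer forces one cell at a time, so no deep stack builds up.
nats :: [Integer]
nats = go 0
  where go !n = n : go (n + 1)

main :: IO ()
main = print (nats !! 100000)  -- prints 100000
```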


Common Lisp was designed without requiring TCO because:

* various platforms don't support TCO. Common Lisp was designed so that it can be implemented by a simple non-TCO interpreter, transpiled to C and built with a C compiler that performs no TCO, compiled to a Lisp Machine CPU without TCO support, or targeted at a non-TCO virtual machine (like the JVM). Many JVM languages don't support TCO and at best offer explicit tail recursion or a compiler that detects self tail calls, which is far from full TCO support. Thus a portable conforming Common Lisp program will run in ABCL (a full Common Lisp implementation on top of the JVM), because it will not depend on TCO to avoid blowing up the stack, or similar.

* another reason was that Lisp has a bunch of features which don't work that well with TCO. For example, Lisp has always supported various dynamic scoping constructs and made extensive use of them, something which Scheme in its core does not, but provides via libraries or language extensions. Using dynamic scoping constructs makes TCO more difficult, may require a different language design, etc.


I agree with you on the first point. There are good technical reasons why TCO can't be implemented on some platforms. However, that just punts the issue one level down the stack: These platforms should have built in support for guaranteed TCO.

As for language features like dynamic scoping, I would say that a function call which needs to be followed by some cleanup activity is not in tail position, so TCO would not apply. The cleanup code could be in tail position, however, if implemented as a function call. In Common Lisp most forms of dynamic scoping or unwinding are explicit anyway, so this shouldn't come as a surprise as it might in languages like C++ and Rust where destructors are called implicitly when objects go out of scope.


They are often explicit, but they are widely used and often generated behind the scenes by macros or declarations.


Macros would need to specify whether any expressions will be evaluated in tail position. However, expressions in macros don't look like they should be subject to TCO, so the default assumption should be that they aren't unless declared otherwise. Do you have any examples of cases that would be likely to cause confusion—in particular where a function call appears to occur in tail position in the code but can't be TCO'd because of a macro?


For example, I sometimes see macros which generate code with optimization quality declarations ((speed ...), etc.) for all or parts of the code. Depending on the combination of qualities, TCO might be enabled or disabled in some code sections.


If TCO is guaranteed at the language level, as in Scheme, then it will always be enabled regardless of compilation settings. Debug builds are no more tolerant of stack space leaks than release builds. The fact that TCO isn't guaranteed is the problem here.


> If TCO is guaranteed at the language level, as in Scheme, then it will always be enabled regardless of compilation settings

https://www.gnu.org/software/kawa/Compatibility.html


Your point? The page you linked to specifically says that Kawa only implements a subset of modern Scheme—by which I mean R5RS or later. Early versions of the Scheme language standard didn't require TCO, but all the recent ones do. This doesn't affect the core point that if TCO is guaranteed by the language standard, as in modern Scheme, then it cannot be selectively disabled because doing so would break perfectly compliant code.


Haskell implements call-by-need reduction, in which all calls are tail calls.


Is this automated?


Sounds like Datalog could fit the bill.


Interesting. Can you give an example of generic algorithms plus custom types, in practice? Off the top of my head I thought that any dynamic language, or static language with good generics, would have this property, but maybe there's something that Julia does differently.


As an additional example, I really like the combination of Unitful and DiffEq [1]. You're right that the core feature that allows this stuff is duck typing, but by itself it's not enough. Your not-duck has to not only quack; other animals have to look at it and treat it like a duck sometimes, and like a not-duck when you want it to do more than a duck. Multiple dispatch (plus parametric subtyping) lets you trivially define both notduck + A (notduck.+(A) in OOP languages) and A + notduck (extending A, whatever A is), and it's really fast. That enables the core Julia concept of specialization: easily customizing the particular behavior of any type at any point to get both the common behavior and the extended behavior right.

For static languages you can implement part of this with, for example, interfaces (you'll face the same restrictions if the language is single-dispatch). But even if you can extend an interface freely for already existing objects, the various packages must agree to comply with the same interfaces (and you might end up with either tons of interfaces, since there are tons of possible behaviors for each entity and purpose, or giant interfaces trying to fit them all). In Julia you can use specialization to surgically smooth the integration between two packages that had no knowledge of each other and never agreed to comply with any (informal) interface (which do exist in Julia, like the Julia Array and Tables.jl interfaces).

[1] https://tutorials.juliadiffeq.org/html/type_handling/03-unit...
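The dual-number idea behind ForwardDiff can be sketched outside Julia too. Here's a hypothetical, minimal Haskell analogue using a type class, just to illustrate "generic algorithm + custom numeric type" (this is my own sketch, not how the Julia packages are implemented):

```haskell
module Main where

-- A dual number carries (value, derivative). Giving it a Num instance
-- lets any generic numeric function differentiate itself, which is the
-- same idea as ForwardDiff.jl's Dual type, here via type classes.
data Dual = Dual Double Double deriving (Show, Eq)

instance Num Dual where
  Dual a a' + Dual b b' = Dual (a + b) (a' + b')
  Dual a a' * Dual b b' = Dual (a * b) (a' * b + a * b')  -- product rule
  negate (Dual a a')    = Dual (negate a) (negate a')
  abs    (Dual a a')    = Dual (abs a) (a' * signum a)
  signum (Dual a _)     = Dual (signum a) 0
  fromInteger n         = Dual (fromInteger n) 0

-- A generic polynomial: works on Double, Dual, or any Num.
f :: Num a => a -> a
f x = x * x * x + 2 * x + 1

main :: IO ()
main = print (f (Dual 2 1))  -- f(2) = 13, f'(2) = 14
```

In Julia the same effect falls out of multiple dispatch and subtyping rather than a closed set of class methods, which is what makes cross-package composition so seamless there.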


Use DiffEq on ForwardDiff's Dual numbers to calculate sensitivities via forward-mode AD, or on Tracker's TrackedArray to calculate them via reverse-mode AD.

Feed Measurements.jl's numbers, which track measurement error, into any algorithm to compute the transformed measurement error after the algorithm is applied.

Combine NamedDims.jl + Flux.jl to get everything that PyTorch's awesome Named Tensors feature gives.

