
I write a bit of literate Haskell, sometimes. It's one of the best-supported literate programming systems out there: the compiler supports it, the language server supports it, and using VSCode as an "IDE" means full support for all the things you mentioned. Haskell code formatters don't seem to support literate Haskell, though, and GitHub Copilot, at least, gets confused between prose and code (but that's fine: if I'm taking the time to make my code extra readable and understandable, the last thing I want is for an AI to get involved).
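For anyone who hasn't seen it, here's roughly what the bird-track flavour looks like - a made-up .lhs fragment, not from any real project. Everything is prose by default, and only the lines starting with "> " get compiled:

    This paragraph is ordinary prose and is ignored by the compiler; only
    the bird-track lines below are code.

    > module Fib where
    >
    > -- naive Fibonacci, purely to show the format
    > fib :: Integer -> Integer
    > fib 0 = 0
    > fib 1 = 1
    > fib n = fib (n - 1) + fib (n - 2)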

Maybe a tool like the one presented here could work as a language server proxy to the underlying language's server. The presence of literate text alone doesn't seem to be the main issue, it's getting the code portions parsed, checked, and annotated with references that matters.


The languages people already use are inconsistent (in a logic theory sense) and lack formal semantics. Efforts to prove anything useful about programs written in those languages don't get far: the first fact forces you to restrict everything to a subset of the language, sometimes an unworkable one, and the second is a massive hill to climb at whose peak sits a note saying, "we have changed the implementation, this hill is no longer relevant."

The only* practical way to approach this is exactly what Dafny does: start with a (probably) consistent core language with well-understood semantics, build a surface language on top with syntax features that make it more pleasant to use, prove that the surface language translates to the core language so that its semantics are clear, and then generate code in the desired final language after verification has succeeded.
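To make that concrete, here's a toy sketch of the shape of the approach - plain Haskell, nothing to do with Dafny's actual internals. The surface language has sugar the core lacks, and "desugar" is the translation you'd prove meaning-preserving once and for all:

    data Core
      = CBool Bool
      | CIf Core Core Core
      deriving Show

    data Surface
      = SBool Bool
      | SIf Surface Surface Surface
      | SUnless Surface Surface Surface  -- sugar: "unless c t e" means "if c then e else t"
      deriving Show

    -- the surface-to-core translation: verification reasons about Core,
    -- and every surface feature inherits Core's semantics through it
    desugar :: Surface -> Core
    desugar (SBool b)       = CBool b
    desugar (SIf c t e)     = CIf (desugar c) (desugar t) (desugar e)
    desugar (SUnless c t e) = CIf (desugar c) (desugar e) (desugar t)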

Dafny's about the best of the bunch for this too, for the set of target languages it supports.

(It's fine and normal for pragmatic languages to be inconsistent: all that means here is you can prove false is true, which means you can prove anything and everything trivially, but it also means you can tell the damn compiler to accept your code you're sure is right even though the compiler tells you it isn't. It looks like type casts, "unsafe", and the like.)
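In Haskell terms the escape hatch looks something like this - unsafeCoerce is real, the wrapper name is made up:

    import Unsafe.Coerce (unsafeCoerce)

    -- assert to the compiler that a value already has whatever type you want;
    -- nothing is checked, so get it wrong and you get garbage or a crash at
    -- runtime instead of a type error
    trustMe :: a -> b
    trustMe = unsafeCoerce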

* one could alternatively put time and effort into making consistent and semantically clear languages compile efficiently, such that they're worth using directly.


Dafny is great, and has some advantages compared to its competitors, but unequivocally calling it "the best" is quite bullish. For example, languages using dependent types (F*, ATS, Coq/Agda/Lean) are more expressive. And there are very mature systems using HOL.

Truth is that everything involves a tradeoff, and some systems are better than others at different things. Dafny explores a particular design space. Hoare-style invariants are easier to use than dependent types (as long as your SMT solver is happy, anyway), but F* has those too, and in F* you can also fall back to dependent types when automatic refinement proofs become inadequate. And F* and ATS can target low-level code, more so than Dafny.

Probably I would not use ATS for anything, but between F* and Dafny, there isn't such a clear cut (I'd most likely use F*).

And if I don't need (relatively) low-level, I wouldn't use either.


Ah, my apologies, I wasn't particularly clear. The "best" I am referring to is only code generation into popularly used languages.

I'm currently using Lean myself, but Agda's probably going to win out longer term for me simply because it compiles to GHC Haskell.


Can't I just work on a subset of the language? I don't care if the linked list implementation I use isn't verified. I want to verify my own code, not because it will run on mars, but as an alternative to writing tests. Is this possible?


Yes. See SPARK for an example of this. It is a subset of Ada plus the SPARK portions to convey details and proofs. You can use it per file, not just on an entire program, which lets you prove properties of critical parts and leave the rest to conventional testing approaches.


Unless you use a language designed for formal verification (Lean/Idris/Agda/F*/ATS/etc), no, it is not possible.

You can get pretty far in Haskell (with various extensions) and Scala. But for Go/TypeScript/etc, forget about it.


On types, I think there's a philosophical argument about whether types can be values, and a related one on whether types should be values, but I think I disagree with the author about what a value is, because as far as I understand it a value is a piece of data manipulated by a program during execution.

Languages with type erasure obviously never have types as values. Languages without type erasure have _something_ as a value, but is that something a type, or a representation of a type, and is that a valid distinction? I don't feel well qualified to make a judgement call, largely because I favour type erasure. Types are specification, not implementation.

What the author calls values, I tend to call terms: bits of program that can be judged to be of a particular type or not. "1" is a term: you could judge "1" to be of type "int", or of type "unsigned". "factorial 8" is a term. A value is just a term that's represented at runtime. But is there a type of a type? Commonly, no: types are in a different universe, specified with a different syntax, and understood under a different semantics. The type of "factorial", eg, is a function from int to int. In a language with generics, you might have "List T" that takes some type T and produces the type of lists of Ts.

There's no particular reason why you can't say that the type of List is a function from Type to Type, why you can't use the same syntax to specify that function, or why the semantics of what that function means should be any different to the semantics of "factorial". Consider:

    def factorial : Nat → Nat :=
        fun n => if n = 0 then 1 else n * factorial (n - 1)

    def natty : Type := Nat → Nat
This doesn't imply any fancy-pants type system features. The above two terms are expressible in the simply typed lambda calculus. If you add only outer-most universal quantification, ie, parametric polymorphism, you get Hindley-Milner. If you allow arbitrary quantification, you get System F; add type-level functions and you get System Fω.

Universal quantification, HM style, gives you functions from types to values (terms that can exist at runtime). If you have, say, "length : List a -> Int" you have a function that's universally quantified over "a": you tell "length" what type "a" is and you get back a function from one concrete type to another, something you can have at runtime. (All the functions producible here are identical, thanks to type erasure, so while this type-to-value thing actually happens in order to perform type checking the compiler will only ever produce one implementation.)

The last option, railed against in the article, is to have a function from a value to a type. This is, to agree with the article, generally pretty weird and hard to get a handle on. Where the article deliberately picks pointless examples, though, I'd like to point out two much more commonly encountered kinds. The first is "Vector a n", the type of vectors of element type a and length n: the element argument is a type, but the length argument is a value. The other comes from C: tagged unions. You have a union that might hold one of a handful of types, and a tag value that distinguishes which one it is. In C there's no type system support and you just have to get it right every single time you use it. If you have functions from values to types, though, you can have the type checker get involved and tell you when you've made a mistake.
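Haskell can approximate both with GADTs and DataKinds - with the caveat that the index is a type-level copy of the value rather than a genuine run-time value, which a dependently typed language would let you use directly. A sketch, not taken from the article:

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    import Data.Kind (Type)

    data Nat = Z | S Nat

    -- the length lives in the type, so taking the head of an empty vector
    -- is a type error rather than a runtime crash
    data Vec (n :: Nat) (a :: Type) where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    vhead :: Vec ('S n) a -> a
    vhead (VCons x _) = x

    -- a tagged union whose tag the type checker tracks: matching on the
    -- constructor tells the compiler which payload you're holding
    data Shape = CircleTag | RectTag

    data Figure (s :: Shape) where
      Circle :: Double -> Figure 'CircleTag          -- radius
      Rect   :: Double -> Double -> Figure 'RectTag  -- width, height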

The whole point of static types is to specify your program's behaviour in a way that the compiler can reject more incorrect implementations. We know from way back in Gödel's day that no type system can be complete (accepting all correct programs) and consistent (rejecting all incorrect programs). Most type systems we can practically work with are inconsistent (you can get any program to be accepted by the compiler with the right type casts) because the incompleteness of the type system hinders expressivity. I believe it's possible to have a consistent type system with sufficient expressivity to allow all _useful_ programs to be written, and those type systems will roughly correspond to the Calculus of Constructions, ie, they will be dependently typed. I am not yet sure I believe that the cognitive load of working in such a language will make it worth doing - which as far as I can tell is the point the author of the article is making about type systems.


On running code at compile time, what this reads as to me is evaluating constant expressions at compile time. We're thoroughly used to compilers optimising constant expressions away for us: "if 4 < 2 then a else b" will usually result in the same compiler output as just "b": "4" is constant, "2" is constant, "<" is constant, and the application of "<" to "4" and "2" is constant.

What about "factorial 8" ? Assuming the compiler knows that "factorial" is a pure function, it also knows that "factorial 8" is a constant, and in theory it could evaluate that function and substitute in the result. In practice this won't generally happen automatically, because compile times would blow out something fierce.

But in, say, C++, you can do this:

    constexpr unsigned factorial(unsigned n) {
        return n == 0 ? 1 : n * factorial(n - 1);
    }

    constexpr unsigned fac8 = factorial(8);
And fac8 will compile down to the static constant 40320. This looks very much like the author's proposal; reading the various rules around constant initialization will probably help expose the pitfalls.

More interesting for me is partial evaluation. Constant folding evaluates fully static parts of a program to produce a static result, shifting that execution cost from run time to compile time. Partial evaluation also allows some of the program's dynamic inputs to be fixed to a static value, slaps a virtual "constexpr" on everything that now only has static inputs (and is a pure function), and reduces everything it can.
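The textbook example, done by hand to show the idea - the specialised version is what you'd expect a partial evaluator to produce, not the output of any particular tool:

    -- exponent static, base dynamic
    power :: Int -> Int -> Int
    power 0 _ = 1
    power n x = x * power (n - 1) x

    -- partially evaluating "power" with the static input n = 3: the test on n
    -- and the recursion are evaluated away, leaving only the work that depends
    -- on the dynamic input x
    power3 :: Int -> Int
    power3 x = x * (x * (x * 1))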

Partial evaluation gets fun when you start applying it to certain types of program: partial evaluation of an interpreter with a given program's source as an input acts just like a compiler. Partial evaluation of a partial evaluator with an interpreter as fixed input produces a compiler. Partial evaluation of a partial evaluator with a partial evaluator as fixed input produces a compiler generator: feed it an interpreter for any language, get a compiler for that language.


It’s a confusing (and confused) article. The parts on type systems are expressing IMO a fair observation that dependent types are weird and hard, but without any sense of understanding what they are or what they can do that might justify their cost.

The parts on compile time execution are the better parts of the article, IMO. There’s food for thought here. The author might enjoy reading up on partial evaluation.

Then, fulfilling Greenspun’s 10th, the article reinvents Lisp macros.


If someone cold calls me and asks me to verify myself, I refuse.

If it’s an expected call or they give me a good reason to, I’ll call their listed contact number back.

So far I have not missed out on anything of consequence by refusing to identify myself to someone who initiated contact with me.


I likewise refuse the bank’s call and they’re always really confused why I’d do such a thing - so clearly they have successfully trained all their other customers to be morons - and then they will no doubt blame them when they get conned.


They’re all double the last dimension plus two, without skipping any in that sequence - but that offers no insight into why it wouldn’t hold for 254.


> They’re all double the last dimension plus two, without skipping any in that sequence - but that offers no insight into why it wouldn’t hold for 254.

Wikipedia at least gives a literature reference and concise explanation for the reason:

> https://en.wikipedia.org/w/index.php?title=Kervaire_invarian...

"Hill, Hopkins & Ravenel (2016) showed that the Kervaire invariant is zero for n-dimensional framed manifolds for n = 2^k− 2 with k ≥ 8. They constructed a cohomology theory Ω with the following properties from which their result follows immediately:

* The coefficient groups Ω^n(point) have period 2^8 = 256 in n

* The coefficient groups Ω^n(point) have a "gap": they vanish for n = -1, -2, and -3

* The coefficient groups Ω^n(point) can detect non-vanishing Kervaire invariants: more precisely if the Kervaire invariant for manifolds of dimension n is nonzero then it has a nonzero image in Ω^{−n}(point)"

Paper:

Hill, Michael A.; Hopkins, Michael J.; Ravenel, Douglas C. (2016). "On the nonexistence of elements of Kervaire invariant one"

* https://arxiv.org/abs/0908.3724

* https://annals.math.princeton.edu/2016/184-1/p01


I once updated a little shy of 1mloc of Perl 5.8 code to run on Perl 5.32 (ish). There were, overall, remarkably few issues that cropped up. One of these issues (that showed itself a few times) was more or less exactly this: the iteration order through a hash is not defined. It has never been defined, but in Perl 5.8 it was consistent: for the same insertion order of the same set of keys, a hash would always iterate in the same way. In a later Perl it was deliberately randomised, not just once, but on every iteration through the hash.

It turned out there were a few places that had assumed a predictable - not just stable, but deterministic - hash key iteration order. Mostly this showed up as tests that failed 50% of the time, which suggested to me that a rough measure of how annoying an error is to track down is inversely correlated with how often the error appears in tests.

(Other issues were mostly due to the fact that Perl 5 is all but abandoned by its former community: a few CPAN modules are just gone, some are so far out of date that they can't be coerced to still work with other modules that have been updated over time.)


At booking.com? :)


No, but they do (did?) have a vast ocean of Perl, and I did know a hacker or two who got hired to work there on it.


It's probably https://www.good-display.com/product/440.html which is also rated for 1 million refresh cycles, with a fast refresh time of 1.5 sec - at one refresh every 1.5 seconds that's around 415 hours of continuous screen updates, so ~3 months of 5 hrs a day typing, or a few years of e-reader style usage.

I can't really see a device like this getting used super heavily every day, so expecting it to still be usable from time to time for a few years seems reasonable to me.


That seems like a really short lifespan...? Like you can use it for a couple years, but only if you don't actually use it.

There's an obvious aesthetic draw to e-ink, but it seems like passive-matrix monochrome LCD (like the Playdate) would be similarly power-efficient (or better [1]), longer lasting, usable in full daylight, and with better refresh rates.

How good are the best monochrome LCD screens now? Like... most reflective background and feel the flattest? (The vertical offset between the liquid and the background always bugs me.) Playdate (a Sharp Memory LCD [2]) seems pretty good but surprisingly low contrast. I suppose because the liquid crystal still blocks light even when it's off? (I'm unclear here)

Looks like the OLPC transflective LCD screens are actually manufactured now [3] (5" display for $50 [4], maybe 10" but I'm skeptical [5]). The overall OLPC design was actually pretty great [6], even if many of the components weren't so great; it would be cool to see that revisited by some hobbyist.

[1] https://electronics.stackexchange.com/a/403725

[2] https://www.adafruit.com/product/4694

[3] https://www.kingtechlcd.com/product-category/transflective-d...

[4] https://www.alibaba.com/product-detail/5-inch-transflective-...

[5] https://www.alibaba.com/product-detail/10-inch-1200-1600-Tra...

[6] https://en.wikipedia.org/wiki/OLPC_XO


For e-ink, does the refresh count include the entire screen or individual pixels?


From what I can tell, a partial refresh of the display (updating a smaller portion of the screen) causes less wear than a full refresh, but the wear still accumulates over time. Additionally, some displays will require a full refresh after a certain number of partial refreshes to deal with ghosting.


I read an article from someone doing a similar project and if you don't do a full refresh every so often then you'll actually wear out the display faster (he burned one out real quick). It actually needs those full refreshes after so many partial updates.


Huh, interesting. I don't know anything about it, but my Kobo Libra has settings for how often to refresh the whole screen (e.g. every N pages, at the end of the chapter, etc.).


It does not matter in practice: if you do a full refresh once a second, it would take more than 11 days of continuous use to reach 1 million refreshes; if you do a full refresh once a minute, it would take about 2 years.


Those numbers don't seem high at all to me. Typing would probably cause a refresh more often than once a second, and even if updates are batched to once per second, that's still only 11 days.


For the usage being discussed here, 1 million is extremely low. For its original intended usage, which might cause one to dozens of refreshes per day, it's more than it'll ever need.


For an e-ink reader it's more than enough; for a digital typewriter, e-ink is not a good choice of display. There are high-contrast LCDs with in-pixel memory that could be used instead.


I don't know about this display but I've tinkered a bit with Waveshare E-Ink, that has this disclaimer:

"For e-Paper displays that support partial refresh, please note that you cannot refresh them with the partial refresh mode all the time. After refreshing partially several times, you need to fully refresh EPD once. Otherwise, the display effect will be abnormal, which cannot be repaired!"

https://www.waveshare.com/wiki/1.54inch_e-Paper_Module_Manua...


The main advantage of recursive descent parsing is readability. Forget the machinery of the parser, which is (a) trivial enough that AI will generate it correctly for you with next to no prompting, and (b) widely available in libraries anyway. The win is writing a parser that reads like the grammar it implements, which means changes to your grammar are easy to apply to your parser.

Regexes are effectively write-only. If you change the grammar parsed by a regex, you're probably going to discard the regex and make a new one.
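A hand-rolled sketch of what "reads like the grammar" means, for a toy arithmetic grammar - illustrative Haskell, no parser library:

    import Data.Char (isDigit)

    -- Grammar:
    --   expr   ::= term ('+' term)*
    --   term   ::= factor ('*' factor)*
    --   factor ::= digit+ | '(' expr ')'
    type Parser a = String -> Maybe (a, String)

    expr :: Parser Int
    expr s = term s >>= \(t, rest) -> more t rest
      where
        more acc ('+':rest) = term rest >>= \(t, rest') -> more (acc + t) rest'
        more acc rest       = Just (acc, rest)

    term :: Parser Int
    term s = factor s >>= \(f, rest) -> more f rest
      where
        more acc ('*':rest) = factor rest >>= \(f, rest') -> more (acc * f) rest'
        more acc rest       = Just (acc, rest)

    factor :: Parser Int
    factor ('(':rest) = do
      (e, rest') <- expr rest
      case rest' of
        ')':rest'' -> Just (e, rest'')
        _          -> Nothing
    factor s = case span isDigit s of
      ([], _)    -> Nothing
      (ds, rest) -> Just (read ds, rest)

    -- expr "2*(3+4)" == Just (14, "")

Each parsing function mirrors one production, so adding, say, an exponentiation level to the grammar is one more function shaped like term and factor - where a regex for the same grammar would likely be rewritten from scratch.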

