wk_end's comments

"Let's create a world where all TV and movies have the production values of Public Access" is a poor pitch. Even if you don't mind that, you have to understand that, politically, it's a non-starter.

Yeah, put like that it sounds dumb, I agree.

The 68K is still a tiny bit awkward, with its 24-bit bus and alignment restrictions and middling support for indexing into arrays (no scale factor). The 68020 is about as close to C as an ISA can get - it’s extraordinarily pleasant.

While I agree that 68020 felt like a great improvement over 68000 and 68010, scaled indexed addressing is an unnecessary feature in any ISA that has good support for post-incremented and pre-decremented addressing.

Scaled indexed addressing is useful on the 80386 and its successors only because they lack general support for addressing modes with register update, and also because they have INC/DEC instructions that are one byte shorter than ADD/SUB, so it is preferable to add/subtract 1 to/from an index register instead of adding/subtracting the operand size.

Scaled indexed addressing allows some loops that access multiple arrays to be written with a minimum number of instructions, even when those arrays have elements of different sizes. When all array elements have the same size, non-scaled indexed addressing is sufficient (because you increment the index register by the common operand size, not by 1).

However, there are many loops (e.g. over arrays of structures or over multi-dimensional arrays) where scaled indexed addressing is not enough to reach the minimum number of instructions, while post-incremented/pre-decremented addressing still is.

Unfortunately, not even the MC68020 has complete support for auto-incremented addressing modes, because besides auto-incrementing by the operand size there are cases where one needs to auto-increment by an amount held in a register (i.e. when the increment is an array stride that is unknown at compile time), as provided by the CDC 6600, IBM 801, ARM, HP PA-RISC, IBM POWER and their successors.

On x86-64, using scaled indexed addressing is a necessity for efficient programs. On the other hand on ISAs like ARM, which have both scaled indexed addressing and auto-indexed addressing, it is possible to never use scaled indexed addressing without losing anything, so in such ISAs scaled indexed addressing is superfluous.
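
To make that concrete, here's a rough C sketch (the function names are mine, and the addressing-mode notes describe typical code generation rather than anything guaranteed): the first loop mixes 8- and 4-byte elements, which is where a scaled index pays off; the second walks with a run-time stride, which is where "increment by a register" is what you actually want.

    #include <stddef.h>

    /* Different element sizes (8 vs 4 bytes): with a scaled index
       (base + i*8, base + i*4) one loop counter serves both arrays on
       x86-64; with post-increment addressing (68000, ARM) you instead
       keep two pointers and bump each by its own element size. */
    void add_into(double *dst, const float *src, size_t n) {
        for (size_t i = 0; i < n; i++)
            dst[i] += src[i];
    }

    /* Stride known only at run time (e.g. walking a column of a matrix
       whose row length is a variable): a fixed scale of 1/2/4/8 doesn't
       help; "post-increment by a register", as on POWER or HP PA-RISC,
       keeps the loop body minimal. */
    double column_sum(const double *m, size_t rows, size_t row_len) {
        double s = 0.0;
        for (size_t r = 0; r < rows; r++) {
            s += *m;
            m += row_len;   /* increment held in a register, not a constant */
        }
        return s;
    }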


The 24-bit bus means that you can use the top bits of a pointer as a tag. In a small system that doesn't need that much memory, this can actually be a great advantage. We are rediscovering the value of tag bits in 64-bit systems.
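
A minimal sketch of the same trick on a present-day machine (the 56-bit mask and the assumption that the top byte of a user-space pointer is unused are mine; on the 68000 the hardware simply ignored the top 8 address bits, so the tag came for free):

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Assumes user-space addresses fit in the low 56 bits (true on
       typical 64-bit OSes today, but an assumption, not a guarantee). */
    #define TAG_SHIFT 56
    #define PTR_MASK  ((uintptr_t)0x00FFFFFFFFFFFFFFull)

    static inline uintptr_t tag_ptr(void *p, uint8_t tag) {
        return ((uintptr_t)p & PTR_MASK) | ((uintptr_t)tag << TAG_SHIFT);
    }

    static inline uint8_t get_tag(uintptr_t tagged) {
        return (uint8_t)(tagged >> TAG_SHIFT);
    }

    static inline void *untag_ptr(uintptr_t tagged) {
        return (void *)(tagged & PTR_MASK);
    }

    int main(void) {
        int *x = malloc(sizeof *x);
        *x = 123;

        uintptr_t t = tag_ptr(x, 0x42);       /* stash a type tag in the top byte */
        assert(get_tag(t) == 0x42);
        printf("%d\n", *(int *)untag_ptr(t)); /* mask the tag off before dereferencing */

        free(x);
        return 0;
    }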

Narrowing, as TypeScript calls it, isn't dependent typing - it's basically just a form of pattern-matching over a sum type.
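
In other words, it's roughly the hand-rolled tagged-union-and-switch pattern. A C analogy (names made up; and of course TypeScript does the refinement statically, whereas here the discriminant is only inspected at run time):

    #include <stdio.h>

    typedef enum { SHAPE_CIRCLE, SHAPE_RECT } ShapeKind;

    typedef struct {
        ShapeKind kind;              /* the discriminant, like a literal "kind" field in TS */
        union {
            struct { double radius; } circle;
            struct { double w, h; }   rect;
        } u;
    } Shape;

    static double area(const Shape *s) {
        switch (s->kind) {           /* "narrowing": each branch knows which variant it holds */
        case SHAPE_CIRCLE: return 3.141592653589793 * s->u.circle.radius * s->u.circle.radius;
        case SHAPE_RECT:   return s->u.rect.w * s->u.rect.h;
        }
        return 0.0;
    }

    int main(void) {
        Shape c = { .kind = SHAPE_CIRCLE, .u.circle = { .radius = 2.0 } };
        printf("%f\n", area(&c));
        return 0;
    }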

(ETA: speaking strictly about anonymous functions; on rereading you might be talking about the absence of parens and commas for function application.)

That's not ML syntax. Haskell got it from Miranda, I guess?

In SML you use the `fn` keyword to create an anonymous function; in OCaml, it's `fun` instead.


I believe the `\` character for functions is original to Haskell. Miranda does not have anonymous functions as a part of the language.

The \ is a simplified lambda, because most programmers can't type λ easily.

I would be very in favour of making λ a keyword, though. Maybe a linter could convert \ to λ.

it would be confusing since it's not part of UnicodeSyntax

https://ghc.gitlab.haskell.org/ghc/doc/users_guide/exts/unic...


Well, even better to add it to that! But I was thinking generally, to be honest. I really want Python to synonimise the keyword lambda with the λ symbol so I can golf my Advent of Code code better.

Argh. Synonymise.

Well, ML (or at least the first versions of it) used a λx • x syntax [1] for λ-abstractions, the same notation (excluding the use of • over .) as used in the Lambda Calculus, and I've always assumed \ was an ASCII stand-in.

[1]: https://homepages.inf.ed.ac.uk/wadler/papers/papers-we-love/... (can be spotted on page 353)


That paper isn't showing real ML syntax itself; it's a mathematical presentation to demonstrate how the type system algorithm works. The actual original LCF/ML syntax would differ. I don't believe it used an actual lambda character, although for the life of me I can't find any evidence one way or another, not even in the LCF source code (https://github.com/theoremprover-museum/LCF77)

But yes, the slash is just an ASCII stand-in for a lambda.

ETA: I tracked down a copy of the Edinburgh LCF text and I have to eat crow. It doesn't use a lambda, but it does use a slash rather than a reserved word. The syntax, per page 22, is in fact `\x. e`. Similar to Haskell's, but with a dot instead of an arrow.

https://archive.org/details/edinburghlcfmech0000gord


Well if you're going to eat crow, I may as well eat pigeon, for I didn't realise that paper wasn't showing real source.

Thanks for the link to the LCF text though :^)


(spoilers)

It was really good at building up a mystery over the course of the first season, but I've been a little disappointed in the second so far.

The pacing's become glacial; the first couple of episodes worked mostly to undercut the dramatic significance of the events of last season's finale.

And I feel like the way the satire is slowly being replaced by self-serious "lore" is hurting the show. It was very funny and disturbing to see how the innies are "raised" in a cult and view the CEO as a kind of Messiah (and to observe the parallels to real-world corporate culture); Lumon really being an evil cult - as opposed to just an evil company - in "reality", feels less satirical and more ham-fisted.

The ending of the most recent episode suggests promising things to come at least.


I think season 2 will end up doing a lot, tbh. It got great reviews from critics, and they allowed critics to watch the full season before reviewing, which isn't that common; usually it's only one or two episodes. It makes me feel like they had a big story that they wanted the critics to witness in its entirety.

It did the thing I hate, which is a cliffhanger climax, and instead of picking up the thread where it left off and providing resolution/denouement, it just sort of ... resets?

The gold standard, IMO, is something like the TNG episodes "The Best of Both Worlds" pt 1 and 2 -- an end-of-season cliffhanger that rewards you for returning to the show by telling you what happens next!

I think the lacuna here is meant to add to the tension and mystery, but I agree that the new season has started off frustratingly slow. You gotta wrap up stuff to move forward with a plot, otherwise it's all just treading water for the sake of atmosphere.


Counterpoint: I don’t think it was a reset (after watching more of season 2), I think it was supposed to look like a reset intentionally, but it won’t end up being one.

It becomes much clearer in episodes 2 and especially 3. They strongly and directly start picking up the pieces of the season 1 ending and carrying it through. Without spoiling anything, episode 3 (of season 2) had some massive movements I wasn't expecting to see until later in the season (at the soonest).


I thought it pretty directly started from the cliffhanger. It took three episodes (at a much faster clip than Season 1 episodes) to deal with the consequences of that cliffhanger, but that's the nature of the severance procedure itself, half the characters can't directly talk to the other half.

> Lumon really being an evil cult - as opposed to just an evil company - in "reality", feels less satirical and more ham-fisted.

Agreed. The 'banality of evil' horror of the first season was the show's strongest point.

Sadly, I expect it will eventually suffer from the same thing that torpedoed Lost:

1. Fans are originally attracted by the mystery and unexplained.

2. Those same fans then clamour for explanations.

3. Then when the show explains things, it loses its mystery and/or people complain the explanations aren't good enough.

To me, the only winning plot move is not to play: drip just enough teasy but mysterious stuff that nothing is ever explained, but everyone stays on the edge of their seats.

Then it can be incredibly successful, and people can bitch about the finale 30 years from now.


If anything, the type system improves run-time speed, because the static analysis enables better code generation.

But I think what OP meant was more about the "functional programming" side of things than the "HM-typed" side of things. Naively, anyway, you might think that "the FP style" of avoiding mutation and preferring recursion would mean lots of garbage and high-latency garbage collection, copying, function-call overhead... Of course, that's not the whole story, but having Jane Street to point to as a crushing counter-example is nice.


Without getting into any specifics of it - I'm sure there are people with much more experience with these tools who can comment - I'll point out that neither Buck nor Bazel existed when JS decided to start building their own tool in 2012. Bazel's first release was in 2015, Buck's was in 2013.

JS does have a bit of a NIH culture, but I'm not sure if that was really at play here. There just...weren't very many good build tools available at the time, particularly for a company using an unorthodox tech stack.


> I'll point out that neither Buck nor Bazel existed when JS decided to start building their own tool in 2012. Bazel's first release was in 2015, Buck's was in 2013.

But Dune started (according to this blog post) in 2016, and JS started seriously improving and adopting it last year. So to me Jenga sounds like a reasonable step in 2012, but pouring significant effort into migrating from Jenga to Dune (and improving Dune) in 2024 sounds weirder.


Jenga and dune are the same thing, it was just renamed.


The blog post clearly describes them as two different systems, and how Jane Street migrated from one to another.


Yes and no. This is all spelled out in the post, but it's a little thorny.

Dune is a rename of Jbuilder (2016). Jbuilder uses Jenga (2012) configuration files.

> By 2016 we had had enough of this, and decided to make a simple cross-platform tool, called Jbuilder, that would allow external users to build our code without having to adopt Jenga in full, and would release us from the obligation of rewriting our builds in OCamlbuild [...] Jbuilder understood the jbuild files that Jenga used for build configuration.

So in 2012 it made sense for them to build Jenga, because there weren't any good alternatives - Bazel etc. didn't exist, so they couldn't have solved their problems.

And in 2016 they had open-source code they wanted others to be able to build; those people didn't want to use Jenga, and JS didn't want to rewrite their builds so that they could use something else. Thus, Jbuilder was a shim so that JS could still use their Jenga builds and others could build JS' code without using Jenga. Bazel etc., even though they existed, wouldn't have solved these problems either.


My bad, dune is a rename of jbuilder indeed. Not Jenga. But the other reply provides more context that's important.


AFAICT this is dead (no updates in four years) and "alpha software...only recommended for production use cases with careful testing, and if you are willing to contribute fixes or to work around issues you will encounter." Unfortunate, because it's pretty cool.


The repo says the code is in the mypy repo, and it looks like it is maintained in there. So I think it has had updates in the last four years?


Yeah, the mypy team uses it themselves for their releases. I think they get 5x speedups empirically.


Just to throw my anecdote in: I used to work at the mypy shop - our client code base was on the order of millions of lines of very thorny Python code. This was several years ago, but to the best of my recollection, even at that scale, mypy was nowhere near that slow.

Like I said, this was many years ago - mypy might've gotten slower, but computers have also gotten faster, so who knows. My hunch is still that you have an issue with misconfiguration, or perhaps you're hitting a bug.


My current company is a Python shop, 1M+ LOC. My CI run earlier today completed mypy typechecking in 9 minutes 5 seconds. Take from that what you will.


Ditto, same order of magnitude experience; at least for --no-incremental runs.

Part of the problem for me is how easily caches get invalidated. A type error somewhere will invalidate the cache of the file and anything in its dependency tree, which blows a huge hole in the runtime.

Checking 1 file in a big repo can take 10 seconds, or more than a minute as a result.


I guess that there is something with the cache that we're not doing right. Thanks for your feedback.

Likewise, the 6502 and Z80 had 16 address lines, but we never describe them as 16-bit.


Right, it is generally about the ALU.

It is hard to give registers too much weight, because both Moto designs used large registers.

