> Write each task on a sticky note. When you finish the task, crumple the note into a ball and throw it into a clear jar.

I independently stumbled upon this same system. It sounds like the author takes it further than me (I don't create notes for routine/repeating tasks, and don't break them down so much—in a typical day I only complete around 2-3 notes).

I've used sticky notes for years, but ~6 months ago added the jar (prior to that I just recycled them). It helps to have a visual reminder that I'm making progress over the long term, even if it doesn't feel like it some days.

My current jar is just about full and I've been debating what to do with it. Save it as desk art? Find a bigger jar and transfer the notes? Burn them in a cathartic ritual? I'm open to suggestions.


My gf and I are the stereotypical software nerd meets astrology girlie (though she doesn't take it too far). She likes to do rituals every so often, and I've found that they do create a moment of reflection you can use however you like. The last time we were camping she told me she wanted to do one, so I bought some of that fire color-changing powder (mostly just fine metal shavings of various types), glued a bit of it onto paper to make little packets, and had the whole group each write on a packet something they wanted to let go of. We all tossed them into the fire at the same time and the fire changed colors. It was pretty magical and I scored a lot of brownie points! So you could sprinkle some of that into your jar and just toss a handful into the fire and watch the magic of your productivity ascend to the heavens. Or, you know, just watch physics at work.

Your position is confusing to me as well.

> if you put the work in you can get to a point where you are fast enough at reading and reviewing code

Please correct me if I'm wrong, but "fast enough" here would still be slower than writing the code yourself (given equal amounts of practice at reading and writing code), right? To throw some made-up numbers around: if it would take me 20 minutes to write code to do X, it might take me 30 minutes to read/review code that does X written by somebody else (or an LLM), so I'm at a net loss of 10 minutes. Can you explain the mechanism by which this eventually tips into a productivity gain?

Personally, I think "reading code is harder than writing code" lacks nuance. While I believe it's true on average, the actual difficulties vary wildly depending on the specific changeset and the path it took to get there. For example, writing code can involve exploring many solutions before eventually discovering a concise/simple one, but when reading the final changeset you don't see all those dead-end paths. And reviewing nontrivial code often involves asynchronous back and forth with the author, which is not a factor when writing code. But please take the "reading code is harder than writing code" claim for granted when responding to the above paragraph.


Maybe you're imagining a team context and counting the total time spent by the entire team, and also imagining that LLM changesets aren't first reviewed by you and submitted under your name (and then reviewed again by somebody else), but instead come from an agent that directly opens pull requests which only get a single review pass?

It's more like it takes me five minutes to read code that would have taken me an hour to write.

Ah, then it seems like you don't agree that reading code is harder than writing code (for you). Or maybe you're decoupling hardness from time (so it's five difficult minutes vs an easy hour).

For the first 15 years of my career I found reading code much harder than writing code. Then I invested a lot of effort in improving my code reading and code reviewing skills, with the result that code reading no longer intimidates me like it used to.

That's why I think reading is harder than writing: it takes a whole lot more effort to learn code reading skills, in my experience.


Thanks, now I understand your perspective.

It seems like your answer to sarchertech's upthread question ("if you put in equal amounts of practice at reading and writing code you'll get faster at reading code than writing code") might be "yes". Either that or you've intentionally invested more in your reading skills than your writing skills.


I'm not sure if "equal practice" is exactly right, but my opinion is that code reading and code review are skills that you can deliberately strengthen - and strengthening can help you get a lot more value out of both LLMs and collaborative development.

Of course reading code is a skill that can be strengthened. Just like writing code can.

But if reading code is indeed harder than writing code, it stands to reason that if you put in equal effort to improving reading and writing abilities, your writing abilities would improve comparatively more.

If you spent all this time and effort learning to read code, such that you can read code 6x faster than you can write it, how do you know that you couldn't have spent that effort improving your writing abilities such that you could write code 6x faster?

On the other hand, if you did spend the same effort deliberately trying to increase your writing abilities as you did your reading, and the result is that you can read 6x faster than you can write, I'm unsure how you can support the conclusion that reading code is harder than writing it.

My gut feeling is that people on the far pro-AI side of the spectrum tend to be people who are early in their career and don't have strong writing or reading abilities (and so don't really see the flaws), or people who have reached a level where they aren't really ICs anymore (even if that is their title). The latter have better reading than writing abilities because that's what they spend all day doing.

It's not that reading code has an inherently higher skill cap than writing it.

I think there's also a third type of heavy AI proponent: people who spend most of their time cranking out MVPs or one-offs that don't require heavy maintenance (or they aren't the ones doing the maintenance).

That's not to say AI isn't useful in those cases. I use AI pretty often myself when I'm writing in a language I don't use every day, or when I'm doing something that I know has existing algorithmic solutions I can't quite remember (but I'll know it when I see it), because it's faster than googling. But I also recognize that there are many styles of programming, people, and domains where the productivity gains aren't worth it.


No matter how hard I train, my fingers will never be able to move fast enough to output 100 lines of code in 15 seconds.

When I get to writing actual production code with LLMs I treat them more as typing assistants than anything else: https://simonwillison.net/2025/Mar/11/using-llms-for-code/#t...


As someone who has had RSI issues in the past, I empathize with this, and I tend to use AI similarly.

But that's not a fair comparison. You typed the equivalent of 15-20 lines of code to generate 100, and you also needed additional time for reading/understanding that code.

I have no doubt that a programmer who worked with the relevant APIs frequently enough could have written that function faster than the total time it took you to do all those things.

Now, that programmer in a less familiar domain could probably benefit from AI, so I get where people with different experiences are coming from.


I've been prototyping a programming language[0] with Haskell-like function conventions (all functions are unary and the "primary" parameter comes last). I recently added syntax to allow applying any "binary" function using infix notation, with `a f b` being the same as `f(b)(a)`[1]. Argument order is swapped compared to Haskell's infix notation (where `a f b` would desugar to `f(a)(b)`).

Along with the `|>` operator (which is itself just a function that's conventionally infixed), this turns out to be really nice for flexibility/reusability. All of these programs do the same thing:

  1 - 2 - 3 + 4

  1
    |> -(2)
    |> -(3)
    |> +(4)

  +(4)(
    -(3)(
      -(2)(1)
    )
  )

It was extremely satisfying to discover that with this encoding, `|>` is simply an identity function!
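
Here's a rough TypeScript sketch of why that falls out (hypothetical code, not please-lang itself; `pipe`, `subtract`, and `add` are stand-ins for `|>`, `-`, and `+`): if `a |> f` desugars to `pipe(f)(a)`, then defining `pipe` as the identity function gives exactly the pipeline behavior, since `identity(f)(a)` is just `f(a)`.

  // `pipe` is the identity function; the infix desugaring does all the work.
  const pipe = <T>(f: T): T => f

  const subtract = (n: number) => (from: number) => from - n
  const add = (n: number) => (to: number) => to + n

  // `1 |> -(2) |> -(3) |> +(4)` desugars to:
  const result = pipe(add(4))(pipe(subtract(3))(pipe(subtract(2))(1)))
  console.log(result) // 0, same as 1 - 2 - 3 + 4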

[0]: https://github.com/mkantor/please-lang-prototype

[1]: In reality variable dereferencing uses a sigil, but I'm omitting it from this comment to keep the examples focused.


Say I want to tell someone about how I parse things in my programming language. It happens to use parser combinators.

If I want to avoid the term "parser combinator" because it sounds too math-y, what should I say instead? I could spend a few sentences describing what parser combinators are, but if they've already heard the term (or have the gumption to look it up) then I've wasted our time and made the discussion more convoluted for no reason. If not, they can ask me and then I can describe what they are.
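
For anyone who hasn't run into the term, here's a minimal TypeScript sketch of the idea (illustrative only, not how please-lang's parser is actually written; `Parser`, `char`, and `sequence` are made-up names): a parser is just a function from input to a parsed value plus the remaining input, and combinators build bigger parsers out of smaller ones.

  type Parser<T> = (input: string) => { value: T; rest: string } | null

  // Parses a single expected character.
  const char = (c: string): Parser<string> => input =>
    input.startsWith(c) ? { value: c, rest: input.slice(1) } : null

  // A combinator: runs one parser, then another on the remaining input.
  const sequence = <A, B>(a: Parser<A>, b: Parser<B>): Parser<[A, B]> => input => {
    const first = a(input)
    if (first === null) return null
    const second = b(first.rest)
    if (second === null) return null
    return { value: [first.value, second.value], rest: second.rest }
  }

  const ab = sequence(char('a'), char('b'))
  console.log(ab('abc')) // { value: ['a', 'b'], rest: 'c' }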

> It's weird to insist that they call something by a specific name

I don't see anyone in this thread doing so.


> Chrome blocks that because of their CORS policy

Firefox has the same behavior (I just tested using Firefox 136 on macOS). The error message says "CORS request not HTTP"[0]. You might have disabled `security.fileuri.strict_origin_policy` in about:config?

[0]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/COR...


Yeah, it happened long ago so I didn't remember it. Still, for a dev environment I think it's a reasonable tradeoff.


My favorite is how when I close my MacBook with an external display plugged in, the laptop screen remains on (and lit up!) with seemingly no way to configure this behavior. Sometimes a window will end up on that (non-visible) screen, which can be very confusing.


That seems like a misconfig or a broken lid sensor or something. I’ve been using MacBooks with a single external monitor as my only display (MacBook closed) for over a decade and I’ve never had the laptop display stay on when closed with an external display. Maybe time to visit the Genius Bar?


My M1 has been like this since I got it, so I assumed it was by design, but perhaps not. My old MacBook doesn't behave this way.


Huh.. maybe that's why it always runs out of battery on the rare occasion I put it in a backpack and take it somewhere...


This one affects me too. It's maddening.


The widest possible function type is `(...args: never) => unknown`. This is because parameters are contravariant, and `never` is the bottom type. Using that type works in the author's example[0].

I've got an issue open about TypeScript's provided `ReturnType` type which is somewhat related to this[1].
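
A quick standalone sketch (separate from the playground link below) of why this is the widest function type: parameters are contravariant, so the widest function type is the one with the narrowest parameters, and `never` is as narrow as it gets.

  type AnyFunction = (...args: never) => unknown

  // Any function is assignable to AnyFunction...
  const fns: AnyFunction[] = [
    () => {},
    (n: number) => n * 2,
    (a: string, b: boolean) => a,
  ]

  // ...but not necessarily to a wider-looking parameter list like `unknown[]`,
  // because `unknown` isn't assignable to narrower parameter types like `number`.
  type TooNarrow = (...args: unknown[]) => unknown
  // @ts-expect-error (under strictFunctionTypes): the `number` parameter is narrower than `unknown`
  const rejected: TooNarrow = (n: number) => n * 2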

[0]: https://tsplay.dev/Wy0Ogm

[1]: https://github.com/microsoft/TypeScript/issues/55667


> it's just a set of all possible types

`unknown` is the set of all possible types (it's the top type[0]). `any` goes beyond that—it basically turns off the type checker[1][2].
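
A small sketch of the practical difference (hypothetical code, not from the links below): both accept any value, but `unknown` forces you to narrow before using it, while `any` lets everything through.

  const u: unknown = 42
  // @ts-expect-error: `unknown` must be narrowed before use
  u.toFixed()
  if (typeof u === "number") {
    u.toFixed() // fine after narrowing
  }

  const a: any = 42
  a.toFixed()         // allowed
  a.does.not.exist()  // also allowed by the checker, but throws at runtime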

[0]: https://en.wikipedia.org/wiki/Top_type

[1]: https://tsplay.dev/mA9vXm

[2]: https://www.typescriptlang.org/docs/handbook/2/everyday-type...


It's a "null" and "undefined" discussion all over, but now with transpiler.


Your [1] isn't equivalent to my example; line 11 is not constraining the type.


I know it's not equivalent; it was just an example to show what `any` does (and that it's more than "just a set of all possible types").

The `T extends Record<any, any>` on line 11 is a type parameter constraint though. Are you referring to something else when you say "constraining the type"?
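
As a standalone illustration (not the linked playground; `constrained` is a made-up example), an `any` inside `extends Record<any, any>` still restricts which types `T` can be, whereas `any` as an annotation disables checking entirely.

  // `T` must satisfy `Record<any, any>`; e.g. `null` is rejected (under strictNullChecks).
  const constrained = <T extends Record<any, any>>(value: T): T => value

  constrained({ key: 1 }) // ok
  // @ts-expect-error: `null` does not satisfy `Record<any, any>`
  constrained(null)

  const unconstrained: any = null
  unconstrained.anything.goes // no error from the checker, but fails at runtime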


It works differently based on where those `any`s are and what that `Record<any, any>` refers to, due to type variance.


In the first paragraph of the article there's a link to the relevant WHATWG issue[0] (they could have used better link text though; of all people Google employees should be cognizant of link accessibility).

[0]: https://github.com/whatwg/html/issues/9799

