bitwizeshift's comments (Hacker News)

Tech interviews in general need to be overhauled, and if they were, AI would likely be far less helpful in the process to begin with (at least for LLMs in their current state).

Current LLMs can do some basic coding and stitch it together to form cool programs, but they struggle at good design work that scales. Design-focused interviews paired with a soft-skills focus are a better measure of how a dev will perform in the workplace. Yet most interviews are just "if you can solve this esoteric problem we don't actually use at work, you're hired". I'd take a bad solution with a good design over a good solution with a bad design any day, because the former is always easier to refactor and iterate on.

AI is not really good at that yet; it's trained on a lot of public data that skews towards worse designs. It's also not all that great at behaving like a human during code reviews: it agrees too much, it's overly verbose, it hallucinates, etc.


This is a good read on how to commit concepts to long-term memory and build skills.

I think there is a typo in the article though; there is a point that says:

> work out the problem by computing 6 × 5 = 5 + 5 + 5 + 5 + 5 + 5 = 30 (or 6 + 6 + 6 + 6 + 6 = 21)

The second parenthetical statement should be 30, unless I’m missing something?


Whoops, yeah, that's a typo. I think I originally had 7x3 as the example. Thanks for catching it.


The article never talked about bot-generated products, only bot-generated comments and upvotes. How does manual review address this, exactly?


What a strange and subjective take… I am genuinely struggling to understand the author’s viewpoint here, and why this post needed to exist at all.

The author proposes that braces are somehow subjectively harder to read for matching, then says to just use a different delimiter, "end". At which point, when you read nested code, you just see lots of "end" statements, which are visually no different from seeing "}" closing braces, so what problem was solved, exactly…?

I'm not saying it's bad; it just doesn't solve any practical problem, and it doesn't improve anything objectively. This is just like debating why a builtin type is called "int" instead of "Int". Most language nerds I know tend to discuss more important details that can theoretically improve a language, and this is just stating a preference for Ruby's "end" over C-style braces.

I feel like this needs to be reposted on April 1st


I chuckled when it said that curly brackets are bad because they don't look nice when rotated by 90 degrees.

> This is just like debating why a builtin type is called "int" instead of "Int".

It's even a bit worse, because any text editor knows about balancing brackets out of the box (e.g. Emacs in text mode) but doesn't know about the "end" syntax.


This hasn't been my experience in the slightest.

Been programming since I was in elementary school, and the current Copilot, OpenAI, and even Gemini models generate code at a very, very junior level. They might solve a practical problem, but they can't write a decent abstraction to save their lives unless you repeatedly prompt them to. They also massively struggle to retain coherence as more moving parts accumulate; if you have different things being mutated, they often just forget and write code that crashes/panics/generates UB/etc.

When you are lucky and get something that vaguely works, the test cases it writes are of negative value. They are either useless cases that don't cover the edge cases, are outright incorrect and fail, or, worse yet, look correct and pass but are semantically wrong. LLMs have been absolutely hilariously bad at this: they will generate passing cases for the code as written, not for the semantics the code was meant to have. Writing the tests by hand would catch this quickly, but a junior dev using these tools can easily miss it.

Then there is Rust; most models don't do Rust well. In isolation they are kind of okay, but they frequently generate borrowing issues that fail to compile.


But I guess (and I realize this is dangerous to say) that the tooling around the prompts and around the results is key to getting the best output. Bare prompts without guardrails are not how you want to do it.


Are you from Canada, or was that just an uncanny description of Canadian healthcare?


Hey everyone, I created a C++ "`result`" monad type with functionality and behaviour much like Swift's or Rust's equivalent `Result` type.

I've been working on this project for quite some time and just wanted to share it with everyone, because it's at a point that I feel is complete for a 1.0 release.

A little bit about it:

It's completely zero-overhead (you don't pay for what you don't use), constexpr-supported, and C++11-compatible (with more constexpr in newer standards). It optimizes extremely well (https://godbolt.org/z/TsonT1) and is extremely well-tested (https://coveralls.io/github/bitwizeshift/result?branch=maste...), for both static and runtime validation. Despite being based on the feature sets of other modern languages, the design has kept a focus on staying idiomatic for modern C++.

The design was originally based on the P0323 `std::expected` proposals, and over time it grew into a standalone type that, in the end, better modelled `result`.

