enneff's comments

I’m a leftist but not an American and I would prefer not to hear from any of those billionaires again, nor any others for that matter.


That’s the thing, isn’t it? The craft of programming in the small is one of being intimate with the details, thinking things through conscientiously. LLMs don’t do that.


I find that it depends very heavily on what you're up to. When I ask it to write Nix code it'll just flat out forget how the syntax works halfway through. But if I want it to troubleshoot an Emacs config or wield matplotlib it's downright wizardly, often including the kind of thing that does indicate an intimacy with the details. I get distracted because I'm then asking it:

> I un-did your change which made no sense to me and now everything is broken, why is what you did necessary?

I think we just have to ask ourselves what we want it to be good at, and then be diligent about generating decades worth of high quality training material in that domain. At some point, it'll start getting the details right.


That doesn't work in the tech industry, because almost nothing is decades old, for obvious reasons.


What languages/toolkits are you working with that are less than 10 years old?

Anyhow, it seems to me like it is working. It's just working better for the really old stuff because:

- there has been more time for training data to accumulate

- some of it predates the trend of monetizing data, so there was less hoarding and more sharing

It may be that the hard slow way is the only way to get good results. If the modern trends re: products don't have the longevity/community to benefit from it, maybe we should fix that.


Perhaps it should be prompted to then?

Ask it to review its own code for any problems?

Also identify typical and corner cases and generate tests?

Question marks here because I have not used the tool.

The size and depth of each accepted code step is still up to the developer slash prompter.


I use ChatGPT for coding / API questions pretty frequently. It's bad at writing code with any kind of non-trivial design complexity.

There have been a bunch of times where I've asked it to write me a snippet of code, and it cheerfully gave me back something that doesn't work for one reason or another. Hallucinated methods are common. Then I ask it to check its code, and it'll find the error and give me back code with a different error. I'll repeat the process a few times before it eventually gets back to code that resembles its first attempt. Then I'll give up and write it myself.

As an example of a task that it failed to do: I asked it to write me an example Python function that runs a subprocess, prints its stdout transparently (so that I can use it for running interactive applications), but also records the process's stdout so that I can use it later. I wanted something that used non-blocking I/O methods, so that I didn't have to explicitly poll every N milliseconds or something.
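For what it's worth, the task described above can be sketched with asyncio, whose stream reader delivers subprocess output as it arrives rather than on a polling interval. This is a minimal sketch under my own naming (`run_and_capture` is illustrative, not what ChatGPT produced):

```python
# Sketch: run a subprocess, echo its stdout live, and keep a copy,
# without fixed-interval polling. Assumes the child only writes to stdout.
import asyncio
import sys

async def run_and_capture(*cmd):
    """Run cmd, stream its stdout to our stdout as it arrives,
    and return the full captured output as bytes."""
    proc = await asyncio.create_subprocess_exec(
        *cmd, stdout=asyncio.subprocess.PIPE)
    captured = bytearray()
    while True:
        # read() suspends until data is available; no busy polling.
        chunk = await proc.stdout.read(4096)
        if not chunk:
            break
        sys.stdout.buffer.write(chunk)
        sys.stdout.buffer.flush()
        captured.extend(chunk)
    await proc.wait()
    return bytes(captured)

if __name__ == "__main__":
    out = asyncio.run(run_and_capture(
        sys.executable, "-c", "print('hello')"))
    assert out.strip() == b"hello"
```

For a fully interactive child process you'd also need to wire up stdin and possibly a pty, which is where it gets genuinely fiddly.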


Honestly I find that when GPT starts to lose the plot it's a good time to refactor and then keep on moving. "Break this into separate headers or modules and give me some YAML like markup with function names, return type, etc for each file." Or just use stubs instead of dumping every line of code in.


How long are you willing to iterate to get things right?


If it takes almost no cognitive energy, quite a while. Even if it's a little slower than what I can do, I don't care because I didn't have to focus deeply on it and have plenty of energy left to keep on pushing.


As my mother used to say, "I love work. I could watch it all day!"

I can see where you are coming from.

Maintaining a better creative + technical balance, instead of see-sawing. More continuous conscious planning, less drilling.

Plus the unwavering tireless help of these AIs seems psychologically conducive to maintaining one's own motivation. Even if I end up designing an elaborate garden estate or a simpler better six-axis camera stabilizer/tracker, or refactoring how I think of primes before attempting a theorem, ... when that was not my agenda for the day. Or any day.


I'm constantly having to go back and tell the AI about every mistake it makes and remind it not to reintroduce mistakes that were previously fixed. "no cognitive energy" is definitely not how I would describe that experience.


Sounds like the context window is getting pruned. Start a new chat fresh after you make significant changes.


That's presumably what o1-preview does? Iterates and checks the result. It takes much longer, but does indeed write slightly better code.


And what exactly do you think lila, lila-ws, and redis are if not microservices (or as they should be called, “services”)? Lichess could easily be implemented as a single monolithic process but it is not.


They are services, but not micro. lila-ws spun off of Lila for a good reason (fault isolation) and not because "let's make everything a service". And they don't follow any standard microservice pattern - a reverse proxy isn't a microservice.



This architecture diagram shows that Lichess is a traditional monolith with a handful of functions separated.


So that the client knows the message has been delivered and handled by the server, which can make the UI indicate the state of the connection.


What do you mean? If you open a WebSocket connection it should behave like a normal TCP connection. All sent data guaranteed to be delivered complete and in order, unless the connection fails.


Unless the connection fails, at which point you have no idea when it failed. You know that the other side received all stream offsets within [initial, X] with X ≥ last received ACK, but other than that you have no idea what X is. Even getting the last received ACK value out of whatever API or upper-level protocol you’re using could be nontrivial, because people rarely bother.


I think I had it set up to auto reconnect. So I suppose the packets sent between "failure occurs" and "socket disconnected" were lost.

At any rate my conclusion was disappointment that if I actually want reliability, I need to implement my own ACKs anyway, meaning I'm paying a pretty high overhead for no benefit.
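The application-level ACK scheme being lamented here is simple in outline: number every outgoing message, keep unacknowledged ones buffered, and retransmit them after a reconnect. A minimal sketch of the sender side (class and method names are hypothetical):

```python
# Sketch of an application-level ACK layer on top of WebSocket.
# Each frame is (seq, payload); the peer echoes back ACKed seqs.
class AckingSender:
    def __init__(self):
        self.next_seq = 0
        self.pending = {}  # seq -> payload, sent but not yet acknowledged

    def send(self, payload):
        """Assign a sequence number and buffer until acknowledged.
        Returns the frame to put on the wire."""
        seq = self.next_seq
        self.next_seq += 1
        self.pending[seq] = payload
        return (seq, payload)

    def on_ack(self, seq):
        """Peer confirmed receipt; drop the buffered copy."""
        self.pending.pop(seq, None)

    def frames_to_resend(self):
        """After a reconnect, retransmit everything still unacknowledged,
        in order. The receiver deduplicates by seq."""
        return sorted(self.pending.items())
```

The overhead complaint stands: TCP is already doing this same bookkeeping one layer down, you just can't get at its state from the application.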

At least now there's UDP in browser with WebTransport. I haven't tried it yet, but I hear it's a lot more pleasant than the previous option WebRTC, which was so convoluted (for the "I just want a UDP socket" usecase) that very few people used it.



If you look at the dismissals in this HN discussion you’ll find none of them are coming from wheelchair users or even people familiar with wheelchairs.


I remember finding this funny back in the 90s but it’s just kind of sad to me now. Today he just seems like a very unhappy, psychopathic character.


I expected to experience something similar, but on reading, it's so over the top that my brain has no problem realizing it's a joke. Violent movies on the other hand I can't really enjoy much anymore.


To be fair, the ecosystem is kind of inextricably tied to the protocol. I'm not aware of any other production-grade Go gRPC implementations besides the official one.


But gRPC isn't limited to Go. Criticizing gRPC, as a whole, for the HTTP library used with Go isn't valid. However, it's fair to take issue with the choice of HTTP library used by the most popular Go codegen tool.


Connect [1] is one and it's fantastic. The Go implementation in particular is much nicer than grpc-go.

[1] https://connectrpc.com/


Wow that’s awesome! I wasn’t aware of this.


Hell you might even make something better, which is I suspect one of the unstated reasons why the source is not released.


I can't tell if you're serious.

