Hacker News | ruszki's comments

What is the harm in this case? Shit people are shit even without information; they would just be snarky about something else instead.

I think it was covered during a discussion about immigrants who are easily rejected, simply because they're immigrants.

The point was that it added another layer of issues for immigrants, because they didn't understand the neighbourhood they "should be living in" given their income.


Why does this not fall into the “shit people do shit things” category? This happens even without being an immigrant. A large part of my family lives in a far poorer neighborhood than what we could afford, because we don't care to move. People who have a problem with this had other problems even before we got richer. There is exactly zero difference: the exact same people are snarky as before, just about something else now. They were, and would be, snarky even without this.

This seems to me like a very poor attempt to hide xenophobia.


The problem is that average people cannot tell even now. Heck, I'm quite sure that /r/all is completely bot-driven, yet I still check it occasionally. I'm not even sure about HN, but I haven't yet found manipulation there as obvious as on Reddit.

It's funny when people start accusing each other of being ChatGPT.

That sounds exactly like the kind of thing ChatGPT would say to hide the fact that it's ChatGPT… :)

Even cutting-edge models are not very good. They are not even at a mediocre level. Don't get me wrong, they are improving, and they are awesome, but they are nowhere near good yet. Vibe-coded projects have more bugs than features, their architecture and design are terrible, and their tests are completely useless about half the time. If you want a good product, you need to rewrite almost everything that's written by LLMs. Probably this won't be the case in a few years, but right now even "very good" LLMs are not very good at all.

Not sure why you're being downvoted; this is very much my experience. When it matters (like when customer data is on the line), vibe-coded projects are not just hilariously bad, they can put you in legal danger.

We've so far found that Claude Code is fine as a kind of better Coverity for uncovering memory leaks and similar issues. You have to check its work very carefully, because about 1 time in 5 it just gets stuff wrong. It's great that it gets things right 4 times in 5 and produces natural code that fits the style of the existing project, but it's nothing earth-shattering. We've had tools to detect memory leaks before.

We had someone attempt to translate one of our existing projects into Rust and the result was just wrong at a fundamental level. It did compile and pass its own tests, so if you had no idea about the problem space you might even have accepted its work.


With Claude Code now having a /plan mode, you can take your time and deliberate through architecture and design collaboratively, instead of just sending a fire-and-forget prompt. Much less buggy, and it saves time if you keep an eye on the output as you go, guiding it and catching defects, imho.

For that, you need to know exactly how you want the code written, or what architecture is needed. In other words, you win basically nothing, because typing was never the real bottleneck (no matter what Vim and Emacs people tell you).

LLMs also make mistakes at a much lower level than what those one-pagers in plan mode let you control. (Which I use all the time, btw.) And anyway, they throw the plan out the window immediately when their attempted solutions don't work during execution, for example when a generated test fails.

Btw, changing the plan after it's generated is painful. More often than not, when I decline it with comments, it generates a worse version: it either misses things from the previous one that I never mentioned, or completely changes the architecture for the worse. In my experience, it's better to restart the whole thing with a more precise prompt.


Ah, this is true - for my purposes, I've been directing the design and deliberating on the constraints and specifications for a larger system in tandem with smaller planning sessions.

That has worked well so far, but yes, you are totally right, there are still quite a few pain points and it is still rather far from being fire-and-forget "build me a fancy landing page for a turnkey business" and getting enterprise quality code.

edit: I think it is most important that you collaborate with Claude Code on quality in a systematic way, but even that has limits right now; the 1M context changes things a little bit.


You know, with all the babysitting needed, I wonder if the effort isn't better spent just, you know, writing the code.

Can you actually quantify the time and effort 'saved' by letting an LLM generate code for you?


For me, personally, I'm building things that would have been impractical for me to do as cleanly within the same amount of time: prototypes in languages that I don't have the muscle memory for, using algorithms I have a surface-level understanding of but would need time to deeply understand and implement by hand. At my pace, as a retired dev, that's probably quantified in years' worth of time and effort saved.

edit: also, would I take the time to implement LCARS by hand? No. But with an LLM, sure; it took about 3 minutes or less to implement a pretty decent LCARS interface for me.


That’s not inherent of universal healthcare at all. In Austria, you can go to a different doctor if you wish.

> Couple this with the fact that models respond better/worse to certain prompts depending on the stylistic composition of the prompt itself.

Do we really know this, or is it just gut feeling? Has somebody actually proven this statistically, with great certainty?


I hit the Pro limit in about 30 minutes, 1 hour max. And that's only when I use a single session, and when I don't use it intensively, i.e., it waits for my responses while I read and really understand what it wants and what it does. That's still just 1-2 hours out of every 5.

What do you do to avoid that?


You're probably having long sessions, i.e., repeated back-and-forth in one conversation. Also check whether you're polluting the context with unneeded info. That can be a problem with large and/or poorly structured codebases.

The last time I used Pro, it was on a brand-new Python REST service, with about 2,000 lines generated solely during that session. So how do I tell Claude to use less context when there was zero at the beginning, just my prompt?

So you had generated 2000 lines in 30 minutes and ran out of tokens? What was your prompt?

I’d use a fast model, like gemini fast, to create a minimal scaffold.

I’d create strict specs using a separate Codex or Claude subscription, so that a generous coding window remains, and would start implementation plus some high-level tests, feature by feature. Running out in 60 minutes is hard if you validate the work. Running out in two hours is also hard for me, as I take breaks. With two subscriptions you should be fine for a solid workday of well-designed and reviewed work. If you use CodeRabbit or a separate review tool and feed the reviews back, that again doesn't burn tokens so fast, unless it's fully autonomous.


> I managed to fix my source code alone, like twelve months ago.

I mentioned to one of my friends just yesterday that you cannot properly do this anymore with new things. I've started a new project using some few-years-old Android libraries, and when I encounter a problem, there is a high chance that there is nothing about it on the public internet anymore. Yesterday I suffered greatly because of this. I tried to fix a problem (multi-library AndroidManifest merging in the case of instrumented tests); after several hours I had a clearly suboptimal solution of my own, which I hated, and I couldn't find any good information about it.

Then I hit Claude Code with a clear example of where it fails, and it solved it, perfectly. Then I asked in a separate session how this merging works, and why its own solution works. It answered well; then I asked for sources, and it couldn't provide me any. I tried Google and Kagi, and I couldn't find anything, even after I knew the solution. The information existed only hidden from the public (or rather deep in AGP's source code), and in the LLM. And I'm quite sure I wasn't the only one who had this problem before, yet there is no proper example of how to solve it on the internet at all, nor even anything that suggests how the merging works. The existing information covers a completely separate procedure without instrumented tests.

So you cannot be sure anymore that you can solve it by yourself, because people don't share as much anymore. Just look at Stack Overflow.


It looks like you could write that blog post and get some traffic, on the other hand. Very interesting how the flow has changed direction, based on your example.

You can write in it like in imperative languages. I did that when I first encountered it, a long time ago, when I didn't know how to write code in a functional way, or why I should. It's like how you can write in an object-oriented way in plain C: it's possible, and it's a good thought experiment, but it's not recommended. So purity is definitely not "enforced" in a strict sense.

Isn’t code in Haskell pure by default, so that you have to use special keywords to write code with side effects?

There's no special keyword, just a "generic" type `IO a` defined in the standard library, which has a "tainting" property similar to `async` function coloring.

Any side effect has to be performed inside the `IO` type, which means impure functions have to return `IO a`. And any function that sequences an `IO` side effect has to mark itself as returning `IO` as well.
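As a minimal sketch of that "tainting" (standard Haskell, no external libraries; the function names here are just illustrative), a pure function keeps an ordinary type, while anything that performs a side effect must live in `IO`, and that requirement propagates to every caller:

```haskell
-- Pure: the type Int -> Int admits no side effects at all.
double :: Int -> Int
double x = x * 2

-- Impure: printing is a side effect, so the result type must be IO ().
report :: Int -> IO ()
report n = putStrLn ("result: " ++ show n)

-- Any function that sequences an IO action is itself IO; the "taint"
-- propagates up the call chain, much like async function coloring.
main :: IO ()
main = report (double 21)
```

Calling `report` from a function typed `Int -> Int` would simply not type-check, which is how the compiler keeps the pure and effectful worlds separate.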


It's pure even with side effects.

You basically compose a description of the side effects, and pass this value representing them to `main`, which is special in that the runtime actually executes it.

For the rest of the codebase this is simply an ordinary value you can pass on/store etc.
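A hedged sketch of that point (names here are illustrative, not a real API): an `IO` action is an ordinary value, so you can put descriptions of side effects in a list and compose them, and nothing runs until the composed value reaches `main`:

```haskell
-- Descriptions of side effects, stored as ordinary values in a list.
-- Nothing is printed just by defining this.
steps :: [IO ()]
steps = [putStrLn "first", putStrLn "second", putStrLn "third"]

-- Composing the descriptions into one bigger description.
runAll :: [IO ()] -> IO ()
runAll = foldr (>>) (pure ())

-- Only the action handed to main is actually executed by the runtime.
main :: IO ()
main = runAll steps
```

Until that final line, `steps` behaves like any other value: you can pass it around, store it, or drop it, and no effect ever happens.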


Chat Control was nowhere near becoming law. Last fall, they were supposed to vote on whether to even formally discuss it, but even that failed. And even if it had succeeded, nothing would have guaranteed that it would become law, or that it wouldn't have been watered down completely. Yet the whole topic was somehow kept alive, even though it was well known already in September that the proposal was dead, because Germany didn't support it and the necessary number of votes wasn't there. There were articles even in November claiming that Germany had only just then decided not to support it, which was obviously untrue. It seemed to me like an artificial bubble of outrage. It was a bad proposal, so the outrage was needed, up until September. It's just strange that people still pretend it's not dead, half a year after it became impossible to even consider it in the parliament.

https://en.wikipedia.org/wiki/United_States_Department_of_De...

Stopping to question why somebody uses DoD or DoW is far more telling than using either of them. Especially since both are perfectly fine, even officially.

A square was renamed in my home city about 20 years ago. We still usually use the original name; even teens know it. I use a form of the original name of our main stadium, which was renamed almost 30 years ago. Heck, some people use names of streets that haven't been official for almost 40 years now. Btw, the same goes for departments of the government. Nobody tracks what they're called at the moment, because nobody really cares. What's strange is when somebody does care.


Or it could have just been a genuine question. I'm not American, and I've seen DoW used in newspapers and thought the name change was official. Personally, I think it's a more apt and honest name for what they do.

But the backlash in the comments here shows how ideologically charged the question seems to be.


I wasn't aware of how ideologically charged the question was. I'm also not American, but I'm glad I asked the question. It's a clear sign for us not Americans to just leave them be.

> It's a clear sign for us not Americans to just leave them be.

Depending on where you live in the world that might be quite hard to do soon.


I agree. I live in Brazil, and even though the tariffs and interventions weren't directed at us, they influence our economy and political decisions. Also, Venezuela is right next to us, so instabilities there do tend to affect the whole region.

> Or it could have just been a genuine question.

Yes, and that's exactly why I gave several examples showing why the chance of that is very, very slim.


Easier to work in hypotheticals than to do a bit of research, like reading the other comments. I just explained that it was an honest question, and why.

Do you really trust random comments on the internet that claim something whose likelihood is slim? Literally nobody cares what somebody calls a thing when that somebody knows both names and it's not political. I don't think that's optimal, and that's a hefty understatement, of course.
