nottorp's comments

And how much does Discord commit to paying in damages if my face scan or ID scan leaks from their servers - whether through security vulnerabilities or employees making some money on the side?

> But all with AI-generated JavaScript, not cool Rust and SIMD stuff

Heh, I almost hit back at the "in Rust" mention.

Would the end result have been different if it were done in python calling C libraries for performance? I strongly doubt it.


Isn't cross-language function calling expensive? I assume the overhead is significant.
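For scale, the crossing cost is easy to measure. A minimal sketch with ctypes against libm (assumes a Unix-like system where the standard C math library can be located; the numbers are machine-dependent and purely illustrative):

    # Rough micro-benchmark of the Python -> C crossing cost via ctypes.
    # Assumes a Unix-like system; timings vary by machine.
    import ctypes
    import ctypes.util
    import math
    import timeit

    libm = ctypes.CDLL(ctypes.util.find_library("m"))
    libm.sqrt.argtypes = [ctypes.c_double]
    libm.sqrt.restype = ctypes.c_double

    n = 1_000_000
    t_ffi = timeit.timeit(lambda: libm.sqrt(2.0), number=n)
    t_py = timeit.timeit(lambda: math.sqrt(2.0), number=n)
    print(f"ctypes FFI: {t_ffi:.2f}s  math.sqrt: {t_py:.2f}s  ({n} calls)")

The per-call cost is real, but it amortizes: when each crossing does substantial work (the way numpy hands over a whole array per call), the overhead stops mattering, which is why "python calling C libraries" works in practice.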

Well, it's becoming developer-hostile enough already. Maybe drop python and all the command line tools while they're at it.

Would do wonders for that mythical year of the Linux desktop...


Google could afford to manually exclude the content farms if they hadn't morphed from a search company into an advertising company.

Hmm? Why do you restart your computer often enough to notice?

Even Windows (or at least my install, which doesn't have any crap besides Visual Studio on it) can run for weeks these days...


My work laptop decided, probably once a week, to not go to sleep and just run its battery to 0.

My work PC will decide not to idle and will spin up its fans arbitrarily in the evenings, so I shut it down when I'm not using it.


> My work laptop decided, probably once a week, to not go to sleep and just run its battery to 0.

That reminds me that if Apple annoys me enough to switch back to Linux as my main OS, it will hurt on laptops :(


Btw, in the wannabe indie gaming scene it doesn't matter what language or tool or framework you use. It matters if you finish the fucking thing.

I've noticed this in my small-scale tests. Basically, the larger the prompt gets (and it includes all the previously generated code, because that's what you want to add features to), the more likely it is that the LLM will go off the rails. Or forget the beginning of the context. Or go into a loop.

Now if you're using a lot of separate prompts where you draw from whatever the network was trained on and not from code that's in the prompt, you can get usable stuff out of it. But that won't build you the whole application.
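To put rough numbers on the first failure mode: if every request re-sends all the code generated so far, the prompt grows without bound. A toy sketch (the token counts are invented assumptions; only the shape of the growth matters):

    # Toy model of prompt growth when each iteration re-sends the whole codebase.
    # Token counts here are made-up assumptions; only the trend is the point.
    code_tokens = 0        # accumulated generated code in the context
    per_feature = 800      # assumed tokens of new code per feature
    instruction = 200      # assumed tokens for each "add feature X" request

    for feature in range(1, 11):
        prompt = code_tokens + instruction
        print(f"feature {feature:2d}: prompt ~{prompt} tokens")
        code_tokens += per_feature  # the new code joins the prompt next round

The prompt grows linearly per feature (so the total tokens processed grow quadratically), and once it passes the model's effective context, the beginning - i.e. the oldest code - is exactly what gets forgotten first.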


> In fact, LLMs will be better than humans in learning new frameworks.

LLMs don't learn? The neural networks are trained just once, before release, and it's a -ing expensive process.

Have you tried using one on your existing code base, which is basically a framework for whatever business problem you're solving? Did it figure it out automagically?

They know react.js and nest.js and next.js and whatever.js because they had humans correct them and billions of lines of public code to train on.


If it's on GitHub, eventually it will cycle into the training data. I have also seen Claude pull down code from GitHub to look at.

Wouldn't there be a chicken-and-egg problem once humans stop writing new code directly? Who would write the code using this new framework? Are the examples written by the creators of the framework enough to train an AI?

There's tooling out there that is 100% vibe coded and used by tens of thousands of devs daily. If that codebase found its way into the training data, would it somehow ruin everything? I don't think this is really a problem. The real problem will be that people need to tell good codebases from bad ones; if you point out which code is bad during training, it makes a difference. There's a LOT of writing out there about how to write better code that I'm sure is already part of the training data.

How much proprietary business logic is on public github repos?

I'm not talking about "do me this little solo founder SaaS thing". I'm talking about working on existing codebases running specialized stuff for a functional company or companies.


Hmm? Where I live you have to renew your license (basically pass a few medical exams) every 10 years from the day you get your first one. Why wait until 70?

Exactly. Why did the article author think ads weren't scams before they were "AI" generated?

Where does the author claim or even remotely suggest that?

They only NOW assume all ads are scams. Suggests they didn't make that assumption before, doesn't it?

And in your mind NOW always means "since GenAI is a thing"?

Most of the time, when people realize something, it happens NOW. Also, AI isn't mentioned in the headline at all, nor even in the first part of the article. It's just used as one hint that it might be a scam, then followed up with further evidence.


In the headline, the word "Now" implies "ads weren't scams before, but they are now".

And where in the headline is "AI"?

"Here are three ads that are scammy; the first two were clearly generated by AI, and the third may have been created by AI."

The headline - a.k.a. the 4th paragraph of the article.

The "AI" evangelists are trying to explain to us that all ads are to be trusted because they're "AI" generated now...
