code51's comments

I'm surprised these pockets of job security still exist.

Know this: someone is coming after this already.

One day someone from management will hear a cost-saving story at a dinner table, and the words GPT, Cursor, Antigravity, reasoning, AGI will cause a buzzing in her ear. Waking up with tinnitus the next morning, she'll instantly schedule a 1:1 to discuss "the degree of AI use and automation".


> Know this: someone is coming after this already.

Yesterday, GitHub Copilot declared that my less-AI-wary friend’s new Laravel project was following all industry best practices for database design, even as it stored entities as denormalized JSON blobs in a MySQL 8.x database with no FKs, indexes, or constraints, and all-NULL columns (using root@mysql as the login, of course); meanwhile, every Laravel controller action’s DB queries were RBAR loops that loaded all rows into memory and then did JSON deserialisation in order to filter them.
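
To make the shape of it concrete, here's a minimal sketch of that antipattern in Python (standing in for the Laravel/PHP code; the table and column names are hypothetical):

    import json
    import sqlite3  # stand-in for the MySQL setup; schema and names are hypothetical

    conn = sqlite3.connect("app.db")

    # The antipattern: pull every row, deserialize the JSON blob in application
    # code, then filter in memory - row-by-agonizing-row (RBAR).
    def active_users_rbar():
        rows = conn.execute("SELECT blob FROM entities").fetchall()
        entities = [json.loads(r[0]) for r in rows]
        return [e for e in entities if e.get("type") == "user" and e.get("active")]

    # With real columns, constraints, and an index, the filter belongs in SQL:
    def active_users_sql():
        return conn.execute("SELECT id, name FROM users WHERE active = 1").fetchall()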

I can’t reconcile your attitude with my own lived experience of LLMs: utterly wrong 40% of the time; no better or faster than if I did things myself 50% of the time; stuck in a loop debating the existence of the seahorse emoji another 5% of the time; and, the last 5% of the time, genuinely scaring me with a profoundly accurate answer or solution produced instantly.

Also, LLMs have yet to demonstrate an ability to tackle other real-world DBA problems… like physically installing a new SSD into the SAN unit in the rack.


Lowballing contracts is nothing new. It has never ever worked out.

You can throw all the AI you want at it, but at the end of the day you get what you pay for.


Exactly and this is hell for programming.

You don't know whose style the LLM will pick for that particular prompt and project. You might end up with Carmack, or maybe with that buggy, test-failing piece of junk project on GitHub.


You can tell it whose style to copy; it's actually decent at following instructions like that.


It's not bad at following my own style. I have longstanding quirks, like naming any string that will end up in a DB query with a "q_" in front of the variable name, and shockingly Claude picks up on those and mimics them. Wouldn't trust it to write anything without thorough review, but it's great at syntax.
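
A hypothetical toy illustration of that quirk (nothing here is from my actual codebase):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")

    # The quirk: any string destined for a DB query gets a "q_" prefix on its
    # variable name, so query text stands out from ordinary strings at a glance.
    q_find_user = "SELECT id FROM users WHERE email = ?"
    email = "a@example.com"  # plain data, no prefix

    rows = conn.execute(q_find_user, (email,)).fetchall()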


This isn't shocking: they are very good at repeating patterns in the immediate context, and just not very good at much else. Your quirk is part of the immediate pattern.


My first experiments with LLM chat were to ask it to produce text mimicking the style of a distinct, well-known author. It was also quite good at producing hybrid fusions of unique fictional styles: A + B = AB.


Can you just tell it it’s Carmack? :P


I doubt he's letting LLMs creep into his decision-making in 2025, aside from fun side projects (vibes). We never come across Karpathy going to an LLM, or saying an LLM helped, in any of his YouTube videos about building LLMs.

He's just test driving LLMs, nothing more.

Nobody's asking this core question in podcasts: "How much, and how exactly, are you using LLMs in your daily flow?"

I'm guessing it's like actors not wanting to watch their own movies.


Karpathy talking for 2 hours about how he uses LLMs:

https://www.youtube.com/watch?v=EWvNQjAaOHw


Vibing, not firing at his ML problems.

He's doing a capability check in this video (for a general audience, which is good of course), not attacking a hard problem in the ML domain.

Despite this tweet: https://x.com/karpathy/status/1964020416139448359 , I've never seen him cite an LLM as having helped him in ML work.


You're free to believe whatever fantasy you wish, but speaking as someone who frequently consults an LLM alongside other resources when thinking about complex and abstract problems, I see no way in hell that Karpathy intentionally limits his options by excluding LLMs when seeking knowledge or understanding.

If he did not believe in the capability of these models, he would be doing something else with his time.


One can believe in the capability of a technology but on principle refuse to use implementations of it built on ethically flawed approaches (e.g., violating GPL licensing terms and/or copyright, thus harming the open source ecosystem).


AI is more important than copyright law. Any fight between them will not go well for the latter.

Truth be told, a whole lot of things are more important than copyright law.


Important for whom, the copyright creators? Being fed is more important than supermarkets, so feel free to raid them?


Conflating natural law -- our need to eat -- with something we pulled out of our asses a couple hundred years ago to control the dissemination of ideas on paper is certainly one way to think about the question.

A pretty terrible way, but... certainly one way.


I am sure it had nothing to do with the amount of innovation that has been happening since, including the entire foundation that gave us LLMs themselves.

It would be crazy to think that the protections of IP law, the ability to claim original work as your own and to have a degree of control over it as an author, fostered creativity in science and the arts.


Innovation? Patents are designed to protect innovation. Copyright is designed to make sure Disney gets a buck every time someone shares a picture of Mickey Mouse.

The human race produced an extremely rich body of work long before US copyright law and the DMCA existed. Instead of creating new financial models that embrace freedoms while still ensuring incentives to create new art, we have contorted outdated financial models, various modes of rent-seeking and gatekeeping, to remain viable via artificial and arbitrary restrictions of freedom.


Patents and copyright are both IP. Feel free to replace “copyright” with “IP” in my comment. Do you not agree that IP laws are related to the explosion of innovation and creativity in the last few hundred years in the Western world?

Furthermore, claiming “X is not natural” is never a valid argument. Humans are part of nature, whatever we do is as well by extension. The line between natural and unnatural inevitably ends up being the line between what you like and what you don’t like.

The need to eat is as much a natural law as higher human needs—unless you believe we should abandon all progress and revert to pre-civilization times.

IP laws ensure that you have a say in the future of the product of your work, can possibly monetise it, etc., which means a creative 1) can fulfil their need to eat (individual benefit), and 2) has an incentive to create in the first place (societal benefit).

In the last few hundred years, intellectual property, not physical property, has increasingly been the product of our work and creative activities. Believing that the physical artifacts we create deserve protection against theft while the intellectual property we create does not, needs a lot of explanation.


What you see as copyright violation, I see as liberation. I have open models running locally on my machine that would have felled kingdoms in the past.


I personally see no issue with training and running open local models by individuals. When corporations run scrapers and expropriate IP at an industrial scale, then charge for using them, it is different.


What about Meta and the commercially licensed family of Llama open-weight models?


I have not researched this closely enough, but I think it falls under what corporations do. They are commercially licensed, you cannot use them freely, and crucially they were trained on data scraped at an industrial scale, contributing to the degradation of the Web for humans.


Since Llama 2, the models have been commercially licensed under an acceptable use policy.

So you're able to use them commercially as you see fit. You can't use them freely in the most absolute sense, but then again, this is a thread about restricting the freedoms of organizations in the name of a 25-year-old law that has been a disgrace from the start.

> contributing to degradation of the Web for humans

I'll be the first to say that Meta did this with Facebook and Instagram, along with other companies such as Reddit.

However, we don't yet know what the web is going to look like post-AI, and it's silly to blame any one company for what is clearly an inevitable evolution in technology. The post-AI web was always coming; what's important is how we plan to steward these technologies.


The models are either commercial or not. They are, and as such they monetise the work of original authors without their consent or compensation, and often in violation of copyleft licensing.

> The post-AI web was always coming

“The third world war was always coming.”

These things are not a force of nature, they are products of human effort, which can be ill-intentioned. Referring to them as “always coming” is 1) objectively false and 2) defeatist.


> Continuing the journey of optimal LLM-assisted coding experience. In particular, I find that instead of narrowing in on a perfect one thing my usage is increasingly diversifying

https://x.com/karpathy/status/1959703967694545296



Junior jobs will come back when the blitz-pricing of AI coding products ends. Current bosses think these prices, $200/mo to "leave it and auto-code for the whole month, day and night", will stay like this. Of course they won't.

Typical startup play, but at massive scale. Junior jobs might come back, but not in bulk: still selectively, and very slowly.


If that’s your thesis, $200/mo has a lot of room for price increases before you start approaching a junior dev’s monthly salary.
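
(Rough numbers, assuming, say, a $75k/yr junior salary: that's about $6,250/mo before overhead, i.e. roughly 30x the current subscription price.)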


Anthropic is actually a good company to focus on here, since Claude is very good proof that it's not just about scaling. We are not quite there yet, but it seems we are "programming" these models through how we shape and filter the training data. With time, we'll understand the methods of representation better.

The current situation doesn't sound too good for the "scaling hypothesis" itself.


> The current situation doesn't sound too good for the "scaling hypothesis" itself.

But the “scaling hypothesis” is the easiest, fastest story for raising money, so it will be leveraged until conclusively broken by the next advancement.


The underlying assumption is that language and symbols are enough to represent phenomena. Maybe we are falling for this one in our own heads as well.

Understanding may not be a static symbolic representation. The world's contexts are infinite and continuously redefined. We believed we could represent all contexts tied to information, but that's a tough call.

Yes, we can approximate. No, we can't claim to represent every essential context at all times.

Some things might not be representable at all by their very chaotic nature.


I'd say that human mental modeling of the world is also quite rough and often inaccurate. I don't see why AI can't become human-like in its abilities, but accurately modeling all the relativistic quarks in an atom is a bit beyond anything just now.


There's a high probability your v2 voice will break with this.


How convenient that insiders jumped on INTC with this "rumor" 3 days ago.

They made about 25% in three days. Easiest money from a rumor.


The SEC really doesn’t like such plays.

No wait that was a couple months ago.


Is the SEC still around? I thought Elon announced he was eliminating, this month, all federal agencies whose names contain a letter in the word Valentine.


The husk of the SEC with its brains scooped out is a useful tool to sic upon one's competitors or ideological enemies. I wouldn't be surprised by an SEC investigation into OpenAI in the next few years.


Can the SEC investigate private companies?


Yes. Public companies just have more reporting requirements, but as long as an organization deals in securities like shares, it falls under the SEC's ambit, public or private.

I think OpenAI's non-profit-to-profit transition would have warranted a look by any administration; I suspect there will be additional personal animus driving this specific set of players.


Yeah I was thinking the same thing. It's only a matter of time before Musk goes after Altman with the newly weaponized agencies.


The SEC will probably still exist to punish enemies.

So update your bio with a MAGA photo and something racist.


If this was a real thing that would play out, the stock would go up far more than 25%. Still time for normal laymen to get in. Or it's not going to happen, in which case it's going to go back down.

Coinflips for everyone!


Sometimes the play is only about the announcement itself, not the implementation: news about possible big news, just like the upcoming Trump tariffs.


Daniel Kahneman, Nobel laureate in Economics, described this in his book, "Thinking, Fast and Slow".

Saddam Hussein's capture caused oil industry stocks to climb because of the association between oil and the Middle East. After people realized that Saddam Hussein actually had no relevance to the price of oil or the product, the stocks fell back down.

The stock market is quite often pushed by emotion and asinine assumptions rather than logic and reasoning. Enough social media echo chambers can easily sway stock prices momentarily to make a quick buck.


Despite all its worth, Suno doesn't approach the problem well.

After a few laughs and cheap replicas, people realize that it's damn hard to produce a good-sounding, creative piece. Suno almost always adds noise, and you can feel most of it comes from fingerprinting.

With Suno and Udio, you lose control. You can generate starters and helpers, but sooner or later you want real control. That's not editing a section and getting a conditioned piece of garbage that seemingly fits the rest. No, control means completely changing progressions, sudden but calculated changes of beat, removing any instrument for the shortest time and putting it back with razor-sharp studio detail.

I know a few of these are already addressable: you can take the output, separate it into channels (if it's simple enough), quantize, edit, and end up with a good one. Yet past that point you're not really supported anymore. What should have happened was for the other core music production software to get cheaper and/or far more effective.

Suno and Udio take a top-down approach. Maybe one day Logic Pro, Ableton, Melodyne, etc. will fill in the details up to this point, coming from the ground up with AI; I don't know. We're not there yet, and it just pulls down the mask of the mainstream music industry, with its all-repeating shallow beats marketed to hell. Hearing mainstream music was awful before, but it suddenly got even more awful.


I don’t believe any of these companies are seriously trying to create a tool for musicians/artists.

When they say they’re catering to “artists”, they mean you can become an artist by prompting their models, for the small fee of X.99/mo.

Their goal is to train a model for the well-trodden one-shot text<->art form and just sell prompting access directly to the mass-market consumer audience. Way less work, way bigger market (and a less discerning one, too).


The tradeoffs between automation and empowerment are tricky to navigate.


Much of this "bad actor" activity is actually customer needs left hanging - for either the customer to automate herself, or for other companies to fill the gap and create value that wasn't envisioned by the original company.

I'm guessing investors actually like a healthy dose of open access alongside a healthy dose of defence. We see them (YC, as an example) betting on multiple teams addressing the same problem. The difference is in their execution, the angle of attack.

If, say, the financial company you work for is capable on both the product and technical fronts, I assume it leaves no gap: it remains the main place to access the service and all the side benefits.


> Much of this "bad actor" activity is actually customer needs left hanging - for either the customer to automate herself or other companies to fill the gap to create value

Sometimes the customer you have isn't the customer you want.

As a bank, you don't want the customers that will try to log in to 1000 accounts, and then immediately transfer any money they find to the Seychelles. As a ticketing platform, you don't want the customers that buy tickets and then immediately sell them on for 4x the price. As a messaging app, you don't want the customers who have 2000 bot accounts and use AI to send hundreds of thousands of spam messages a day. As a social network, you don't want the customers who want to use your platform to spread pro-Russian misinformation.

In a sense, those are "customer needs left hanging", but neither you nor other customers want those needs to be automatable.

