Hacker News | dartos's comments

I think that’d make for interesting experiments and fringe sites, but I don’t really see your average e-commerce site ever doing anything like that.

You’d want the A and B to be intentional, not automatically generated. Every VP thinks their idea for a feature will revolutionize the company.


> Every VP thinks their idea for a feature will revolutionize the company.

Now imagine that every one of them is given a tool that can get them a POC quickly. I think a lot of VPs are about to figure out that their ideas are shit.


This pre-supposes that said VPs have the self-awareness to realise their ideas are shit.


Probably because both Anthropic and OpenAI were on the whole AGI train, where they were trying to heavily personify their products.

Google never seemed to personify theirs, IIRC. They always presented their AI tools in a utilitarian way.


Really?

Because of the current situation in the US...


Sorry, but have you paid attention to the legal system in the states?

Large corporations and their execs live by different laws than the rest of us.

That’s how it is.

Anything else is, unfortunately, a fiction in this country.


And? Two wrongs don’t make a right.


There’s no “and.”

I’m just stating a fact. No discussion of wrong or right or whatever.

Just pointing out that there is no more rule of law in the US. Idk when exactly it disappeared, but it’s definitely not present anymore.


Not everyone is a deontologist; some are moral naturalists, and still others are utilitarians.


I long for the day when someone can give advice based on their own personal experience without someone else being like “well that won’t work for literally everyone.”

Yeah obviously. It’s a personal anecdote.


What's the _point_ of the anecdote, though? You're taking up everybody's time to tell a story; do us a favor and have a relevant point.

"Have no fear" doesn't apply to the article, at all. You might as well write "what I learned was to not stick legos up my nostril". Also good advice. Also not applicable.

It's fine if it doesn't work for everyone, it's annoying if it isn't relevant to anyone.


You are reading Hacker News. You are literally here to waste time.


It's obnoxious behavior. For example, I decided when I was young to live in my car and be homeless. I saved a bunch of money, and I've been frugal most of my life. I was also super focused at work and climbed the ladder, making real money.

I believe most people don't have the discipline to endure having less, or the discipline to really listen to what power asks of them. There is a lot of great advice for people to do well in a job, but they just... don't apply it.

These people are best to be ignored.


I long for the day when people don't try to pass off vapid generic advice for likes. Waste of bandwidth.


A bit cynical, no?


Giving generic feel-good advice is a decent strategy to farm likes from the naive. Some people have no shame.


“Don't be afraid” is excellent advice; sorry, but you're coming off as very cynical.


I was watching a trial the other day and the prosecutor asks, "And did you often see your nephews at your mother's house when you video called her?", and the defendant, a dentist, says, "Yep, watching TV, brushing their teeth. [5 second silence] Don't forget to brush your teeth. Really important." The prosecutor smiles, laughs, and says, "A little dull humor never hurt, eh?"

I'm not sure your average adult would find "don't be afraid" to be "advice", or some deeply meaningful advice that only a cynic would think was anything less than excellent.


It’s not just a personal anecdote. It’s telling people what they should do.

A personal anecdote would be saying “this is what worked for me,” not “this is how you should do it.”

It comes off as telling you what your problem is and how you should fix it.


You're talking about specifically using genetic programming to create new programs, as opposed to gradient descent in LLMs to minimize a loss function, right?

How would you construct a genetic algorithm to produce natural language like LLMs do?

Forgive me if I'm misunderstanding, but in programming we have "tokens," which are minimal meaningful bits of code.

For natural languages it's harder. "Words" are not super meaningful on their own, I don't think (at least not as much as a token), so how would you break down natural language for a genetic algorithm?


> how would you break down natural language for a genetic algorithm?

The entire point is that you do not bother trying. From an information theory and computational perspective, raw UTF-8 bytes can work just as well as "tokens".

The program that is being evolved is expected to develop whatever strategy is best suited to providing the desired input/output transformation. Back to the bitter lesson on this one.
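The byte-level idea can be sketched with a toy loop. Everything here (the target string, fitness by counting matching bytes, single-byte mutation, the population size) is a made-up stand-in for a real program-evolution objective, not anything from the thread:

```javascript
// Toy genetic-algorithm loop over raw bytes.
// Fitness is just closeness to a fixed target string -- a stand-in for
// whatever input/output score you'd use when evolving real programs.
const TARGET = Buffer.from("hello"); // hypothetical objective

function fitness(candidate) {
  // Count bytes that match the target (higher is better).
  let score = 0;
  for (let i = 0; i < TARGET.length; i++) {
    if (candidate[i] === TARGET[i]) score++;
  }
  return score;
}

function mutate(candidate) {
  // Randomize one byte -- the only variation operator in this sketch.
  const child = Buffer.from(candidate);
  child[Math.floor(Math.random() * child.length)] =
    Math.floor(Math.random() * 256);
  return child;
}

function evolve(generations = 20000, popSize = 50) {
  // Start from random byte strings, keep the fittest, mutate the rest.
  let pop = Array.from({ length: popSize }, () =>
    Buffer.from(Array.from({ length: TARGET.length }, () =>
      Math.floor(Math.random() * 256))));
  for (let g = 0; g < generations; g++) {
    pop.sort((a, b) => fitness(b) - fitness(a));
    if (fitness(pop[0]) === TARGET.length) break; // solved
    // Elitism: keep the top half, refill with mutated copies.
    const survivors = pop.slice(0, popSize / 2);
    pop = survivors.concat(survivors.map((s) => mutate(s)));
  }
  pop.sort((a, b) => fitness(b) - fitness(a));
  return pop[0];
}
```

A real system would evolve executable programs and score them on their input/output behavior, but the loop shape (score, select, mutate) stays the same.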


I’ll need to read up on genetic algorithms, I think.

That sounds really cool, but coming from training other statistical models, I'm having a hard time imagining what the training loop looks like.


I’d say this is more a tool for prototypes.

You could secure angel funding off a simple prototype.

You could also make something funny and share it with friends, if it doesn’t need to be monetized.

I wouldn’t use tools like this for long lived products though.


It’s only the bottleneck for orgs that don’t know how to keep focus and bloated orgs with too many teams/managers/engineers.

And scrappy startups made by business school dropouts (and grads).


> vibe coding was not made up

Here’s the tweet that literally made up vibe coding: https://x.com/karpathy/status/1886192184808149383?lang=en

One might say blindly following hype is silly and cope too.

I’ve seen no indication that relying entirely on AI can produce quality software.

It can produce prototype-quality code, just as it has since GPT-3.5. The advantages of different technologies are never considered. Security concerns are often missed. And, from what I’ve seen, the codebases are bloated.

For your average CRUD app, much of that doesn’t matter. It starts mattering when you have real business constraints, like server budgets or data compliance. If you don’t see that, then you don’t have enough real-world experience yet. That’s all.

Remember how crypto was going to change everything? Or the metaverse?

We live in a period of extreme technological hype backed by insane company valuations.

Don’t get too fooled by the market.

These tools are useful. They are here to stay. And they do not replace the entire field of programming nor the work that programmers do.


I produce production quality code all the time with AI. I didn't believe the hype at first but here we are.


Either you're not relying as much on the AI as you think you are, or you're not really sure what "production quality" means.

It seems like you should know, so I'm going to bet that you're not entirely letting the AI drive.

Having the AI draft some code which you refine is a fine workflow. I didn't think it was before, but I've come around on that. I think it's also nice to have an LLM do a once-over to point out areas where I may have missed catching an error (like with JSON.parse in JavaScript or something).
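For example, JSON.parse throws on malformed input, which is exactly the kind of missed error handling an LLM review pass tends to flag. A minimal guard (parseOrNull is a hypothetical helper name, not from the thread):

```javascript
// JSON.parse throws a SyntaxError on malformed input, so a bare call
// can crash the caller. Wrap it to surface "no data" instead.
function parseOrNull(text) {
  try {
    return JSON.parse(text);
  } catch {
    return null; // caller decides how to handle missing/invalid data
  }
}
```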

It's just not my cup of tea, personally. I've found that I'm faster writing code myself and treating an LLM as an assistant or rubber duck, but to each their own.

I'm referring to wholly AI generated code with no human input besides a prompt or "vibe coding." You literally can't put enough context into a prompt to have it write the exact code you'd need in every case. Your prompt would end up just being code at that point.

That's the whole point of writing code. Precise and exact instructions for a machine. You're not going to get that by adding a statistical natural language layer in the mix.


If you're using the right models (Claude 3.7 Sonnet, Gemini 2.5 Pro) and are good at prompting, it very possibly can write and deploy thousands of lines of code to production without you needing to change a single thing.

Of course, odds are there in fact is something you need to change - maybe a poor design choice or a bug or missing logic. So you of course do need to always thoroughly review it. But reading 1000 lines is faster than coming up with 1000 lines you plan to write and writing them. And also, if you see a missing thing, you can just do a follow-up prompt in the same chat context rather than actually typing a single thing into the text editor.

I know it can feel alien, and I definitely still spend a lot of time manually writing and editing code, but I'm trying to outsource more and more to the model and trying to put myself into a mindset of "first try to see if I can accomplish all this with prompts, and then fallback to 'raw coding' if it fails after a few tries" for everything and I find it's speeding me up a lot.

You should try to give it another shot. Could maybe wait another year first for the editors and models to get even better than they are right now.

>I'm referring to wholly AI generated code with no human input besides a prompt or "vibe coding." You literally can't put enough context into a prompt to have it write the exact code you'd need in every case. Your prompt would end up just being code at that point.

True, but... you can do that! It may or may not be faster than writing the code you want, true, but sometimes I think it will be faster/simpler. Gemini 2.5 now (or soon?) supports a 2 million token context window. You can write a very precise spec in the prompt. Use formal language, or use a little DSL you invent on the spot, or say "it should do X and Y and account for Z and also try to cover other things if you realize there are more", etc. There's a lot you can do.

There absolutely will still be many scenarios where it's faster overall to just write the code or where it really is harder to express what you want to say in English vs. in code, but those scenarios may be less common for you than you currently think or expect.


> deploy thousands of lines of code to production without you needing to change a single thing.

I’m not saying this is impossible. I’m saying it leads to poor quality products. Deploying thousands of lines of code isn’t necessarily a good thing. Often it’s not.

> You can write a very precise spec in the prompt. Use formal language, or use a little DSL you invent on the spot, or say "it should do X and Y and account for Z and also try to cover other things if you realize there are more", etc. There's a lot you can do.

At this point, why use an LLM at all? Why introduce a black box? We can perfectly and tractably convert formal languages into machine code.

Things are never simpler when black boxes are involved…

These tools, again, are undoubtedly useful and sometimes (albeit inconsistently) magic.

But they’re not a silver bullet for making software.

I tried vibe coding literally yesterday, as I do every week or so. I used avante.nvim and CodeCompanion. I tried with Gemma 3 and Claude. It’s slow and boring, and I (someone with ADHD) lose all focus when the LLM starts running.

The output is always prototype quality. It looks okay and mostly works correctly (granted, I usually just make a todo list or a job board), but it is obviously overcomplicated and bloated.

If you don’t care about quality or long term maintenance (like with a prototype or POC) then it’s fine.


Give it one more try with the latest Gemini 2.5 (not Gemma).


> Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.

—Antoine de Saint-Exupéry

If you’re writing thousands of lines of code for what can be described in a few words, the code is probably horribly overcomplicated.


Or maybe you're really good at writing words?


If you use too many words, you’re not good at writing words.


The code I have AI generate matches the production-quality code I've shipped throughout my career. SOLID code, no security flaws, unit-tested, documented, commented, fast, secure, composition over inheritance, no magic strings, etc. etc. etc.

>You literally can't put enough context into a prompt to have it write the exact code you'd need in every case.

Yes you can. I do this every day.


I just can’t believe you. Nothing I’ve seen indicates that is possible right now.

Would you be willing to share your code and your workflow?

> >You literally can't put enough context into a prompt to have it write the exact code you'd need in every case. Yes you can. I do this every day.

It hasn’t been possible for maths, and it isn’t for programming.

I’ll defer to Dijkstra on this “foolishness”:

https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...


One day it wasn't possible to fly in the air either, and I'm sure there was a person who wrote an article about that as well.


So you’re not willing to share anything?

Just anecdotes and platitudes?

Not even some reasoning?

Also note that I chose my words carefully when I said “Nothing I’ve seen indicates that is possible right now.”

I used “right now” to not rule out future possibilities.


I'm not here to prove anything to you, just to tell my story and share.

What do you want me to do, give you access to my work's git repo?


I asked if you would be willing to share your work and workflow.

You could say nothing. You could just say “I can’t, it’s for my job, but my workflow looks like this…” or something.

You could say you’re not comfortable sharing.

You could share a snippet and how you reached that result… idk, anything.

But instead you went for some weird statement about flying and people writing articles.

Didn’t respond to any points in Dijkstra’s essay, just some platitudes.


Let's see your GitHub.


I think it’s like any other tool. Some people get better mileage out of it than others.

There are even very skilled and accomplished engineers that don’t even use language servers.

Not programming, but even some legendary Disney animators still draw out their key frames by hand… on paper… in pen.

Build with what you build best with.

Personally, they help me with little refactors and occasionally with a quick, difficult-to-Google question, usually about syntax.

