Hacker News new | past | comments | ask | show | jobs | submit | kranner's comments login

> Build one to throw away, as they say.

I believe Fred Brooks clarified this to mean you shouldn’t hesitate to throw away your first attempt if it comes up short despite your best effort.

If an LLM came up with the first implementation, not only did you skip the hard work of understanding the problem and how to solve it, but, worse, you could end up anchored to the LLM's implementation, thinking about how to patch it up so it performs adequately. Personally, I think you should have LLMs take a shot only after you've had a solid try yourself, so you can actually compare and see what you might have missed.


Sorry, but your comment is practically a case in point. Your points would have been better made if they had been organised into separate paragraphs.

I'm assuming you dictated it off-the-cuff: I apologise if that wasn't the case.


I've seen something similar play out badly, in the context of young Indians (and other South Asians) who are underconfident in their use of English but need to use it in professional contexts. Some I know personally have gone from asking for editorial help on their drafts to completely surrendering to LLMs for all their output. I can't imagine it has helped their self-confidence. From the outside it almost seems they've given up on trying to express themselves entirely.


Surrendering is an excellent way to put it. With increasingly ubiquitous access to LLMs that are (for all intents and purposes) good enough, I think we're going to see the average person's proficiency at any skill that is fundamentally a craft (writing, foreign languages, music, coding, etc.) decrease dramatically.


I don't know about 1-4, and 5 seems like an honest mistake.

Is there a source for the misleading GPU count? Dario cites Dylan Patel, and point 4 cites Alexander Wang, but these are just claims so far.

As far as censorship regarding Chinese history, does it matter? The other models have other areas that are censored. Is anyone planning to use LLMs to look up historical facts anyway?

Stealing IP? That's rich coming from AI companies that mined the entire public internet. Did they get permissions from copyright holders in every instance?

And if DeepSeek did evade particular US sanctions whose only justification seems to be to prop up US's economic hegemony over the rest of the world (unlike sanctions against weapons, invasions and human rights violations), then good for them.


To represent the other side, I enjoy reading Urdu and Persian poetry but I will never be interested in reading anything generated by an LLM. No matter how 'high quality' it is represented to be, I'm aware that it was produced by a process that shares nothing with my own experience of the world. It has felt no hope, disappointment, fear, pain, mortality, loss of loved ones, lack of control over itself, a world model that has changed over the years, and doesn't know that it all doesn't amount to much in the end and yet this is all there is for itself. It may turn out to be sentient in some way, but it's almost certainly not sentient in the way that I am sentient. I know it's just mimicking being human as instructed; to take it seriously devalues everything about my own humanity. I'm not ready for that kind of enlightened insight, I think.


> I know it's just mimicking being human as instructed; to take it seriously devalues everything about my own humanity.

Since it is mirroring human culture, why do you see it in such a negative light? Instead, see it for what it is: an interactive reconstruction, or maybe a microscope for zooming into any idea.


I’m happy to use LLMs in all other contexts, quite enthusiastically actually. I’ve got DeepSeek 32B running locally on a beefy PC already.

It’s just in the context of poetry, and literary writing in general, that I feel differently about them. There’s also the fact that I haven’t read all that human writers and poets have already written (and will never be able to in this short life) so there’s no need to turn to synthetic output. No supply problem exists. Poetry in particular is something to ponder over and over. You can’t really run out.


Sure, that's your choice/preference, so good for you (sincerely).


Thanks, I respect your choice as well.


> It has felt no hope, disappointment, fear, pain, mortality, loss of loved ones, lack of control over itself, a world model that has changed over the years, and doesn't know that it all doesn't amount to much in the end and yet this is all there is for itself.

You can't know what any poet felt or didn't feel while writing a poem. Perhaps it was a commission piece, or an experiment or an emulation of something the poet had heard elsewhere.

And more generally, whether the specific emotion another man feels is similar or even comparable to your own is also unknowable. He might use the same word to describe it, but the subjective experience associated with it might be completely different, and completely impossible to share.


Yes but at least it was possible for that poet to have felt what I feel they might have felt while writing that poem. And the closer they are to me culturally the more likely it is that I am not misidentifying their emotions entirely.

Also poems are not really puzzles to be solved. If it produces an effect and is solid craft-wise, that is enough. There’s a lot to the craft side btw in the Urdu and Persian ghazal form which is what I had in mind while writing my original comment. LLMs can easily master the latter but have nothing to do with the former. Their output is pure form without substance.

Edit: I want to add that ambiguity (ابہام) is even a desirable property in Urdu ghazal, specifically. The more interpretations a couplet can have, the greater is the accomplishment in terms of craft.


An article in The Guardian described an alternative treatment called Avatar Therapy [1] that has the therapist create a digital simulation of the voices, interact with the patient using the simulated voice and work through a script that gradually gives the patient more power over the voice.

The sessions can get surprisingly radical for therapy, with the avatar at one point even urging the patient to commit suicide! [2]

[1] https://www.theguardian.com/news/2024/oct/29/acute-psychosis...

[2] > “You should end it,” the avatar [therapist] said, casually. “What have you done that’s of any use to anyone?”


Hey my voice says that to myself all the time and I'm a pretty content person; it'd be pretty surprising if a second voice weren't capable of that.


My advice would be to investigate and address this, despite feeling ok with it.


There was that 2007 case of the French man who was missing 90% of his brain yet remained quite functional:

https://www.cbc.ca/radio/asithappens/as-it-happens-thursday-...


Functional, yes, but an IQ of 84 isn't "slightly below the normal range"; it's the 14th percentile. Not to say it isn't an achievement with just 10% of a brain, but he wasn't a person of average intelligence, and he likely struggles with a lot of things.
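For anyone checking the arithmetic: IQ scores are conventionally normed to a normal distribution with mean 100 and standard deviation 15, so the percentile for an IQ of 84 falls out of the cumulative distribution function directly.

```python
from statistics import NormalDist

# IQ scores are normed to a normal distribution: mean 100, SD 15.
iq_dist = NormalDist(mu=100, sigma=15)

# Cumulative probability below an IQ of 84, expressed as a percentile.
percentile = iq_dist.cdf(84) * 100
print(f"IQ 84 is roughly the {percentile:.0f}th percentile")  # ~14th
```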


This is really interesting from the perspective of gradual replacement/mind uploading: what is the absolute minimum portion of the brain that we would have to target?

Understanding this could probably make the problem easier by some factor (but not "easy" in any sense.)


While that's an interesting question…

I was going to write "I don't think this specifically is where we need to look", but then I remembered there's two different reasons for mind uploading.

If you want the capabilities and don't care either way about personhood of the uploads, this is exactly what you need.

If you do care about the personhood of the uploads, regardless of if you want them to have it (immortality) or not have it (a competent workforce that doesn't need good conditions), we have yet to even figure out in a rigorous testable sense what 'personhood' really means — which is why we're still arguing about the ethics of abortion and meat.



Literally the plot of Westworld season 2.


It wasn't missing. It was squished by untreated hydrocephalus.


So 90% of our brains are spare capacity for the paperclip maximizers out there.


Or "normal life" is the intellectual equivalent of coasting as far as challenge goes.


What is the app, if you don't mind sharing?



Thanks! Clearly a lot of dev work, even sharing scores on the web. Sorry to hear it hasn’t been a success.


Possible contact with pedophiles, groomers, etc.

Once the child is over 16, they can add all their real-world friends again.


Could a possible solution there be to use the same language detection platforms used for detecting terrorist activity to also flag possible grooming for human moderator review? Or might that be too subjective for current language models leading to many false positives?
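As a toy illustration of the flag-for-human-review idea (not a real grooming detector: production systems use trained classifiers, and the risk phrases and weights here are invented placeholders):

```python
# Toy sketch: score messages against placeholder risk phrases and queue
# anything above a threshold for human moderator review. A real system
# would use a trained text classifier, not a hand-written keyword list.
RISK_PHRASES = {
    "our secret": 3,
    "don't tell your parents": 5,
    "how old are you": 2,
}
REVIEW_THRESHOLD = 4

def needs_human_review(message: str) -> bool:
    """Return True if the message's total risk score warrants review."""
    text = message.lower()
    score = sum(weight for phrase, weight in RISK_PHRASES.items() if phrase in text)
    return score >= REVIEW_THRESHOLD

print(needs_human_review("This is our secret, don't tell your parents"))  # True
print(needs_human_review("See you at practice tomorrow"))                 # False
```

Crude keyword scoring like this is exactly where the false-positive worry bites, which is why the sketch only queues a message for a human rather than taking automatic action.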


AKA stupid paranoia.


This is far too pat a dismissal of something which happens regularly. You can argue that it’s not frequent enough to justify this action or would happen anyway through other means but it’s a real problem which isn’t so freakishly rare that we can dismiss it.


Discord is for people over 13 years of age in many countries, yet there are many minors there. It is not working.


I’m not saying anything about specific services, only that there is a legitimate concern which can’t simply be dismissed without reason.


I am not sure I meant to reply to you, to be honest. It is an issue, but so far the solutions are terrible, and outsourcing parenting to the government or to companies is also meh. I am sure there are parents who know ways to reduce screen time for their children, ranging from installing a program that blocks a website or another program until certain conditions are met, to taking the phone out of the kid's hand and going for a walk or studying instead, whatever works.


Tor Nørretranders' The User Illusion was a great read, hereby warmly recommended. One of its principal ideas is that our senses take in many orders of magnitude more data than we are able to be conscious of. I believe he estimated something like 10 Mbps coming in but only about 80(?) baud of conscious awareness; something like that, anyway. I should read it again sometime.
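Taking those admittedly half-remembered figures at face value (the exact numbers are the commenter's recollection, not the book's), the size of the gap is easy to quantify:

```python
# Rough ratio between raw sensory input and conscious bandwidth,
# using the half-remembered figures above (10 Mbps in, ~80 bit/s aware).
sensory_bps = 10_000_000   # ~10 Mbps of raw sensory input
conscious_bps = 80         # ~80 baud of conscious processing
ratio = sensory_bps // conscious_bps
print(f"Consciousness sees about 1 part in {ratio:,}")  # 1 part in 125,000
```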

