byproxy's comments

goddam, that's beautiful. thanks for sharing!

This is unfortunate because (and probably much to the article writer's chagrin) I am a fan of making use of the formatting tools available on a given platform. When convenient, I love using an em- or en-dash. And, when I'm up for it, italicizing and bolding words to convey prosody. It just feels right.

Now, though, I may be dismissed as an LLM (probably as an older model that can't quite use them formatting tools effectively).


I've been called out more than once for overusing italics in my writing.

But the trick is I usually write like I would speak. This leads to italicizing any word or phrase I'd speak emphatically. (Which, yes, I've also been called out for doing a lot when I speak. So what; I've also been told I'm good at getting my point across. I'll take it!) In any text important enough to go through multiple revisions, or to be written from the start with multiple revisions in mind, this characteristic is diminished. But most text is more throwaway, just like most speech, so it gets left a little rough.

This also tends to feel pretty natural. If you read LLM-written text out loud, or the prose TFA is talking about, it... does not feel natural at all. So what I'm trying to say is: some level of emphasis is just fine. Don't overthink it.


I'm speaking more of excessive headings for small sections, which are themselves just bullet-point lists, all of it highly nested. That's the main thing demonstrated in the post here.

The LLM writing is basically an infodump: lists dressed up in different ways (sections, bullet points, nested bullet points, and sequences of "[bold]Blah blah[/bold]: blah blah blah" sentences).


Yea, probably. My "old-man" pet peeve is people using "lol"/"lmao"/etc. as punctuation lol.

As this post has been (to my sensibilities) obviously composed by an LLM, I can tell you: this does not read "human."

"AI use detection" is, like any test, not without cost. Meaning that, as a teacher, accusing a student of using an LLM, it may be prudent to consider the cost of a "false positive" accusation. I've seen a couple of examples now where students find sudden spurts of motivation and show unexpected talent on an assignment, to be accused of AI use after handing it in.

One should ask oneself: how many insults to the intelligence and creativity of unexpectedly excelling students (who haven't used AI) is catching one shortcut-taking, LLM-using student worth? Is it 1/10? 1/1000? How much "demotivation of an unexpectedly excelling student" is the "rightful punishment of the cheating, LLM-using student" worth? And what is the exact cost of a false negative (letting the LLM-using student off the hook)?

In other words, where on the Receiver Operating Characteristic (ROC) curve do you want to sit, as a teacher? I imagine it's quite the dilemma.
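
To make the tradeoff concrete, here's a minimal sketch (every number invented for illustration, not taken from any real detector) of how the expected cost per student shifts as you choose different operating points on the curve:

    /* Hypothetical operating points on an "AI use" detector's ROC curve.
       Prevalence and the two costs are assumptions, purely illustrative. */
    #include <stdio.h>

    int main(void) {
        double prevalence = 0.20; /* assumed fraction of submissions that used an LLM */
        double cost_fp = 10.0;    /* assumed cost of falsely accusing an honest student */
        double cost_fn = 1.0;     /* assumed cost of letting a cheater off the hook */

        double fpr[] = {0.01, 0.05, 0.20}; /* false positive rates */
        double tpr[] = {0.40, 0.70, 0.95}; /* true positive rates */

        for (int i = 0; i < 3; i++) {
            double cost = (1.0 - prevalence) * fpr[i] * cost_fp   /* false accusations */
                        + prevalence * (1.0 - tpr[i]) * cost_fn;  /* missed cheaters */
            printf("FPR=%.2f TPR=%.2f -> expected cost per student: %.3f\n",
                   fpr[i], tpr[i], cost);
        }
        return 0;
    }

With those made-up weights, the strictest threshold wins even though it misses most of the cheaters; flip the cost ratio and the conclusion flips too.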


~30 years ago I sat down with two students and accused them of copying each others’ work, because they both made the same amusing mistake: they called their C functions without passing arguments, but they declared their variables in such a way that the values would coincidentally be in the right place on the stack. I have to imagine debugging their own code was a mystery.
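
Something like this guessed reconstruction (undefined behavior that only "works" by accident; a modern compiler will warn, and an ABI that passes arguments in registers breaks the coincidence entirely):

    #include <stdio.h>

    int add(); /* pre-C23, this declares add() with NO prototype, so calls aren't checked */

    int main(void) {
        int x = 2, y = 3; /* if these coincidentally sit where add() looks for
                             its parameters, the "call" appears to work */
        int sum = add();  /* no arguments passed -- undefined behavior */
        printf("%d\n", sum);
        return 0;
    }

    int add(int a, int b) { return a + b; }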

They indicated that while they had worked closely together when learning the material, they weren't stealing from each other. I believed them then, and still believe them now, but I'm so glad I don't have to deal with today's AI world.


The non-LLM version of this happened to me in a high school English class, back in the 2010s. I was accused of turning in a story downloaded from the internet, but in reality, I had pushed past the incredible barrier I usually felt at the start of a task, got into the flow, and started enjoying it.

I'm not sure if it had any lasting effects. Maybe a burning hatred of Grammarly ads.


>To intentionally misspell a word makes me [sic], but it must be done.

LLMs killed traditional poetry; what you are now seeing is post-LLM poetry.

Maybe you missed it, but this is clearly not an LLM. What prompt would even produce that?


Honestly, I don't think LLMs killed poetry.

The anti-LLM crowd killed poetry by accusing every text on the net of being LLM-generated and establishing generated text as a mortal sin.

Neither of those two things would be very bad if people weren't so damn zealous about it. Nobody expected the Human Inquisition.


I can already see it playing out. Some day (maybe soon) LLMs will come up with such quirks; I (and perhaps you?) will continue to insist that this does not make them "conscious" or "AGI" or "persons" or what-have-you; and I will be accused of goalpost-shifting.

What constitutes "this bullshit"?


Worst time to be an employee, as you are expected to work faster and faster. (The approach is very much quantity over quality.)

Best time to be a solo founder in underserved markets :)


From where did you hear this rumor?


It was a news article from a reputable source (Reuters or PBS?), around the time of the Greenland flap in early January, but Google Search now sucks and the results are all polluted with news articles about the strikes today, so I wasn't able to find it again.

Yep, this has, so far, proven the most promising use of LLMs to me. I've read about people's Rube Goldberg machine-esque setups for getting agentic LLMs to work for them, but I find simply having a dialectic with an LLM to be more fruitful. Rubber-ducking with a duck that quacks back.


How do you prevent it from just taking the reins and writing an entire function or class for you, when all you wanted was to talk about the code you already had?


"No coding. (Explain|Debug|Analyze|Talk through) this with me:"

"Talk with me first:" (Implying anything other than talking, like coding, would be a separate distinct step that is not to be done)

"Propose" is the best keyword imo, if it fits what you'd like.

"Propose changes you would make to (this repo|staged changes|latest commit)."

"Propose alternatives."

"Propose flaws." / "Propose flaws in my reasoning."


I keep Claude in "plan mode" (shift+tab) so it cannot touch my codebase.


"Do not write any code ..." If you are using LLMs for highly restricted work, it is rather trivial to keep them in check enough to receive useful responses.


You tell it to only talk about the code you already had.


You don't turn on editing mode.


> but this still allows arbitrary markup to the page (even <style> CSS rules) if I'm reading the docs correctly.

If that's true, it seems like it's still a security risk, given what you can do with CSS these days: https://news.ycombinator.com/item?id=47132102


You can use selectors to gain some information about things like input fields, e.g. https://www.invicti.com/blog/web-security/private-data-stole...

Or I guess you could completely restyle and change the text of UI elements so it looks like the user is doing one thing when they're actually doing something completely different, like sending you money.


Back in 2002 (?) I got banned from a certain auction site because I managed to inject HTML into my username that made it so that, once I had bid, the "Bid" button disappeared for all subsequent users.


This is exactly what I've done with my Google settings after feeling more and more upset with what the YouTube homepage was showing me. Not so much in that it was necessarily contrary to what I may have wanted to watch, but more that it had me thinking, "My god... is this what I've become?"

I now much prefer to open YouTube (quasi) tabula rasa. I still have subscriptions set up so I can follow "vetted" accounts, but for everything else I now rely on intentional searching.

The one minor bummer is that YouTube won't remember your spot in a video, say if you're watching on your phone and want to continue on your desktop (or vice versa). It's not the biggest hassle to manually check the time and sync it up, though.


Yea - not remembering your spot in a video IS a slight inconvenience, but for me it's a small price to pay to avoid getting sucked into throwing away HOURS chasing the next 'hit' of dopamine from shorts.

