Hacker News | new | past | comments | ask | show | jobs | submit | oidar's comments | login

> The fun in the job is not knowing where to place a semicolon.

If a person needs an LLM to figure out where a semicolon goes, an LLM is not going to help them code.


I don't need one to know where it goes, but it certainly is better than I am at never missing one.

I disagree with this take. I get that LLM-produced text is filled with crappy, over-the-top writing in pretty much all cases, but if a prompter/writer/blogger is using it iteratively, the LLM output is going to be way better than their own writing. Also, if a person is using LLMs to write articles, do you really want to see their likely even worse writing?

Yes, I want to see the prompts. Yes.

But I won’t promise to read it, because it’s bad writing.

So maybe it would be better to not use the LLM to draft writing that pretends to be you. That would be easier on everyone who reads.

Instead we live in a world where all of us are reading through a cynical lens.

This comment was written without using any form of AI.


Was this written by an LLM?

> This comment was written without using any form of AI.

That's exactly what ChatGPT would write if it didn't want us to think it wrote that comment!


In this ever-changing world, it pays to delve beneath the surface of a casual claim— if you know what I mean.

It's absolutely nutso that we (the users) have to guess what the actual limits are. And now they throw this into the mix. I love using Claude Code, but if they don't offer some transparency soon re: token limits (other than a status bar), ... I don't know what I'll do, but I will continue to not be happy.

Not sure I understand how passkeys verify humanity.

Bluetooth headphones too?

This is actually a really good response, though. Because the act of having a device blaring demonstrates contempt for everyone around them. It's hard to act in a hateful way toward someone who just offered you something for free.


Exactly. To refuse the “gift” is an explicit statement of “I know I could do this silently but I want to bother everyone around me.”

The main issue is that SOTA LLMs can only reason one way, forwards, and can't go back and revise a prior statement. That would remove a whole lot of "it's not this, it's that" and "the big takeaway here is" and so on. Those kinds of ideas typically sit at the beginning of a human writer's output structure, but an LLM can't go back and edit the first paragraph, because it has to reason (whatever that means for an LLM) its way through the paragraph to get to its big idea. I haven't played with diffusion text models enough to know whether they're a remedy for that kind of output.

When LLMs are good enough to not be detectable, what happens then? They aren't that far away atm, so it's only a matter of time until _everyone_ is assumed to be an LLM.


Ultimate user here. I assume this doesn't kill "quick answers" for Pro users, which I use frequently when I need a quick summarization. For Assistant use directly, I've been thinking about stepping back from Ultimate, as I use Claude for AI rubber-ducking, which works better than any of the LLMs available in Assistant.

That's right:

> Kagi Assistant's web tool uses Kagi Search, and that has nothing to do with this subscription plans discussion, we're not changing anything there. The same applies to LLM-powered features in Kagi Search, like Quick Answer.


I have maxed out my Ultimate usage before, and when that happened the quick summarization tools did not function, indicating I had hit my limit. I assume it would affect those, but that might be part of how they break it up.

What's the latency on these like for music production?

Have you seen Decker: https://beyondloom.com/decker/


there's a great blogpost explaining the creator's inspirations in hypercard also: https://beyondloom.com/blog/sketchpad.html


