Hacker News | sillyfluke's comments

>To be fair, I don't think it is an AI problem, more of a quirk of formal communication, the same happen with human secretaries.

Obviously you're not a golfer. Human secretaries don't have non-deterministic hallucinations and random critical omissions in their summaries, which I've witnessed first hand with LLMs. More importantly, if they do, you have more deterministic mitigations with them than you do with LLMs; with LLMs the only mitigation is praying that some new model in an unspecified future will magically be better at summaries.

The only way to stay sane when using these tools is to pretend that these things won't ever happen and just go about your business like the rest of the zombie workforce, because no one wants to stop the train and address the issue.

There is a reason why the title of Dr. Strangelove is "How I Learned to Stop Worrying and Love the Bomb".


To make things clear, even though I am not a golfer, I have a lot of respect for good secretaries; that's why I said they can do much more than write formal letters. Not only do they not hallucinate like LLMs, but they can actually catch mistakes before they happen.

I just wanted to point out the seemingly ridiculous idea of formal communication that none of the interested parties actually read. It is as if two English speakers insisted that the discussion be in Spanish, and each brought an interpreter for that.


I think I understand that, which is why I specifically singled out your suggestion that "AI is not the problem" instead of your comment as a whole. Automated processes of Person A talking to automated processes of Person B is not a novel concept. It's been going on in various forms since computers were invented. You call an Uber and get your phone to talk to Uber's servers and the driver's phone so you don't have to talk to the driver; same with Doordash and the restaurant, same with Amazon and the retail store personnel. In these cases the back and forth between two humans is, in your words, a type of "formal communication that none of the interested parties actually read" or "two English speakers both bringing Spanish interpreters to talk to each other."

It's this way because us nerds want to remove the need to talk to other humans in order to obtain what our hearts desire at any given point. So we have been trying to remove non-deterministic roadblocks (humans) and replace them with deterministic automation from the very beginning. This is not new. If LLMs were the same deterministic processes but 1000x more versatile and capable, then it wouldn't be a problem.

But even though LLMs have lowered the barrier on who can create these automated processes and the speed at which these automated processes are created, to achieve this they brought with them non-deterministic side effects that currently evade a holistic and deterministic fix. This is why it's an AI problem.


>There is a reason why the title of Dr. Strangelove is "How I Learned to Stop Worrying and Love the Bomb".

Indeed, and somewhat surprisingly some linguists have embraced even this analogy [1] without appreciating the subtext of the title.

https://arxiv.org/abs/2501.17047


I'm also often not desperate for a pun from a stranger, but I think I 'get' it.

Now that you got the balls rolling in an unwanted direction, I'm sure you're not the only one here with blue jewels.

Literally almost everything I have bought on sale is something I wasn't looking to buy at that moment in time.

How many of those things cost more than 10,000 dollars?

I think you might be slightly misinformed on how many 10,000+ dollar purchases the average person makes in their lifetime to make sweeping statements of that nature. Advertising sales on medical procedures or daycare could have the opposite effect, I would imagine.

>You can't train LLMs on proprietary data, at least not if you want to make that LLM as accessible as Gemini. Otherwise random people can ask it your home address.

If this is your only criterion, I think you have a misunderstanding of what proprietary data is and of the ways companies can mitigate the situation in the inference stage.
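As a minimal sketch of the kind of inference-stage mitigation meant here: a service can filter the model's output against known sensitive records before anything reaches the user. Everything below is a hypothetical illustration; the record, the function name, and the redaction format are invented, and real systems would use far more robust detection than substring matching.

```python
# Hypothetical sketch: redact known sensitive records from a model's
# response before returning it to the user. The record below is made up.
SENSITIVE_RECORDS = {"42 Example Lane, Springfield"}  # e.g. a home address

def redact(response: str) -> str:
    """Replace any known sensitive record found in the response."""
    for record in SENSITIVE_RECORDS:
        if record in response:
            response = response.replace(record, "[REDACTED]")
    return response

print(redact("The address on file is 42 Example Lane, Springfield."))
# → The address on file is [REDACTED].
```

The point is simply that "the model saw proprietary data in training" does not by itself mean "random people can extract it": guardrails at inference time are one of several places a company can intervene.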


No one should apologize for feeling conflicted while giving an issue considerable thought. Constantly reassessing your position based on the changing nature of the world should be encouraged as the default approach. ("Constantly" within reason, of course.)

I can imagine some Americans making a decision based on the threat of other authoritarian states and being left completely bewildered when they have to grapple with the notion that their own government may be the bigger threat to their security.


Yes, once you look at the daylight times it's clear that UK time naturally fits Spain better.

>Kids walking, biking, and being driven to school in mornings in darkness ... that's also what permanent DST gives us.

I think this is the worst thing about it frankly, the kids. And you can't just push the school time back because it interferes with the parents getting to work.


>it is perhaps also damning that any time someone wants to smear zuck they have to reach 20 years into the past.

It is perhaps not, and perhaps a bit disingenuous to claim so in good faith, as if it exceeds your abilities to search for the list of Facebook scandals in the decades following and see that the behavior is often consistent with this quote. Even if you choose to ignore all that, it's also not very reasonable to expect troves of juicier quotes after all the C-suites, lawyers, and HR departments showed up and locked everything down with corporate speak. I'm sure if Facebook were to be so kind as to leak all the messages and audio of Zuck's internal comms since that time, people would have many other juicy quotes to work with.

It is often referenced because it's the best quote that represents the trailblazing era of preying on users' undying thirst for convenience in order to package their private data as a product.


Thank you for saying this. I could not have found a better way to word the response myself.

"It is perhaps not, and perhaps a bit disingenuous to claim so in good faith, as if it exceeds your abilities to search for the list of facebook scandals in the decades following and see that the behavior is often consistent with this quote.

It is often referenced because it's the best quote that represents the trailblazing era of preying on users' undying thirst for convenience in order to package their private data as a product."

These sentences are deliciously delightful to read in this era of writing whose blandness and sloppiness is only amplified by LLM-driven "assistance".

It is difficult to be pithy without being bitter, but your writing achieves it within the span of a single comment. If you have a blog, I hope you share it!


This looks like more OpenAI damage control. Since the Pentagon accuses Dario of having a "God-like complex" one can only surmise that the potential future guardrails Altman claims they can employ are purely optional for the Pentagon when push comes to shove.

But this made me chuckle:

>U.S. intelligence agencies including the C.I.A., which uses Anthropic’s A.I. technology,

...just because it has amusing parallels with those stories about how the Pentagon was supporting the Kurds while the C.I.A. was propping up ISIS in Syria. Now we have their AI agents fighting it out in the field to look forward to in the future!

With Anthropic it's just the rest of the world that is fucked, with OpenAI it's the whole world including the US that is fucked. How relieving.

