That's when you reach out to your insurer and ask them their requirements under the policy, and/or whether there are any contractual obligations associated with those requirements that might touch indemnity/SLAs. If there are, then it is critical; if not, then it's the classic conversation of cost vs. risk mitigation/tolerance.


That is the thing about these conversations: the issue is potentiality. It comes back to Amara's Law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” It's the same as nuclear energy in the 1950s: people imagined what could be without realizing those potentials weren't possible given the technology's limitations, and it is the failure to face those limitations realistically that hampers growth, and thus development, in the long term.

Sadly, there is way, way, way too much money in AGI, and the promise of AGI, for people to actually take a step back and understand the implications of what they are doing in the short, medium, or long term.


What was underestimated in the long term with nuclear power? I like nuclear power but I don't see what long-term effects were underestimated by people in the 50s.


I guess an example would be the short-term vision of "a pocket nuclear reactor in every car powering our commute to work" vs. the long-term change of "nuclear power powering vast datacenters that do most of the work for us."


They just are not going to provide insurance to companies who use AI, because the liability costs are not worth it to them when they cannot actually calculate the risks; it is already happening [0]. It's the one thing that a lot of the evangelists for building entire products on AI have come to realize, or else they aren't actually dealing with B2B scenarios where indemnity comes into play. That, or they are lying to insurance companies and their customers, which is a... choice.

[0] https://futurism.com/future-society/insurance-cyber-risk-ai


It's not a device/MTA issue; SMTP just is not a secure protocol, and there is not much you can do to 'secure' human communication. Things like spoofing or social engineering are near impossible to address within SMTP itself without external systems doing some sort of analysis on the messages, or in combination with other protocols like DNS (SPF, DKIM, and DMARC all live there).
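
To make that concrete, here is a minimal sketch (assuming the third-party dnspython package) of the DNS side of it: the SPF and DMARC policy records a receiving MTA consults before trusting a claimed sender. The domain is just a placeholder:

    # Sketch: fetch the SPF and DMARC TXT records a receiving MTA
    # would consult when judging a message's claimed sender.
    # Assumes dnspython is installed (pip install dnspython).
    import dns.resolver

    def txt_records(name: str) -> list[str]:
        try:
            answers = dns.resolver.resolve(name, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []
        return [b"".join(r.strings).decode() for r in answers]

    domain = "example.com"  # placeholder
    spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in txt_records("_dmarc." + domain) if r.startswith("v=DMARC1")]
    print("SPF:", spf or "none published")
    print("DMARC:", dmarc or "none published")

None of that lives in SMTP itself, which is the point: the protocol will carry whatever From header it is handed.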


SMTP isn't at fault; the social ecosystem is. Every system where identities are cheap has a spam problem. If you think a system has cheap identities and no spam, it probably doesn't have cheap identities; HN and Reddit are examples.


Sigh... this is real life, and I hate it as an American. Denmark lost over 50 [1] Danish lives in the NATO missions in Afghanistan and Iraq, and this is how we pay the Danes back after they had America's back and paid in blood.

It's so disappointing and tragic.

[1] https://www.bbc.com/news/articles/crmjewpkje9o


Danes are putting up a courteous face right now to get through this, but the relationship with the US is permanently harmed. Even the most pro-US politicians are saying the relationship will never go back to what it was before this.


Trump has no respect for anything. He even derided US veterans. I have no idea how any patriotic person can support him.

https://www.theatlantic.com/politics/archive/2020/09/trump-a...

This was in 2020, and still some people who allegedly want to make America great again voted for him.


But you can find that information without an LLM? Also, why do you trust an LLM to give it to you versus all of the other ways of getting the same information, ways with higher trust and better means of communicating the desired outcome, like screenshots?

Why are we assuming that, just because the prompt responds, it is providing proper outputs? That level of trust is an attack surface in and of itself.


> But you can find that information without an LLM?

Do you have the same opinion if Google chooses to delist any website describing how to run apps as root on Android from their search results? If not, how is that different from lobotomizing their LLMs in this way? Many people use LLMs as a search engine these days.

> Why are we assuming that, just because the prompt responds, it is providing proper outputs?

"Trust but verify." It’s often easier to verify that something the LLM spit out makes sense (and iteratively improve it when not), than to do the same things in traditional ways. Not always mind you, but often. That’s the whole selling point of LLMs.


That's not the issue at hand here.


Yes, yes it is.


The issue is the computer not doing what I asked.


I tried to get VLC to open up a PDF and it didn't do as I asked. Should I cry censorship at the VLC devs, or should I accept that all software only does as a user asks insofar as the developers allow it?


If VLC refused to open an MP4 because it contained violent imagery I would absolutely cry censorship.


And if VLC put in its TOS it won't open an MP4 with violent imagery, crying censorship would be a bit silly.


>So how does that tie in? Try and use any of these tools to make a parody about Trump blowing Bubba. It won't let you do it out of concern for libel and because gay sex is distasteful. Try and make content about Epstein's island. It won't do it because it thinks you're making CSAM. We're living in exactly the time these tools are most needed.

You don't need an LLM to accomplish this task. Offloading it to an LLM is part of the problem: it can be reasonably accepted that this is well within the bounds of human creativity (see, for example, SNL last night), and human beings are very capable of accomplishing it outside of technology, where there is less chance for oversight, tracking, and attribution.

Offloading key human tasks to LLMs or gen AI widens the opening for governments or third-party entities to gain insight into protected speech, regardless of whether the monitoring happens at the level where the LLM is running. This is why offloading this type of speech to LLMs is just dumb. Going through the process of writing satire on a piece of paper and then communicating it carries none of those risks. Pushing that work into a medium where there is always going to be more surveillance carries its own risks of speech being monitored and suppressed.

>When you lose words, you lose the ability to express concepts and you lose the ability to think about that concept beyond vague intuition.

Using LLMs does this very thing inherently: one is offloading the entire creative process to a machine, which does more to atrophy creativity than any refusal by the machine to respond to a prompt. You are going to the machine because you are unable or unwilling to do the creative work in the first place.


Qwen3-VL-8B is quite amazing and has made translating screenshots of text so much easier via LM Studio. It does have some issues and edge cases, but being able to take a screenshot of text, or of a website that doesn't have copyable text, and translate it is a solvable problem in less than 16GB of VRAM or unified memory.
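
For anyone who wants to try it, here is a minimal sketch of the workflow, assuming LM Studio's local server is running on its default port with the openai Python package installed; the model id and file path are placeholders, so use whatever LM Studio lists for your download:

    # Sketch: send a screenshot to a local VLM through LM Studio's
    # OpenAI-compatible endpoint and ask for a translation.
    import base64
    from openai import OpenAI

    # LM Studio ignores the API key, but the client requires one.
    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

    with open("screenshot.png", "rb") as f:  # placeholder path
        img = base64.b64encode(f.read()).decode()

    resp = client.chat.completions.create(
        model="qwen3-vl-8b",  # placeholder id; check LM Studio's model list
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Transcribe the text in this image, then translate it to English."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{img}"}},
            ],
        }],
    )
    print(resp.choices[0].message.content)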


Please look into the Zero Data Retention policies of the subprocessors that you are using. For example, OpenAI does not include files as falling under their ZDR [1], so using OAI as an LLM solution inherently adds an unnecessary risk of data exposure that many enterprise clients do not want to onboard. You also have to think about those companies' obligations to their clients/customers when it comes to data security, along with the risk of IP being exposed to 3rd-party systems they do not control, when they decide on business solutions.

[1] https://platform.openai.com/docs/guides/your-data#zero-data-...


This is just ignorant of how Congress has dropped the ball on funding a functional immigration system that provides a speedy process. It is almost as if there is an economic interest in keeping labor cheap via undocumented workers, one that these raids do nothing to alleviate, while they increase costs and undermine the US economy.

