LLMs – I won't be fooled again (nortrup.dev)
7 points by mooreds on May 1, 2023 | 6 comments



TL;DR:

The overall sentiment of the post is skeptical and critical of the promises made by new technologies, particularly Large Language Models (LLMs). The author recounts their experiences with various technologies that began with optimistic potential, such as GeoCities, social media platforms, and cryptocurrencies, but eventually resulted in the exploitation of user data, privacy breaches, and negative consequences. The author expresses concern that LLMs will follow the same trajectory, using people's data for corporate gains, and potentially leading to negative outcomes such as misinformation, spam, and phishing. They argue that the proposed benefits of these models are not worth the risks and potential drawbacks that come with giving up control over one's data and content.


I agree with the author, and would more specifically lean into the risk of LLMs continuing the dark-pattern algorithms replete throughout social media platforms. Some people are still being manipulated by the simple algorithms, but I can envision certain groups frothing at the mouth to utilize something more powerful and believable that can mimic human characteristics. Given the lack of auditable transactions, companies will just point at the LLM instead of accepting responsibility when things go sideways. I think the legal fallout will be entertaining at the very least.

To be clear, I am not saying LLMs should not be utilized. Rather, they are just not mature, nor are they production ready. By production ready I mean they need a "debug last transaction" capability that gives anyone all data sources, the algorithm used to reach the conclusion, and any human tuning involved. I fully expect future legal cases to demand this capability and that it be available to everyone.
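
As a rough illustration (not any real API; names like AuditRecord and log_transaction are hypothetical), such a "debug last transaction" record might capture the model version, the retrieved data sources, the sampling parameters, and any human tuning, then seal them with a hash so later tampering is detectable:

    # Hypothetical audit record for one LLM "transaction" -- illustrative only
    import json, hashlib, datetime
    from dataclasses import dataclass, asdict

    @dataclass
    class AuditRecord:
        timestamp: str           # when the request was served
        model_id: str            # exact model / weights version
        prompt: str              # what the user actually sent
        data_sources: list       # documents or URLs retrieved for context
        sampling_params: dict    # temperature, top_p, etc.
        human_tuning_notes: str  # e.g. system-prompt or fine-tuning revisions applied
        output: str = ""
        record_hash: str = ""    # tamper evidence: hash over all other fields

    def log_transaction(record: AuditRecord) -> AuditRecord:
        """Seal the record with a hash so later edits are detectable."""
        payload = json.dumps({**asdict(record), "record_hash": ""}, sort_keys=True)
        record.record_hash = hashlib.sha256(payload.encode()).hexdigest()
        return record

    rec = log_transaction(AuditRecord(
        timestamp=datetime.datetime.utcnow().isoformat(),
        model_id="example-llm-v1",
        prompt="Why was my loan application denied?",
        data_sources=["crm://customer/123", "policy://lending/2023-04"],
        sampling_params={"temperature": 0.2},
        human_tuning_notes="system prompt rev 7",
        output="...",
    ))
    print(rec.record_hash)  # an auditor can recompute this to verify the record

Nothing fancy, but it is the sort of artifact a court could subpoena and an auditor could verify.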

It's just getting started [1]; I expect these cases to evolve rapidly.

[1] - https://www.theregister.com/2023/05/01/eu_ai_act_adds_new/


It is not correct to say that people are being manipulated by algorithms.

People are being manipulated by people. Those people consciously chose to mislead and manipulate others. They do it with LLMs just like they used to with handbills and propaganda publications.

It's foolish to blame a technology when people are responsible for their own actions.


What may be lacking is full tamper-proof auditing and full transparency of all tools, so we can see who was in the driver's seat: who armed the LLM with what data, and which algorithm was used and tuned to manipulate the masses.

Anyone today could, for example, buy a portable high-powered UV or IR laser. A malicious person could point said tool into a crowd of people and permanently blind them. We need to see who was holding the laser.


That's not what you asked. You asked to see how a laser is made, which is irrelevant. As you said, the person holding the weapon is fully responsible -- not the device itself.


That's because exploitation and misuse are classic characteristics of people. We do that with everyone and everything.

It's too late to put the tech genie back in the bottle. We won't be giving up LLMs, just as we didn't give up the internet when it was invented. Your summary (and thank you) sounds like the author is making standard Luddite [0] complaints.

[0] https://en.wikipedia.org/wiki/Luddite



