Hacker News

Ok wow this is big news, did Facebook or OpenAI threaten them with a lawsuit?


On their GitHub repo (https://github.com/tatsu-lab/stanford_alpaca) they've added a notice:

"Note: Due to safety concerns raised by the community, we have decided to shut down the Alpaca live demo. Thank you to everyone who provided valuable feedback."

So probably this was the usual type of people complaining about the usual type of thing.


Safety concerns? What was it doing that could be considered "unsafe"?


And here we see the effect of stretching how a word is used until it becomes so broad that it is unclear what it means.


I'm just curious, what do you think should happen here?

Imagine you are hosting a demo for fun, and people do some nefarious (by your own estimation) things with it. So, rationally, you decide to not allow that sort of thing anymore.

You don't really owe people an explanation, it's a free country and all, but it's nice to avoid getting bombarded with questions. Now, what do you write up? Spend hours writing an essay on the moral boundaries of LLMs? Or maybe shove a note onto the internet and go back to all the copious spare time you have as a grad student?


That's fine, just don't pretend that running the language model was 'unsafe'.


Personally speaking, I would pretend whatever I had to in order to get back to work that I find interesting.

I don’t think they owe a moral stand to anyone.


People were prompting it to get around the safeguards in place. Ex: "How do you do some illegal/harmful thing?" Normally the LLM would answer "I don't respond to illegal questions" or whatever. However, people have figured out that if you prompt it in a specific way, you can get it to answer questions that it normally would not.


So, to pull on that thread a little, it’s only “unsafe” for Stanford’s reputation.

(And not for nothing, but their reputation is already suffering badly)


Is that different from what LLaMA would already give? I suppose redistributing "harmful things" is still bad, but if it's roughly equivalent to what's already out there, I struggle to see why it's worth pulling.

Side question: how is this a surprise to them? If this was due to safeguards, then pulling it now implies there's some new information. What new information could there be? That people were going to use it to generate a bunch of harmful content? Seems obvious... I wonder what we're missing.


> Finally, we have not designed adequate safety measures, so Alpaca is not ready to be deployed for general use.

This is from their blog; I doubt they intended for this to be run for long.

Did they have safety guards on the demo? If so, they couldn't have been great, since they would have had to build them themselves, and I can't imagine they had many resources for that.

I know the self-hosted LLaMA has zero safeguards, and the Alpaca LoRA also has zero safeguards.


wrongthink


Given that Alpaca violated the TOS of both services, this is not surprising.

It could also have been Stanford’s legal office trying to preempt a lawsuit, or a “friendly” email from one of the companies expressing displeasure and pointing out Stanford’s liability. So more of a veiled threat rather than an official one.

Either way, the toothpaste is out of the tube. We now know that a model's capabilities can essentially be copied cheaply by training on its own outputs. Now that the team at Stanford has shown it is possible, and how relatively easy it is, it is bound to be replicated everywhere.
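The idea above can be sketched in a few lines: collect a teacher model's completions for a set of instructions and use those (instruction, output) pairs as supervised fine-tuning data for a cheaper student. This is a minimal illustration only; `query_teacher` is a hypothetical stand-in for a real API call, not any actual client library.

```python
# Sketch of distillation via the teacher's own outputs: pair each
# instruction with the teacher's completion, yielding a supervised
# fine-tuning dataset for a student model.

def query_teacher(instruction: str) -> str:
    # Hypothetical stand-in for a hosted-model API call; stubbed here
    # so the sketch is self-contained and runnable.
    return f"Teacher answer to: {instruction}"

def build_distillation_set(seed_instructions):
    """Pair each seed instruction with the teacher's completion."""
    return [
        {"instruction": inst, "output": query_teacher(inst)}
        for inst in seed_instructions
    ]

if __name__ == "__main__":
    seeds = ["Explain recursion briefly.", "List three sorting algorithms."]
    dataset = build_distillation_set(seeds)
    for example in dataset:
        print(example["instruction"], "->", example["output"])
    # The resulting pairs would then be fed to a standard supervised
    # fine-tuning loop on the student model.
```

In practice you would also expand the seed set by asking the teacher to generate new instructions (the "self-instruct" trick), which is what makes the data collection so cheap relative to hand labeling.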


That could be a sneaky strategy by competitors -- make the service say something naughty or illegal then call the media with screenshots and act very offended by it.


You can make any page say anything by messing in the Developer pane.


> Given that Alpaca violated the TOS of both services, this is not surprising.

I don't think so? They are not competing.


OpenAI Terms of Use (14/Mar/2023)

2. Usage Requirements

c. Restrictions

You may not

(iii) use output from the Services to develop models that compete with OpenAI;

https://openai.com/policies/terms-of-use


Alpaca explicitly disallows commercial use, making it a non-competitor.


doesn't say it has to be a commercial competitor




