> That big models can't fit onto most personal devices at present is indeed likely to be a temporary state of affairs; however, it does also mean that the political aspect is very different — the senators and members of parliament can't look at the thing and notice that, even if the symbols are arcane and mysterious beyond their education, it's fundamentally quite short and simple.

You can write the RSA algorithm on a t-shirt but in practice you need a computer to run it.
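
For a sense of just how short, here's a toy sketch in Python (textbook RSA with throwaway numbers, nothing you'd actually deploy):

    # Toy textbook RSA -- the whole algorithm is a few lines.
    # Illustrative numbers only; real RSA needs large random primes and padding.
    n, e, d = 3233, 17, 413           # n = 61*53; e*d = 1 (mod lcm(60, 52))
    encrypt = lambda m: pow(m, e, n)  # c = m^e mod n
    decrypt = lambda c: pow(c, d, n)  # m = c^d mod n
    assert decrypt(encrypt(42)) == 42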

Likewise, you can fit llama or grok on a thumb drive and carry it around in your pocket, but in practice you need a computer to run it.
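
Back-of-envelope, assuming a 70B-parameter model quantised to 4 bits per weight (figures are illustrative, not any particular release):

    # Rough storage for a 70B-parameter model at 4-bit quantisation
    # (assumed figures; actual sizes vary by model and file format)
    params = 70e9                     # number of weights
    size_bytes = params * 0.5         # 4 bits = half a byte per weight
    print(f"{size_bytes / 2**30:.0f} GiB")  # ~33 GiB: fits on a cheap thumb drive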

> And, of course, if they're planning legislation, they can easily say "no phones with ${feature} > ${threshold}"

But what good is that?

If they're actually trying to accomplish an outcome, those kinds of rules are completely useless. If all they're trying to do is check the "we did something about it" box, there are a hundred other useless things they could do that would be equally ineffective but still check the box and cause less collateral damage.

> This seems a surprising claim, given the popularity of cloud compute, Cloudflare, third-party tracking cookies, Dropbox, Slack/MS Teams, Gmail, and web apps run by third parties such as Google Docs, JIRA, Miro, etc.

Notice how these are the kinds of things that major companies with trade secrets to protect have explicitly banned, and that individual consumers with no bargaining power suffer while complaining about them?

> They've been saying this "malarkey" since before they had products to be marketed.

Eccentrics have been saying this "malarkey" for a long time. It becomes the official position of the market leader when it is convenient.




> You can write the RSA algorithm on a t-shirt but in practice you need a computer to run it.

Pretend you're a politician: the thing you've been told is a state secret is written onto a T-shirt by a protestor telling you it isn't secret, that everyone already knows what it is, and that you're only holding back the value it could unlock. You, the politician, might feel a bit silly insisting this needs to remain secret, even if you think all the 'potential value' talk is sci-fi because you can't imagine someone doing their banking on the computer when there's a perfectly friendly teller in the bank's local office.

If the same discussion happens with a magic blob on the magic glass hand rectangle which connects you to the world's information and which is still called a "telephone", you might well mischaracterise the file that's actually on the protestor's device as being "somewhere else", declare that "we need to stop those naughty people providing you access to this", and never feel silly about your mistake.

> But what good is that?

"Then the people can't run the 'dangerous' models on their phones. Job done, let's get crumpets and tea!" — or insert non-UK metaphor of your choice here; again, I'm inviting you to role-play as a politician rather than to simulate the entire game-theoretic space of the whole planet.

> cause less collateral damage

I made that argument directly to my local MP about the Investigatory Powers Act 2016 when it was still being debated; my argument fell on deaf ears even with actual crypto, and it's definitely going to fall on deaf ears when it's only potential collateral damage for a tech that's not yet even widely distributed (just widely available).

> Notice how these are the kinds of things that major companies with trade secrets to protect have explicitly banned, and individual consumers with no bargaining power suffer while complaining about it?

No.

Rather the opposite, in fact: each is used by major companies.

> Eccentrics have been saying this "malarkey" for a long time. It becomes the official position of the market leader when it is convenient.

It was the position of OpenAI with GPT-2, which predates their public API by 16 months and their status as "market leader" by 3 years 9 months:

February 14, 2019: "Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper." - https://openai.com/research/better-language-models

June 11, 2020:

"""What specifically will OpenAI do about misuse of the API, given what you’ve previously said about GPT-2?

With GPT-2, one of our key concerns was malicious use of the model (e.g., for disinformation), which is difficult to prevent once a model is open sourced. […]

We terminate API access for use cases that are found to cause (or are intended to cause) physical, emotional, or psychological harm to people, including but not limited to harassment, intentional deception, radicalization, astroturfing, or spam, as well as applications that have insufficient guardrails to limit misuse by end users. As we gain more experience operating the API in practice, we will continually refine the categories of use we are able to support, both to broaden the range of applications we can support, and to create finer-grained categories for those we have misuse concerns about.""" - https://openai.com/blog/openai-api

People called them names for this at the time, too, calling their fears ridiculous and unfounded. But then their eccentricities led to them successfully tripping over a money printer, and now loads of people interpret everything they do in the worst, most conspiratorial way possible.



