
Remember when the world freaked out over encryption, thinking every coded message was a digital skeleton key to anarchy? Yeah, the 90s were wild with the whole PGP (Pretty Good Privacy) encryption fight. The government basically treated encryption like it was some kind of wizardry that only "good guys" should have. Fast forward to today, and it's like we're stuck on repeat with open model weights.

Just like code was the battleground back then, open model weights are the new frontier. Think about it—code is just a bunch of instructions, right? Well, model weights are pretty much the same; they're the brains behind AI, telling it how to think and learn. Saying "nah, you can't share those" is like trying to put a genie back in its bottle after it's shown you it can grant wishes.

The whole deal with PGP was about privacy, sending messages without worrying about prying eyes. Fast forward, and model weights are about sharing knowledge, making AI smarter and more accessible. Blocking that flow of information? It's like telling scientists they can't share their research because someone, somewhere, might do something bad with it.

Code lets us communicate with machines; model weights let machines learn from us. Both are about building and sharing knowledge. When the government tried to control encryption, it wasn't just about keeping secrets; it was about who gets to have a voice and who gets to listen. With open model weights, we're talking about who gets to learn and who gets to teach.

Banning or restricting access to model weights feels eerily similar to those encryption wars. It's a move that says, "We're not sure we trust you with this power." But just like with code, the answer isn't locking it away. It's about education, responsible use, and embracing the potential for good.

Innovation thrives on openness. Whether it's the lines of code that secure our digital lives or the model weights that could revolutionize AI, putting up walls only slows us down. We've been down this road before. Let's not make the same mistake of thinking we can control innovation by restricting access.




The fight against encryption continues to this day, and while HTTPS is now ubiquitous, large-scale CDNs make it somewhat of a moot point, and emails are still largely plaintext.


> emails are still largely plaintext

But people's private digital communications have largely moved to platforms like WhatsApp and Messenger which enjoy end-to-end encryption. Email, at least between major providers, today enjoys TLS over the wire while being sent.
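
For what it's worth, that hop-by-hop TLS is easy to observe. A minimal sketch (my own illustration, not anything from the thread; gmail-smtp-in.l.google.com is Gmail's public MX, and note that outbound port 25 is blocked on many consumer networks):

    import smtplib

    # Ask a mail server whether it advertises STARTTLS, i.e. whether
    # mail delivered to it can be encrypted over the wire in transit.
    with smtplib.SMTP("gmail-smtp-in.l.google.com", 25, timeout=10) as smtp:
        smtp.ehlo()
        print(smtp.has_extn("starttls"))  # True for the major providers
        smtp.starttls()                   # upgrade the session to TLS
        smtp.ehlo()                       # re-greet over the encrypted channel

Of course this only protects the hop between servers; unlike WhatsApp-style end-to-end encryption, the providers themselves can still read the mail.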

I'm sure there are various flaws and weaknesses and maybe even backdoors, but trying to make it sound like we lost the fight for encryption because emails are in plaintext is rather disingenuous.


I'm not sure Facebook and a now Facebook-owned platform are good examples of private communications. There was an article posted here a week or two ago detailing how Facebook sold advertisers access to the contents of users' private messages.


It represents a step forward from the 90s for the vast majority of people. E2E in Messenger and WhatsApp is still painful for LEO.

The article last week (assuming you're referring to this [1]) involved users consenting for Netflix to see their messages. A user from the 90s could have made the same mistake sharing plaintext emails.

[1] https://news.ycombinator.com/item?id=39858850


> like WhatsApp and Messenger which enjoy end-to-end encryption

They aren't open source. For all we know they have backdoors.


If you think Facebook is providing you private communications, you might want to rethink your operational security.


I share your concerns and think you're broadly correct. I think it's worth adding some nuance though.

When you drill into specifics there are almost always exceptions. For instance, in your example about sharing research, there are certainly some types of research we shouldn't automatically and immediately make publicly available, like biological superviruses or software vulnerabilities.

I think the same can be said about AI. We should aim to be as open as we can but I'd be hesitant about being an open source absolutist.


No, absolutely not.

To assume that there even are "good guys" that can "do it safely" is INSANE.

Any attempt to prevent democratizing AI is just a capitalist ploy to create a monopoly over their market. There is no safety play here.

The FED needs to stay the frik away from this.


I agree we want to democratize AI, and we should be very, very wary of powerful people trying to get a monopoly on AI.

But I'm not ready to say absolutely everything should immediately be shared with everyone. At least not until it's clear we know what we're dealing with.

I know it can be hard to trust people, but the fact is we have to. Even today, there are many people with the power to end the world (nuclear weapons, viruses, etc.), but we trust the people who have these capabilities not to abuse their power. We do this because we don't want just anyone to have the right to launch nuclear weapons. And I think that's wise.

I definitely don't know for sure but AI may be another one of these technologies.

Either way, you can fight against regulatory capture, the downsides of capitalism etc without being an open source absolutist.


Weights are derivative works, and as such should follow the licensing of the original works. If those works are public domain or were appropriately licensed, distributing the weights openly should be protected as free speech.

And let’s not anthropomorphize ML. Models don’t “think” or “learn”. The only party with free will and agency is whoever makes or operates them; trying to paint an unthinking tool as a human is just a means of waiving responsibility for them.


"Derivative works" is a dodgy legal concept in the first place, because every work can be construed to be derivative.

In practice it just gives lawyers and judges a lot of leeway to pretend that they're being consistent when they aren't.


The concept that you call “dodgy” is actually fundamental to copyright. If you remove the idea of derivative works, the entire system that encourages innovation by protecting intellectual property (yes, the same system that gave us computing and ML) falls apart.


The one that encourages innovation by allowing copyrighted works to stay copyrighted for 100+ years? That's a long time to wait to build on something!


We’ve been building on top of each other’s work in the open-source software ecosystem and in science this entire time.

Also, there are licensing models that specifically allow derivative works.


Unfortunately nobody told Marvel, Disney, and the Tolkien estate about it, so we'll have to wait until we're all dead before being able to build on top of the cultural works that defined our childhoods.


Don’t worry, you can always be inspired by them and make something of your own.


"Something" sure. But I can't make something set in Middle Earth (I can't even say the word hobbit, ask D&D how that went), I can't write and sell my own Spiderman comics, I'd have a hard time writing my own Cinderella story without Disney breathing down my neck.

This is a very different reality from the "We’ve been building on top of each other’s work in the open-source software ecosystem and in science this entire time".


You are trying to pass off some arbitrary thing as “building on top of”.

I can use all the benefits of Linux to deliver a SaaS (like 99% of us do), I just can’t call my SaaS “Linux Something”.

You can use all the ideas Tolkien used in your own creative work. You just can’t call them things he called them for now.

> I can't make something set in Middle Earth (I can't even say the word hobbit, ask D&D how that went), I can't write and sell my own Spiderman comics, I'd have a hard time writing my own Cinderella story without Disney breathing down my neck

There is no creative reason to do those things. The only reason is to commercially profit by piggybacking off the big names. Copyright works as intended.


> for now.

for now, the last 80 years, and until easily 2040.

> I can use all the benefits of Linux to deliver a SaaS

> You can use all the ideas Tolkien used in your own creative work. You just can’t call them things he called

That's quite a bait and switch. You don't have to rewrite a linux kernel, or gcc, or whatever language you wish to use in order to make your SaaS. You aren't limited to using the "idea" of a linked list, a hashmap, or HTTP and forced to reimplement it yourself from scratch. But that's exactly what you're proposing for literature.

I can't build on the idea of Spiderman by making ArachnidMan, a crimefighting superhero who got pinched by an electromagnetic spider, without wondering "Is today the day Marvel sues me into the ground?". And I absolutely cannot write my own Spiderman comics.

> There is no creative reason to do those things. The only reason is to commercially profit by piggybacking off the big names. Copyright works as intended.

In the US at least, copyright comes from a clause which states that laws can be passed "to promote the Progress of Science and useful Arts". The only thing the current system is promoting is the concentration of copyright in large corporations (see music labels) and the locking of people out of the cultural artifacts that define their lives (Disney gets to take public domain stories, put a slight twist on them, and copyright them for 100+ years).

What has no creative justification is the way the system is currently set up. All of that exists for blatant money- and power-grab reasons.


> for now, the last 80 years, and until easily 2040.

Let it be until the end of the universe, please. What kind of creative are you anyway if you don’t want to do your own world-building?

> You don't have to rewrite a linux kernel, or gcc, or whatever language you wish to use in order to make your SaaS. You aren't limited to using the "idea" of a linked list, a hashmap, or HTTP and forced to reimplement it yourself from scratch.

That was exactly to show you how copyright doesn’t prevent us from building on top of things, but instead fuels innovation. IP protection is how one guy can say “you must always provide your source if you use my work” and have it quickly grow into a massive ecosystem on which most of today’s Internet still runs.

Open-source licensing can only exist thanks to copyright and the idea of derivative works in particular.

1. In order to license something, you have to be able to enforce it legally, and for that you have to hold the rights to the work, which is what copyright provides.

2. In order to encourage others to use your library but also ensure they contribute back if they make changes, you need to talk about the concept of a work that is based on your work, and that is—drumroll—a derivative work.

> "to promote the Progress of Science and useful Arts"

If you are saying that allowing any writer to be able to take the world someone else built and profit off its fame with minimal modifications is somehow promoting progress of the arts, then I don’t know what to say to you.


> What kind of creative are you anyway if you don’t want to do your own world-building?

A no true creative.

What kind of creative needs lifetime + 70 years of monopoly on an idea?

> If you are saying that allowing any writer to be able to take the world someone else built and profit off its fame with minimal modifications is somehow promoting progress of the arts, then I don’t know what to say to you.

I am not. I am saying that the current way it's implemented is broken. Tolkien has earned more than enough money off of LOTR; it's far past time for it to enter the public domain and join the stories that he built his work on.

Many of Disney's earlier movies are retellings of folktales. Tolkien, in addition to creating parts of his world, built on top of existing folktales. Neither would be where they are now if someone had been able to impose on them the kind of restrictions they themselves now impose on us.


I don’t know whether Tolkien’s estate should or should not keep profiting from his work, but I don’t see why this should be forbidden. If anything, we probably have so much great work coming out because people want to repeat the success of great masters and achieve fame and wealth that are possible thanks to IP protections.

Make a cool world and license its use semi-openly, be the change you want to see? Just keep in mind that if your world is openly licensed, it will quickly go out of your control and likely in a direction you (or your family, when you’re dead) may find repulsive. Also, I think you would have to be comfortable that if the next guy writes something based on your hard work and gets a Netflix deal tomorrow you may find it difficult to do something about it.

> Many of Disney's earlier movies are retellings of folktales. Tolkien, in addition to creating parts of his world, built on top of existing folktales. Neither would be where they are now if someone had been able to impose on them the kind of restrictions they themselves now impose on us.

Anything you say, write, do, etc., is technically based on everything done before you. So now we should eschew the idea of intellectual property and the progress it brought. Got it!


> I don’t know whether Tolkien’s estate should or should not keep profiting from his work, but I don’t see why this should be forbidden.

Because you and I are arguing, or at least I am arguing, about the duration over which they should keep profiting off of it. I claim it is self-evident that a copyright that never expires and is transferable is bad (companies would end up holding monopoly rights to every idea; see the concentration of money).

And I further argue that the current length of copyright is also problematic. Life of the author + 70 years is a ridiculously long time for a monopoly. It's gotten to that length not because starving artists need it, but precisely because unimaginative large corporations found it easier to extend the copyright on their existing IP than to imagine something new.

> Make a cool world and license its use semi-openly, be the change you want to see?

Even if you choose not to because you wish to monetize your work, that's fine by me. But lifetime + 70 years is too long.

My thought is that we should make it non-transferable: no more having authors and musicians sign their copyright away for a pittance. And it should be capped at something like 30 years or 20 million dollars, whichever comes first.

Gives people time to monetize their idea, and if 30 years wasn't enough time, no amount of time will be enough, because nobody's looking at your work. And the number of works that earn over 20 million dollars is a rounding error; you get to be rich, and then the rest of us get to remix your work.

> Just keep in mind that if your world is openly licensed, it will quickly go out of your control and likely in a direction you (or your family, when you’re dead) may find repulsive. Also, I think you would have to be comfortable that if the next guy writes something based on your hard work and gets a Netflix deal tomorrow you may find it difficult to do something about it.

Both of these I find acceptable. Firstly, it's my work only in the sense that I was the one who made it, not in the sense that it belongs to me. Copyright doesn't mean that I own the idea; it only gives me a monopoly on monetizing and distributing it.

Secondly, this is exactly how open source projects work. The parallel you drew earlier to open source comes back here: you have no control over what people make or do with your open-source-licensed project; all you can enforce are requirements about sharing the code (this quickly becomes a semantic discussion).

The GPLv3 and AGPLv3 licenses, which people derogatorily label as "viral", do not even attempt to limit you from making money off the code; all they do is require you to let everyone else access the changes you make to it and the code you build on top of it.


The profit cap is an interesting idea in theory; maybe it would discourage some people who go into it hoping to make infinite money, but maybe that is for the better.

Regarding software, true, we can’t limit what people build with it if we open-source it; we sort of just do not think about it (I wonder for how many people it is secretly an issue).

I think it is similar in art, however. Fanfics are generally tolerated, unless you do something the author specifically dislikes or try to compete with the original. In Japan I think it is also legal to sell them (see dōjinshi) unless the copyright holder specifically complains (which they do not tend to). But using Middle Earth in a commercial ad for a car, for example, is insta-lawsuit.


> maybe it would discourage some people who go into it hoping to make infinite money

That's totally fine by me; the point of copyright should be to advance the arts and sciences, not to help concentrate wealth.

> I wonder for how many people it is an issue secretly

I think quite a few. See, for example, ElasticSearch and Amazon. If you want a truly open-source license, then you have to accept that. Otherwise there's a whole spectrum of "source available" licenses, or something like CC BY-NC-SA.

Though that's another difference between works of art and source code: releasing my book doesn't impose a maintenance burden on me; an open source project usually does.

> using Middle Earth in a commercial ad for a car

You don't even have to go as far as a car; see [0]. They didn't get sued, but it's safe to say that they would be the moment they tried selling it.

As for the other aspect of fanfic, I think copyright holders' desires should be weighed less. I personally disliked many of the Star Wars movies, particularly the latest trilogy, and I'm not alone in this.

A) I know for a fact that better Star Wars movies would be possible if Disney did not hold a death grip on the IP, by the simple fact that other good movies can be made; there's nothing intrinsic to Star Wars.

And B) just as copyright holders won't stand by if you make a fanfic they don't like, I shouldn't be forced to stand by and watch them butcher (yet again) the stories I grew up on AND be told that that's the only version available.

[0]: https://en.wikipedia.org/wiki/The_Last_Ringbearer


I think I should have been less pointed/sarcastic in my previous reply, sorry.


No worries, apology accepted.


You can if you have a well-connected lawyer.


No, you simply can. You just cannot piggyback off the actual names they used.


The reason copyright is so contentious and inherently evokes "dodgy" solutions is that it goes against the flow of nature.


Painting something you dislike as “against nature” is the lazy way out and not a real argument.

Humans are part of nature. Property rights are necessarily part of nature too, by extension. It can’t go “against” the flow of nature when it is the flow of nature.


Derivative works are only concerned with the licensing of the original work if they don't fall under fair use. It's really, really hard to argue that an open weight AI doesn't fall under fair use.

1. "the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;" -> an openly distributed model is not of commercial nature. Check one for fair use.

2. "the nature of the copyrighted work;" -> Harry Potter and The Sorcerer's Stone has very little in common nature with an artificial intelligence that produces arbitrary text based on a prompt. Check two for fair use.

3. "the amount and substantiality of the portion used in relation to the copyrighted work as a whole;" -> The amount (all of it) goes in favor of the original work, but the substatiality of it is definitely more ambiguous, as while if they had used none of the copyrighted works, the AI couldn't exist, any single original copyright holder's work contributed very little to the end result.

4. "the effect of the use upon the potential market for or value of the copyrighted work." -> No one's using AI to avoid having to buy a copy of Harry Potter. An AI and a book are not currently considered substitutable goods.

Basically, even if a model's weights constitute a derivative work of the training material's copyrighted content (that's another can of worms; there's a real possibility copyright doesn't protect or apply to weights at all), there's a very good argument to be made for fair use.


1. First, there are commercial models. But also, we could consider that if something specifically enables unfair use then that may deserve a look.

2. Weights are a derivative of another nature, but one that specifically enables mass production of derivatives of the same nature. Could be an important concept that simply didn’t exist until now.

3. That’s roughly my understanding, too.

4. Valid point on one hand, but consider that using LLMs is exactly substitutable for any non-fiction book or resource. If no one can sell a book, then who would write them?


> Weights is derivative work, and as such should follow the licensing of the original works.

No that's something you made up.


Property rights in general are a thing made up by people, we're just arguing over where to draw the line.


Ideally somewhere between "you're not allowed to take my things" and "you're not allowed to learn from what I tell you".


Not really - "arguing over where to draw the line" would be a discussion about how a hypothetical new law should treat model weights; however, assertions about whether they are a derivative work (or even "work" as such, in the sense of copyright law - noncreative transformations aren't copyrightable works, no matter how much "sweat of brow" or money they took) is a discussion about where the old copyright laws have drawn the line, and the "should's" don't matter there, it's about how the courts will interpret the precedent, and at the moment they haven't said that they are derived works, and while they could declare that, there are all kinds of arguments why they also might declare the opposite.


Imaginary Property Rights don't exist in Nature. A purely man-made construction.


Humans are part of nature, like trees and foxes. Property rights are necessarily part of nature too, by extension.


Off-topic but this user seems to be using ChatGPT or something similar for almost every single comment. Does Hacker News have a stance on this or is the thinking that it is allowed as long as the content is good?


There is indeed a stance; adding autogenerated comments can get you banned: https://news.ycombinator.com/item?id=33945628


I took a look at their profile and I’m not seeing anything that looks auto-generated to me.



If posts with multiple paragraphs encircling a topic seems suspect, then we're all guilty. Their points are cogent. If they've dressed the arguments in robes of "purple", so be it.


Maybe we need to up our game then. That could very well be mergekit and clever prompting. Their comment history bends in the direction of content engineering, and this will only get harder to discern the more accounts are doing it.


Please, these kinds of purple-prosed, thesaurus-laden ruminations smack of LLM augmentation. Two in the same thread, and neither contributing anything much more than a long-winded, flowery restatement of the existing state of the discussion:

https://news.ycombinator.com/item?id=39876685 https://news.ycombinator.com/item?id=39876634


It doesn’t take an LLM to write long-winded and flowery prose.

Nice new burn, though… “you write like an LLM”.

Is that what we want on these forums, attacking someone’s prose?


never heard "purple-prosed", what's that mean? like rose-tinted?


I'm thankful that I'm 40, and not in school anymore. My general writing style would get my work flagged as "AI-generated" more often than not.

I've run my blog posts through those tools in the past, and they pretty consistently fail to be considered human-generated.


At least according to https://gptzero.me/, the submissions are AI-generated.


It's important not to jump to conclusions about how others are participating in discussions, especially based on the content or style of their comments alone. Many users might have a consistent way of expressing themselves, or they may be leveraging various tools and resources to enhance their contributions.

The key focus should always be on the quality and relevance of the content shared.


This comment reads like a GPT-generated comment! When I look at your history, your other comments seem genuine. I’m probably totally wrong, but I do like the idea of using GPT to counter anti-GPT sentiment. :)


That doesn't only feel odd; it also is empty prose based on a far-fetched analogy. Apologies to the root commenter if they wrote it themselves, but quality and relevance are lacking here. I'm 99% sure some of their posts are at least heavily augmented.


I imagine AI is out there defending its right to exist where we don't pick up on its pose.


golf clap


I don't think this analogy is wholly applicable, simply given the scale and potential blast radius of certain classes of models. A more apt analogy would be nuclear technology. There are the Atomic Gardeners on the one hand, who believe in only the good and see the promise of the technology and people's intent, and then there is the bitter struggle for power and threat which hinges on it.

In most cases, ML models have the capability to revolutionise, or at least augment/optimise, problems in predictable ways. In extreme cases, deepfake technology and the like can erode the tenuous levels of trust which hold societies together. We have seen what happens when disinformation, mistrust, and even basic levels of technology meet: look at students being lynched in India, Pakistan, and elsewhere over WhatsApp group messages claiming blasphemy; or the practice of SWATting; or the insinuations of CSE gangs that led to a pizza parlour in the US being stormed by someone with a rifle.

The stakes here are a lot larger than pure technologists, today's Atomic Gardeners, may perceive.

In that context, legislation is trying to provide a counterweight to the pace of change to allow for some – any – breathing room, and particularly to prevent an increasingly hostile cast of nation states from weaponising that technology, even in small ways. For example, OSINT of North Korea's operations shows that the basic parts needed for making baby powder are also capable of being used for weapons development.


I think all the issues that you've listed are serious. They're the things that keep me up at night. But I don't think the content itself is really problematic (except maybe deepfakes); it's the untraceable nature of its sources. To me, it's anonymity that gives this content its power. If we knew that the content was posted by a specific REAL person/company/government, we could judge its authenticity and hold them liable if it's criminal in some way. I'm not saying there isn't a place for anonymity on the internet, but to me, that's the problem that needs to be solved, more so than AI.


imho, and you can call me an atomic gardener if you wish, nuclear tech is a bad example. If we had actually gone "all in" on nuclear to produce electricity, our and the world's cleaner energy footprint might have resulted in entire degrees Celsius less warming than is currently projected. Climate change is a global problem -- what stakes could possibly be higher than ecological collapse?

I actually cannot think of any example where successfully withholding technology has helped people thrive and prosper. People using this tech to sow distrust is outweighed, so obviously and so massively, by the benefit of having an intelligent machine companion under their complete control.

Every "weapons technology" in human history has been used for good. Every single one.


Encryption was about ensuring trust and privacy.

AI is about destroying trust (in the short term).

Give every script kiddie, bored teenager, Mexican cartel and scammer an AI that can mimic anyone's voice and likeness, and the world will get a lot messier.

I don't think they're the same. I wish we could put the genie back in the bottle. I think AI will make humanity less special.

I'm not convinced society will be better with AI. The benefits must push down cost of living for the masses, improve quality of life for the masses, all without destroying society with disinformation and shattering job loss.


I think LLMs have a considerably higher probability of making the world worse overall than cryptography does (the sheer volume of bullshit they can produce for pennies is going to transform our society, and I doubt it will be for the better). Still, I don't see the point of banning open-weight models and LLMs that don't have guardrails, and I'm not sure you can realistically construct laws that would do it accurately. The genie is out of the bottle, Pandora's box is opened, etc, etc. And locking down models with guardrails is only something that corporations have to do in order to avoid having a public racist-chatbot problem and the associated headlines.


Well, that was a healthy rant

Life, all biological life with us as a kind of pinnacle, is about to go through radical change.

There is no risk-free path. It isn’t guaranteed that a single human will be alive in 100 years, whether because we failed or even because, technologically, we succeeded.

But a degree of openness is necessary for our best ideas, our most good faith collaborations, to have a chance

It is more chaotic to trust each other, en masse. But I also think it is our best bet

The dice must be rolled. Best we throw them bold


> Just like code was the battleground back then, open model weights are the new frontier. Think about it—code is just a bunch of instructions, right? Well, model weights are pretty much the same; they're the brains behind AI, telling it how to think and learn. Saying "nah, you can't share those" is like trying to put a genie back in its bottle after it's shown you it can grant wishes.

I think telling a genie "I wish for no more wishes" is a common enough trope.

I'd agree that making weights available is basically irreversible; however making it illegal to make new sets of weights available is probably fairly achievable… at present.

-

Some of the issues with "winning" the battle for encryption include:

1) We also need it to defend normal people from attackers

2) It's simple enough to print onto a T-shirt

3) The developers recognised the value and wanted to share this

The differences with AI include:

1) The most capable models don't fit on most personal devices at present, let alone T-shirts

2) 95% of the advantages can be had from centralised systems without needing to distribute the models directly to everyone

3) A huge number of developers have signed an open letter which is basically screaming "please regulate us! We don't want to be in an arms race with each other to make this more capable! We don't know what we're doing or what risks this has!"


As long as everyone admits that number 3, “please regulate us”, is really just asking for regulatory capture and is in no way a good-faith move, but rather protectionism, then I’m good to proceed with these conversations.

These people have not just suddenly developed consciences. This is a game move.


Numbers 1 and 2 are the same thing though:

1) You can run a very large model on existing cheap hardware, with poor performance, but it will run. Also, "most personal devices at present" is doing all the work here. Obviously, if there is now demand for devices with hundreds of GB of VRAM, then sellers will soon make them and buyers will buy them. That amount of memory is not actually that expensive (see the back-of-the-envelope sketch after point 2). And it's presenting a false dichotomy where the only alternatives are companies the size of Microsoft and the ability to run GPT-4 locally on your existing iPhone. There are thousands of medium-sized companies that can afford five to six figures in hardware, and any individual could rent big hardware on any cloud provider -- requiring them to compete with each other -- instead of locking each model behind a single monopoly service.

2) Claiming that the benefit of people being able to run the models locally is only 5% is absurd. The privacy benefit of not sending your sensitive data to a third party is worth more than that by itself, much less setting up some huge institutions to have an oligopoly over the technology and subject everyone to all the abuses inherent in uncompetitive markets. But from the perspective of those institutions they want to classify those benefits as costs, so they try to come up with some malarkey about how they need to gatekeep for "safety", because "so they can control you" is unsympathetic.
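
To put rough numbers on point 1, here's a back-of-the-envelope sketch (the model sizes and precisions are illustrative assumptions of mine, and it counts only the weights, ignoring activations and KV cache):

    # Approximate memory needed just to hold a model's weights.
    def weight_gib(params_billions, bytes_per_param):
        return params_billions * 1e9 * bytes_per_param / 2**30

    for params in (7, 70, 300):  # assumed sizes, in billions of parameters
        for label, bpp in (("fp16", 2), ("int8", 1), ("int4", 0.5)):
            print(f"{params:>3}B @ {label}: {weight_gib(params, bpp):6.1f} GiB")

A 70B model needs roughly 130 GiB at fp16 but only about 33 GiB at int4, which is why "doesn't fit on personal devices" is a moving target rather than a hard wall.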


Ah, now I see my mistake; I gave a list of what I see as things of political importance without making it clear this is what I was doing.

That the source code for unbreakable encryption fits onto a T-shirt made it clear to the US government that it was a "speech" issue, and also made it clear that it was in any practical sense unstoppable.

That big models can't fit onto most personal devices at present is indeed likely to be a temporary state of affairs; however, it does also mean that the political aspect is very different — the senators and members of parliament can't look at the thing and notice that, even if the symbols are arcane and mysterious beyond their education, it's fundamentally quite short and simple.

And, of course, if they're planning legislation, they can easily say "no phones with ${feature} > ${threshold}", which they've already done on various compute hardware over the years, even if the thresholds seem quaint today: https://www.youtube.com/watch?v=l2ThMmgQdpE

Politicians don't care if all of us here think they look silly.

> 2) Claiming that the benefit of people being able to run the models locally is only 5% is absurd. The privacy benefit of not sending your sensitive data to a third party is worth more than that by itself, much less setting up some huge institutions to have an oligopoly over the technology and subject everyone to all the abuses inherent in uncompetitive markets.

This seems a surprising claim, given the popularity of cloud compute, Cloudflare, third-party tracking cookies, Dropbox, Slack/MS Teams, Gmail, and web apps run by third parties such as Google Docs, JIRA, Miro, etc.

There are certainly cases where people benefit from local models. The observation of all these non-local examples is part of why I think that benefit is about 5% — though perhaps this is merely sample bias on my part?

> But from the perspective of those institutions they want to classify those benefits as costs, so they try to come up with some malarkey about how they need to gatekeep for "safety", because "so they can control you" is unsympathetic.

They've been saying this "malarkey" since before they had products to be marketed.


> That big models can't fit onto most personal devices at present is indeed likely to be a temporary state of affairs; however, it does also mean that the political aspect is very different — the senators and members of parliament can't look at the thing and notice that, even if the symbols are arcane and mysterious beyond their education, it's fundamentally quite short and simple.

You can write the RSA algorithm on a t-shirt but in practice you need a computer to run it.

Likewise, you can fit llama or grok on a thumb drive and carry it around in your pocket, but in practice you need a computer to run it.
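
For context, the famous munitions T-shirt carried an implementation of RSA in a few lines of Perl. Here's the same point as a toy Python sketch, using the classic small-prime textbook example (illustrative only and hopelessly insecure; real RSA needs large random primes and proper padding):

    def egcd(a, b):
        # Extended Euclid: returns (g, x, y) with a*x + b*y == g.
        if b == 0:
            return a, 1, 0
        g, x, y = egcd(b, a % b)
        return g, y, x - (a // b) * y

    p, q = 61, 53                # toy primes
    n, phi = p * q, (p - 1) * (q - 1)
    e = 17                       # public exponent, coprime to phi
    d = egcd(e, phi)[1] % phi    # private exponent: (e * d) % phi == 1

    m = 42                       # the "message"
    c = pow(m, e, n)             # encrypt: c = m^e mod n
    assert pow(c, d, n) == m     # decrypt recovers m

Either way, the artifact fits in a pocket; the compute to run it is the real constraint.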

> And, of course, if they're planning legislation, they can easily say "no phones with ${feature} > ${threshold}"

But what good is that?

If they're actually trying to accomplish an outcome, those kinds of rules are completely useless. If all they're trying to do is check the "we did something about it" box, there are a hundred other useless things they could do that would be equally ineffective but still check the box and cause less collateral damage.

> This seems a surprising claim, given the popularity of cloud compute, Cloudflare, third-party tracking cookies, Dropbox, Slack/MS Teams, Gmail, and web apps run by third parties such as Google Docs, JIRA, Miro, etc.

Notice how these are the kinds of things that major companies with trade secrets to protect have explicitly banned, and individual consumers with no bargaining power suffer while complaining about it?

> They've been saying this "malarkey" since before they had products to be marketed.

Eccentrics have been saying this "malarkey" for a long time. It becomes the official position of the market leader when it is convenient.


> You can write the RSA algorithm on a t-shirt but in practice you need a computer to run it.

Pretend you're a politician: the thing you've been told is a state secret is written onto a T-shirt by a protestor telling you it isn't secret; everyone already knows what it is; you're only holding back the value this can unlock. You, the politician, might feel a bit silly insisting this needs to remain secret, even if you think all the 'potential value' talk is sci-fi, because you can't imagine someone doing their banking on the computer when there's a perfectly friendly teller in the bank's local office.

If the same discussion happens with a magic blob on the magic glass hand rectangle which connects you to the world's information and which is still called a "telephone", you might well incorrectly characterise the file that's actually on the device of the protestor talking to you as "somewhere else" and "we need to stop those naughty people providing you access to this" and never feel silly about your mistake.

> But what good is that?

"Then the people can't run the 'dangerous' models on their phones. Job done, let's get crumpets and tea!" — or insert non-UK metaphor of your choice here; again, I'm inviting you to role-play as a politician rather than to simulate the entire game-theoretic space of the whole planet.

> cause less collateral damage

I made that argument directly to my local MP about the Investigatory Powers Act 2016 when it was still being debated; my argument fell on deaf ears even with actual crypto, it's definitely going to fall on deaf ears when it's potential collateral damage for a tech that's not yet even widely distributed (just widely available).

> Notice how these are the kinds of things that major companies with trade secrets to protect have explicitly banned, and individual consumers with no bargaining power suffer while complaining about it?

No.

Rather the opposite, in fact: each is used by major companies.

> Eccentrics have been saying this "malarkey" for a long time. It becomes the official position of the market leader when it is convenient.

It was the position of OpenAI with GPT-2, which predates their public API by 16 months, and them being a "market leader" by 3 years 9 months:

February 14, 2019: "Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper." - https://openai.com/research/better-language-models

June 11, 2020:

"""What specifically will OpenAI do about misuse of the API, given what you’ve previously said about GPT-2?

With GPT-2, one of our key concerns was malicious use of the model (e.g., for disinformation), which is difficult to prevent once a model is open sourced. […]

We terminate API access for use cases that are found to cause (or are intended to cause) physical, emotional, or psychological harm to people, including but not limited to harassment, intentional deception, radicalization, astroturfing, or spam, as well as applications that have insufficient guardrails to limit misuse by end users. As we gain more experience operating the API in practice, we will continually refine the categories of use we are able to support, both to broaden the range of applications we can support, and to create finer-grained categories for those we have misuse concerns about.""" - https://openai.com/blog/openai-api

People called them names for this at the time, too, calling their fears ridiculous and unfounded. But then their eccentricities turned out to lead to successfully tripping over a money printer and now loads of people interpret everything they do in the worst, the most conspiratorial, way possible.


I'm reminded of the last Narnia book, specifically the Dwarfs who could not see their surrounds.

"""They have chosen cunning instead of belief. Their prison is only in their minds, yet they are in that prison; and so afraid of being taken in that they cannot be taken out."""

Is it really so hard to believe they might be sincere? We've had tales of scientists — and before them alchemists and wizards — undone by their creations for a very long time now. It's a very convenient meme to latch on to; surely even if you do not share this caution, or find it silly for whatever reason, you can at least be aware of it existing in others?

The term "conservative" may be now affiliated with a specific political team more than the concept of being risk-adverse, but the idea is found even in the history of the ancient Romans and the tales from ancient Greece.

And I say this as one who personally found the idea of conservatism strange; who is an "all improvements are change" person; a person who took a long time to learn about the metaphor of Chesterton's Fence; and is a person who even now finds it hard to fully empathise with those who default to "treat all change as a potential regression until proven otherwise" even though I have to force myself into that attitude while writing unit tests.


"The most capable models don't fit on most personal devices at present" Come back in 3 years, most phones will have an AI co-chip.


You're expecting them to fit on a T-shirt in 3 years, or is your context length too short to have noticed that clause? (Only human, if so).

https://news.ycombinator.com/item?id=39904650



