Hacker News

What is so crazy is that this seems like a disincentive to publish and share ideas. If anything you say can be used in an AI against you... or, alternatively, it's claimed that it's not your idea, that the AI came up with it... This has nothing to do with AI and everything to do with the crackpots in charge.



Unfortunately it's not so easy. If I as a person (which is essentially just a biological neural network) can simply go to websites, read the content, and use the information I gathered to create slightly modified new content without repercussions, who's to say that an artificial neural network should not be allowed to do that? Just because I'm not as fast? What if I hire 1000 workers in a low-wage country to do it for me? As AI capabilities grow, this distinction will only narrow further. There's no realistic way to differentiate web access for human purposes vs. AI purposes in the long run.


I as a person understand that the GPL, AGPL, and their ilk are (at least in my moral view) sacrosanct to a certain degree.

I might read GPL code once in a while. But I would never copy-paste it when someone asks for, say, how to do a fast inverse square root. (I don't really read AGPL code b/c most AGPL companies I've encountered strike me as salivating for a reason to force a license on a business entity.) The closest I came was looking at GPL code once upon a time for some geo-transform code, and frankly I didn't use any of it; instead I used a USGS book to re-implement everything in a fully legally safe way.
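(Aside: the fast-inverse-square-root trick itself is an idea rather than a copyrighted expression, and it is widely documented outside any GPL codebase. A clean-room C sketch of the standard bit-hack-plus-Newton-step approach, purely for illustration:)

```c
#include <stdint.h>
#include <string.h>

/* Approximate 1/sqrt(x) for positive finite x: take a bit-level
 * initial guess via the well-known magic constant, then apply one
 * Newton-Raphson refinement step. */
float fast_rsqrt(float x) {
    float half = 0.5f * x;
    uint32_t i;
    memcpy(&i, &x, sizeof i);          /* reinterpret the float's bits */
    i = 0x5f3759dfu - (i >> 1);        /* magic-constant first guess */
    memcpy(&x, &i, sizeof i);
    return x * (1.5f - half * x * x);  /* one Newton refinement step */
}
```

One refinement step gets within roughly 0.2% of the true value; a second Newton step tightens it further. The `memcpy` type-punning avoids the undefined behavior of the pointer-cast version often seen in the wild.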


How much copy-pasted code from Stack Overflow carries a license attribution? And on top of that, even more restrictive licenses wouldn't change the problem - neither with humans nor with AI.


>Just because I'm not(...)

It's because as a human you can be held accountable and can feel consequences, and yes, also because you aren't as fast.

I feel like many people just equate "human brain and consciousness" with "neural network" way too quickly. You can't remove the human factor, however much you try to equate it with a program.

(Apologies, non-native English speaker.)


But you aren't held accountable either. At least no more than LLMs are nowadays. And speed is also irrelevant, as stated above.


Yes, there is.

You can spin up more compute with a credit card. You can't make 1000 people in the same manner, nor can you own them.

Let's be real here, though. The only reason anyone is drooling over AI is that it potentially allows one to avoid paying someone else, which means more money for them.


You can use your credit card to find and convince 1000 people to do your bidding, why is owning them a requirement? You don't "own" the compute you spin up either, you're temporarily borrowing it.


The AI business model is a two-pronged approach: create an idea-generating system so complex that one can legally elude responsibility, followed by rent-seeking opportunities from the generated ideas.


Yes, God forbid we build powerful new tools that extend human knowledge, insight, and productivity in directions previously undreamed-of. Mah coppy rite is more important! Thereoughttabealaw!

As usual in these scenarios, the only real injustice is that the people who tried to stand in the way will enjoy the benefits of progress in AI alongside those who worked to make it happen. So it goes, I guess.


I'm no ally of copyright; however, as long as it's here and a thing to be dealt with, I'm not going to cheer on a company operating on the back of flagrant disregard thereof. This isn't "code I'm using in a personal project that maybe only my friends will interact with". This is a full-fledged business, owned in part by, of all people, Microsoft, the people who rammed copyright down our throats for the last 3 to 6 decades while doing everything they could to cripple FLOSS.

I expect the absolute most aggressive enforcement of copyright in this case.

As to my more general assertion that AI is only getting the traction it is because an industry is looking to devalue its currently very highly priced laborers: spend a bit of time around shareholder/management types and you'll soon understand why I think the way I do. Magical thinking, "as long as it makes my outlays lower" thinking, is par for the course. There are, in fact, social classes who see the "hired help" as something meant to be out of sight, out of mind, and lucky to get what they are willing to give.

Besides the above hot take, I also see AI as being fundamentally disruptive to the human social fabric. I'm not convinced that as a society we're even prepared to have a real conversation with regard to a technology that at any time could cross a threshold to sapience. The choruses of such individuals as Carmack and plenty of other HN posters, with their "it's just a statistical model" and "let's wait till it's at least a developmentally challenged toddler before worrying about those types of concerns" (where those types of questions are the ones concerning sapience, and where the line beyond "just a statistical model" lies), only prove my point. The reductionist viewpoint will be stretched right up to the point that there's a court case where the public finds out that training or instantiating models that communicate with one another basically involves torturing a collective mind that no one bothered to see that way because it was just so stupidly productive.

Hell, the outcome of said case would probably be shifting research in a direction whereby it's possible to make a construct that just barely toes the line. Which misses the entire moral point.

You could say I'm fairly black-pilled on the matter. Humanity can't even deal with one another, or competently raise their own children. We don't need to be committing terrible parenting on an industrial scale.

...If you've read through all of this, you're probably a better person than I am currently, but know there was a time I shared your attitude toward the subject matter. Then I really started to pay attention to how people treat one another, and how money actually gets earmarked for different things. The learning experience is something I'd not wish on anyone, but as you, and our shared friends the Tralfamadorians, say,

So it goes.


My issue isn’t the mechanics of what is happening. I believe the AI is being used as a shim to work around otherwise prohibited behavior.


> who's to say that an artificial neural network should not be allowed to do that?

If I'm the creator of the work, I get to say that. That I have no means to enforce that is precisely why I've taken all of my work off of the public web.

> There's no realistic way to differentiate web access for human purposes vs. AI purposes in the long run.

Right, which is a very serious problem.


As an individual I would like to use bots to do my web surfing. Where I see the problem is large corporations using web-scraped ideas to patent/copyright ideas. If the data produced by AI is as open as the web-scraped data, that seems fine.


If you produce code from reading stackoverflow or github for some company, it will also own it - not you. AI will only be faster at producing stuff for these companies.


I'm not interested in code as much as other forms of human expression. Imagine having to convince the courts that you said something first no matter what the expert AI states.

I still don't see it as an AI failure so much as a human failure in the use of sophisticated tools.




