Hacker News
Japanese court rules that AI cannot be issued patents (yomiuri.co.jp)
53 points by anigbrowl 21 days ago | 39 comments



Doesn't the legal fiction of corporations already provide essentially all the cover you need here? Can't you just umbrella things under it?

I can't pretend to be a Japanese legal scholar but at the end of the day both an AI and corporation are just property.


I don’t know about Japan, but in the US, patents have to be issued to individuals and not corporations.


Yes but a corporation cannot be owned by AI


Sure. But I'm imagining the AI is property. It's like saying your LLC owns something.

Can't you just put a thin legal wrapper around your AI and call it invention-station Inc or something and thus christen it with all the rights of corporations?

If a company can own a computer and it can own software then it can certainly own what's produced by the software on the computer...

I don't see how this in practice changes anything. It sounds like you can still do the thing with slightly different paperwork.


Robots rights protests coming our way.


It already exists:

* http://www.chaair.org


Essentially the story is that the JPO falls in line with IP Australia, the EPO, the UK IPO, the USPTO, etc., on the DABUS applications: https://en.wikipedia.org/wiki/DABUS.

I think they may be the first to suggest in court that a different IP system might be used for AIs though?


And my love for Japanese culture continues uninterrupted.


The article doesn't say that AI generated content cannot be copyrighted.


It would be hypocritical if it could.


Why ?


Because “A.I.” scrapes real art. You can’t ignore copyright law and then enforce it when it benefits you.


A human brain also trains on copyrighted work.

A human can also reproduce very realistic copies of things and can even pass things off as the "genuine" thing.

We already have copyright law and counterfeit law for this.

Even if AI doesn't work the same way as parts of the human brain, I don't understand what difference it makes, legally, morally, or philosophically, whether AI was involved in the production of a work.


False equivalency. Humans have consciousness, an LLM does not. An LLM is nothing like a human mind, it’s a stochastic parrot. Art, by definition, is made by a human.


I'm not equivocating them.

I'm stating plainly: I don't see what differences between AI and human brains are relevant to the enforcement of copyright.

What is fundamentally changed by an LLM with respect to these three thought frameworks?


You are equivocating when you said "A human brain also trains on copyrighted work." You're stating foolishly, not plainly. Only an author, owner, or agent can copyright something. LLMs are none of those. A person who uses an LLM to scrape art isn't making art, so they're not any of the three either.


Then the person using an LLM to produce something that infringes on copyright is responsible. They had better check, for every produced output, whether it infringes, since they are not creating it by thinking themselves and coming to a solution, but are using a tool that stochastically regurgitates bits and pieces it has seen before. Not much credit is deserved.


Both OpenAI and the user are infringing on the artist and neither are making art.


It is about time that we get a solid framework for handling these and other cases.

I would also argue that models trained on publicly available data need to stay public as well.


Yup, the fake conversation about LLMs being A.I. is a distraction from a more important conversation about public knowledge and information. OpenAI is not a proper education tool, but it's presented that way. It's also a platform for propaganda and censorship.


I really don't think it's an equivocation.

Training is a high-level word, and because we engineered LLMs we know more specifics about how their training works, so when used in the context of LLMs, it can mean something more specific.

I don't know the full mechanics of how training/learning works in the human brain, but it's something I've been studying casually for 30 years. It's possible that parts of our brain work very similarly.

What I'm trying to say is, regardless of underlying mechanics, I don't understand why using copyrighted works for training (anything) is against some legal or moral framework.


You don’t think you are but you are. There’s a connection between your poor reasoning and your belief that “A.I.” is analogous to human thinking. You don’t understand LLMs or human cognition, therefore you have an ignorant opinion.


@slowhadoken, I do not "believe" AI is analogous to human thinking. Please read my previous messages more carefully.

-> I don't know how human thinking works, and according to my research, nobody does (not yet at least).

It might be exactly the same, it might be similar, or it might not be. This is all irrelevant to what I am saying.

I am saying, it doesn't make any difference according to my understanding of existing law.

I also don't understand what moral issues there are here that are specific to LLMs, and apply only to LLMs.

When I say that LLMs train, and humans train, I mean from a systems perspective, they both use the input of the data for similar purposes.

I ask the question: Why is one different from the other? Why are humans allowed to analyze and learn from art, but an LLM isn't? If they are different, and not analogous, that's one possibility... but it still doesn't answer my question: why does this unspecified and unknown "difference" matter?


The human mind and an LLM are not the same; that's an ignorant thing to imply. Your questions are intensely naive too. Also, your logic is weak. Just because scientists don't know exactly how the human brain "works" doesn't mean we don't have accurate tests for general intelligence. On the other hand, we know exactly how an LLM works, and it's not in the same ballpark as human intelligence; it's not even the same game. It's like thinking a calculator has a soul because it can "do" math. It's childish fiction.


It looks like you never understood what the other party tried to say.


I mean I understand fallacy but I can’t encourage it. Their concepts were stillborn. You can’t have a dialogue or compromise with nonsense.


You are the one ignoring copyright exceptions when it suits you.

Fair use is a thing.

Also, you seem to misunderstand something: the article never stated you couldn't patent an AI output, simply that the AI itself cannot be the author.


Fair use laws, if the country has them, don’t include profit. OpenAI, and other similar LLMs, are commercial products. “A.I.”, i.e., an LLM, can not make art therefore its output can not be patented and the algorithm can not be an author. You’re playing an ignorant semantic game.


> Fair use laws, if the country has them, don’t include profit.

It does.

In Oracle vs Google, Google's copying of Oracle's API was ruled fair use in the final judgment.

Music sampling is also fair use. Very popular and lucrative music has used samples of copyrighted material.


False equivalency. Oracle vs Google was over an open source Sun Microsystems API. And sampled music requires permission. Paying OpenAI to steal art for you isn’t some creative commons community.


> And sampled music requires permission.

Almost nobody asks for permission.

I'd also like to point out that you said

> Fair use laws, if the country has them, don’t include profit.

And now you start to add a lot of conditions depending on that. Oracle vs Google isn't a "false equivalency"; it's literally a fair use case, where the one who copied the original work did it for a commercial product.


I'm not adding conditions; you're ignoring the context and application of the law. Don't take my word for it, contact a lawyer. Besides the law, you're ignoring or unaware of philosophical and scientific definitions. There's also the huge ethical problem with letting a machine fence art for you; it's a factory for social pollution.


I see a lot of people commenting that have only read the title and misinterpreted it.

The article states that AI cannot be the owner of a patent; it has to be a human.


So someone will just hide that it's AI generated and get the patent.


You can write patents and uncover things by using AI, but you get the patent, not the AI. Like how a typewriter or a computer doesn't get the patent instead of you just because you used it.


The issue is not AI-generated content getting a patent, but AI being the named inventor. Which to me is reasonable, as no AI has personhood. Using AI to generate a patent is likely fine as long as that patent fulfills all other criteria.


Isn't this how the movie "The Matrix" started?


It is similar to the plot of "The Second Renaissance", which is an animated prequel to The Matrix.

https://en.wikipedia.org/wiki/The_Animatrix#The_Second_Renai...



