
Liking it so far, although the 'task' progression isn't my favorite. Feels like there are way too many tasks to do that seem like tedious busywork.

Honestly a really simple game design fix for this would be to unlock tasks more slowly as the player demonstrates more engagement with the system. That way if you are like me and mostly find them boring and repetitive, you don't feel as bad about not getting them done.


From what I have seen, there are the main tasks (of which there are generally only 2 active at once), and side tasks. New side tasks unlock as you complete each tier of main tasks, and seem to be a mix between teaching different useful patterns you will need and something to do while waiting for the main tasks to finish.


The queryable expression thing is something I struggle with all the time in Rust. It's especially bad in that language because (unlike in e.g. Java or C#) there is no way to view the Debug representation of your types in the debugger; you just get the raw memory layout, which adds a huge barrier to 'what is going on with this code?' and requires you to dig around through countless nested layers to understand it.
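
To make the pain concrete, here's a minimal Rust sketch (the types are hypothetical, purely for illustration): with a derived Debug impl you can at least log a readable tree of the structure, whereas the debugger view of the same value is mostly raw pointers, lengths, and capacities.

    use std::collections::HashMap;

    // Hypothetical nested types, just to illustrate the problem.
    #[derive(Debug)]
    struct Item {
        name: String,
        tags: Vec<String>,
    }

    #[derive(Debug)]
    struct Inventory {
        items: HashMap<String, Vec<Item>>,
    }

    fn main() {
        let mut items = HashMap::new();
        items.insert(
            "weapons".to_string(),
            vec![Item { name: "sword".to_string(), tags: vec!["sharp".to_string()] }],
        );
        let inv = Inventory { items };

        // In gdb/lldb you mostly see the raw layout (pointers, lengths, capacities);
        // the usual workaround is logging the derived Debug representation instead.
        println!("{:#?}", inv);
    }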


I also find this super annoying. In C++ land, Microsoft solves this problem by having "natvis" [0] files, which allow you to have custom representations of complex & deeply nested objects. Unfortunately, most third-party debuggers don't support it. And like you said, any non-trivial program in Rust is basically not parseable without digging through 50 layers of nested abstractions.

[0]: https://learn.microsoft.com/en-us/visualstudio/debugger/crea...


There is a certain amount of deliberate ambiguity to Article I Section 8 -- take a look at e.g. https://avalon.law.yale.edu/18th_century/debates_817.asp and you can see that the wording "The Congress shall have power to [...] declare war" was revised from "make war" in the earlier drafts. Madison clearly felt that Congressional authorization was on some level required to conduct a war, but that the Executive should be free to act quickly in self-defense, e.g. to repel an invasion.


I don’t think we covered that in my history classes. Congress’s website about the constitution agrees with you and adds additional details ( https://constitution.congress.gov/browse/essay/artI-S8-C11-3... ).


Like a lot of things, this seems to fall under the idea of "convention" and not law. This has been an ongoing problem in recent times. Practically speaking, Congress is a rubber stamp for matters of war.

Theoretically, they can withhold funding from the military. But seeing as the Treasury Department falls under the President, it's unlikely they can actually do that.


"Withhold funding from the military" it's actually better (or worse) than that. The US doesn't have a standing army technically speaking, it cannot constitutionally. The post war military has been continuously reauthorized twice yearly since the end of world war 2. Congress can refuse to reauthorize continuation of it and the treasury can't do anything about it.


Legally, though, the power of the purse also lies with Congress. If we're throwing in extra-constitutional actions, all bets are off, because we're no longer bound by the rules of what is allowed and it's down to the old power of might making right.


Interesting that this project seems to contain a bunch of sprite assets released under GPL3. How does GPL apply to something like an art asset?


How does their garbage collection work? Especially in Rust it would be cool to see a concurrent collector.


One positive aspect of the status quo in the United States is that AI-generated images are not currently eligible for copyright. I think this is a great direction to go in: I highly doubt Wizards of the Coast or whoever is going to want their premium products to lose copyright protections, so they'll need to keep paying artists. I'd love for us to lean into this -- you can make all the AI art you want, but it automatically gets a Creative Commons ShareAlike-style license!


Completely unenforceable. How can you even tell if an image was made by AI? What if AI created an outline that was worked on by a human artist (or vice versa)? Who would the burden of proof be on?

Steam has a “no AI art” policy, and it’s rapidly turning into a “no obvious AI art policy”. How could they tell?


The thing about AI art is that, absent lots of prompt engineering, seed grinding, and touchups, you're likely to have a bunch of images that are obvious tells if your entire project is AI. Anyone trying to hide it would be spending time equivalent to just making the art themselves.

There's also another advantage to having a "no obvious AI art" policy, and that's to cut down on spam. AI is extremely useful to people who want to spam art platforms.


> Anyone trying to hide it would be spending time equivalent to just making the art themselves.

The gap between AI generations and manually drawn images is still closing - we don't know if it'll ever reach the threshold, but the amount of effort required to stamp out all the wonkiness from an AI generation has been going down ever since the first viable algorithms appeared.

I don't think that this was an anti-spam policy - Steam already manually reviews all new applicants that want to publish a game, so they don't need to forbid anything to turn it down. I'm guessing that this policy was because they don't want to be entangled in IP legislation if some copyright exception is carved out to forbid the use of generative AI.


> The gap between AI generations and manually drawn images is still closing

People have been trumpeting this since day one of Stable Diffusion's release, but I'm seeing the same output quality as on that day, and I've been keeping up.


Just because the pace of progress isn't exponential (like some people would want to believe) doesn't mean it isn't happening. I remember getting an early invite to DALL-E 1 way back, and while I don't use it anymore, the modern improvements seem very substantial. From plain comparisons of different versions, where the same inputs produce substantially better outputs, to the mere fact that the latest version can actually generate decent, often discernible text at all (something that people joked would be impossible for AI to achieve), it's clear that some progress is being made.

The reason why it's not as visible with Stable Diffusion is because a lot of the technologies around it circle the same few foundational SD models - people build on top of them, add new ways of interacting with them, but ultimately, the same thing underlies them all. Community support is seen as more important than cutting-edge tech, which is why something like Stable Diffusion XL hasn't even seen universal adoption yet.


I'm telling you the progress isn't happening based on my own consistent observations of various releases across multiple platforms. The only people who don't seem to agree with me are those who have the art literacy of a high schooler and think "discernible text" is an improvement.

As an aside, no one said AI couldn't generate drawn text; that's been possible for years, since before Stable Diffusion.


In just the past two years it's gone from obvious horrors like hands attached directly at the elbow to much more subtle errors like chair legs that cross over each other like an Escher drawing or doorknobs adjacent to the hinges.

Human artists might have to become used to tracking provenance. If you work with traditional media, that's easy: Here's the painting. For digital artists, software can publish encrypted, timestamped brushstroke-level histories of the work if we need that level of proof.


IME, if GenAI ever reaches human parity, whatever that amounts to, the relevant subgenre of art will just move into surrealism. The invention of the paintbrush didn't kill art.


The fight about AI is over using copyrighted material for its weights... I wonder what percentage of artists wouldn't tweak or heavily use an AI that has a transparent/ethical database (read: one where nothing proprietary was added without authorization).


It's been tried. Numerous times. There's a reason the GenAI controversy is stuck on ethics and filled with rage: the generated images just aren't that great, so that part isn't so controversial.


There are a lot of AI "artists"[0] who think their text2image prompt generations are equivalent to, if not better than, actually drawing or photographing an image.

Part of becoming an artist is learning how to evaluate your own work, break it down, and critique the shit out of it. When you jump straight to generating art with an AI, you skip the criticism step, which means you don't have a sense of taste and you haven't really explored what your preferences for style are.

A lot of AI art generators default to an extremely cinematic, "Hollywood" art style - i.e. exactly the sort of thing that is trying to look impressive to people who don't know better, and will make them overlook all the fundamental mistakes in the image.

[0] Normally I wouldn't scarequote "artist" here, given that actual artists do use AI tools where it makes sense.


And this will continue to be an issue until ML models have achieved something resembling sentience, because many of these tells are the result of the model not truly comprehending the subject matter and thus struggling to maintain internal consistency in everything from geometry and kinetics of human bodies to lighting and physics.

Less obviously, ML models also lack the ability to bake in intent. In human made pieces, everything is as it is for a reason; it’s communicating something. In ML generated pieces, things are the way they are because that’s what’s statistically likely for the type of generated image.


Absolutely correct. This is a far more obvious problem in text models, because you end up with internally flawless arguments as to why your next scuba diving vacation should be in Ulan Bator.

With art, it's more subtle, because there's no single reference point that lets us determine if an artwork is "true". There are the glaring errors that everyone can agree on - notoriously, human hands - but those cases are improving rapidly.


>Anyone trying to hide it would be spending time equivalent to just making the art themselves.

Microsoft and every other tech company is indeed investing billions in the tech. I'm sure each company can fund the entire (woefully underpaid) art industry by themselves, let alone the 10 or so tech hubs altogether.

But they are happy to throw money at AI instead for the payoff of being the next big tech brand.


That also devalues the work of the original creators whose work got knocked off by AI, and they should be compensated for the damage done.


This has never happened ever in history. So many jobs were devalued by new machines. And the people doing them were never compensated.


> Anyone trying to hide it would be spending time equivalent to just making the art themselves.

That's basically the argument for why Jason Allen should have been allowed to win the art competition, isn't it?

It's not that he typed "award winning painting" into Midjourney and the image was the result.

He tried hundreds of seeds, selected one that he liked and refined it over countless iterations with infilling until he was satisfied with the result.

I honestly don't see how this is fundamentally different from other art forms.


"I claim this art was made by this person" "Who?" <gives name> "OK <name>. did you work on this?" "Where are related work products? Are there any? What about invoices? Simultaneous employment?"

The reality is that most legal things are determined by _convincing people of a truth_. Perhaps you can set up a whole scheme to "launder" AI art and attach names to them. And all the paper trail you generate doing this will show up in discovery in some lawsuit, and the copyrights will all disappear.

Laws are vibes, not code.


The way that AI will be laundered into art is by including it into things like Photoshop. There'll still be a human touch just with "smart brushes" and "smart auto fill" that paints 90% of what you want.

Art will then take less skill to produce, and be produced faster for lower prices.

An 80% price reduction on art (because artists can now produce it 5 times faster thanks to AI) is 80% as good as getting it for free.


Art will take more skill to produce, not less, at faster speed by select artists. GenAI will become another tool that artists and clients alike must understand and use effectively within unspoken guild rules, that is, if it stays.


Googled this because that was an astounding claim, but Steam does not, in fact, have an anti-AI-art policy.

They don't allow AI art produced by models trained on material that the model makers don't hold the copyright to.

In practice that's a ban (currently) but in principle it isn't.


It makes perfect sense, because much of the friction around generative ML models has to do with the data they were trained on. There's not much reason to ban images generated by a model that was trained entirely on consensually gathered material.


> Completely unenforceable.

Completely wrong. You just flip the defaults--something is AI unless you can prove otherwise.

This is done already and has precedent. Producing porn requires that you keep artifacts demonstrating who the performers were, that they were of age, etc.

If you claim a work is not AI generated, you should have to produce some artifacts to back up that claim.

In the case of a corporation, that would be easy as you have payment records.

In the case of an individual doing digital only, that's a little harder. You probably have to keep some intermediate artifacts.


I'm really struggling to conceptualize a world where every picture that's drawn must have a full notary log of how exactly it was produced, all for the sake of removing generative AI.

Besides, it's not that easy of a problem - a lot of corporate artists are salaried workers, they don't get specified commissions with an attached bill per work, but are paid a salary so the company can ask them to draw whatever they need throughout the process. Considering this, all artists would need to retain "intermediate artifacts".

And then, how do these artifacts work for other ways of doing art? What about traditional artists whose work gets scanned in after completion - would they have to keep a camera on hand to take photographs as they're working? What about an animated film - would every intermediate step in production, from character design to storyboarding to environmental design etc etc need to have a full record for every single sketch?


It is for the sake of copyright: if you want society to protect your work, provide evidence of your creative work. It seems rather simple to me.

Keep in mind that in the not-so-far future, producing art will be as cheap as consuming it. This means that the original benefit society got in return for copyright no longer applies, so why should it protect it?


> It is for the sake of copyright: if you want society to protect your work, provide evidence of your creative work.

I'm not sure if it's that simple - for one, this requirement is a complete departure from how copyright systems work now. Providing complete history logs isn't normal practice, and expanding law to necessitate it isn't common sense.

> Keep in mind that in the not-so-far future, producing art will be as cheap as consuming it. This means that the original benefit society got in return for copyright no longer applies, so why should it protect it?

I'll make a prediction that this future is further from now than you may think it is. Sure, things like static imagery may become completely indistinguishable from human-made art in the near(ish?) future, but the production of all art is still an unsolved problem. How long will it take until some advanced multimodal algorithm can make a full game that can measure up to ones that are released today? I'm guessing that it'll take a while.

And yeah - once we do reach this scenario of hypothetical "art post-scarcity", we may as well just delete the whole copyright system from existence - it'd be a logical thing to do. But how does any of it contradict what I said in my other comments?


> for one, this requirement is a complete departure from how copyright systems work now.

A complete departure? Here is the current form used to register an artistic visual work for copyright. It's more elaborate than you might think.

https://www.copyright.gov/forms/formva.pdf

Registration is not a rubber stamp; it is increasingly refused because of indicia of AI tooling.

Why would adding some questions on provenance and methodology be beyond the pale?


Nothing in the form seems out of the ordinary to me. It is a lot of fields, but ultimately the main goal is establishing ownership, not discerning the specific methodology in which a person made the work. It's a departure in that the current system is results-based, where you register a final product, while the proposed system also must take into consideration every intricacy of creating the work.

> it is increasingly refused because of indicia of AI tooling.

Do you have a source that a statistically significant number of copyright applications gets refused on account of a work just seeming like AI? On what grounds does it get refused?


Asking for a statistical analysis is an unreasonably high bar. See the link I provided in a sibling comment, in which the copyright office plainly states as much and explains their guidelines.

https://www.federalregister.gov/documents/2023/03/16/2023-05...


What part of generative AI seems ordinary to you? The rest follows from there, my friend.


> How long will it take until some advanced multimodal algorithm can make a full game that can measure up to ones that are released today?

We can disagree on how long it will take us to get there, but if you use AI-generated content, that content is not protected by copyright. Your game as a whole, sure, as long as it is not the result of a simple prompt, is protected as usual.

Keep in mind that already, in many games, there is a mix of protected and unprotected content, for reasons of trademark, copyright, and licensing.


This is a really good point that I think will be hard for a lot of people to come to terms with. The basis for the whole idea of intellectual property rests on assumptions that look increasingly fragile.


I think there's a lot of confusion about what records are needed presumably due to lack of understanding of what the law requires for proof of ownership.

Generally the party asserting ownership has the burden of proof. The standard is "preponderance of the evidence", which generally is understood to mean "more likely than not" or "> 50%". So basically it means if you can prove to judge or jury that there's a >50% chance you own the work, it's good enough.

Also note that in many cases where there's a dispute over the evidence, witnesses are summoned to testify. So you might not have a "full notary log" of how it was produced or all the "intermediate artifacts", but as long as the artist is able to convincingly explain how the work was created, and the other party's lawyers are not able to poke holes in their story during cross-examination, that's usually enough.

Which is, basically, what happens today, if the authorship or ownership of a work is disputed.

That said, I'm not sure whether "assume the work is AI (thus uncopyrightable) unless proven otherwise" should be the default for other reasons. For one, most quality "AI art" needs some manual adjustments or touch-ups, and arguably the prompt and hyperparameters may be a sufficient element of creativity. I mean, that's basically how copyright dealt with photography (the mere fact you decided when and where to point the camera with what settings is sufficient for copyright to subsist in a photo).


> drawn must have a full notary log of how exactly it was produced

I love when non-artists talk about art.


The word choices were kind of on purpose - I meant to highlight the partial absurdity of having to entangle yourself with all these legal considerations and obtaining sufficient legal proof, all for the sake of making some art piece.


If it's physical media, you have the physical media.

If it's digital media, the software can keep an encrypted record at the brushstroke level that can be played back to produce a bit-perfect reproduction. Maybe even write it to a public ledger.
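
As a rough sketch of what a brushstroke-level history could look like (this is hypothetical, not any existing product's format): each entry commits to the hash of the previous one, so the whole history can be replayed and any tampering breaks the chain. A real implementation would use a cryptographic hash and signatures rather than the standard library hasher used here.

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    // Hypothetical stroke record; a real editor would capture much more detail.
    #[derive(Hash)]
    struct Stroke {
        x: i32,
        y: i32,
        pressure: u32,
        timestamp_ms: u64,
    }

    struct LogEntry {
        stroke: Stroke,
        hash: u64, // commits to this stroke AND the previous entry's hash
    }

    fn append(log: &mut Vec<LogEntry>, stroke: Stroke) {
        let prev_hash = log.last().map_or(0, |e| e.hash);
        // NOTE: DefaultHasher is not cryptographic; this is only a sketch.
        let mut hasher = DefaultHasher::new();
        prev_hash.hash(&mut hasher);
        stroke.hash(&mut hasher);
        let hash = hasher.finish();
        log.push(LogEntry { stroke, hash });
    }

    fn main() {
        let mut log = Vec::new();
        append(&mut log, Stroke { x: 10, y: 20, pressure: 300, timestamp_ms: 1 });
        append(&mut log, Stroke { x: 11, y: 21, pressure: 310, timestamp_ms: 2 });
        // Publishing the final hash (e.g. to a timestamping service) is the
        // "public ledger" part: it pins the entire history at a point in time.
        println!("head hash: {:x}", log.last().unwrap().hash);
    }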


All of these things have loopholes. For physical media, depending on the quality of the output, one could pay a sufficiently skilled person to reproduce an AI output on physical media in a fraction of the time it'd take to come up with and draw for real.

For digital media - ignoring how overbearing this whole system could be, what prevents someone from taking all that data and making an algorithm that outputs brush stroke parameters instead of pixels? And digital art isn't the only thing we need to concern ourselves with - eventually, we might have AI models that could make 3D models, sounds, vector imagery and other forms of art. The idea of just documenting every workflow would be an ever-growing burden with no perfect solutions.


> All of these things have loopholes. For physical media, depending on the quality of the output, one could pay a sufficiently skilled person to reproduce an AI output on physical media in a fraction of the time it'd take to come up with and draw for real.

Um, that's a real work, you know? In what way does this differ from people who take a photograph and then, for example, create an oil painting from it?

Now, there are some weirdnesses because of the copyright of the source photograph, but the oil painting would be your own work.

Yeah, you might get called into court to demonstrate that you can produce the work. But so did Michael Jackson.


> If you claim a work is not AI generated, you should have to produce some artifacts to back up that claim.

So AI (or a simpler, non-statistics-based algorithm) can't produce these artifacts and will never be able to? Why? Are these artifacts "human souls" or something?

> In the case of a corporation, that would be easy as you have payment records.

Outsourcing.


The end result of this will be end to end cryptographically authenticated pipelines. I'm sure Adobe will be very happy about having yet another way to extract money from artists.


> something is AI unless you can prove otherwise.

I think this is rather what pro-AI spammers are trying to do by flooding platforms, and they aren't so successful. People don't give scores as high as they do for human-generated data, and AI images are still considered a form of spam.


> you should have to produce some artifacts to back up that claim

Why would that be relevant in court? Just show the process of making the art.


A company today has the burden of proof to demonstrate authorship of a claimed work when they sue for copyright infringement. This isn't a crazy expansion of that concept -- companies do not generally break the law just because there's a low chance of getting caught. Furthermore it's not hard to imagine, for example, a whistleblower calling out their employer for copyrighting AI works.


Furthermore, what is the threshold for something to still legally be considered AI art once an artist's hand has modified it? What if they change the brightness? Fix a hand? Completely replace a character in a scene? Illustrate most of the scene themselves but add an AI figure or background? Use AI to sharpen a hand-drawn image?


It's a gray area. But if someone generates AI images at scale without a human in the loop, it's likely not copyrightable.


What does "at scale" mean here, and how would it be detected or enforced?


That means that OpenAI cannot claim copyright on images produced by the DALL-E generator. Nor can other online services and offline software owners/producers.

That is enough to use them without copyright violation, for example for training other AI models.


> AI-generated images are not currently eligible for copyright

It's a bit more nuanced than that. Here is the relevant policy statement, which notes that some AI-assisted works are potentially eligible for registration and have indeed been registered, while works that are primarily the product of an AI are not.

https://www.federalregister.gov/documents/2023/03/16/2023-05...


Makes sense. Photoshop has had content-aware-fill for over a decade. That counts as AI as much as any diffuser does. I don't think those images should have their copyright invalidated.


But then, if I generate an image using "AI" and touch it up in Photoshop, is it eligible for copyright again? How much "touch up" do I have to do for it to not be "AI generated"?


This has been problematic for a long time, even before any LLMs. Techniques like photobashing take copyrighted images and modify them into new work. Or even older cases, like the work of Andy Warhol.

What is more extreme here is that there is no human labor involved and no invention. On the other hand, LLMs make this extra tricky because in one way the output they create is objectively unique, but subjectively/culturally it's not.

To answer your question: if you take an AI-generated image and change it enough for it to stand as its own unique thing, you could for sure claim it as your work and it would be eligible for copyright.


> I highly doubt Wizards of the Coast or whoever is going to want their premium products to lose copyright protections, so they'll need to keep paying artists.

WotC's latest round of layoffs (within the past month or so) hit the art staff especially hard.


The main source I've found for the December WotC layoffs is Christian Hoffer's Twitter[1] and only 3 of the ~20 people listed seem to be art staff.

Some of the lists, such as on Reddit, appear to (erroneously?) list a few artists who advertise themselves as still employed by WotC. [2]

[1] https://twitter.com/CHofferCBus/status/1734947730491932929

[2] https://www.reddit.com/r/dndnext/comments/18ij198/list_of_kn...


> status quo in the United States is that AI-generated images are not currently eligible for copyright.

Aren't they? I thought it was just that the copyright holder has to be a recognized legal entity (so, the copyright would have to belong to the human operator or their employer, not to the AI model itself).


From the article:

> Last September, the US Copyright Review Board decided that an image generated using Midjourney’s software could not be copyright due to how it was produced.


A grossly misleading oversimplification.


I do wonder if it's a fruit-of-the-poisoned-tree argument and any AI-derived work can't be copyrighted because it used already dubious source material.


No, it's a Butlerian Jihad[0] argument. The Copyright Office's argument holds even for a fully public domain training set. US copyright law is already speciesist[1] - you can't assign authorship to an animal - so computers are also forbidden from authorship.

[0] In the Dune universe, the "Butlerian Jihad" refers to a legal ban on thinking machines.

[1] https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...


Seems like that only applies if you anthropomorphize the AI and consider it the author, rather than a tool utilized by the artist. I mean, yes, the AI is doing the bulk of the work, but so is photoshop for a lot of digital art.


Photographs can be copyrighted, and all the human did was aim and press a button. Oh, and potentially travel to specific locations, adjust parameters, choose lenses, stage a scene, makeup, wardrobe, lighting, etc. etc. But none of those things are required for the image to be copyrighted


I'm not really sure how this connects to the argument. No one is trying to grant authorship to an algorithm - it would be a ridiculous effort that was never even in the cards. In these copyright disputes, the authorship of AI outputs would rest with the person using the AI. Generative AI takes inputs that are provided by a human and transforms them into certain outputs. Legally speaking, I don't see it as different from me getting protection for something I did in Photoshop - trying to somehow give Photoshop authorship would be absurd.


I agree it's not quite the right argument. IANAL, but I think it's more illustrative to remove AI from the example entirely. If you wrote a prompt and gave it to a human artist to draw, would you have joint copyright over the resulting work? If you didn't do anything worthy of copyright, and the AI cannot be granted copyright, then it is not copyrightable.

That said, it seems like a moot point to me. The practical uses of generative AI are not going to be one-and-done prompt-to-image tools. When AI is used more like a brush, the brush strokes the human chooses will still be granted copyright.


Maybe, but copyright on a remixed song that samples other artists is really difficult to deal with in terms of rights. Unless one party owns the rights to all the samples, it's difficult to negotiate, and you might not get any royalties. AI art is basically just other people's work remixed, so the same problem applies.


This is literally a case of someone making AI art and trying to attribute it to the algorithm.


The Copyright Office wrote their guidance specifically because someone tried to register a comic book they wrote and put AI art in. They specifically credited Midjourney as co-author.

Their guidance would not apply to someone using AI as a tool, but said copyright would be thinner than if you'd drawn everything by hand. Specifically, you don't own any of the things the AI "just came up with". If you just wrote a prompt and grinded out some results, you probably own nothing[0]. If you use shittons of inpainting to control, say, the overall composition, but the AI filled in pixels somewhere, then you probably still own the overall image, but that's only because I'm not sure how you'd separate the two in a way that would let you copy just the AI-generated portion. Or, in the case of the comic book I mentioned earlier, they own the text, characters, and plot of the comic book, but not the artwork.

[0] Yes, you could probably just lie to the Copyright Office. Make sure to never reveal your use of AI to anyone, because there's loads of angry artists who would love to tattle on you.


This is a horrible direction to start off. Especially given that we can't truly tell if an image is AI generated or not. What if I modify an AI generated image?

There's so many technicalities here that can be weaponized.


I think (hypothetically, they may choose not to do this for a number of reasons) WoTC could still dramatically cut back on how much they pay artists by “outsourcing” things like backgrounds, extended art, etc to AI so long as the focal point of the piece is human-created and therefore copyrightable.


Ironically, there's a massive scandal with WotC right now for doing this. They say they aren't using AI, but the majority doesn't believe them due to artifacts in backgrounds that only AI would produce.


One question that is unclear to me is how this works if images are packaged with text or other content. For example: let's say I write a book and then use AI images to illustrate it. It doesn't seem logical to me that somehow the book would be copyrighted but the images inside the book wouldn't be…? At some level, the "package" of images + text would supersede the two things separately. Otherwise you would have a situation where sharing the book is a copyright violation but sharing the images inside of it isn't.


There's nothing particularly contradictory about that: there are already situations where that is the case. For example, when a book contains images which are public domain, or where elements of the book, like the facts within it, are not copyrightable. Another interesting one is tabletop game manuals: the layout and presentation of the rules are copyrightable, but the game mechanics generally aren't. So you can make a book which just contains the rules and not be infringing copyright. Using AI-generated images would be exactly the same situation.


I'm envisioning more of a situation where a company adds text directly to AI-generated images, or otherwise modifies them in a way that prevents them from just being generic images, in the way public domain images are. I really don't think companies will just add images straight from the generator without modifying them in such a way that prevents their easy re-use.


> AI-generated images are not currently eligible for copyright.

There is another bright side to it. These images can be used for AI training without copyright violation. Does this apply to text as well?


Won't they move to trademark protection instead, as it's a lot more flexible with fewer restrictions?

Basically the same way Disney let copyright go but will fight for trademark to the bitter end?


That won't work. Trademarks are a lot harder to establish -- it's not just automatic from the moment you publish it as with copyright. You have to first start using it, then always mention it's a trademark when using it (with the TM or R symbols for example), then wait for it to catch on, then file some paperwork with the government. (IIRC the exact process differs somewhat, but the point is it's a lot more involved.)

Trademark is intended to protect the holder from being impersonated, not from losing revenue from selling content.... So it's a lot easier to redistribute copies of trademarked work as long as you make it clear you are not affiliated with the trademark holder, in a manner which a reasonable person would heed.

So for example, if a piece of art is trademarked by Disney, and it is well known by the public, and I print a copy and put it in front of my shop, a reasonable observer might think my shop is owned, operated, or endorsed by Disney. So that's not OK.

If instead I sell copies of that art in my shop, and make it clear to everyone I sell it to that I am in no way affiliated with Disney and this is totally unauthorized by Disney, I'm probably fine.

Trademarks are also industry specific. That's why Apple Records and Apple Computer both exist -- as long as a reasonable person could not confuse them, it's OK.

In short, trademarks are very very different from copyrights. They protect different activities.

In fact I should not have used the phrase trademarked work. A work (like an image or movie or novel or software program) does not get trademarked. The character, slogan, logo, product name, company name, brand name, color scheme, etc used therein to identify the brand, is what is trademarked. Very different.

I will add one more example, this time to illustrate copyright, which works basically the opposite way: suppose Mickey Mouse were not trademarked. Then, while it would be illegal to redistribute verbatim copies of a recent Mickey Mouse picture authored by Disney, as well as any modified or remixed versions based on that verbatim picture, it would be perfectly legal to draw totally new art involving the same character, as long as it was completely new and made without referring to the copyrighted work. That's because copyright protects Disney's right to make money off distributing the picture they made, and they did not make or contribute to making your Mickey drawing; and while you are using a character they came up with, in the absence of trademark, copyright isn't intended to protect the public from confusion about who they are dealing with, as trademark is.

IANAL; this is based on decades of amateur interest in IP law.


This is not 100% accurate from a trademark perspective, at least with respect to "famous" marks.

Generally speaking, you are correct - unless there is a likelihood of consumer confusion, you are free to use a trademark already used by a senior user.

But marks like Apple and Mickey Mouse, from a trademark perspective, are sufficiently famous that they get special protection. There is a concept called trademark dilution that only applies to sufficiently famous marks. With respect to such marks, a junior user can be liable for use of the mark even if there is no likelihood of confusion.

(BTW: By "senior" user, I means a user that gained trademark rights first and a "junior" user is one that started using the mark in commerce later.)


I don't think you can broadly use trademark protection though, can you?


From [1] "Not every character qualifies for trademark protection, however. For a character to be trademarked, the character cannot be too similar to other existing trademarked characters and must be used to brand products or services. Once a character meets these requirements, the owner can file for trademark protection."

So I don't know if you could apply trademark to e.g., every card in magic, but maybe only to the key characters?

1: https://www.mekiplaw.com/how-to-trademark-a-character-an-eas....


Wow, this is incredibly insightful! I'm completely on board, for whatever that's worth (which is pretty much nothing).

That would really be a great way to structure things.


Does this mean that if, for example, a court rules that I cannot train an image generation model on copyrighted material, I can train it on AI-generated images?


Spry Fox v. LolApps (2012) was a case about a similar visual and thematic remake of a game (Triple Town vs Yeti Town). The Yeti Town developer decided to settle when it became clear that they would likely lose, and the remake is no longer available.

Realistically, of course, it probably doesn't matter since I assume the author doesn't have the resources to fight Firaxis in court over this anyway.


> I assume the author doesn't have the resources to fight Firaxis in court over this anyway.

Bigger corporations than Firaxis have made that same assumption and lost. There was a great story recently about an Adelaide woman who represented herself in a suit against Google for years and finally won: https://www.abc.net.au/news/2023-10-23/janice-duffy-wins-12-...


The Spry Fox fact pattern is distinguishable, I think, given that the defendant had access to privileged IP and had been negotiating to port the app before creating the knockoff after negotiations fell apart.


Dr. Alex Wellerstein has a lot of great writing on this subject on his blog and on the /r/askhistorians subreddit under the username 'restricteddata'.

> The way I see the field these days is a lot of hovering around a "middle" position on the bombs, as opposed to the extreme "ends" of the spectrum ("totally justified, best decision ever" vs. "terrible war crime done just to look tough"). The authors you are quoting, like Zinn (and Kuznik, who is also quoted by another response), are I think pretty anomalous in that they still stake out a hard, confident position on one end of the spectrum. (There are a few who stake out the other end, too.) The "middle" position, what the historian J. Samuel Walker calls "the consensus view", basically says that the bombs were seen as a perfectly fine, if not a little unusual, military decision, that they might not have been solely responsible for the surrender of Japan, and that the use of the bombs in the way they were used (on cities, little spacing, early August) was a mixture of vaguely strategic thinking (no "grand plan" on anyone's part, but people did have some ideas about what you might get out of doing it that way) and complete happenstance (the spacing between the bombs, and the fact that they had two ready to go in early August, depended on external factors that had nothing to do with real strategy).

https://www.reddit.com/r/AskHistorians/comments/3zuffw/some_... has a detailed discussion of the issue.


Transfer pricing is sort of just a pile of black magic. Microsoft's American business is required to charge their European subsidiary some "fair market price" for the Windows intellectual property, but what is a fair price for a license to sell Windows on the European continent? Unlike with fungible goods, there is no established market for this product that the IRS can easily refer to.

In this case Microsoft is saying that some amount of their IP was developed by their overseas subsidiary and thus doesn't need to be included in the transfer pricing calculation, so in their process of making up imaginary amounts of money to charge themselves for their own products they also need to deduct that. If e.g. most of the Windows networking stack was created in Germany they need to figure out what percentage of the value of the overall Windows IP is added by the ability to connect to the internet.
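
To make that concrete with purely made-up numbers (nothing here reflects Microsoft's actual figures), the rough shape of the disputed calculation looks something like this:

    fn main() {
        // All numbers are hypothetical, for illustration only.
        let eu_revenue: f64 = 10_000_000_000.0;   // subsidiary's revenue from selling Windows
        let royalty_rate: f64 = 0.30;             // the disputed "fair market" royalty rate
        let share_developed_locally: f64 = 0.15;  // share of the IP the subsidiary claims it built

        // Royalty owed to the US parent, after carving out locally developed IP.
        let royalty = eu_revenue * royalty_rate * (1.0 - share_developed_locally);
        println!("royalty charged to the subsidiary: ${:.0}", royalty);

        // Shifting either the rate or the local-development share by a few points
        // moves billions of taxable income between jurisdictions.
    }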

Understandably, it's possible for reasonable observers to disagree on the values chosen here, this is not a case of black and white corporate misconduct (tax issues essentially never are).


Is this basically the Double Irish With a Dutch Sandwich?


Double Irish isn't possible as of 2018:

https://en.wikipedia.org/wiki/Double_Irish_arrangement


This is for the time period 2004-2013


Yes it is. Microsoft routes a lot of its business through the Dublin and Belfast offices. At least now they have an actual office and a DC there. Previously, it was literally just a routing office. Apple and Google do the same as well.


A Dutch Sandwich sounds like two dry pieces of bread with a thin slice of cheese. I'd discourage MS from getting one.


There are not many Dutch cheeses, but they are tasty even on dry bread. Best paired with a generous cup of évasion fiscale (tax evasion).


The real crime is not adding enough roomkaas (creamcheese). Seriously, if you've never tried it you're missing out.


Having just spent a month in Utrecht, I can confirm you are 100% right. Your comment made me smile.


Thanks :) Utrecht is a beautiful city. Funnily I have an Albert Heijn sandwich planned for dinner tonight.


This sounds complicated. Does this sort of thing work in reverse for things "developed" in the EU and used in the US?


It can, sort of. It really depends on what you are optimizing for.

There are nearly infinite permutations of how to form your business entities and what combinations of jurisdictions you use,

and then you can structure which one does what operations where.

Additionally, all the countries compete for your business, so they are really competing against each other.

Most recognize that the volume of transactions within their economy, spread across many entities, is more important than passive tax revenue to one governmental entity, so they incentivize the former.


Part of me is like: 25% tariff, blanket, on everything unless there's an exemption. The list of exemptions can then be carefully curated. I don't care if it's 500 pages long for everything from honey to computer chips; but then at least we know that stupid tax stuff is unprofitable.


Part of me is like: why does that particular governmental entity deserve additional payment just because its regulatory environment is uncompetitive in that regard and it can't balance its own budget?


Completely true, but copyright is not a "right" in the sense of human rights; it's a legal construct that we created to produce certain social benefits. And it certainly wasn't my impression that most HN users view the current state of copyright law as an unmitigated positive force in the world.

