Hacker News | jpalawaga's comments

They basically did the same thing, which is why there was a lawsuit. It's also why Google preemptively offered users free money: it was informed by the lawsuit against Apple for the same shit.


The purpose of the throttling was to keep the phone from shutting off when the battery was old. That feature is still in iOS.


Copyright law stipulates the conditions under which content can be reproduced, not the conditions under which it can be consumed.

Arguably the material has been learned and not copied. Maybe in some cases learned with an uncanny ability or photographic memory, but learned. (People with photographic memories also cannot reproduce content in an unlimited fashion).


Learned!

There's nothing special about an LLM, there's no learning, and they regurgitate verbatim text too.

May as well say curl + images in a db are learned as well, so I can use Mickey Mouse as I please in my php web page.


While learned is probably not the best word to use as far as describing the legality goes, I also don’t think “copied” is the right word.

Let’s say that the model “is influenced by” the copyrighted material. That seems hard to argue against.

So, now that we aren’t using the word “learned”, why would we say that the way the models are influenced by the copyrighted works that appear in the “training set” (not to imply that “training” in the usual sense is happening) counts as a copyright violation?

Or, perhaps the claim is that the outputs of the model are violating copyright?

If the output is substantially similar to some particular copyrighted work that is in the “training set”, and could work as a substitute for that work, and if the output resembling the work is in part due to the influence that the work had on the model, I think in this case it would be a clear case of violation of copyright.

However, if it doesn’t have substantial similarity to any particular copyrighted work that influenced it, only similarity to the style common to many of the works that influenced it (even if all by the same author), my impression is that this would not constitute copyright infringement because styles are not protected by copyright, only individual works.

(Now, is this unjust, in the case of it copying the style of some particular author/artist? Idk, maybe? But my impression is that copyright doesn’t protect styles, and that it probably shouldn’t protect styles in general… so I guess maybe if we had a law making a special case forbidding the (deliberate?) copying of a person’s particular style via some kind of machine learning model? Idk.)


The argument is that the LLM itself is essentially like a complex lossy archive of its entire training set. It's like an mp3 of all of the songs on Spotify, in some sense (of course, using all text on the internet instead of all songs on Spotify). This is the sense in which it is considered to be a copy of all of this.


Very insightful. Consider that there are many "recreational imitators" who mimic how specific (famous) people speak. They are not violating copyright; they are just imitating a way of speaking.


I don’t think this is a good argument because the way of speaking isn’t a copyright issue. I don’t think you have copyright on your specific way of speaking, only on specific recordings of you yourself speaking.


I think I'm saying the same thing. Style is not something that can be copyrighted, though I hadn't thought much about it before. I'm not a lawyer. I guess trademarks, and perhaps design patents, are mechanisms that let you have IP over style. :-)


> There's nothing special about an LLM, there's no learning

The model is 100-500x smaller than its training set. That alone hints at learning, since direct storage is impossible.
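A back-of-the-envelope check on that ratio. The figures below are illustrative assumptions (a 70B-parameter model trained on ~10 trillion tokens), not numbers from this thread:

```python
# Rough estimate: how much larger is the training set than the model itself?
# All figures are assumptions for illustration, not measured values.
train_tokens = 10e12      # ~10 trillion tokens of training text
bytes_per_token = 4       # ~4 bytes of raw text per token
params = 70e9             # a 70B-parameter model
bytes_per_param = 2       # fp16 weights

train_bytes = train_tokens * bytes_per_token
model_bytes = params * bytes_per_param
ratio = train_bytes / model_bytes
print(f"training set is ~{ratio:.0f}x the model's size")  # → ~286x
```

Under these assumptions the training data is a few hundred times the model's size, which is why verbatim storage of everything is ruled out, even if fragments can still be memorized.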


Video compression ratios vs raw video is amazing too, but there's no learning and there's no doubt that the compressed form is subject to copyright.


It's worth noting that 'taking a proposal seriously' doesn't equate to accepting it.


Something here is not quite right. For example, if you look at the 7th Ave line (red line/1,2,3), 18th Street and 14th Street have similar radii.

But anyone knows that 14th Street is much more convenient because it's triply serviced by the 1,2,3, with the 2,3 being express trains. 18th Street is serviced by one train that runs locally. This ignores that you could easily switch to the L, A/C/E, or F/M with a short walk.

Still, a cool visualization to show the power of mass transit, and even compare relatively between lines.


Does it include the express trains? (Those trains that skip a bunch of stops on certain lines so they can go faster, like the 2.)

I couldn't figure that out. I remember when I was in NY those trains really moved.

https://mta.info/map/5346


I think it includes transfers and express lines. I noticed if I click on a local stop that meets an express line somewhere, the express line stops are still quite a bit darker than other stops.

That's likely why 18th and 14th street are so similar: It only takes a minute or two to get from 18th st -> 14th st.


I see what you mean. According to this map, you can get from the 18th Street station to the 1st Ave stop on the L in ten minutes, but not from 14th Street.


Time of day / day of week?

I got screwed yesterday waiting 23 minutes for an F when normally I'd wait 5 minutes or less.


It does. New York's system is one of only a handful in the world that operates all (or nearly all) lines 24/7.


The level of redundancy in the NYC subway is marvelous. In most of the densest areas, you have 4-track lines and often other lines within a mile. It makes it possible to do maintenance while still offering 24/7 service.


If you have a 2-track line, you can close one of the tracks for maintenance on weekday nights when frequency doesn't need to be very high. That's how Copenhagen does it.


You just burn the hydrogen and receive h2o in return.


And a slightly more complex version of that cycle plays out when the body burns carbohydrates (literal meaning: molecules of carbon and water). Water is being split and re-combined all the time in biology.
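Written out as balanced equations (standard chemistry, not from the comment itself), hydrogen combustion and the combustion of a carbohydrate such as glucose both release water:

```latex
% burning hydrogen yields water:
2\,\mathrm{H_2} + \mathrm{O_2} \rightarrow 2\,\mathrm{H_2O}
% burning glucose also yields water, alongside CO2:
\mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \rightarrow 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}
```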


You do realize health insurers have federally mandated caps on their profits, which simply incentivizes creative accounting to make money in more oblique ways, right?


They're exiting the market because the states have limits on how much premiums can increase y/y. The risk modeling (which is turning out to be right) says premiums are a fraction of what they should be. So, unable to raise premiums, the companies just leave.

Rock, meet hard place.


California's building codes are the same. Three problems: the overhaul takes generations, monster firestorms will still burn resistant materials, and brush upkeep is difficult.


GC emulation wasn't emulation; it was done with a separate chip. It was more like native support. Eventually Nintendo removed that chip and backward-compatibility support from the console.

(So, even if you could put a GC disc in, the console no longer had the capability to play the game natively.)


It sounds like you're confusing the Wii's backwards compatibility with the PS3's. The Wii didn't have a separate "GameCube chip", its core was effectively an overclocked GC.


https://youtu.be/meZA9KHkFuY?si=5xrsSjNxKLxLnd6J

He explains it quite well. Sorry, it's in German, but the information about the chips and the reasons Nintendo chose them should be all over the net.

