Just bought a GL.iNet Puli. It's only 4G, but it seems like a better option if you want to supply internet to devices you move around. I'm planning to use it for setup and management of a headless presentation PC, since that can be connected directly to the LAN port.
I have a mobile 4G router from them and it supports physical eSIM. I even managed to get their suggested card for cheap. Their firmware has some support for setting it up, so you can do that fully on the router.
I have read that people managed to get an eSIM installed on it, but I think there are also physical eSIM options. See https://www.gl-inet.com/solutions/esim/
I've been using Syncthing for a few years myself and it's been great, except for when conflicts occur in my org files, which are the primary thing I use it to keep synced. Perkeep may make that a complete non-issue, though I'm not 100% certain.
Beyond that though, I'm thinking this would be nice for syncing state for a cross-platform app whose instances on different devices all stay reasonably in sync. I'd just need to create a Perkeep client library for the language it's written in (Python).
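Such a client would mostly be a thin wrapper over Perkeep's HTTP blob protocol. As a minimal sketch of the content-addressing half (assuming Perkeep's current sha224-based blobref scheme; the function name and sample data are mine, not from any existing library):

```python
import hashlib

def blobref(data: bytes) -> str:
    """Content-address a blob the way Perkeep does:
    a hash-name prefix plus the hex digest of the bytes."""
    return "sha224-" + hashlib.sha224(data).hexdigest()

# The same bytes always map to the same blobref, which is what
# makes sync conflicts on identical content impossible by design.
ref = blobref(b"* TODO sync my org files\n")
print(ref)
```

The actual upload/stat/sign endpoints would still need to follow the protocol spec on perkeep.org, but the addressing above is the part that makes identical content deduplicate across devices.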
Of course the models are not human, but if you treat this situation as if they were persons, the question becomes: may a person read lyrics and recite them to someone when asked? The court's ruling basically says no, this may not happen, which makes little sense.
I guess the main difference between the situation with language models and humans is one of scale.
I think the question should be viewed like this: if I as a corporation did the same thing, but with humans, would it be legal? Imagine hiring a bunch of people, having them read a bunch of lyrics, and then having them answer questions about those lyrics. If no law prohibits that hypothetical with people, I don't see why it should be prohibited with language models; and if it is prohibited with people, then no AI-specific ruling should be needed.
All this being said, Europe is rapidly becoming even more irrelevant than it was, living off the largesse of the US and China. It's like some uncontacted tribe ruling that satellites can't take aerial photos of them. It's all good and well, just irrelevant. I guess Germany can always go the route of North Korea if they want.
> "May a person read lyrics and tell it to someone when asked"
If you sell tickets to an event where you read the lyrics aloud, it's commercial performance and you need to pay the author. (Usually a cover artist would be singing, but that's not a requirement.)
So it's not like a human can recite the lyrics anywhere freely either.
If someone hires me as a secretary and they ask me what the lyrics of a song are, no law prohibits me from telling them if I know them, and I don't have to license the lyrics in order to do so.
If they hire me primarily to recite lyrics, then sure, that would probably be some manner of infringement if I don't license them. But I feel like the case with a language model is much more the former than the latter.
As soon as you take the LLM output and publicize it, it turns around and is a lot more akin to having your secretary read out the lyrics publicly. If you don't publicize it in any way, how would the copyright holder ever find out?
But the LLM is not advertised as a lyrics DB, and it in no way guarantees that it will reproduce the lyrics accurately; similarly, the copyright holder will never know that it's reproducing the lyrics unless they snoop on my conversations with it or go and ask it directly.
But then with the analogy, if I'm a secretary and the copyright holder of lyrics calls me and asks if I know the lyrics of one of their songs, I don't think it's infringement to say yes and then repeat it back to them.
The LLM is not publicising anything; it's just doing what you ask it to do. It's the humans using it who publicise the output.
> May a person read lyrics and tell it to someone when asked, and the court's ruling basically says no, this may not happen, which makes little sense.
I think the difference here is that your example is what a search engine might do, whereas AI is taking the lyrics, using them to create new lyrics, and then passing them off as its own.
> whereas AI is taking the lyrics, using them to create new lyrics, and then passing them off as its own.
Is this not something every single creative person ever has done? Is this not what creating is? We take in the world, and then create something based on that.