There's already a rate limit on pulls. All this does is make that rate limit more inconvenient by making it hourly instead of allowing you to amortize it over 6 hours.
10 per hour works out to 60 per 6 hours, which is slightly lower than 100 per 6 hours, but not in any meaningful way from a bandwidth perspective, especially since image size isn't factored into these rate limits in any way.
If bandwidth is the real concern, why change to a more inconvenient time period for the rate limit rather than just lowering the existing rate limit to 60 per 6 hours?
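To make the arithmetic concrete, here's a rough sketch of how a bursty client fares under each policy, modeling both as simple fixed-window limits (the numbers are just the ones from this discussion, and the model is mine, not anything official):

```python
# Fixed-window rate limit sketch: roughly the same average allowance,
# very different burst behavior. Numbers are the ones discussed above.

def max_pulls(requested, limit, window_hours, over_hours):
    """Most pulls a client can get through in `over_hours`, assuming it
    spreads them as favorably as the windows allow."""
    windows = -(-over_hours // window_hours)  # ceiling division
    return min(requested, windows * limit)

# A CI job that wants 60 pulls within a single hour:
print(max_pulls(60, limit=100, window_hours=6, over_hours=1))  # 60: fits in one window
print(max_pulls(60, limit=10,  window_hours=1, over_hours=1))  # 10: everything past 10 waits

# The same 60 pulls spread over 6 hours are fine under either policy:
print(max_pulls(60, limit=100, window_hours=6, over_hours=6))  # 60
print(max_pulls(60, limit=10,  window_hours=1, over_hours=6))  # 60, at 10 per hour
```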
> The average user doesn't even recognize that running a website literally cost electricity that must be paid for. Who pays for it? Who will carry the boats?
Running a retail store also has costs associated with it, including, yes, electricity.
Yet if I walk into a store and leave without buying anything, do I feel like I owe the store owner anything?
No. That's not how that works, nor is that how it should work.
There’s a difference between browsing in a shop and reading content online. It’s much more like going in and sitting in a book shop, reading a book and leaving.
No, it's not. It's more like an online bookstore mailing you a copy of a book for free. The only catch is that they also send along a book full of ads, with directions that say: after every page of the book, look at an ad. Then, when I receive the package, I don't even take the ad book out; I read the actual book and send them both back.
You can imagine why Amazon never decided to go with this business model.
In my personal view, this seems a little overbearing.
If you expose an API, and you want to tell a user that they are "unauthorized" to use it, it should return a 401 status code so that the caller knows they're unauthorized.
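For illustration, a minimal sketch of what that might look like (Flask is used purely as an example framework; the endpoint and token check are invented):

```python
from flask import Flask, abort, request

app = Flask(__name__)

def token_is_valid(token: str) -> bool:
    # Placeholder: check a session store, verify a signature, etc.
    return False

@app.route("/api/v1/items")
def list_items():
    token = request.headers.get("Authorization", "")
    if not token_is_valid(token):
        # Say so explicitly instead of silently degrading or blocking the response.
        abort(401)
    return {"items": []}
```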
If you can't do that because their traffic looks like normal usage of the API by your web app, then I question why their usage is problematic for you.
At the end of the day, you don't get to control what 'browser' the user uses to interact with your service. Sure, it might be Chrome, but it just as easily might be Firefox, or Lynx, or something the user built from scratch, or someone manually typing out HTTP requests in netcat, or, in this case, someone building a custom client for your specific service.
If you host a web server, it's on you to remember that and design accordingly, not on the user to limit how they use your service.
The $100 tab paid in dimes causes severe inconvenience to the person trying to count them and to the person who has to take them to the bank to cash them in and wait for them to be counted again.
Their very reasonable question was: if you can't distinguish the reverse engineered traffic from the traffic through your own app in order to block it, then what harm is the traffic doing? Presumably it's flying under your rate limits, and the traffic has a valid session token from a real customer. If you're unable to single it out and return a 4xx, why does it matter where it's coming from?
I can think of a few reasons it might, but I'm not particularly sympathetic to them. They generally boil down to "I won't be able to use my app to manipulate the user into taking actions they'd otherwise not take."
I'd be interested to hear if there are better reasons.
"if you can't distinguish the reverse engineered traffic from the traffic through your own app in order to block it, then what harm is the traffic doing?"
If you really believe this you'll use a custom user agent instead of spoofing Chrome. :-)
Some websites use the HTTP Referer header to block traffic. Ask yourself if any reverse engineer would be stopped by what is obviously the website telling you not to access an endpoint.
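Both the User-Agent and the Referer are just headers the client chooses to send, which is exactly why they only filter honest clients. A quick sketch with the `requests` library (the UA strings and URLs are made up):

```python
import requests

# An honest third-party client identifies itself...
honest = requests.get(
    "https://example.com/api/feed",
    headers={"User-Agent": "my-custom-client/1.0 (contact: admin@example.com)"},
)

# ...while one that wants to blend in just copies whatever the browser sends.
disguised = requests.get(
    "https://example.com/api/feed",
    headers={
        "User-Agent": (
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
            "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
        ),
        "Referer": "https://example.com/feed",
    },
)
```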
I'll add that end users don't have complete information about the website. They can't know how many resources a website has to devote to dealing with reverse engineering (webmasters can't just play cat and mouse with you just because you're wasting their money), nor do they know the cost of an endpoint. I mean, most tech-inclined users run ad blockers when it's obvious 90% of websites pay the cost of their endpoints by showing ads, so I doubt they would respect anything more subtle than that.
The only reason "another client" can exist at all is the limitations of the Internet itself.
If you could ensure that the web server can only be accessed by your client, you would do that, but there is no way to do this that can't be reverse-engineered.
Essentially your argument is that just because a door is open, you're allowed to walk in, and I don't believe that makes any sense.
The argument is that what you call "limitations of the Internet itself" is actually a feature, and an intended one at that. The state of things you're proposing is socially undesirable (and in many cases, anticompetitive). It's hard to extend analogies past this point, because the vision you're describing flies in the face of more fundamental social norms, and of the history of civilization in general.
It's not a limitation of the internet, it's a fundamental property of communication.
Imagine trying to validate that all letters sent to your company are written by special company-provided typewriters and you would run into the same fundamental limits.
Whenever you design any client/server architecture, the first rule should always be "never trust the client," for that very reason.
Rather than trying to work around that rule, put your effort into ensuring that the system is correct and resilient even in the face of malicious clients.
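As a toy illustration of that rule (the data model is invented; the point is only that the server recomputes everything from its own source of truth instead of trusting values the client sends):

```python
# Server-side order handling that never trusts client-supplied prices or totals.
PRICES = {"widget": 500, "gadget": 1200}  # cents, the server's own catalog

def create_order(client_payload: dict) -> dict:
    total = 0
    for item in client_payload.get("items", []):
        sku, qty = item["sku"], int(item["qty"])
        if sku not in PRICES:
            raise ValueError(f"unknown sku: {sku}")
        if not 1 <= qty <= 100:
            raise ValueError("quantity out of range")
        total += PRICES[sku] * qty
    # Any "total" field the client sent is simply never read.
    return {"items": client_payload.get("items", []), "total_cents": total}
```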
> If you really believe this you'll use a custom user agent instead of spoofing Chrome. :-)
Read up on the history of the User-Agent string, and why everyone claims they're Mozilla and "like Gecko". Yes, it's because of all the silly people who, since the earliest days of the WWW, tried to change what they serve based on the contents of the User-Agent header.
Not the greatest example. If someone has incurred a $100 debt to you, then, from a legal perspective, you must consider delivery of a thousand dimes as having paid the debt. You don't get a choice on that without prior contractual agreement.
I think it's the greatest example because it's something you're technically allowed to do but that you obviously shouldn't do because you're wasting other people's resources.
This is not an accurate reading of the code. Snopes quotes an FAQ on the US Treasury site (now missing, but presumably still correct) [0]:
> Q: I thought that United States currency was legal tender for all debts. Some businesses or governmental agencies say that they will only accept checks, money orders or credit cards as payment, and others will only accept currency notes in denominations of $20 or smaller. Isn't this illegal?
> A: The pertinent portion of law that applies to your question is the Coinage Act of 1965, specifically Section 31 U.S.C. 5103, entitled "Legal tender," which states: "United States coins and currency (including Federal reserve notes and circulating notes of Federal reserve banks and national banks) are legal tender for all debts, public charges, taxes, and dues."
> This statute means that all United States money as identified above are a valid and legal offer of payment for debts when tendered to a creditor. There is, however, no Federal statute mandating that a private business, a person or an organization must accept currency or coins as payment for goods and/or services. Private businesses are free to develop their own policies on whether or not to accept cash unless there is a State law which says otherwise. For example, a bus line may prohibit payment of fares in pennies or dollar bills. In addition, movie theaters, convenience stores and gas stations may refuse to accept large denomination currency (usually notes above $20) as a matter of policy.
I specifically said "incurred a ... debt" and "without prior... agreement". As your source says:
> In short, when a debt has been incurred by one party to another, and the parties have agreed that cash is to be the medium of exchange, then legal tender must be accepted if it is proffered in satisfaction of that debt.
You are correct that if cash is not accepted at all, or if payment is to happen ahead of the exchange of goods or services, you are not obligated to accept arbitrary cash.
No, your claim is backwards—if the parties have agreed that dimes are valid payment of debt then that agreement must be upheld. Absent a prior agreement to accept dimes, the party receiving the money may refuse any combination of currency that they see fit.
In other words, an agreement isn't required in order to refuse legal tender, an agreement would be required to make it mandatory.
A court might decide that an agreement to accept cash without specifying in what form was meant to include dimes, but I see no evidence anywhere that a court has to rule that way if the contextual evidence suggests something else was probably meant.
"legal tender" is a term of art that specifically means a creditor must accept it, and your big quote clearly supported this, discussing only the common misconception that legal tender means it can't be refused in an offer to purchase. You are arguing backwards yourself.
The law says that coinage is valid legal tender for an offer to settle a debt but the counterparty is not required to accept it... unless they contractually agreed to do so.
Only the U.S. government is required to accept payment in coins. Many states also require their agencies to accept payment in coinage but some have laws limiting the size of debts that can be paid this way.
That link is, again, about the difference between offers to purchase and offers to settle debts.
Think logically about this. What do you think legal tender even means otherwise? Why would you need a special term to denote a form of payment that a creditor can accept if they want to? I could accept settlement in jelly beans if I wanted to. The entire point is that you must accept legal tender, that is what makes it different from everything else.
I think the key is that these are trademarked for a particular domain. If I tried to sell software as Alphabet I'd be shut down, but my kid's teacher doesn't need a license to teach the alphabet.
If all that's missing is 'a nasm compatible assembler', did they try just swapping it out for nasm, which seems to have a readily available Alpine package?
No, the piracy part comes in when that dumped game is distributed, and guides are made so that even the most computer-illiterate people are able to play Nintendo games free of charge.
Personally, I don't think an emulator or ROM dump should be banned. However, I cannot deny that these exist primarily to pirate games. In the long run, I think paying customers will feel stupid for spending money when other people aren't, so they'll stop too. Eventually, it will get to a point where Nintendo can't make a profit.
I think if you love the games, which I personally do, the moral thing to do is pay so that those games can continue to be made. But that's my moral, not legal, assessment.
No, my argument is that the information on the web is about how to pirate games, no matter how it is couched in the tool documentation.
The case for homebrew is in the homebrew software that is available, and all of the homebrew software that I have ever seen is absolute shite. Toy programs and simple SDK test tools, nothing of value other than the 3rd party SDKs themselves.
It does not matter if you make a legitimate backup copy of a cart you own for safekeeping; emulation of legitimately owned copies of retail games is not an exemption under the DMCA.
It doesn’t matter if you own a copy of the game, making a copy for any reason is not in accordance with the DMCA, as far as I’m aware. Exemptions to the DMCA are granted every few years, and some exemptions are rescinded at the same time. Copying game cartridges has never been an exemption.
And even if it was, you can’t put your copy back onto a legitimate blank cartridge to regain playability if the original is destroyed.
It’s a shitty situation to be sure, and it is wholly unfair. Blame gamers who are “morally opposed” to paying for games that they play. There are a lot of them, and they play a lot of games, and are often popular streamers on YouTube and Twitch.
If people stopped pirating games so much, the homebrew and legitimate use people would have a solid defense and maybe even support in government, but the amount of piracy that goes on absolutely dwarfs legitimate uses of unlocked hardware.
I personally am fascinated with Nintendo hardware and the choices made when they design their systems, and despite repeated efforts to get a Switch dev kit, I have been denied approval time and time again. I have no interest in piracy, I have interest in hardware platforms. But I am in the extremely small minority with that focus.
If piracy slows dramatically, Nintendo won't be able to do this with impunity like they do today. They will simply not have a leg to stand on when they say emulators are purely piracy mechanisms. But today, they really are.
How many new games come out for the SNES every year? How many SNES emulators are there under active development? Are you going to say that all of those emulators and all of that time spent making them and perfecting them, making them cycle-perfect is done so that 1-2 games can come out every 1-2 years? EMULATORS ARE PRIMARILY USED FOR PIRACY.
Until that changes, Nintendo will keep doing this.
This is not a piracy issue. Please stop framing it as such.
The primary use case is to run all your games on a single device.
Nintendo want to lock customers into their ecosystem rather than competing on game quality alone.
They know if you have a Switch then you will likely buy other Switch games, etc. If you buy and play Mario Kart on your PC, then you are much less likely to invest in the rest of the ecosystem.
We should carve out legal provisions for emulators and circumventing DRM.
Nintendo can then still go against individual people pirating software.
I don't think emulators or ROM dumps should be banned. However, it is a piracy issue because in the real world these are used almost exclusively for piracy.
I think a "know nothing" type argument is very weak.
It is absolutely a piracy issue. Nintendo are using the DMCA to fight piracy.
It is extremely cut and dried in their eyes: emulation = piracy.
Mario Kart exists on many non-Nintendo platforms legitimately already. The existence of Mario Kart in the arcade or on mobile devices brings people into the Nintendo ecosystem rather than drawing them out of it.
It is a waste of time to fight individuals downloading games when the tools of emulation exist out in the open. That is why Nintendo are going after emulators themselves; at the current time, emulators are the big, easy wins.
Even if that is true (and I guess for that you'd have to classify downloading abandonware as piracy): Valve founder Gabe Newell famously said that piracy is a "service issue".
So if you give emulator users the option of playing or buying legitimate copies without jumping through hoops, then piracy rates will drop.
I don't understand how you consider the Nintendo Switch to be abandonware.
Nintendo aren't taking down SNES emulators. They're not taking down GameCube emulators, they're taking down Switch emulators.
The emulation community (again, mostly pirates) have zero chill. The lesson that they desperately need to learn here is this: Do not bite the hand that feeds you.
Emulating latest generation systems and games that are currently for sale for that system is biting the hand that feeds you.
> I don't understand how you consider the Nintendo Switch to be abandonware.
I don't? My comment was on the primary use of emulation.
And Nintendo Switch is not the predominant console that is emulated. In fact Switch emulation runs only on relatively modern systems, not on Android and not on the myriad of cheap emulation hardware that is sold on AliExpress.
You aren’t licensed to play that game on a PC or in an emulator. It doesn’t matter if you paid for the game and paid for the Nintendo Switch to play it on. If you used a hacked Switch to dump the console or you downloaded the ROM, that’s piracy. It is not legal to circumvent the DMCA for fair use reasons.[1]
It is very cut and dried in the legal world, and it would take a very significant case and a few appeals which uphold a decision that changes how the DMCA is interpreted in the courts.
If you’re not in the US and not a US citizen, then I have no idea what laws apply to you or how they are interpreted.
> If you’re not in the US and not a US citizen, then I have no idea what laws apply to you or how they are interpreted.
I live in the EU, where consumers are authorized by Directive 2009/24/EC to reproduce and translate computer programs on other systems in order to achieve interoperability.
But even in the US there is the Sony v. Connectix precedent that creation of interoperable products is compliant with the DMCA and the anti-circumvention provisions do not apply to them.
Also it doesn’t matter if something is “abandoned.” It’s still got an owner, and pirating that thing is still against the law.
The law is what matters when discussing legal matters. Nothing else has any meaning at all. Law and precedent are the only things lawyers care about.
If Nintendo wanted to go from a company that is tolerated to a company that is beloved, they would stop this, but they don’t. They are happy to be hated if it stops piracy of their games, clearly.
Computers are primarily used for copying data, and thus by extension are perfect piracy machines. Should Nintendo go on an epic crusade to ban computers because many people use computers for piracy?
It doesn't help that the copyright system is heavily unfavorable for the common folk and society as a whole. Some pirate as a workaround for, or in protest of, the current draconian copyright rules.
Morally there is an argument to be made for recent works against piracy, but why should we care about old stuff that arguably should have already entered public domain like SNES games from the early 90s?
Yes, and in the case of older games it is incredibly difficult for a company to prove any kind of damages against an individual since they no longer make the games available for sale.
I would imagine the games don't disappear from existence the very moment a new console is released.
The reality is that emulating and dumping current games is almost exclusively used for piracy purposes. Naturally, this isn't true for something like the N64.
It also happens to be the reality. I won't entertain delusions that switch emulation is not primarily used for piracy.
I mean really, archival? The games are in every Walmart, Target, and GameStop in the country. Within a 5-mile radius of you at any given time there's dozens of games. Get real.
Pirates never openly admit they are pirating. What, everyone with an emulator is an archivist? Give me a break.
What does an emulator have to do with preservation, anyway? Nothing. You don’t need an emulator to preserve a game. You need an emulator to play a game that you can’t get the hardware for anymore, and the Switch is very much still on store shelves, so the “archiving” excuse evaporates as soon as it is uttered.
Pirates will however say they are doing all kinds of legitimate things in order to keep pirating. I include all those fools who pirate everything they play as a matter of principle, as if that is a legitimately defensible position in reality.
It is funny that when you replace EMULATORS with GUNS and PIRACY with INJURING AND KILLING, the very same people advocating for bans on emulators are the ones arguing that guns are for protection and not to harm others.
But all emulator users combined don't harm humanity nearly as much as a single gun owner does.
How did you reach the conclusion that there's a strong correlation between being a Corporate IP Rights Diehard and a Gun Rights Diehard? Was there some study that showed a link between supporting the individual right to own a firearm and supporting bans on individuals owning software emulators? Perhaps the NRA or the 2AF filed an amicus brief in support of Nintendo's fights against emulators recently? Or the other way around, maybe? Did Nintendo of America file a motion in favor of striking down some firearms laws?
> At the risk of sounding judgemental, I think this preference for always squashing PRs comes from a place of either not understanding atomic commits, not caring about the benefits of them, or just choosing to be lazy. In any case, the loss of history inevitably comes at a cost of making reverting and cherry-picking changes more difficult later, as well as losing the context of why a change was made.
1) Why are you ever reverting/cherry-picking at a more granular level than an entire PR anyway? The PR is the thing that gets signed-off on, and the thing that goes through the CI build/tests, so why wouldn't that be the thing kept as an atomic unit?
2) I don't think I've ever cared about the context for a specific commit within a PR once the PR has been merged. What kind of information do you expect to get out of it?
Edit: How does it remove the context for a change or make `git bisect` useless? How big are your PRs that you can't get enough context from finding the PR commit to know why a particular change was made?
> The PR is the thing that gets signed-off on, and the thing that goes through the CI build/tests, so why wouldn't that be the thing kept as an atomic unit?
Because it often isn't. I don't know about your experience, but in all the teams I've worked in throughout my career the discipline to keep PRs atomic is almost never maintained, and sometimes just doesn't make sense. Sometimes you start working on a change, but spot an issue that is either too trivial to go through the PR/review process, or closely related to the work you started but worthy of a separate commit. Other times large PRs are unavoidable, especially for refactorings, where you want to propose a larger change but the history of the progress is valuable.
I find conventional commits helpful when deciding what makes an atomic change. By forcing a commit to be of a single type (feature, fix, refactor, etc.) it's easier to determine what belongs together and what not. But a PR can contain different commit types with related changes, and squashing them all when merging doesn't make the PR itself atomic.
> I don't think I've ever cared about the context for a specific commit within a PR once the PR has been merged. What kind of information do you expect to get out of it?
Oh, plenty. For one, when looking at `git blame` to determine why a change was made, I hope to find this information in the commit message. This is what commit messages are for anyway. If all commits have this information, following the history of a set of changes becomes much easier. This is helpful not just during code reviews, but after the merge as well, for any new members of the team trying to understand the codebase, or even the author themself in the future.
> I find conventional commits helpful when deciding what makes an atomic change.
I already know if I’m doing a fix, a refactor, a “chore” etc. Conventional commits just happen to be the ugliest way you can express those “types” in what looks like English.
Yeah, well, that's just, like... your opinion, man.
But I've worked with many, many developers who don't strictly separate commits by type this way. I myself am tempted to do a fix in the same commit as a refactor many times. Conventional commits simply suggest, well, a convention for how to make this separation cleaner and more explicit, so that the intent can be communicated better within a team. I've found this helpful as a guide for making atomic changes. Whether or not you write your commit messages in a certain way is beside the point. But let me know if you come up with a prettier way to communicate this in a team.
> Because it often isn't. I don't know about your experience, but in all the teams I've worked in throughout my career the discipline to keep PRs atomic is almost never maintained, and sometimes just doesn't make sense. Sometimes you start working on a change, but spot an issue that is either too trivial to go through the PR/review process, or closely related to the work you started but worthy of a separate commit. Other times large PRs are unavoidable, especially for refactorings, where you want to propose a larger change but the history of the progress is valuable.
In my experience at least, PRs are atomic in that they always leave main in a "good state" (where good is pretty loosely defined as 'the tests had to pass once').
Sometimes you might make a few small changes in a PR, but they still go through a review. If they're too big, you might ask someone to split it out into two PRs.
Obviously special cases exist for things like large refactoring, but those should be rare and can be handled on a case by case basis.
But regardless, even if a PR has multiple small changes, I wouldn't revert or cherry-pick just part of it. Just do the whole thing or not at all.
> Oh, plenty. For one, when looking at `git blame` to determine why a change was made, I hope to find this information in the commit message. This is what commit messages are for anyway. If all commits have this information, following the history of a set of changes becomes much easier. This is helpful not just during code reviews, but after the merge as well, for any new members of the team trying to understand the codebase, or even the author themself in the future.
Yeah but the context for `git blame` is still there when doing a squash merge, and the commit message should still be relevant and useful.
My point isn't that history isn't useful, it's that the specific individual commits that make up a PR don't provide more useful context than the combined PR commit itself does.
I don't need to know that a typo introduced in iteration 3 of a PR was fixed in iteration 5 of feedback. It's not relevant once the PR is merged.
I don't have time to reply to all your points, but regarding this:
> I don't need to know that a typo introduced in iteration 3 of a PR was fixed in iteration 5 of feedback. It's not relevant once the PR is merged.
I agree, these commits should never exist beyond the PR. But I go back to my point about atomic commits being misunderstood. It's not about keeping the history of _all_ changes made, but about keeping history of the most relevant ones in order to make future workflows easier. A few months from now you likely won't care about a typo fix, but you will care about _why_ some change was made, which is what good commits should answer in their commit message. Deciding what "atomic" truly means is often arbitrary, but I've found that making more granular commits is much more helpful than having fewer and larger commits. The same argument applies to the scope and size of PRs as well.
> Why are you ever reverting/cherry-picking at a more granular level than an entire PR anyway? The PR is the thing that gets signed-off on, and the thing that goes through the CI build/tests, so why wouldn't that be the thing kept as an atomic unit?
The PR could contain three refactoring commits, one whitespace cleanup commit, and one main change. The only thing that breaks anything is the main change, because the others were no-ops as far as behavior is concerned.
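On the `git bisect` point raised earlier: bisect is essentially a binary search over whatever commits exist, so its answer can never be more precise than one commit. A toy model of that (the commit labels are invented):

```python
# Toy model of bisect: binary search for the first bad commit,
# assuming that once the breakage lands, every later commit is also bad.

def first_bad(commits, is_bad):
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(mid):
            hi = mid
        else:
            lo = mid + 1
    return commits[lo]

# Granular history for one PR: the third commit introduces the bug.
history = [
    "refactor: extract helper",    # fine
    "style: whitespace cleanup",   # fine
    "feat: change lookup logic",   # breaks the tests
    "refactor: rename module",     # still broken
]
print(first_bad(history, lambda i: i >= 2))  # -> "feat: change lookup logic"
# After a squash merge this whole list collapses into one commit, so the best
# bisect can tell you is "it was somewhere in that PR".
```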
Yeah, because no one on Linux or Mac would clone a git repo they just found out about and blindly run the setup scripts listed in the readme.
And no one would pipe a script downloaded with wget/curl directly into bash.
And nobody would copy a script from a code-formatted block on a page, paste it directly into their terminal and then run it.
I'm not going to go so far as to claim that these behaviors are as common as installing software on Windows, but they are still definitely common, and all of them could lead to the same kinds of bad things happening.
I would agree this stuff DOES happen, but typically in development environments. And I also think it's crappy practice. Nobody should ever pipe a curl into sh. I see it in docs sometimes and yes, it does bother me.
I think though that the culture of robust repositories and package managers is MUCH more prominent on Mac/iOS/Linux/FreeBSD. It's coming to Windows too with the new(er) Windows store stuff, so hopefully people don't become too resistant to that.
A developer is much more likely to be able to fix their computer and/or restore from a backup than a typical user is. A significant problem is cascading failures, where one bozo installing malware either creates a business problem (e.g. allowing someone to steal a bunch of money) or is able to disable a bunch of other computers on the same network. It is not that common for macOS to be implicated in these sorts of issues. I know people have been saying for a long time that it’s theoretically possible but it really doesn’t seem that common in practice.
Is this the first Summoning Salt video you've seen?
I don't know enough to say that he doesn't use an LLM during his writing process, but I do know that I haven't noticed any appreciable difference between his newer videos and ones that were released before ChatGPT was made available.
Is it possible that this is just the way he chooses to write his scripts that you interpret as sounding like they are written by an LLM?
I've watched most of them actually. It's a really great channel. Notably, I watched his Mike Tyson video released 6 months ago and didn't notice anything like this.
The only way to be sure would be to ask him directly, but some parts of the video set off my GPT radar _hard_. I tried to find them now by watching random segments, but all of the ones I checked were fine. It was probably inaccurate for me to say "sheer amount" or "clearly", but that's the impression I was left with after the video.
To clarify: I don't think he even took any information from an AI, it's just the style of the script that's iffy.
To be fair, if you've seen one Summoning Salt video, you've basically seen them all. They all cover similar events and are structured the same way. Even the music is recycled from video to video, to the point where mentioning HOME - Resonance is part of the joke.