Many people using yt-dlp have a YouTube account or even an AdSense account. Yes, YouTube could ban their partner for breaking the rules. YouTube has issued one-year temporary bans from watching videos to accounts that have downloaded videos. Similarly, Honey could be banned for breaking the rules.
I've had YouTube Premium since it was introduced and I still use yt-dlp because it's the most convenient way for me to make an MP3 from a video so I can listen to it offline. I don't think they care. They would probably care about the music industry getting worked up about it if it were more mainstream, though. This is annoying because I don't even download music, just podcasts and interviews.
They care quite a bit, yt-dlp has had to undergo some drastic changes recently to make it faster for its devs to work around frequent changes to YouTube encryption.
"The few dozen people I killed pale in comparison to the thousands of people that die in car crashes each year. So society should really focus on making cars safer instead of sending the police after me."
Just because two problems cause harm at different scales doesn't mean the lesser problem should be dismissed. Especially when the "fix" for the lesser problem can be as simple as "stop doing that".
And about water usage: not all water, and not all uses of water, are equal. The problem isn't that data centers use a lot of water, but which water they use and how.
> The few dozen people I killed pale in comparison to the thousands of people that die in car crashes each year. So society should really focus on making cars safer instead of sending the police after me.
This is an irrelevant analogy and an absolutely false dichotomy. The resource constraints (police officers vs. policy-making to reduce traffic deaths vs. criminals) are completely different and not in contention with each other. In fact, they're complementary.
Nobody is saying the lesser problem should be dismissed. But the lesser problem also enables cancer researchers to be more productive while doing cancer research, obtaining grants, etc. It's at least nuanced. That is far more valuable than alfalfa.
Farms also use municipal water (sometimes). And the cost of converting more ground or surface water into municipal water is less than the relative cost of the ~40-150x greater water usage...
The claim of linear runtime is only true if K is independent of the dataset size, so it would have been nice to see an exploration of how different values of K impact results. I.e., does clustering get better for larger K, and if so, by how much? The values 50 and 100 seem arbitrary, and even suspiciously close to sqrt(N) for the 9K dataset.
To clarify: K is a fixed hyperparameter in this implementation, strictly independent of N. Whether we process 9k points or 90k points, we keep K at ~100.
We found that increasing K yields diminishing returns very quickly. Since the landmarks are generated along a fixed synthetic topology, increasing K essentially just increases resolution along that specific curve, but once you have enough landmarks to define the curve's structure, adding more doesn't reveal new topology… it just adds computational cost to the distance matrix calculation.
Re: sqrt(N): That is purely a coincidence!
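To make the linearity argument concrete, here's a minimal Go sketch of a landmark distance matrix (hypothetical illustration, not the implementation under discussion): with K held fixed, the work is O(N*K), i.e. linear in N, and raising K only grows the constant factor of the distance matrix calculation.

```go
package main

import (
	"fmt"
	"math"
)

// distancesToLandmarks computes an N x K matrix of Euclidean distances
// from each of N data points to each of K landmarks. With K fixed as a
// hyperparameter, the total cost is O(N*K*d): linear in N.
// (Hypothetical sketch, not the code from the article.)
func distancesToLandmarks(points, landmarks [][]float64) [][]float64 {
	dists := make([][]float64, len(points))
	for i, p := range points {
		row := make([]float64, len(landmarks))
		for j, l := range landmarks {
			var sum float64
			for d := range p {
				diff := p[d] - l[d]
				sum += diff * diff
			}
			row[j] = math.Sqrt(sum)
		}
		dists[i] = row
	}
	return dists
}

func main() {
	points := [][]float64{{0, 0}, {3, 4}}
	landmarks := [][]float64{{0, 0}, {6, 8}}
	fmt.Println(distancesToLandmarks(points, landmarks))
	// [[0 10] [5 5]]
}
```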
The compiler also tells you that even if you cover all enum members, you still need a `default` to cover everything, because C enums allow non-member values.
`yield` being a function that is passed into the iterator seems like suboptimal design to me. Questions like "What happens if I store `yield` somewhere and call it long after the loop ended?" and "What happens if I call `yield` from another thread during the loop?" naturally arise. None of this can happen with a `yield` keyword like in JavaScript or C#. So why did the Go team go for this design?
> Questions like "What happens if I store `yield` somewhere and call it long after the loop ended?" and "What happens if I call `yield` from another thread during the loop?" naturally arise.
Nothing special. `yield` is just a normal function. Once you realize this, it actually is very easy to reason about. I just think the naming is confusing. I think about it as `body`.
> Questions like "What happens if I store `yield` somewhere and call it long after the loop ended?" and "What happens if I call `yield` from another thread during the loop?" naturally arise.
The fact that you can store `yield` somewhere allows for more flexible design of iterator functions, e.g. (written in a hurry as a proof of concept so will panic at the end): https://go.dev/play/p/QpVYmmC6g5b?v=gotip
Those hairy details may be hard to remember (or even decide), but they won't matter for most users - most users will just use `yield` in the simplest way, without storing it or calling it from another goroutine.
It is an explicit goal of the Go team to minimize the number of keywords in the language. Simple languages have fewer keywords, so Go must have few keywords. https://go.dev/ref/spec#Keywords
Look how simple that is.
This is why things like ‘close(channel)’ are magic builtin functions, not keywords (more complicated) or a method like ‘channel.Close’ (works with interfaces and consistent with files and such, so not simple).
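For reference, a small self-contained sketch of the builtin in action:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 2)
	ch <- 1
	ch <- 2
	close(ch) // a builtin function, not a keyword and not a method

	// Ranging over a closed channel drains the buffer, then stops.
	for v := range ch {
		fmt.Println(v)
	}

	// Receiving from a closed, drained channel yields the zero value,
	// with ok == false.
	v, ok := <-ch
	fmt.Println(v, ok) // 0 false
}
```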
Languages where ‘yield’ is a keyword use a fundamentally different design (external vs internal iteration). I don’t think it’s plausible that the Go team rejected this design because it would require another keyword. They presumably rejected it because of the additional complexity (you either need some form of coroutines, or the compiler needs to convert the iterator code to a state machine).
> It is an explicit goal of the go team to minimize the number of keywords in the language.
It's understandable - because unfortunately people judge languages by very shallow metrics. Several times I've seen people use "number of keywords" as a proxy for language complexity.
However, that's completely misguided. `static` in C++ (and, IMO, `for` in Go) demonstrate that overloading a keyword to mean multiple things is harder to understand than having a larger number of more meaningful keywords.
From my vantage, pointers without arithmetic are typically called references, as opposed to "true" pointers. I did not mean it in a derogatory way; both have their place, and Go is a great language even without (or maybe despite lacking) what I would call "true" pointers.
The fact that, in Go, you can have pointers to pointers, and reassign pointer variables like any other, would imply, IMO, that pointers are first-class values, and so they are true pointers, even without being able to do pointer arithmetic with them.
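A quick sketch of that point:

```go
package main

import "fmt"

func main() {
	x, y := 1, 2
	p := &x  // pointer to int
	pp := &p // pointer to pointer: pointers are ordinary values
	*pp = &y // reassign p through pp, like any other variable
	fmt.Println(*p) // 2

	// What Go deliberately omits is the arithmetic:
	// p++ or p + 1 won't compile.
}
```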
That's cheating for dev-rel marketing. And it's contradictory, because many more keywords could be (magical or normal) functions, like in some other languages.
1. If you are first to market and still can't make money off your amazing invention, that might be a skill issue.
2. Patents wouldn't be as forceful if they didn't last that long. A decade or more is basically forever in a fast-moving field like tech.
The patent system certainly needs reform, but I think more along the lines of what gets accepted as a patent. Discovering what I would describe as a 'natural law' should not be patentable (though I think it happens every day), and those ideas should not be kept from human progress, imho. There's a line between research paper and patent that I believe is blurred for profit.
But a true invention, a novel use of those laws, should be patentable. Are you saying that if you discover a novel use of natural laws, a product that could be capitalized on, your own unique idea, you should not be able to capitalize on it? Maybe this would work in a Trek economy, but not under capitalism.
If you're worried about innovation: how innovative could we be if discoveries and inventions were squandered because there are no protections if you happen to even mention your idea to someone?
I mean the patent is public information. If you want to have a go at selling it, buy it or license it from me and have at it. Otherwise, invent your own idea or wait for mine to expire.
Not really. `import type` just means that the import is guaranteed to be removed when compiling to JS. TS already fully erases types during compilation; this is simply a way to guarantee that imports of types are erased as well.
E.g. for `import { SomeType } from "backend-server"`, you don't want to include your entire server code in the frontend just because you imported the response type of some JSON API. `import type` neatly solves this issue, and it even enforces that you can't import anything but types from your backend.