> After a candidate's defeat in an election, you will be supplied with the "cause" of the voters' disgruntlement. Any conceivable cause can do. The media, however, go to great lengths to make the process "thorough" with their armies of fact-checkers. It is as if they wanted to be wrong with infinite precision (instead of accepting being approximately right, like a fable writer).
> they've gone from barely stringing together a TODO app to structuring and executing large-scale changes in entire repositories in 3 years.
No they didn't. They're still at the step of barely stringing together a TODO app, and mostly because it's as simple as copying the gazillionth TODO app from GitHub.
I’ve used Copilot recently in my work codebase and it has absolutely no idea what’s going on in the codebase. At best it’ll look at the currently open file. Half the time it can’t seem to fully comprehend even the current file. I’d be happy if it were better, but it’s simply not.
I do use ChatGPT; most recently today, to build me a GitHub Actions YAML file based on my spec, and it saved me days of work. Not perfect, but close enough that I can fill in some details and be done. So sometimes it’s a good tool. It’s also an excellent rubber duck, often better than most of my coworkers. I don’t really know how to extrapolate what it’ll be in the future. I would guess we hit some kind of limit that will be tricky to get past, because nothing scales forever.
On the technical side, I believed the waiting was due to the lock queue rather than to having acquired an ACCESS EXCLUSIVE lock. The ALTER is specifically _waiting_ for any lock lower than ACCESS EXCLUSIVE to be released.
It also makes all new readers and writers wait for that lock, essentially leading to downtime until the lock is eventually acquired and released. This is the classic readers/writers example, and you want to avoid starving the writers.
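The effect described above can be sketched with a toy FIFO lock queue; this is a simplified model for illustration, not Postgres's actual lock manager:

```python
# Toy model of a FIFO lock queue: a pending exclusive request blocks
# every later shared request, even before the exclusive lock is granted.
class LockQueue:
    def __init__(self):
        self.holders = []   # currently granted lock modes
        self.waiters = []   # FIFO queue of pending requests

    def _compatible(self, mode):
        if mode == "shared":
            return all(h == "shared" for h in self.holders)
        return not self.holders  # exclusive needs the table to itself

    def request(self, mode):
        # Grant only if compatible with holders AND nothing is queued ahead.
        if self._compatible(mode) and not self.waiters:
            self.holders.append(mode)
            return "granted"
        self.waiters.append(mode)
        return "queued"

q = LockQueue()
q.request("shared")     # long-running SELECT -> "granted"
q.request("exclusive")  # the ALTER -> "queued" behind the reader
q.request("shared")     # any NEW reader -> "queued" behind the ALTER
```

The last call is the downtime: the new reader would be compatible with the current reader, but queuing fairly behind the exclusive request (so writers don't starve) means it waits too.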
That's why the size of the data is the least of your issues - it's the access patterns/hotness that are the issue.
I'm not sure information theory deals with this question.
Since this isn't lossless decompression, the point of having no "real" data has already been reached. It _is_ inventing things, and the only relevant question is how plausible the invented things are; in other words, if the video also existed in higher resolution, how closely the inferred version would actually match it. It seems obvious that this metric improves as a function of the amount of information from the source, but I would guess the exact relationship is a very open question.
Many comments are missing the point here (although the article doesn't explain it properly either); it's not about resolution, but about fixing imperfections in filming:
> The recent Cameron restorations were based on new 4K scans of the original negative, none of which needed extensive repair of that kind. [...] The A.I. can artificially refocus an out-of-focus image, as well as make other creative tweaks. “You don’t want to crank the knob all the way because then it’ll look like garbage,” Burdick said. “But if we can make it look a little better, we might as well.”
The only movies that would require upscaling to 4K are those released between roughly the mid-2000s and the mid-2010s, the advent of native digital cinema, but filmed in 2K. Everything before was shot on 35mm film, which can be scanned to 4K with information to spare; everything after is shot in native digital 4K or more.
Moreover, upscaling that deals only with resolution has absolutely no need for AI. Any TV will decently upscale a non-4K movie in _real time_, and more sophisticated techniques can give basically indistinguishable results. 2017's _Alien: Covenant_ was deliberately produced in 2K but released in 4K through upscaling, and the image looks just great.
> The only movies that would require upscaling to 4K are those released between roughly the mid-2000s and the mid-2010s, the advent of native digital cinema, but filmed in 2K. Everything before was shot on 35mm film, which can be scanned to 4K with information to spare; everything after is shot in native digital 4K or more.
Good to call this out, I think this is something that's really lost on people.
It really blows my mind that George Lucas, for all of his apparent obsessive concern about his films looking dated, chose to shoot Star Wars Episode II in 1080p, in contrast to Episode I on 35mm film.
I guess 1080p was the big shiny cutting-edge thing back at the time. 35mm can supposedly be scanned beyond 8K, so you could theoretically consider 4K filming not good enough either.
Precision should be part of the spec for integrations. Expressing amounts as an integer multiple of a minimal unit makes it clear in the API what the precision is.
e.g. it doesn't make sense to support billing in sub-currency-unit amounts just by allowing it in your API definition, as you're going to need to batch those until you reach a billable amount larger than the fee for issuing a bill. Even for something like $100,000.1234, the bank doesn't let you do a transfer for 0.34c.
For cases where sub-currency-unit billing is a thing, it should be agreed what the minimal unit is (e.g. advertising has largely standardised on millicents).
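A quick sketch of the millicent convention; the variable names and the numbers here are illustrative, not from any real ad-billing API:

```python
# All amounts are integer millicents (1/1000 of a cent), so arithmetic
# stays exact and the precision is fixed by the spec, not the payload.
MILLICENTS_PER_CENT = 1_000
MILLICENTS_PER_DOLLAR = 100 * MILLICENTS_PER_CENT

cpc_millicents = 1_234                                    # a $0.01234 cost-per-click
clicks = 5_000
total_millicents = cpc_millicents * clicks                # 6_170_000, still an int
total_dollars = total_millicents / MILLICENTS_PER_DOLLAR  # 61.7, only for display
```

The integer total is what you bill once it exceeds the invoicing fee; the float conversion happens only at the display edge.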
Yeah, I am laughing more at the fact that once encoded in JSON as { "p": 2256, "dp": 2 }, you are using two floating-point numbers. But then, JSON, and indeed JS, wasn't designed.
To be clear, I wasn't advocating for flexible decimal points. There is no "dp" parameter in the solution I was proposing. It's just documented in the API that "price" is denominated in cents (or satoshis, or whatever you want).
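A minimal sketch of that scheme; the field name "price" and the cent denomination stand in for whatever the API documents:

```python
import json
from decimal import Decimal

# The wire format carries only an integer number of cents; the unit
# lives in the API docs, not in the payload itself.
payload = json.dumps({"price": 2256})        # $22.56, documented as cents
price_cents = json.loads(payload)["price"]   # plain int, no "dp" field
price = Decimal(price_cents) / 100           # Decimal('22.56') for display
```

The division to a Decimal happens only at the boundary where a human-readable amount is needed; everything internal stays integer cents.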
Then you should store the time as well, because the number of decimals in a currency can change (see ISK). Also, some systems disagree on the number of decimals, so be careful. And of course prices can have more decimals. And then you have cryptocurrencies, so make sure you use bigints
You store it as an integer, but as we just saw in the OP, for general interop with any system that parses JSON you have to assume it will be parsed as a double. So to avoid precision loss you are going to have to store it as a string anyway. At that point it's up to you whether you want to reinvent the wheel and implement all the required arithmetic operations for your new fixed-point type, or just use the existing decimal type that ships on almost every mature platform: Java, C#, Python, Ruby, etc.
In dollars, how far do you get with a double of cents without precision loss? Around $90 trillion (2^53 cents), I figure? So a very large space of applications where smallest-denomination-as-JSON-number is going to be fine.
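The headroom is easy to compute; 2**53 is the largest value up to which doubles can represent every integer exactly:

```python
# Largest integer count of cents that is guaranteed exact in a double.
max_exact_cents = 2**53            # 9_007_199_254_740_992
max_whole_dollars = max_exact_cents // 100
# About 90 trillion dollars of headroom for cents-as-JSON-number.
```

So any amount a real payment system will ever move fits comfortably; the string/Decimal route only becomes necessary for sub-cent units or extreme totals.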