> The completion seems good to me. In fact, suggesting "speed" as a parameter seems unexpectedly good rather than nonsensical.
I can't see any way "speed" could ever be the next item in the series "velocity, acceleration, jerk, snap"; could you explain how it could be? It's an extremely common sequence, so it should be something an LLM can get right easily. Even the autocomplete in a Google search gives the correct answer.
> Does anyone really expect better than what your images show, which in my opinion is pretty damn good? Do you really expect to write a program just mindlessly hitting tab all day long?
The other examples (and most of my other tests) show variables and types being invented that don't exist anywhere in the code base; how is that good? In both of these cases the LLM is strictly worse than my regular autocomplete, which gives the correct result. I fail to see how a downgrade from that can be considered good.
You can't see any connection between speed and velocity or acceleration? Speed and velocity are literally synonyms, at least in common use, and "speed" is certainly as close to velocity as acceleration is.
But hey, no one is forcing you to use this tool, right? So just turn it off, I guess.
I've never seen this. Usually you don't have the LaTeX source of a paper you cite, so you wouldn't know which label to use for the reference, assuming the cited paper is written in LaTeX at all. Or something has changed quite a bit in recent years.
Can you link to another paper's Figure 2.2 now, and have LaTeX error out if the link is broken? How does that work?
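For what it's worth, the closest standard mechanism I know of is the xr package, which imports labels from another document's .aux file. A minimal sketch (the file and label names are made up):

    % Preamble of the citing document; assumes otherpaper.aux is
    % available next to this file at compile time.
    \usepackage{xr}                    % or xr-hyper when using hyperref
    \externaldocument[OP-]{otherpaper}

    % Body: OP-fig:results resolves to a \label defined in otherpaper.tex.
    ... as shown in Figure~\ref{OP-fig:results} of the earlier paper ...

Even then, a broken label only produces the usual "Reference undefined" warning and "??" in the output rather than a hard error, unless your build treats warnings as failures.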
I feel like it should be mentioned that on the page that Chrome presents showing that the extension was removed they actively suggest installing UBO Lite. I'll leave it up to the reader to decide what that says about Google, but it's a pretty important detail that the original poster chose not to include.
One massive upside is there's now a less comprehensive mode that doesn't allow the extension to view and edit the complete DOM of every page, which has actually made me comfortable enough to use it for regular browsing as opposed to just turning UBO on for certain sites.
Another upside is that sites can now just use these banners that Lite cannot block.
We can finally block ads on our webpages and still have ads! Win-win. This could never have been achieved without the truly titanic coordination efforts from Google.
Will they though? I don't think they will, for two reasons:
1. Ad blocking on Safari has worked like this for years and none of the major ad networks have chosen to act on that fact.
2. It has always been possible for ads to work around UBO in various ways but no major ad network has even tried (e.g. server-side embedding of randomly obfuscated JS).
Ad blocking on Safari is obscure; I regularly see tech threads where people aren't even aware it exists. Mobile ad blocking for default browsers has always been a puzzle. That means too little market share to be worth addressing.
> It has always been possible for ads to work around UBO in various ways but no major ad network has even tried
Because an arms race with a dynamic blocker is expensive and futile. You are trying to s/UBO/UBO Lite/ in this sentence and convince me things will work the same way; they won't.
> In general, uBOL will be less effective at dealing with websites using anti-content-blocker techniques, or at minimizing website breakage, because many filters can't be converted into DNR rules (see the conversion log for technical details).
Not really. I'm not sure why you're expecting me to dig through the sites and ad networks, which don't use differentiating techniques very often given the basically nonexistent uBOL user base to date. We'll see how it goes in just a few months, I guess. I'll be baffled if this fuss was really about user security and not about pushing ads down people's throats. I'm also expecting to see more ads on mobile Safari as it becomes compatible with this newly widespread vulnerability to ads.
You were referring to ad banners that Lite supposedly cannot block. I'm just asking for an example.
If you encounter them sufficiently frequently that they are a problem, you should encounter one over the next 14 days while this discussion is open and be able to share the link. If you can't, then either there is no evidence of the problem you assert exists, or the problem occurs so infrequently as to be of negligible concern.
UBO has 39M users on Chrome while UBOL has 2M; that doesn't seem non-existent. AdBlock and AdBlock Plus also both have slightly more users than UBO, and both have been using Manifest V3 for a while.
I personally find it much more ergonomic to have the allocator attached to the type (as in Ada). Aside from the obvious benefit of not needing to explicitly pass around your allocator everywhere, it also comes with a few other benefits:
- It becomes impossible to call the wrong deallocation procedure.
- Deallocation can happen when the type (or allocator) goes out of scope, preventing dangling pointers, since a pointer of that type can't exist in a scope where the type itself has gone out of scope.
This probably goes against Zig's design goal of making everything explicit, but I think that they take that too far in many ways.
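For reference, a rough sketch of the Ada mechanism described above (My_Pool and Node are illustrative names):

    --  My_Pool is an object of a type derived from Root_Storage_Pool.
    type Node_Access is access Node;
    for Node_Access'Storage_Pool use My_Pool;

    --  "new Node" now allocates from My_Pool, and this Free instance
    --  returns memory to that same pool, so pairing an allocation with
    --  the wrong deallocator is impossible by construction.
    procedure Free is new Ada.Unchecked_Deallocation (Node, Node_Access);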
There is no reason you can't attach an Allocator to the type (or struct, in Zig).
A fairly common pattern in the Zig stdlib and my own code is to pass the allocator to the `init` function of a struct.
If what you mean is that allocation should be internal to the type, I don't agree with that. I much prefer having explicit control over allocation and deallocation.
The stdlib GPA, for example, is pretty slow, so I often prefer to use an alternative allocator such as an arena backed by the page allocator. For a CLI program that runs and then exits, this is perfect.
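A minimal sketch of that pattern (the IntList type is made up for illustration, and the stdlib APIs are as of roughly the 0.12/0.13 era):

    const std = @import("std");

    const IntList = struct {
        allocator: std.mem.Allocator,
        items: std.ArrayListUnmanaged(u32) = .{},

        // The allocator is handed over once, here, rather than on every call.
        pub fn init(allocator: std.mem.Allocator) IntList {
            return .{ .allocator = allocator };
        }

        pub fn deinit(self: *IntList) void {
            self.items.deinit(self.allocator);
        }

        pub fn append(self: *IntList, value: u32) !void {
            try self.items.append(self.allocator, value);
        }
    };

    pub fn main() !void {
        // Arena backed by the page allocator: everything is freed in one go
        // when the arena is torn down, which suits a run-and-exit CLI tool.
        var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
        defer arena.deinit();

        var list = IntList.init(arena.allocator());
        defer list.deinit();
        try list.append(42);
    }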
It's around 10% invalid bugs and another 10% duplicates. A lot of them that I've seen, including one of mine, are a result of misinterpreting details of language standards.
I think it's just common for people to assume they're wrong and change things blindly rather than carefully checking the standard for their language (assuming their language even has a standard to check). It doesn't help that, before AddressSanitizer and co. existed, compilers would just do all sorts of nonsense when they detected possibly undefined code in C and C++.
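A classic illustration of the kind of misreading involved (my own example, not one of the reports mentioned above): signed overflow is undefined in C, so the compiler is allowed to assume it never happens.

    #include <limits.h>

    /* Looks like a reasonable wrap-around check, but it relies on signed
       overflow, which is undefined behaviour; GCC and Clang at -O2
       typically fold the whole function to "return 0". */
    int will_wrap(int x) {
        return x + 1 < x;
    }

    /* A well-defined way to ask the same question: */
    int will_wrap_ok(int x) {
        return x == INT_MAX;
    }

People hit behaviour like this, conclude the compiler is broken, and file a bug.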
> better tone mapping, allowing me to watch HDR movies on SDR without it looking bad.
I've never really understood why automatic HDR-to-SDR is considered a desirable feature in the first place. No one is making HDR-only content, and the official SDR master for every movie and TV show, done by a human, is always going to beat a fancy LUT.
When performing colour grading you're always deciding which parts of the image need what amount of contrast and so on, based on what's important to see in the context of the scene. When grading to HDR you're going to make different choices, simply because you have a wider dynamic range available. Even putting aside the fact that you're not starting from the same raw inputs the production studio is working with, automatic tone mapping is never going to look at a scene the way a human does and decide which parts are important and which aren't.
In this case I would think people would prefer to just have the SDR version, which will look good on all displays, rather than the HDR version, which will look better on an HDR display but horrible on an SDR display. It's down to personal preference of course, but I can't imagine someone caring about having HDR videos while simultaneously putting up with the bad results of automatic HDR-to-SDR.
Just as an example, since I didn't include one before: which of the two images below, produced by different algorithms, is correctly tone-mapped? Do we care more about showing the details of the car and the driver, or do we care more about the rest of the scene? It's simply not possible to decide which of these is better without context, which an algorithm will never have. A human watching the scene can make that decision, or pick something completely different.
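To make that concrete: a global operator such as the classic Reinhard curve (purely illustrative here; real players use fancier variants) compresses every pixel with the same function,

    L_out = L / (1 + L)

where L is the pixel's scaled scene luminance. The highlights on the car and the highlights in the rest of the scene get squeezed by the identical curve, with no notion of which one the shot is actually about.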
Nope. I personally have two HDR devices and two non-HDR ones. I don't want to keep two separate copies, as that takes up more storage and requires me to get two different versions, which means the progress won't be synced between the two, the subtitles may not be exactly the same, etc.
I want to have one and only one file that will play optimally on my main device (HDR) but also work reasonably well on a secondary non-HDR device.
As I said already, it's down to personal preference. I just don't imagine that most people care about having the best-looking video they can get on one device while caring so little about another that they'd sacrifice quality on device A for better quality on device B.
It’s exactly the case. Not every room is a home theater. We want the best quality for the home theater and “good enough” for the rest. To my surprise, others do notice the washed-out effect of watching HDR content on an SDR TV, or worse, the wrong colors when watching DV content on an SDR TV.
Acquiring, storing, and organizing multiple files for the same video is a hassle. Even more of a hassle is expecting others to know which file is the right one for which TV.