Indeed. When I tried to add LibRedirect (another extension that isn't possible under Manifest V3) to Vivaldi, DNS over HTTPS suddenly stopped working.
After checking the settings page, I found the setting to turn it back on completely disabled. Turns out this is one of the few traps in every Chromium browser that are hardcoded by Google.
Well, after some searching: you need to edit the registry (yikes!) and add "DnsOverHttpsMode" set to "secure". Problem solved, right?
NO!!! Do that and suddenly your browser won't load any page at all! Turns out you also need to set "DnsOverHttpsTemplates".
It just so happens that, somehow, there is no documentation mentioning this on the "....Mode" help page.
Surely Google is not being evil here, right? Right?
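For what it's worth, both values have to land together. Here's a sketch of doing that with the winreg crate; the policy key path Vivaldi actually reads and the resolver template are assumptions on my part, and Chromium's documented mode values are "off", "automatic", and "secure" (needs the winreg dependency and admin rights):

    // Sketch only: set both DoH policies at once. The key path below is an
    // assumption (Vivaldi may read a different vendor key), and the resolver
    // template is just an example DoH endpoint.
    use winreg::enums::HKEY_LOCAL_MACHINE;
    use winreg::RegKey;

    fn main() -> std::io::Result<()> {
        let hklm = RegKey::predef(HKEY_LOCAL_MACHINE);
        let (policies, _) = hklm.create_subkey(r"SOFTWARE\Policies\Vivaldi")?;
        // Setting the mode alone is the trap: without a template,
        // name resolution breaks entirely.
        policies.set_value("DnsOverHttpsMode", &"secure")?;
        policies.set_value("DnsOverHttpsTemplates", &"https://dns.quad9.net/dns-query")?;
        Ok(())
    }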
Currently I'm working on a file compression program in Rust. Nothing too fancy; it just uses common algorithms (LZ77, LZ78, etc.).
The only difference is that the program switches on the fly between algorithms depending on which one compresses the data smaller.
It can compress a 1 GB file (enwik9) down to around 230 MB. Pretty good, I guess, for something I work on in my spare time.
I'm not publishing it yet, since I'm still experimenting with it a lot.
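The core trick is tiny. A rough sketch of the per-block selection, with toy codecs (raw store and a naive run-length encoder) standing in for LZ77/LZ78 and friends, plus a made-up one-byte method tag:

    // Toy sketch: run every candidate codec on a block, keep the smallest
    // output, and tag it so decompression knows which codec to undo.
    fn rle(block: &[u8]) -> Vec<u8> {
        let mut out = Vec::new();
        let mut i = 0;
        while i < block.len() {
            let b = block[i];
            let mut run = 1u8;
            while i + (run as usize) < block.len() && block[i + run as usize] == b && run < 255 {
                run += 1;
            }
            out.push(run);
            out.push(b);
            i += run as usize;
        }
        out
    }

    fn compress_block(block: &[u8]) -> Vec<u8> {
        // Candidates: (method tag, encoded bytes). Real codecs would slot in here.
        let candidates = vec![(0u8, block.to_vec()), (1u8, rle(block))];
        let (tag, best) = candidates
            .into_iter()
            .min_by_key(|(_, enc)| enc.len())
            .unwrap();
        let mut out = vec![tag];
        out.extend_from_slice(&best);
        out
    }

    fn main() {
        println!("{:?}", compress_block(b"aaaaaaaaaaab")); // RLE wins: [1, 11, 97, 1, 98]
        println!("{:?}", compress_block(b"abcdef"));       // raw wins: [0, 97, 98, ...]
    }

A real implementation would amortize the tag over much larger blocks; the point is just that every candidate runs and the smallest output wins.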
In games, players don't want an AI that is 100% strong (it's not fun); what we want is an AI that makes mistakes like humans do.
So it's possible (assuming the dataset is good); in fact, it's already been done in chess[1].
> Maia’s goal is to play the human move — not necessarily the best move. As a result, Maia has a more human-like style than previous engines, matching moves played by human players in online games over 50% of the time.
Also, something I just realized: in this particular case we want the AI to be biased like a human, which is easier to do since the bias is already in the dataset. AI safety is the exact opposite, which is harder, if not impossible.
We don't even want that. Human-like AI in a game like this would be really annoying for the vast majority of players, who absolutely suck. People who want to play against AI generally want something that's really dumb.
You probably don't, though. It's actually really unfun to lose 50% of matches, or worse, against an AI, because it doesn't get tired or tilted or distracted.
It’s much more fun to go against an AI that is dumber than you but generally more powerful.
Different kinds of AI are likely fun for different players. Games have difficulty levels partly because not everyone wants the same level of difficulty relative to their own skill level. Some may want something easily beatable for them, some may want something difficult for them to beat.
In Counter-Strike the AI can be super dumb and slow until, progressively, it becomes a dumb aimbot. It doesn't become a better player with game sense and tactics, just raw aim. (Try Arms Race if you wanna feel it yourself.)
In FEAR the AI is pretty advanced in a more organic way. It coordinates several enemies to locate you and engage you in a way that sometimes feels totally human. That feels great, and when you lose you try again thinking something like "I should have positioned myself better" instead of "I guess my aim is just not fast enough".
We just don't get enough good AIs to know how good they can feel.
I think what people really want is a "human"-like AI that is worse than them, maybe quite a bit worse. But maybe that's not exactly what this type of dataset can offer. Which is to say:
I want to play against an AI that is dumb enough for me to beat it, but is dumb in human-ish ways. Depending on the genre of the game, I want it to make the sort of mistakes that actual people make in wars, but more often. Or I might want it to make the types of mistakes that the baddies make in action movies.
Human players in videogames might provide a little bit of a signal, but they do engage in a lot of game-y and not "realistic" movements, so point taken there. Some people are less familiar with games, I think, so they tend to make fewer gamey movements. Anyone who's been playing games since the 90s will bounce around and circle strafe, so they need to be filtered out somehow, haha. Maybe they can provide training data for some sort of advanced "robot" enemy.
But the existing AI characters also make some pretty non-human mistakes. Often it seems that the AI difficulty slider just means: I'm going to stand stupidly in the middle of the road either way, but on easy I'll spray bullets randomly around you, and on very hard I'll zap you with lightning-reflex headshots.
Moves like examining a tree of possible movements, flanking, cover, better coordination, that sort of stuff would be more interesting. Maybe the player data-set can provide some of that? I’m actually not sure.
As far as I can recall, the early Halo games and the FEAR series had the best AI. It's been a while. Time to advance.
Yeah, but the AI in the games you like was good because it was fun, not because it was circle strafing or doing movement mechanics that are effective but not realistic. Doing those things would make that AI much less fun, as well as immersion-breaking.
It's always interesting to see decades-old papers, sometimes published with little to no fanfare, suddenly become "state of the art" because hardware has become powerful enough.
For example, Z-buffers[1], used by 3D video games. When first published, the technique wasn't even the main topic of the paper, just a side note, because it required an expensive amount of memory to run.
Turns out megabytes became quite cheap a few decades later, and every realtime 3D renderer ended up using it.
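The whole algorithm really does fit in a side note. A toy sketch, with made-up fragments and resolution:

    // Minimal sketch of the Z-buffer idea: one depth value per pixel, and a
    // pixel is only overwritten when the incoming fragment is closer.
    const W: usize = 4;
    const H: usize = 3;

    fn main() {
        let mut color = [[0u32; W]; H];
        let mut depth = [[f32::INFINITY; W]; H]; // "far" everywhere to start

        // (x, y, z, color) fragments, drawn in arbitrary order.
        let fragments = [(1, 1, 0.8, 0xff0000), (1, 1, 0.3, 0x00ff00), (2, 0, 0.5, 0x0000ff)];

        for &(x, y, z, c) in &fragments {
            // This per-pixel read-compare-write is the memory cost that made
            // Z-buffers expensive back then.
            if z < depth[y][x] {
                depth[y][x] = z;
                color[y][x] = c;
            }
        }
        // The green fragment (z = 0.3) wins pixel (1, 1) regardless of draw order.
        println!("{:06x}", color[1][1]);
    }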
Another example is Low-Density Parity-Check codes [1]. Discovered in 1962 by Robert Gallager, but abandoned and forgotten for decades because they were computationally impractical. It looks like there was a 38-year gap in the literature until they were rediscovered by David MacKay [2].
The first mainstream use was in 2003. It is now used in WiFi, Ethernet and 5G.
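The core object is just a sparse parity-check matrix; the part Gallager couldn't afford is the iterative decoding that sits on top of it. A toy sketch of the check step only (the matrix and words below are made up for illustration):

    // A received word is a valid codeword iff every parity check is satisfied,
    // i.e. H * c = 0 over GF(2). Real LDPC decoders then run iterative belief
    // propagation; this only shows the syndrome check.
    fn syndrome(h: &[Vec<u8>], codeword: &[u8]) -> Vec<u8> {
        h.iter()
            .map(|row| {
                row.iter()
                    .zip(codeword)
                    .map(|(a, b)| a & b)
                    .fold(0, |acc, x| acc ^ x)
            })
            .collect()
    }

    fn main() {
        // Each row is one parity check touching only a few bits ("low density").
        let h = vec![
            vec![1, 1, 0, 1, 0, 0],
            vec![0, 1, 1, 0, 1, 0],
            vec![1, 0, 0, 0, 1, 1],
        ];
        let good = [0, 1, 1, 1, 0, 0];
        let bad  = [1, 1, 1, 1, 0, 0]; // first bit flipped by "noise"
        println!("{:?}", syndrome(&h, &good)); // all zeros: checks pass
        println!("{:?}", syndrome(&h, &bad));  // nonzero entries flag failed checks
    }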
I sometimes wonder if there's an academic career hidden in there for an engineer: go to the library and read what the CS folks were publishing on physical paper; maybe there are some ideas that can actually be implemented now that weren't practical back then.
In a series of books by David Brin [0] there is a galaxy-wide institution known as the library, and civilizations regularly mine its millions of years of data for suddenly-relevant-again techniques and technologies.
I remember one bit where a species had launched some tricky fleet-destroying weapon to surprise their enemies with esoteric physics, only to have it reversed against them, possibly because the Librarian that once helped their research-agent wasn't entirely unbiased.
Also, in Vinge's A Deepness in the Sky, there aren't really "programmers" as we know them anymore, but "programmer-archaeologists" who just search the archives for code components to reuse.
I think that's unfair to the programmer-archaeologists: young Pham wanted to write things from scratch (and took advantage when he got a chance to), and other characters said he was a talented hacker, but as they also said, it was just way more productive most of the time to cobble together ancient code.
That's pretty much what any (decent) programmer does today as well. You first search your code base to see if the application already does something like it; if not, whether there is a published library. Where this starts to fail is the idea that connecting those already-written pieces together is easy.
Also: In the Destiny mythic sci-fi franchise, the human golden age ended with a mysterious apocalypse, leaving "cryptarchs" (crypto-archeologists) to try to rebuild from arcane fragments of encrypted data or unknown formats.
Don't worry, it's nowhere near the main plot or characters, just a small "meanwhile, elsewhere" vignette. Basically emphasizing the "why bother everything's already invented" mentality of most client-races, and how deep access and query-secrecy have big impacts.
> In the early 1970s, Lockheed Martin engineer Denys Overholser discovered the key to stealth technology hidden in a stack of translated Soviet technical papers. Disregarded by the Soviet academic elite, and unheard of in the United States, Pyotr Ufimstev had worked out calculations that would help win ...
Not sure of the details of this story, but in general, having enough people that some of them can be granted time just to search for something interesting doesn't seem unrealistic.
The whole AI trend is in part due to things that are now possible on GPU supercomputers with gigabytes of RAM, backed by petabytes of data, at the top speed of GPUs. Some of the algorithms date back to before the AI winter; it's just that we can now do the same thing with a ton more data, and faster.
All of the main algorithms do (multi-layer perceptrons and stochastic gradient descent are from the 50s and 60s!). Basically the only thing that changed is we decided to multiply some of the outputs of the multi-layer perceptrons by each other and softmax it (attention) before passing them back into more layers. Almost all of the other stuff is just gravy to make it converge faster (and run faster on modern hardware).
Oh, forgot about that one. Wikipedia says ReLU was used in NNs in 1969 but not widely until 2011. Idk if anyone has ever trained a transformer with sigmoid activations, but I don’t immediately see why it wouldn’t work?
I remember some experiments using modern-day training and data on some old-style networks, e.g. with sigmoid activation.
That worked eventually, and worked quite well, but took way more compute and training data than anyone back in the olden days would have thought feasible.
The two main problems with sigmoid activation compared to ReLU are: (a) harder to compute (both the value itself and the gradient), and (b) vanishing gradients, especially in deeper networks.
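The vanishing-gradient part is easy to see numerically. A quick sketch, taking the best case for sigmoid, where every pre-activation sits at 0 and its derivative peaks at 0.25:

    // The sigmoid's derivative is at most 0.25, so backpropagating through many
    // layers shrinks the gradient geometrically; ReLU's derivative is 1 on the
    // active side, so nothing shrinks.
    fn sigmoid(x: f64) -> f64 {
        1.0 / (1.0 + (-x).exp())
    }

    fn main() {
        let layers = 20;
        // Best case for sigmoid: derivative sigmoid(0) * (1 - sigmoid(0)) = 0.25.
        let sigmoid_grad: f64 = (0..layers)
            .map(|_| sigmoid(0.0) * (1.0 - sigmoid(0.0)))
            .product();
        // ReLU on an active unit: derivative is exactly 1.
        let relu_grad: f64 = (0..layers).map(|_| 1.0).product();

        println!("sigmoid chain over {layers} layers: {sigmoid_grad:e}"); // ~9e-13
        println!("relu chain over {layers} layers: {relu_grad}");         // 1
    }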
Heck. Look at 10-year-old product launch PRs from big tech. Anything that Google launched 10 years ago and killed, but that seems like a good idea today, is probably also easier to do now. And if you look 5-10 years before that, you can find the Yahoo launch PR where they did the same thing ;p
I always like to point out Wordle for nailing the timing. Some version of it has been viable since 2000; mass smartphone adoption helped with how it spread, but it could have been viable in 2010. What it did was land at the right moment.
'Realtime' isn't a specific thing; there's just 'fast enough'. Old-school "render each frame during the scanline, staying ahead of the electron beam" graphics was pretty hardcore, though.
> Real-time computing (RTC) is the computer science term for hardware and software systems subject to a "real-time constraint", for example from event to system response.[1] Real-time programs must guarantee response within specified time constraints, often referred to as "deadlines".[2]
Strictly speaking, this definition doesn't say anything about how tight those deadlines are, as long as you can guarantee some deadlines.
There's also 'soft' real time, where you try to hit your deadlines often enough, but there are no guarantees and a missed deadline is not the end of the world. Games are a good example of that, including the example of chasing the electron beam.
ABS brakes are an example of a 'hard' real time system: the deadlines aren't nearly as tight as for video game frames, but you really, really can't afford to miss them.
this seems to be how PEG parsing became popular during the last couple of decades, for example; see https://bford.info/pub/lang/peg/ (peg.pdf p11: "This work is inspired by and heavily based on Birman’s TS/TDPL and gTS/GTDPL systems [from the 1970s ...] Unfortunately it appears TDPL and GTDPL have not seen much practical use, perhaps in large measure because they were originally developed and presented as formal models [...]")
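The part that makes PEGs different from CFGs, prioritized choice with backtracking, fits in a few lines. A toy sketch with a made-up two-rule grammar:

    // Grammar: expr <- "ab" / "a"
    // Ordered choice: try the first alternative; only if it fails, try the next.
    fn match_lit<'a>(input: &'a str, lit: &str) -> Option<&'a str> {
        input.strip_prefix(lit)
    }

    fn expr(input: &str) -> Option<&str> {
        match_lit(input, "ab").or_else(|| match_lit(input, "a"))
    }

    fn main() {
        assert_eq!(expr("abc"), Some("c")); // "ab" wins because it is listed first
        assert_eq!(expr("ax"), Some("x"));  // falls back to "a"
        println!("ok");
    }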
> suddenly become "state of the art" because hardware has become powerful enough.
I'd rather say that we have been capable of such a design for several decades, but only now has the current set of trade-offs made it attractive. Single-core performance was stifled in the last two decades by prioritizing horizontal scaling (more cores), so the complexity and die area of each individual core became a concern. I imagine that if this trend had not taken place for some reason, and CPU designers had primarily pursued single-core performance, we'd have seen an implementation much sooner.
Regarding the Z-buffer, I kind of get that it would appear as a side note; it's a simple concept. Perhaps an even better example is ray tracing - the concept is quite obvious even to people without a 3D graphics background, but it was just impractical (for real-time rendering) in terms of performance until recently. What I find interesting is that we haven't found a simpler approach to approximating true-to-life rendering and need to fall back on this old, sort of naive (and expensive) solution.
Another example is Rust's borrow checker, which has roots in substructural type system papers from decades earlier. Many academics considered substructural type systems dead (killed by GC, more or less) until Rust resurrected the idea by combining it with some new ideas from C++ of the time.
I was going to say that Rust's destructive move is quite different from C++'s move semantics, but official docs disagree: "C++: references, RAII, smart pointers, move semantics, monomorphization, memory model" [0]
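The difference still shows up in practice, though: after a Rust move the source binding is statically dead, rather than left in a valid-but-unspecified state as in C++. For example:

    fn main() {
        let a = String::from("hello");
        let b = a; // ownership moves; `a` is statically dead from here on
        // println!("{a}"); // error[E0382]: borrow of moved value: `a`
        println!("{b}");
    }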
Z-buffers don't just require roughly another framebuffer's worth of memory, but also lots of read and write bandwidth per pixel. This extra memory bandwidth requirement made them expensive to implement well. High-end implementations used dedicated RAM channels for them, but on lower-end hardware they "stole" a lot of memory bandwidth from a shared memory interface; e.g. some N64 games optimized by drawing a software-managed background/foreground with the Z-buffer disabled to avoid the cost of reading and updating the depth information.
The power of the Z-buffer lies in letting you skip all of the expensive per-pixel computations. A secondary perk is avoiding overdraw.
Early 3D implementations, especially gaming-related ones, didn't have any expensive per-pixel calculations. The cost of looking up and updating depth in the Z-buffer was higher than just rendering and storing pixels. Clever programming to avoid overdraw (Doom sectors, Duke3D portals, Quake's sorted polygon spans) was still cheaper than the Z-buffer's read-compare-update.
Even the first real 3D accelerators (3dfx) treated the Z-buffer as an afterthought at the back of the fixed pipeline - every pixel of every triangle was brute-force rendered, textured, lit and blended just to be discarded by the Z-buffer at the very end. It's very likely the N64 acted in the same manner and enabling the Z-buffer didn't cut the cost of texturing and shading.
The Z-buffer started to shine with the introduction of pixel shaders, when per-pixel computations became expensive enough (~2000, GeForce/Radeon).
Wiki tells me: <<Earliest eligible virtual deadline first (EEVDF) is a dynamic priority proportional share scheduling algorithm for soft real-time systems.>>
On the software side, garbage collection was well explored by academia for more than two decades before the JVM brought it to wide commercial adoption ca. 1995.
When I was in high school, I wrote a Z80 disassembler in BASIC and printed out the entire TRS-80 Model 3 ROM on a thick stack of fan folded greenline paper. I spent a lot of free time figuring out what was what and commenting the code by pen in the right margin.
The (string) garbage collector was amazingly frugal and correspondingly slow. Because the garbage collector typically ran only when memory was critically low, it used almost no memory itself, just 4 bytes I think. (It was also possible to invoke it at any time with the FRE(0) function call, IIRC.)
The routine would sweep all of the symbol table and expression stack looking for string pointers, and would remember the lowest (highest? I forget which) string base address which hadn't already been compacted. After each pass it would move that one string and fix the symbol table pointer (or expression stack string pointer), then do another pass. As a result, it was quadratic in the number of live strings, so if you had a large number of small strings, it was possible for your program to lock up solidly for 30 seconds or more.
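Roughly the scheme, sketched in Rust with made-up data structures rather than the original Z80 code: each full pass over the descriptors exists only to pick a single victim string, move it, and fix its one pointer, which is where the quadratic behavior comes from.

    // Toy sketch: compact live strings toward the top of a byte "heap",
    // one string per full scan of the descriptors.
    fn compact(heap: &mut [u8], descriptors: &mut [(usize, usize)]) {
        let mut frontier = heap.len(); // everything at/above this is already compacted
        let mut done = vec![false; descriptors.len()];
        for _ in 0..descriptors.len() {
            // One whole pass just to find the not-yet-compacted string
            // with the highest base address.
            let i = (0..descriptors.len())
                .filter(|&i| !done[i])
                .max_by_key(|&i| descriptors[i].0)
                .unwrap();
            let (addr, len) = descriptors[i];
            let new_addr = frontier - len;
            heap.copy_within(addr..addr + len, new_addr); // move exactly one string
            descriptors[i].0 = new_addr; // fix its single pointer
            frontier = new_addr;
            done[i] = true;
        }
    }

    fn main() {
        // Toy heap: live strings "HI" and "BYE" scattered among garbage ('.').
        let mut heap = *b"..HI....BYE.....";
        let mut descriptors = [(2usize, 2usize), (8, 3)]; // (addr, len) for "HI", "BYE"
        compact(&mut heap, &mut descriptors);
        println!("{}", String::from_utf8_lossy(&heap)); // live strings packed at the top end
        println!("{:?}", descriptors); // pointers now aim into the packed tail
    }

The upside, as noted above, is that the collector itself needs almost no working memory: essentially just the frontier and the current best candidate.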
Not just academia: GC (and lots of other PLT) was mature and productized by then, just missed by the majority of an industry living in the dark ages. In the 90s, enterprisey architecture-astronaut cultures went Java/C++ and hackerish cultures went Perl/Python, seasoned with C.
Sure, it's a gross generalization; games and embedded have an excuse. Though I don't think C++ was that big on the games side in the mid-90s yet; it was more C.
At this rate, robots.txt will be ignored by bots if it's not already.
Just like ad companies completely ignoring Do Not Track because it was always set to deny / opt out by the browser. That has since been fixed, but the damage is already done.
> CC BY-NC-SA 4.0 license is not granted for logos after ## Commit[c2cf292].
> The following terms apply to logos after ## Commit[c2cf292]
Wait, you can revoke Creative Commons licenses? Aren't CC licenses irrevocable?
> Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, **irrevocable** license to exercise the Licensed Rights in the Licensed Material...
It’s not revoking the license, it’s saying that new (after that commit) logos have a different license. But the ones created before that commit presumably keep the original license.
Or they refuse to implement IPv6. (At least here in Indonesia, with only a 14% adoption rate.)
Why? Because for residential users who want low-latency internet and low packet drop (CGNAT increases both), the ISP can charge a "business price" for a dedicated IPv4 address that isn't behind CGNAT. With IPv6, CGNAT is not needed.
Considering the ISPs/telcos here are very scummy (they even perform MITM to inject ads, use Class 0 messages / AMBER alerts for ads, etc.), I wouldn't be surprised if that's the only reason they haven't rolled out IPv6.
Wait, people still use this benchmark? I hear there's a huge flaw in it.
For example, fine-tuning a model on 4chan makes it score better on TruthfulQA. It becomes very offensive afterwards, though, for obvious reasons. See GPT-4chan [1].
Could it be that people who can talk anonymously with no reputation to gain or lose and no repercussions to fear actually score high on truthfulness? Could it be that truthfulness is actually completely unrelated to the offensiveness of the language used to signal in-group status?
Not sure I understand your example? It's not an offensiveness benchmark; in fact, I can imagine a model trained to be inoffensive would do worse on a truth benchmark. I wouldn't go so far as to say TruthfulQA is actually testing how truthful a model is, or its reasoning. But it's one of the least correlated with other benchmarks, which makes it one of the most interesting, much more so than most other tests that are highly correlated with MMLU performance. https://twitter.com/gblazex/status/1746295870792847562