Also unrelated, but the growing number of Linux gamers supports my personal observation that on the spectrum of computer literacy, gamers sit just below power users and programmers. We've seen progressively less technical people migrate to Linux, and now it's the gamers' turn. Well, that's kind of obvious to everybody except Microsoft, apparently.
But it can't: we see models get larger and larger, and larger models perform better. <Thinking> brought such huge improvements precisely because it gives the language model more text to process. Cavemanising (lossily compressing) the output compresses the input as well, since the model's output feeds back in as context.
But some tokens are not really needed? Doing this at inference time is probably bad because it's mismatched with the training set, but if you trained a model on a dataset with all prepositions removed (or whatever it is that defines caveman speak), would you see a performance degradation compared to the same model trained on the same dataset without the caveman translation?
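For concreteness, here's a minimal sketch of what that "caveman" preprocessing might look like. The stopword list and whitespace tokenisation are my own assumptions for illustration; nobody has pinned down what caveman speak actually removes:

```rust
use std::collections::HashSet;

// Hypothetical "caveman" preprocessing: strip prepositions and similar
// function words from training text. The word list is illustrative, not
// a real definition of caveman speak.
fn cavemanise(text: &str) -> String {
    let stopwords: HashSet<&str> =
        ["of", "in", "on", "at", "to", "for", "with", "by", "from", "a", "an", "the"]
            .into_iter()
            .collect();
    text.split_whitespace()
        .filter(|w| {
            let lower = w.to_lowercase();
            let stripped = lower.trim_matches(|c: char| !c.is_alphanumeric());
            !stopwords.contains(stripped)
        })
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    let s = "The model was trained on a large corpus of text from the web.";
    // Prints: "model was trained large corpus text web."
    println!("{}", cavemanise(s));
}
```

Whether a model trained on text run through something like this degrades is exactly the open question; the sketch just makes clear the transform is lossy and irreversible.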
I'd have mined the copied libraries with something that makes it possible to change the terms later and extract fees, since it's safe to assume nobody reads the terms of such a service.
I'll never touch any git wrapper, because they've lied to me before and I already know how to use git. Everything worth speeding up has long since been turned into zsh functions.
Maybe, but will they have to fight the borrow checker when building something other than (the very OOP) DOM components? They'll obviously use both languages for a long time to come, so the more functional areas can get Rust, while the more OOP areas can keep benefiting from C++.
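To illustrate why the DOM in particular fights the borrow checker: a tree with parent back-pointers is exactly the ownership shape safe Rust resists. This is only a sketch of the common Rc/Weak/RefCell workaround, not how any real engine structures its DOM:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// A DOM-like node with a parent back-pointer. Safe Rust can't express the
// usual OOP layout where a child holds a plain pointer to its parent, so
// the idiomatic workaround is shared ownership (Rc), interior mutability
// (RefCell), and a non-owning Weak link upward to avoid reference cycles.
struct Node {
    tag: String,
    parent: RefCell<Weak<Node>>,
    children: RefCell<Vec<Rc<Node>>>,
}

fn main() {
    let root = Rc::new(Node {
        tag: "div".into(),
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(vec![]),
    });
    let child = Rc::new(Node {
        tag: "span".into(),
        parent: RefCell::new(Rc::downgrade(&root)),
        children: RefCell::new(vec![]),
    });
    root.children.borrow_mut().push(Rc::clone(&child));

    // Walking back up requires upgrading the Weak reference at runtime.
    if let Some(p) = child.parent.borrow().upgrade() {
        println!("<{}> is inside <{}>", child.tag, p.tag);
    }
}
```

None of this ceremony exists in a more functional, tree-down style with single ownership, which is the kind of area where Rust slots in painlessly.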