It’s probably also reflective of the fact that Google is throwing all its new resources at AI. As soon as you’ve hit cache invalidation you’re gone, and anything newly crawled is probably ranked differently in the post-LLM world.
This explains the thousands of “no core, no VRAM” listings for 5090s.
If it were due to parts substitution for repairs, I would have expected them to be RMA’d rather than salvaged, as they’d all be within warranty.
Being constrained by page sizes is “a feature, not a bug” in most contexts. If I’m calling out numbers on the 3rd line of page 38 of a report, it helps if that’s consistent.
Aren’t they already supply constrained? This seems counterproductive in further limiting supply, versus a strategy of commoditizing your complements.
This seems closer to PR designed to boost share price rather than a cogent strategy.
Huh. I view it the other way. If you’re supply constrained go straight to the consumer and capture the value that the middlemen building on top of your tech are currently profiting from.
Even if they don't increase their GPU production capacity, that's not "limiting" supply. It's keeping it the same. Only now they can sell each unit for a larger profit margin.
Certainly possible if their calculation of income included non-operating income otherwise excluded from revenue (e.g., a gain on sale that dwarfs the underlying business). Such presentation is prevalent, and a disclosure of “gross profit” isn’t uniformly required under GAAP.
I wonder if 2) is a result of publication bias toward positive results in the training set. An “I don’t know” response is probably rated unsatisfactory by human feedback, and most published scientific literature is biased toward positive results and confident factual explanations.
In my experience, the willingness to say "I don't know" instead of confabulate is also down-rated as a human attribute, so it's not surprising that even an AGI trained on the "best" of humanity would avoid it.
Totally get where you're coming from — I had a similar experience when going through Teach Your Child to Read with my eldest. The book’s emphasis on phoneme recognition over rote memorization really worked for us too. That said, we hit a bit of a wall in that transitional stage in terms of reading content — our kid was still relying on those visual cues (like ligatures and vowel variants), and jumping straight to standard text was a stretch.
To bridge that, I actually built a font that keeps those phonics-aligned features and allowed us to use stories from things like Project Gutenberg. It’s based on the open-source TeX Gyre Schola (kind of like what’s used in the Spot books), with OpenType features that auto-connect common digraphs (like “th”, “sh”, “ch”) — but in a way that can gradually phase out. I just put it up on GitHub if you’re curious: Reading Guide Font. Open to any feedback or criticism!
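(For anyone curious how the digraph connection works mechanically: the thread doesn’t quote the font’s actual feature code, but this kind of auto-substitution is typically done with an OpenType `liga` feature. A minimal sketch in AFDKO feature syntax, assuming the font contains connected ligature glyphs named `t_h`, `s_h`, and `c_h` — names are illustrative, not from the actual project:)

```fea
# Standard ligatures feature: whenever the renderer sees the
# two-glyph sequences below, it swaps in the single connected
# digraph glyph. Disabling "liga" restores plain letter pairs,
# which is what lets the training wheels "phase out".
feature liga {
    sub t h by t_h;
    sub s h by s_h;
    sub c h by c_h;
} liga;
```

Because the substitution is a rendering-time feature rather than different underlying text, the stories stay as plain standard spelling; turning the feature off in the reading app shows the exact same text without the visual cues.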
In the example text, I think "hōt" and "joke" should be "hot" and "jōke" instead. Also, the vowel in "to" is different yet again, so maybe it needs its own glyph. ⊚?
I just wanted to mention those, but it actually has more issues.
trouble / about: the 'u' should be marked, at least for 'trouble', to make it silent (or probably in both cases but differently — I'm not sure about other similar words). But then the 'o' in 'lemonade' is different from the 'o' in 'trouble'. Also, the 'oo' in 'loot' seems strange (it should be ⊚⊚ with the recommendation above). Or am I misunderstanding the point of the markings? Anyway, it hurts my eyes.
I’m working on a workflow to automate font weight and sizing to cover silent letters and prosody, which should address some of that.
One of the key aspects, though, as a transitional learning tool is teaching children the diversity of sounds. So it’s intentional not to have a 1:1 mapping between phonemes and graphemes.