I always find it really weird when somebody on the anonymous internet talks about local places as if we're all neighbors or something. Googling "Richmond Hill" gave me multiple pages of results that had nothing to do with the one Attenborough lives in.
Not to sound hipster about it, but if it's done in this way I find it charming. I also had to piece it together, which took me on a little virtual travel tour, and had me wonder about what Richmond Hill means to the locals. Rather fitting in context, too.
The "everyone on the internet is American" stuff in e.g. politics or job market convos is a lot more grating.
In hindsight it maybe should have also been obvious from the language alone. "Richmond" is essentially "rich mont", i.e. "rich hill", so "Richmond Hill" is a bit like saying "Rich Hill Hill", which is basically "Wealthy Desirable Area."
BTW there is a linguistic tradition of “hill hill” names. When new immigrants come to an area and ask the locals what a hill is called, the locals say “big hill” in their language, and the newcomers then call it “Bighill” hill in theirs. I forget the examples, but this has happened often enough in England that there are places whose names are five hills deep (Brythonic -> Latin -> Saxon -> Norse -> Norman).
Sometimes the simplest things are hidden in plain sight :) Most people point with their fingers/hands. Unlike Rayman, whose hands float free like vectors, biological beings have hands connected to their bodies, which makes them behave like spinors. Dirac actually knew about this; after all, there is a belt trick named after him.
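The double cover behind the belt trick shows up in a few lines of quaternion arithmetic (a sketch; `rotation_quaternion` is just an illustrative helper, rotating about the z-axis):

```python
import math

def rotation_quaternion(theta):
    """Unit quaternion (w, x, y, z) for a rotation by theta about the z-axis."""
    return (math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2))

# A full 2*pi turn maps to the quaternion -1, not +1: the spinor
# (tethered-hand) state is flipped even though the rotation itself
# is the identity.
w, x, y, z = rotation_quaternion(2 * math.pi)
print(round(w), round(x), round(y), round(z))   # -1 0 0 0

# Only after a 4*pi turn (two full rotations) does the quaternion
# return to +1: the belt trick, made arithmetic.
w, x, y, z = rotation_quaternion(4 * math.pi)
print(round(w), round(x), round(y), round(z))   # 1 0 0 0
```

Vectors (and Rayman's hands) return to themselves after 2π; anything tethered picks up the sign flip and needs 4π.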
Same. I rarely use mine and find that they just about always still have battery. And even when they don't I'll just plug them in and go find something else to do for a few minutes and by the time I come back they are usable for hours. The talk about the power button is super strange to me.
> It fixates on one particular basis and it results in a vector space with few applications and it can not explain many of the most important function vector spaces, which are of course the L^p spaces.
Except that just about all the relevant applications in computer science and physics fix a representation as standard practice.
In physics it is common to work explicitly with the components in a basis (see tensors in relativity or representation theory), but it's also very important to understand how your quantities transform between different bases. It's a trade-off.
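The trade-off in miniature: the same abstract vector has different components in different bases, and the components transform with the inverse basis matrix. A minimal numpy sketch (the basis `B` here is an arbitrary made-up example):

```python
import numpy as np

# The same abstract vector v, expressed in the standard basis...
v_std = np.array([2.0, 3.0])

# ...and in a different basis B, whose columns are the new basis
# vectors written in standard coordinates.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Components transform with B^{-1}, even though v itself is unchanged.
v_new = np.linalg.solve(B, v_std)
print(v_new)  # components of v in basis B

# Reconstructing v from either representation gives the same vector.
assert np.allclose(B @ v_new, v_std)
```

Working with `v_new` directly is the "fixate on a representation" style; the invariance check at the end is the basis-free view.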
Fwiw, my favourite textbook in communication theory (Lapidoth, A Foundation in Digital Communication) explicitly calls out this issue of working with equivalence classes of signals and chooses to derive most theorems using the tools available in the ℒ_2 (square-integrable functions) and ℒ_1 spaces.
> I own an M4 iPad Pro and can't figure out what to do with even a fraction of the horsepower, given iPadOS's limitations.
Literally everything you do gets the full power of the chips. They finish tasks faster while using less power than previous chips, so devices can get away with smaller batteries and thinner enclosures. A higher ceiling on performance is only one aspect of an upgraded CPU; a lower floor on energy consumed per task is typically much more important for mobile devices.
Right but what if I don't notice the difference between rendering a web page taking 100ms and it taking 50ms? What if I don't notice the difference between video playback consuming 20% of the chip's available compute and it consuming 10%?
The difference in usefulness between ChatGPT free and ChatGPT Pro is significant. Turning up compute for each embedded usage of LLM inference will be a valid path forward for years.
That's a JIT. It uses the same compiler infrastructure but swaps LLVM's AoT backend for its JIT backend. Notably, this blog post targets on-device usage, where a custom JIT is not allowed. You can only interpret.
Because the usefulness of an AI model is reliably solving a problem, not being able to solve a problem given 10,000 tries.
Claude Code is still only a mildly useful tool because it's horrific beyond a certain breadth of scope. If I asked it to solve the same problem 10,000 times, I'm sure I'd get a great answer to significantly more difficult problems, but that doesn't help me, as I can't scale myself to checking 10,000 answers.
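The arithmetic behind that intuition: with an assumed per-attempt success rate p, the chance that at least one of k samples is correct is 1 - (1 - p)^k, which saturates long before the answers become checkable. The p = 0.001 below is purely illustrative:

```python
# Probability that at least one of k independent attempts succeeds,
# given a per-attempt success rate p (the usual pass@k estimate).
def pass_at_k(p, k):
    return 1 - (1 - p) ** k

# A model that is hopeless per try (0.1%) looks nearly perfect at k=10,000...
print(round(pass_at_k(0.001, 10_000), 3))  # 1.0

# ...but a human reviewer effectively lives at k=1.
print(round(pass_at_k(0.001, 1), 6))  # 0.001
```

Sampling buys success-in-the-batch, not reliability: someone still has to find the one good answer among 10,000.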
Without reading an entire novel's worth of text, do they explain why they picked these dates? They have a separate timeline post where the 90th-percentile estimate for a superhuman coder is later than 2050. Did they just go for shock value and pick the scariest timeline?
Only gripe I have with the tool is that once you've gotten a country right a few times it zooms in too far. I still had no clue where Eritrea was after getting it right like four times. Just got lucky.
But now that the map only shows me three possible countries I can trivially remember which one it was. Ask me again tomorrow while only showing me the full map and I might guess it's in South America.