Just want to point out that Philadelphia’s homicide count is down ~40% from last year. And Penn’s “haven” looks similar to the other affluent commercial corridors throughout the city.
"Works" on Firefox if you can stomach 300-400ms pauses every 2 seconds.
Edit: I just profiled it and it spends 42% of exclusive time in texImage2D. It would be better to allocate a set of textures up front and then use texSubImage2D to update their contents; texImage2D reallocates the texture's storage on every call.
You'll want to get rid of texImage2D completely except at application startup (allocate a pool of N textures up front, then reuse them and update their contents with texSubImage2D). And short of optimizing the text rendering itself, which seems awfully inefficient, you'll want to render to those textures offscreen ahead of time, before you need them on-screen.
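Roughly the shape of that fix in WebGL. This is just a minimal sketch: POOL_SIZE and the 256x256 dimensions are placeholder values, and it assumes `gl` is your existing WebGL context and that the text is pre-rendered to an offscreen canvas.

    // One-time setup: allocate a fixed pool of textures with texImage2D.
    const POOL_SIZE = 32;    // hypothetical pool size
    const W = 256, H = 256;  // hypothetical tile dimensions
    const pool = [];
    for (let i = 0; i < POOL_SIZE; i++) {
      const tex = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, tex);
      // Allocate storage once; contents start empty (null).
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, W, H, 0,
                    gl.RGBA, gl.UNSIGNED_BYTE, null);
      // LINEAR min filter so the texture is complete without mipmaps.
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
      pool.push(tex);
    }

    // Per frame: reuse a pooled texture and only update its contents.
    function updateTexture(tex, offscreenCanvas) {
      gl.bindTexture(gl.TEXTURE_2D, tex);
      // texSubImage2D writes into existing storage; no reallocation.
      gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0,
                       gl.RGBA, gl.UNSIGNED_BYTE, offscreenCanvas);
    }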
To be fair, you're crazy CPU-bound. This workload is peanuts for a modern GPU and there's no excuse for it not running at 500+ fps. But that's just how JS goes. You'd probably have better luck with C/wasm for this kind of thing if the web is your target.
For reference, while it now works much better on my old laptop, on an iPhone 11 Pro Max running iOS 18.0.1 it still crashes unless I add the discardFrac parameter: https://firehose3d.theo.io/?discardFrac=0.6
I might expect some extra-semantic cognitive faculties to emerge from LLMs, or at least be approximated by LLMs. Let me try to explain why. One example of an extra-semantic ability is spatial reasoning. I can point to a spot on the ground and my dog will walk over to it — he’s probably not using semantic processing to talk through his relationship with the ground, the distance of each pace, his velocity, etc.

But could a robotic dog powered by an LLM use a linguistic or symbolic representation of spatial concepts and actions to translate semantic reasoning into spatial reasoning? Imagine sensors with a measurement-to-language translation layer (“kitchen is five feet in front of you”), and actuators that can be triggered with language (“move forward two feet”). It seems conceivable that a detailed enough representation of the world, expressive enough controls, and a powerful enough LLM could result in something akin to spatial reasoning (an extra-semantic process), while under the hood it’s “just” semantic understanding.
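To make that concrete, here's a toy sketch of the loop I have in mind. Everything in it (the sensor and actuator helpers, askLLM, wordsToNumber) is hypothetical:

    // Hypothetical loop: sensors -> text, LLM -> text, text -> actuators.
    async function step(sensors, actuators, askLLM) {
      // Measurement-to-language layer: render raw readings as prose.
      const observation =
        `kitchen is ${sensors.distanceTo('kitchen')} feet ahead; ` +
        `obstacle ${sensors.nearestObstacle()} feet to the left`;

      // The LLM does its "spatial" reasoning purely over the text.
      const command = await askLLM(
        `You control a robot dog. Given: "${observation}". ` +
        `Reply with one action like "move forward two feet" or "turn right".`);

      // Language-to-actuator layer: parse the reply back into motion.
      const m = command.match(/move forward (\w+) feet/);
      if (m) actuators.forward(wordsToNumber(m[1])); // wordsToNumber: hypothetical
    }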
Spatial reasoning is more akin to visualising a 3D "odd shaped" fuel tank from 2D schematics and being able to mentally rotate that shape to estimate where a fluid line would be at various angles.
This is distinct from stringing together treasure map instructions in a chain.
Isn’t spatial navigation a bit like graph walking, though? Also, AFAIK blind people describe it completely differently, and they’re generally confused by the whole concept of 3D perspective and objects getting visually smaller over distance, and so on. Brains don’t work the same for everyone in our species, and I wouldn’t presume to know the full internal representation just based on qualia.
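To illustrate the graph-walking framing: treat rooms as nodes and doorways as edges, and "navigation" becomes plain search. A toy sketch (the floor plan here is made up):

    // Toy model: a floor plan as an adjacency list; navigation = BFS.
    const plan = {
      bedroom: ['hallway'],
      hallway: ['bedroom', 'kitchen', 'bathroom'],
      kitchen: ['hallway'],
      bathroom: ['hallway'],
    };

    function route(from, to) {
      const queue = [[from]];
      const seen = new Set([from]);
      while (queue.length) {
        const path = queue.shift();
        const room = path[path.length - 1];
        if (room === to) return path;
        for (const next of plan[room]) {
          if (!seen.has(next)) { seen.add(next); queue.push(path.concat(next)); }
        }
      }
      return null; // no route
    }

    console.log(route('bedroom', 'kitchen')); // ['bedroom', 'hallway', 'kitchen']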
I'm always impressed by the "straightedge-and-compass"-flavoured techniques drafters of old used to rotate views of odd 3D shapes from pairs of 2D schematics, in the centuries before CAD software.
Only tangentially related, but it reminded me of this anecdote…
I know someone who was a nurse for many years. She told me that once she had a patient, a young man who had been comatose for months. She and another nurse were changing the bedsheets when they accidentally dropped him, and his head smacked into the bed frame, quite hard. He immediately woke up from the coma.
Reminds me of Janusz Goraj, the man from Poland who was blind for decades until he was hit by a car while crossing the street, slammed his head against the pavement, and was instantly cured.
Apparently it could have been "the large doses of anticoagulants mixed with other medicines" while he was being treated for his injuries that cured his blindness, according to this article:
There are a lot of decisions to make when buying a refillable vape. There are various models; they take compatible disposable pods (the vape juice receptacle) with coils of varying resistance (in ohms), and then there are the juices, which are either salt nic or freebase, with a range of nicotine concentrations.
One could go to a vape shop and have the clerk explain all of this, or spend a bit of time researching online. It's not especially hard, but it _is_ harder than just strolling to any corner store or gas station and asking for a disposable vape.
The e-waste of disposables is hard to stomach, especially some of the newer ones that even have LCDs and buttons for playing little games on the vape.
> Because this paper was written in 2024, we include an obligatory section involving generative AI and LLMs.
> Another ERA is the Mayfly Parenthood Assumption, in which all parents perish immediately upon naming their child, which makes the math substantially easier.
> It is well-known that parents are always in complete agreement over the name they would prefer to pick for their newborn child.