> [if] you want to retain as much precision as possible and still use floats, don't store it in a float with range [0.0,100.0]. Store it with the range [0.0,1.0].
I just tested this out and it doesn't seem true.
The two storage methods seem similarly precise over most of the range of fractions [0,1]; sometimes one gives lower spacing, sometimes the other. For instance, for fractions from 0.5 up to about 0.64 (where the scaled value crosses the binade boundary at 64) we get smaller spacing using [0,100], but from there up to 1 the spacing is smaller when storing in [0,1].
For very small fractions (< 1e-38, the subnormal range), it also seems more accurate to store in the range [0,100], since the same bit patterns then represent smaller fractions. The smallest nonzero positive float32 is 1.40129846e-45; if you store in the range [0,1], that is the smallest representable fraction, but if you store in the range [0,100], that same value represents a fraction of 1.40129846e-47, which is 100 times smaller.
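The subnormal claim is easy to check directly in NumPy (a quick sketch, nothing beyond what's stated above):

```python
import numpy as np

# Smallest positive (subnormal) float32, i.e. 2**-149.
tiny = np.nextafter(np.float32(0), np.float32(1))
print(float(tiny))        # ~1.40129846e-45

# Interpreted as a fraction stored in [0,1], `tiny` is the smallest
# representable fraction. Interpreted as a fraction scaled into [0,100],
# the same bit pattern decodes to a fraction 100x smaller:
print(float(tiny) / 100)  # ~1.40129846e-47
```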
For the general result, see for yourself in Python/NumPy:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 1, 10000)
# spacing when the fraction is stored scaled to [0,100]
plt.plot(x, np.float64(np.spacing(np.float32(x * 100))) / 100)
# spacing when the fraction is stored directly in [0,1]
plt.plot(x, np.float64(np.spacing(np.float32(x))))
plt.show()
How is CUDA C that much easier than OpenCL? Having ported back and forth myself, the base C-like languages are virtually identical: just substitute "__syncthreads();" for "barrier(CLK_LOCAL_MEM_FENCE);" and so on. To me the main problem is that Nvidia hobbles OpenCL on their GPUs by not updating their CL compiler to OpenCL 2.0, so some special features are missing, such as many atomics.
Did you read it in English? I read it in French and loved it, then looked at Moncrieff's translation - I would not have made it through. Moncrieff turned Proust's clear and precise (but long) sentences into a kind of esoteric word puzzle. Lydia Davis's translation looked much better.
For War and Peace, I didn't know Russian, so I tested different translations before going ahead. The translator makes a big difference; I found some translations hard to read.
I had your feeling with Ulysses, though. No translation issue there. Couldn't make it very far.
My French is simply not up to it. A neighbor said that he tried reading Proust in high school French class (Richmond, Virginia, many years ago), and only at the end of the year realized that the kids who got As had used cribs.
But Nancy Mitford wrote about how much better Proust is in French.
The article says the problem was a dropped frame from the camera, but that just further piques my curiosity:
Presumably they use some kind of Kalman filter, but those are easy to program to account for missing frames, or frames at irregular timepoints, perhaps even for corrupted camera images if the programmer had a reasonable prior for the likelihood of that happening. Kalman filters by design account for measurement error.
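As an illustration of the "irregular timepoints" point (a hypothetical 1-D constant-velocity filter, not the actual flight code): if dt is an argument of each step, a dropped frame just means the next update uses a larger dt, and the prediction covariance grows accordingly.

```python
import numpy as np

def kf_step(x, P, z, dt, q=1e-3, r=0.1):
    """One predict+update step of a 1-D constant-velocity Kalman filter.

    x = [position, velocity], z = position measurement, dt = time since
    the last accepted measurement (can vary from step to step).
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition over dt
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])            # process noise grows with dt
    H = np.array([[1.0, 0.0]])                     # we only measure position
    # Predict across the (possibly irregular) interval dt.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measurement z.
    S = H @ P @ H.T + r                            # innovation covariance
    K = P @ H.T / S                                # Kalman gain
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
# Frames nominally 0.1 s apart; the third frame was dropped, so the next
# update simply arrives with dt = 0.2 and the filter stays consistent.
for z, dt in [(1.0, 0.1), (1.1, 0.1), (1.3, 0.2)]:
    x, P = kf_step(x, P, z, dt)
```

The key design choice is that nothing in the filter assumes a fixed frame rate; only the timestamps need to be correct.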
It seems like the issue wasn’t just that there was a dropped frame; it’s that the time slot for that frame got filled by the next frame, and every subsequent frame was then off by one, resulting in a persistent timestamp offset of the vision data from reality for the remainder of the flight.
I didn’t read into it too much so I may not have all the details right, but I think this is the gist of it.
That would be a straight-up, avoidable software/hardware bug: The incoming timestamp is incorrect, and garbage in is garbage out.
That would make me curious how the timestamp error occurred: software, hardware? Camera or Navigation code? I assume they have very high standards, what was the process failure point?
Thank you for your comment, because it triggered an interesting chain of thoughts about a semi-related problem I’m working on at work.
Usually with a Kalman filter, you’re taking into account the spatial measurement error (gyro-measured roll rate error, accelerometer-measured acceleration error, etc) but I don’t think I’ve ever encountered a system that explicitly modelled sensor latency variation relative to timestamps. Based on the description of the problem they encountered here, I suspect what happened is that it lost a frame but didn’t adjust the “photo timestamps” appropriately, so every frame that came along afterwards would have had an incorrect timestamp. Even if the Kalman filter was set up to handle “this photo was taken 20ms ago” when doing its forward integration, if they didn’t model “this photo was taken 50ms ago but is reporting that it was taken 20ms ago” then you’d pretty readily get the kinds of oscillation they were getting.
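The back-of-the-envelope version of why a misreported latency is worse than noise (hypothetical numbers, just illustrating the latency figures quoted above):

```python
# If a frame is really 50 ms old but the filter believes it is only 20 ms
# old, the filter compares the stale measurement against a state propagated
# 30 ms too far. For a vehicle moving at velocity v, that injects a
# residual of v * 0.030 into every single update -- a persistent bias,
# not zero-mean noise, which is what a Kalman filter is built to average out.
v = 2.0                   # m/s, hypothetical horizontal velocity
true_latency = 0.050      # the frame is actually 50 ms old
assumed_latency = 0.020   # the filter believes it is 20 ms old
residual_bias = v * (true_latency - assumed_latency)
print(residual_bias)      # ~0.06 m of spurious innovation per frame
```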
HoloLens provides a timestamp with every frame from each sensor, which can be used for sensor fusion outside of the system usage. Windows.Perception.PerceptionTimestamp can be used for either recorded data (e.g. camera) or for future predictions (e.g. predicted device position). The predicted latency is also used to adjust the render to ensure the viewer's perspective is correct even though the draw calls may be lagging slightly behind the viewer's position.
Sperm cells can be seen as a haploid phase of many organisms' lifecycle, so are alive by pretty much any definition.
In humans, the haploid phase of the lifecycle is single-celled, while the diploid phase is multicellular. In contrast, in mosses and fungi the haploid phase is multicellular while the diploid phase (sporophytes/zygote) is single-celled.
IBM's new POWER9 supercomputer architecture supports IEEE binary128 (quad-precision) floats in hardware. Their press materials claim the current fastest supercomputer in the world uses POWER9.
The ppc64 architecture (still produced by IBM) supports "double-double" precision for the long double type, which is a bit hacky and software-defined, but has a 106-bit mantissa.
And ARM's aarch64 architecture supports IEEE binary128 long doubles as well, though it is currently implemented in software by the compiler. Maybe they plan a hardware implementation in the future?
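The "double-double" trick mentioned above can be sketched in a few lines of Python (a simplified illustration of the idea, not ppc64's actual runtime): a value is stored as an unevaluated sum hi + lo of two float64s, giving roughly 2 x 53 = 106 mantissa bits.

```python
def two_sum(a, b):
    """Knuth's error-free transformation: returns (s, e) with
    s = fl(a + b) and a + b == s + e exactly."""
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def dd_add(x, y):
    """Add two double-doubles (hi, lo) -> (hi, lo), simplified version."""
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]          # fold in the low words
    return two_sum(s, e)      # renormalize so |lo| <= ulp(hi)/2

# 1 + 1e-30 cannot be represented in a single float64 (it rounds to 1.0),
# but the low word of the double-double retains the residue:
hi, lo = dd_add((1.0, 0.0), (1e-30, 0.0))
print(hi, lo)  # 1.0 1e-30
```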
I've read Chomsky's opinion on this before in his essay "Science, Mind, and Limits of Understanding", and I think he misunderstands the physics.
He seems to think Newton accidentally disproved the concept of locality with his theory of gravity. It's true that philosophers largely gave up on locality in the 18th century because of Newton, but that was only temporary: in the 19th century the principle of locality came back with a vengeance after Maxwell.
Today the principle of locality is a key component of the Standard Model: its Hamiltonian is local, meaning you can compute what happens at a point in spacetime knowing only what is going on in an infinitesimal region around it. Even outside the Standard Model, LIGO proved that gravitational waves exist, and therefore that gravity propagates at a finite speed rather than acting instantaneously - a local phenomenon.
Einstein was famously prepared to give up on quantum mechanics because it seemed to violate the principle of locality, which he thought was more important. That is still debated sometimes, though whether quantum nonlocality exists seems to be a matter of interpretation, and it is also different from the kind of locality Chomsky is talking about. Locality is still a key principle in physics.
> what is going on in an infinitesimal region around it
A "field" is just a name for spooky action at a distance. It's a description, not an explanation. There is no mechanical contact, only "locality" of a field.
Or are you saying that fields are really mediated by particles... so there is mechanical contact?
People call these things gravitons, but they are particles in the sense that light and electricity are particle-based - which is to say that, in the limit, a discrete particle doesn't describe everything that's happening, and some wave-like properties in space and time are a good fit too. The particles are a manifestation, or a partial mathematical description, of the thing that's underneath. A particle is an excitation of a field - all particles are - so mechanical interactions reduce to fields. The thing to remember is that your intuitive and perceptual apparatus largely evolved to help you get fruit in a forest, and later to help you catch rabbits and shellfish. The ideas that seem obvious are approximations that let you navigate the world of the past - but they are not "right".
Yes, forces transmitted through fields act "at a distance", but is that really "spooky"? Do you think it is "spooky" that if you make a wave at one end of a pond, the wave reaches the other end? I don't. I consider the propagation of waves to be a "local" non-spooky phenomenon.
Disturbances in a field propagate through space similarly. A disturbance of the field at a point only affects the value of the field in the immediate spacetime surroundings, just like a water wave. I would call that "local" and non-spooky. Whether or not there is "mechanical contact", whatever that means, is irrelevant.
This is in contrast to Newton's theory of gravity, where the force of gravity was spookily felt instantaneously across space.
> A disturbance of the field at a point only affects the value of the field in the immediate spacetime surroundings, just like a water wave.
Ok, I see that's local (though not mechanical, as you say).
I think a magnetic field (as from a magnet, not a wave) is not local though? So, the transmission of modulation is "local", but the field itself acts "at a distance"?
You can find plenty more if you google. I personally agree with the ideas of that paper in a broad sense if not in detail, as do many people.
Second, if you are trying to argue that locality is no longer a guiding principle, note how the Standard Model is quantum-mechanical, and so violates Bell's inequality, yet we still call it "local". Locality was a key guiding principle of the Standard Model.
> But many smaller projects/components are volunteer-only.
Someone is paying for it with their spare time. Spare time doesn't last forever.
If you're not paying for something, you should see it either as a temporary shortcut that eventually needs a more sustainable fix, or something you don't care about if it disappears.
While I understand, and kind of agree with, the idea that volunteer-only projects are less sustainable long term, your implication that commercial products are more sustainable is provably false.
Many, many commercial software projects that people paid money for are killed every single year. Meanwhile, many volunteer-only projects last for decades, some of which have been critical to the very foundation of the web.
So no, "if you are not paying for it you should see it as a temporary shortcut" is a completely false statement.
It's subsidized by those volunteers' time. Time that could otherwise be spent on other things.
Granted, there's nothing necessarily nefarious about this: anything you get for free is being paid for somewhere else. It's only potentially nefarious if you don't know what's paying for it.
He's not talking about Bitcoin. Do people realize there are literally 1000+ projects (most irrelevant) with some clear innovation going on? We're already way past Bitcoin and PoW as the future of blockchain tech.
> Universities will just start offering education for free instead of waiving the tuition
Aren't you supposed to be taxed on the value of a good/service/gift received even if it was given to you for free? What is the "true value" of tuition?
I don't think waiving tuition will work, assuming tuition is taxed like most other things.