Just encode the text as ASCII codes after the decimal point of a zero (0.656168... etc.). Then just mark that ratio of the stick's length and you're done...
Stick encoding with graphite resolution (0.335 * 10^-9 meter): "Uti" (31 bits -> 3 UTF-8 characters)
Stick encoding with Planck resolution (1.616255 * 10^-35 meter): "Utility ought " (115 bits -> 14 UTF-8 characters)
Complete first sentence: "Utility ought to be the principal intention of every publication." 
It appears that this storage scheme may not be suited towards the safekeeping of literature.
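For reference, the bit counts above follow directly from the measurement resolution. A quick sketch, assuming a 1 m stick and a single mark:

```python
import math

def capacity_bits(stick_length_m, resolution_m):
    # number of distinguishable mark positions, expressed as whole bits
    return math.floor(math.log2(stick_length_m / resolution_m))

# one graphite-atom resolution on a 1 m stick
graphite = capacity_bits(1.0, 0.335e-9)        # 31 bits -> 31 // 8 = 3 ASCII bytes
# Planck-length resolution on a 1 m stick
planck = capacity_bits(1.0, 1.616255e-35)      # 115 bits -> 115 // 8 = 14 ASCII bytes
print(graphite, planck)
```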
Resolution smaller than a wavelength can be achieved with vernier techniques (as in a caliper), but those require a pair of light sources with a precise frequency/phase relationship between them, which is difficult to arrange at such high frequencies, so it is hard to improve the resolution much.
I have not looked to see whether there has been any progress in recent years, but I would guess that a very approximate limit for the resolution of length measurement is around 100 nm. So measuring the length of a 1-meter stick might provide up to log_2(10^7) bits, i.e. about 23-24 bits.
Of course, any temperature fluctuation would change the length of the stick by much more than the resolution.
However that can be avoided by encoding the information not in the absolute length, but in the ratio between the lengths of 2 segments marked on the stick.
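A minimal sketch of why the ratio scheme is immune to uniform thermal expansion (the expansion coefficient below is a made-up illustrative value):

```python
def read_ratio(a_m, b_m):
    # the encoded value: the first segment's share of the combined length
    return a_m / (a_m + b_m)

alpha = 1.2e-5          # hypothetical linear expansion coefficient, per kelvin
a, b = 0.3, 0.7         # segment lengths at the reference temperature, meters
scale = 1 + alpha * 25  # uniform expansion after a 25 K temperature swing

# both segments expand by the same factor, so the encoded ratio is unchanged
assert abs(read_ratio(a * scale, b * scale) - read_ratio(a, b)) < 1e-12
```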
No matter what, it is possible to write many more bit symbols on any stick than it is possible to encode in the measurement of one or a few lengths marked on the stick.
That is because halving the size of a bit symbol doubles the quantity of information written on the stick, while halving the length corresponding to the measurement resolution provides just a single extra bit of stored information.
Not exactly. If the 115 bits are the hash/retrieval key of the actual content, then that can be a lot of information. You just have to have a big enough DB.
Reality, in being geometrical, is infinitely informationally dense (with a discrete conception of information).
This distinction between geometrical space and time, and discrete algorithmic computability is unbridgeable.
And hence there is an extremely firm footing on which to reject AI, brain-scanning readers, teleporters, etc., and most sci-fi computationalism.
Almost nothing can be simulated, as in, realised by a merely computational system.
So assuming continuous space/time and discrete information, I'd agree; but as far as we know, space/time aren't continuous, they just appear that way to us. It doesn't seem like we know for sure that they are discrete, but at the least I'd say it's solid evidence that stuff like brain scanning is definitely in the realm of possibility.
This answer from the Physics StackExchange nicely covers how time/space could appear continuous to us even if they are in fact discrete at lower levels. There is also some interesting discussion in the other answers.
Ie., there is an irreducible geometrical continuity in the sense that no discontinuity can ever appear. The state density is maximal.
Via this route we reproduce the same point: computationalism/simulation'ism' is then just the thesis that computers qua measurably discrete systems can realise dense unmeasurable discrete systems.
This can be shown to be impossible with much the same argument: spatial and temporal geometrical properties obtain in virtue of dense discreteness, and fail to obtain at measurable levels.
The key property of continuity is its irreducibility to measurably discrete systems. That irreducibility isn't, however, limited to continuity.
Wolfram makes this point about the failures of reductionism in a perfectly discrete context, ie., that no CA can compute a CA whose complexity is greater than it can summarise.
I prefer to press a continuous angle: our best theories of all of reality are continuous and geometrical. That energy levels are discrete in bounded quantum systems has almost nothing to do with the indispensability of continuous mathematics in every known physical system -- including that very bounded wavefunction.
I disagree that this necessarily implies any difficulty for the possibility of brain scanning and AI.
Just as a band-limited signal sampled faster than its Nyquist rate (twice its bandwidth) can be perfectly recovered (there's still discretization of the amplitudes, but I hear this can also be handled), I don't see why arbitrarily high frequency (in space and time) should be necessary in order to model the behavior of a brain to the point of long-term indistinguishability.
(That being said, I don’t particularly expect whole brain emulation to ever be achieved, I just don’t see “spacetime is continuous (or well approximated as continuous)” as being a strong argument for it being impossible.)
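A toy illustration of that sampling-theorem point, using a band-limited test signal and Whittaker-Shannon interpolation over a finite (hence approximate) window:

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def f(t):
    # sinc^2 is band-limited: its spectrum is a triangle supported on [-1, 1] Hz
    return sinc(t) ** 2

fs = 4.0                 # sample rate, comfortably above the 2 Hz Nyquist rate
T = 1 / fs
ns = range(-400, 401)    # finite sampling window (the ideal sum is infinite)
samples = [f(n * T) for n in ns]

def reconstruct(t):
    # Whittaker-Shannon interpolation from the discrete samples
    return sum(s * sinc((t - n * T) / T) for n, s in zip(ns, samples))

# the reconstruction agrees with the original between the sample points
assert abs(reconstruct(0.3) - f(0.3)) < 1e-4
```

The samples here decay quadratically, so the truncated interpolation sum converges quickly; for signals whose samples don't decay, a much wider window would be needed.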
I’m not sure what you mean by computationalism.
If you mean the idea that the way the world works is computable in the abstract sense (not requiring any practical bounds on the computational resources needed), then a world that is discrete and finite, merely with extremely fine grains, poses no issue for computation in that abstract sense (just make the imaginary computer even bigger).
If you mean like, an accurate simulation of the past of the world being run within the world, yeah that doesn’t work.
1) Intelligence requires spatiotemporally acting on, and being acted on by, a spatiotemporally dynamic environment.
2) Dynamical spatiotemporal properties of the body enable (1).
3) These properties are continuous features of the body (eg., organic plasticity).
4) Continuous properties are irreducible to measurably discrete ones.
5) Computers are systems we build that are measurably discrete.
C) Computers cannot be intelligent.
The issue is that the word "computer" means not "device we have made" but "universal Turing machine".
Ie., a computer is any system which realises a function from the Naturals to the Naturals.
Physics barely, if at all, has any use whatsoever for those functions. It is a very important point.
Computer scientists (ie., discrete mathematicians) are not the people who are even able to describe, engineer and build whatever is needed for an AI -- if, as I claim, continuous dynamical properties are needed.
(As, for example, they are needed by pretty much every system.)
This may seem weird, and indeed, it's far less weird if you just say "continuous".
But here's an intuition: spatio-temporal continuity is "scale-free" in the sense that stuff happening at the sub-proton scale is affected by stuff happening at the galactic scale.
Thus reality has to be able to "zoom" from the sub-proton to the galactic.
In the case of organic plasticity, I do think that macroscopic effects which are whole-body distributed (including, eg., thoughts) have to drive protein expression at the sub-sub-cellular level.
Consider simulating that with a low-state discrete computer: it is many orders of magnitude more data than a planet-sized computer could store and many more years than the lifetime of the universe (consider the number of molecules to store, and their interaction effects from whole-body down).
Running operations at anything in the nanoseconds makes this simulation impossible. It simply needs to be much closer to O(1/L^2).
One is that I don't think it follows from the premise that the continuity of the physical world precludes AI, brain scanning, etc. Even if the physical world were continuous (likely not, see below), an arbitrary degree of approximation could be attained, in principle. At the very least I would not call the footing "firm".
The second is that the universe is very likely not continuous anyway. The Bekenstein bound puts an upper limit on the number of bits of information a region of space may contain. If the ruler tick mark were either measured or localized to the precision required to encode the information, the information density would cause it to collapse into a black hole. This would happen once your measurement needed to be about as precise as a Planck length, which would allow you to encode about 115 bits of information with your tick mark.
(This in and of itself is independent of the fact that you would need to construct the ruler out of objects that the universe permits; your ruler tick mark would need to be made of, and measured with, discrete fundamental particles, which by their very nature are quantized.)
But I think that example just shows how few bits you can really get out the exercise!
In practice, you "only" need ~42 digits of pi to draw a circle spanning the entire known universe (diameter of 8.8 * 10^26 m) and it will deviate from the ideal circle by less than the size of a proton (0.8 * 10^−15 m).
Having theoretically infinite precision does not mean that it makes a measurable difference.
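The 42-digit claim can be checked with exact rational arithmetic. A sketch using Gibbons' streaming spigot algorithm to supply the digits:

```python
from fractions import Fraction
from itertools import islice

def pi_digit_stream():
    # Gibbons' streaming spigot algorithm: yields 3, 1, 4, 1, 5, 9, ...
    q, r, t, j = 1, 180, 60, 2
    while True:
        u, y = 3 * (3 * j + 1) * (3 * j + 2), (q * (27 * j - 12) + 5 * r) // (5 * t)
        yield y
        q, r, t, j = 10 * q * j * (2 * j - 1), 10 * u * (q * (5 * j - 2) + r - y * t), t * u, j + 1

digits = list(islice(pi_digit_stream(), 61))                   # "3" plus 60 decimals
pi_60 = Fraction(int("".join(map(str, digits))), 10**60)       # pi to 60 places
pi_42 = Fraction(int("".join(map(str, digits[:43]))), 10**42)  # truncated at 42 places

diameter = 88 * 10**25        # 8.8 * 10^26 m, diameter of the observable universe
proton = Fraction(8, 10**16)  # 0.8 * 10^-15 m

# circumference error from using only 42 decimals of pi stays below a proton radius
assert diameter * (pi_60 - pi_42) < proton
```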
But that doesn't mean brains aren't special -- it means brains are special and computers are special. Even more: it seems to imply computers, AI, etc. can be as special as ourselves, sentient, and perhaps even more special in ways we haven't realized yet.
It's difficult to even imagine a physical theory with unbounded local information. It seems to open the possibility of crazy things like hypercomputation, which do not seem very well defined. (For example: at every time 1 - 1/n seconds (n > 1) from now, flip a switch ON/OFF. What state will the switch be in at t > 1 s? And at exactly t = 1 s?)
Note: while information and information flow are themselves bounded (hence no hypercomputation), I don't know of any obvious objections to continuous time. (I'm not sure the continuity of time has any profound implications.)
How do you go from computation to feeling a toothache?
It's like telling me a cucumber is really a Porsche (but worse).
Pain/pleasure seem to have an easy enough analogue in at least something like a reward function, but what really gets me is colors. I feel like, if honestly considered, the word "color" is all that's necessary to disprove materialism. Color is. How? Dunno.
I agree with the latter, but the former? Classical Newtonian mechanics is easy enough to imagine.
What baffles me a bit is that quantum mechanics seems to be linear, but we seem to see chaos in the real world. (And exactly that (mathematical) chaos is also what the article exploits.)
See eg https://physics.stackexchange.com/questions/33344/is-the-uni... for a discussion, and https://www.scottaaronson.com/papers/island.pdf for a longer treatment.
One universe has the "observer" (a bunch of variables, really) in one state, another has it in a different state. Those variables encode some information about themselves, so indeed in different universes the different "versions" of the observer are different, and each perceives it as the universe randomly deciding to choose their version of existence. So what?
We're parts of physical reality, so trying to describe physics from the point of view of an out-of-universe observer, while psychologically attractive, is ultimately futile: in QM, such an approach breaks down very quickly.
I'm not sure whether eg the Born rule is enough by itself to explain how (non-linear) chaos arises from linear QM.
no algorithm running on a cpu can move a muscle --
it is precisely that movement is a spatiotemporal property which means no turing machine can realise it
movement isn't a symbolic operation
If you are making a broader claim about the limits of hardware vs. wetware then you will need to clarify things a bit further.
Also, by the same logic you apply to space, you could say that time is infinitely divisible, so you could create a computer which finishes an infinite amount of steps in a finite amount of time.
A consequence of this is that classical physics is not really deterministic: this is because, in general (ie. including chaotic systems), the evolution of a system depends on a set of initial conditions that are specified by full real numbers, impossible to measure with finite precision. So, the use of real numbers is hiding the indeterminism in the initial conditions, much like the function in this article encodes a dataset in a single parameter.
Basically, the universe can just as well be approximated in two different ways. One, it's all straight lines, and anything that looks like a circle/curve is actually a piece of a veeery many sided polygon if you look closely enough at it. Two, it's all curves of some curvature, and anything that looks like a straight line is actually a segment of a veeery large circle. The first perspective corresponds to taking the rational numbers as physical. The second one corresponds to taking pi as physical (and then it's unclear whether e is also physical, or which other transcendental numbers are, but 1 is definitely un-physical then).
Probably the constructive framework is the best way to express this concept, just starting from whatever constant we chose to define as the unit.
However, even if rarely used outside of proofs, non-computable numbers are the vast majority of the reals; in fact, there are only countably many constructive/computable numbers.
I specifically said that with a /discrete/ conception, geometry is infinitely dense (with discrete states).
Not to mention, the finer the distinctions between two states of a system, the more energy you need to distinguish them. So, the less impact these differences can have, unless the system is extraordinarily energetic (and even then, you end up in fundamental limits of energy per volume, like the Schwarzschild radius).
So again, there is no sense in which a finite part of the universe is universally dense.
Even worse for your argument, all currently known laws of physics use computable functions (the randomness in QM notwithstanding). So, by definition, all known laws of physics can be simulated by an ideal Turing machine (again, give or take some randomness in QM, depending on the interpretation you choose to believe in and on how you choose to simulate the QM system).
I assure you there are plenty of groups out there simulating systems that operate with similar densities (but lower volume) to the brain.
Elementary particles, for example, are discrete. You could argue that they have continuous effects vis a vis the EM field and spatial positioning, but ensemble effects usually render that irrelevant at large enough scales.
I believe there are some theories of quantum gravity that do rely on the idea that space-time is quantized in integer multiples of Planck's length, but these are far from definitive theories.
A much more relevant limit in terms of possible information density is Heisenberg's uncertainty principle, which essentially puts a limit on the maximum possible precision for any measurement.
You might be interested in the Bekenstein bound (https://en.wikipedia.org/wiki/Bekenstein_bound):
> In physics, the Bekenstein bound (named after Jacob Bekenstein) is an upper limit on the thermodynamic entropy S, or Shannon entropy H, that can be contained within a given finite region of space which has a finite amount of energy—or conversely, the maximal amount of information required to perfectly describe a given physical system down to the quantum level. It implies that the information of a physical system, or the information necessary to perfectly describe that system, must be finite if the region of space and the energy are finite. In computer science, this implies that there is a maximal information-processing rate (Bremermann's limit) for a physical system that has a finite size and energy, and that a Turing machine with finite physical dimensions and unbounded memory is not physically possible.
Lots of math works out well as a continuous approximation; eg the Navier-Stokes differential equations seem to describe fluids well on everyday scales. But we know very well that water is made of molecules, so we know that this particular continuous approximation will fail at small enough scales.
By the way, the closest thing we have come to for teleportation is cutting-and-pasting of quantum states. So no classical, digital computers involved there.
Instead, let me throw out another extreme (but fun) view in the opposite direction: Finitism .
These guys not only reject the existence of the continuum; they reject all infinities altogether! In finitism, even discrete things exist only as finite objects (and, in the stricter ultrafinitism, only as constructible ones).
So no infinite universe, no "set of all natural numbers", no "limits" and other ideals over infinite domains. Screw Platonism. Hello Wittgenstein (and Wolfram).
I don't know how far that theory can be taken in a practical sense – most of the body of science is built on Platonism – but I have to say finitism appeals to my CS heart and my earthly experience.
A trivial example: you can partition the environment into an infinite number of objects. And which partition scheme you choose is, in some sense, arbitrary.
Eg., "object: the edge of the glass", "composite: edges of glasses on the table", etc.
Reality admits an infinite number of such schemes, and also forecloses an infinite number (eg., if "pen"=pen, then "paper"!=pen).
I don't think one can meaningfully speak "of reality", ie., provide a discrete linguistic/propositional account, in a way that avoids these infinities.
I also think there is no meta-scheme, so one cannot even order (in terms of fundamentality) which scheme is 'the really real' one.
It's my view that the reason for this issue is that cognition is discrete but reality continuous. Just as discrete aggregations aren't enough here, "aggregative models of atoms" aren't enough for chemistry.
An aqueous solution, just like a society, is much more than merely the sum of the properties of its members. When we partition the world with a discrete scheme, we introduce "emergent properties" which are only the "leftovers from our reductive failure".
The only problem infinity poses is to being realised by a discrete sequential process. That can never be actually infinite. But everything else can!
This is like saying it's impossible to build a water pump without solving the quantum mechanical interactions that govern water flow.
Edit: As a fraction of appended ascii codes.
The "mark on a stick" channel has a capacity like any other channel. If you're sending just one symbol, you could easily calculate the information capacity given a desired probability of bit-error.
Assuming you can put the mark in exactly the right spot, you can model the "noise" as a distribution over the values that the reader will measure. If you model this as `mark + zero-mean normal distribution` with a known variance, then your stick is just an AWGN channel.
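Under that model the one-symbol capacity is easy to evaluate. A sketch (note the 0.5 * log2(1 + S/N) formula strictly assumes a Gaussian-distributed input; using the uniform mark position's variance for S is a simplification, so treat the result as an estimate):

```python
import math

def stick_capacity_bits(stick_length_m, sigma_m):
    # Mark position: the transmitted signal, uniform over the stick.
    # Reading error: additive Gaussian noise with standard deviation sigma.
    signal_power = stick_length_m ** 2 / 12  # variance of a uniform distribution
    noise_power = sigma_m ** 2
    # AWGN capacity per channel use: C = 0.5 * log2(1 + S/N)
    return 0.5 * math.log2(1 + signal_power / noise_power)

# a 1 m stick read with 100 nm measurement noise
print(stick_capacity_bits(1.0, 100e-9))   # roughly 21.5 bits
```

This lands close to the ~23-bit estimate quoted earlier in the thread for a 100 nm resolution.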
So, if you want to simulate every possible interaction in realtime, sure. But you can increase overall capabilities by making sacrifices along several axes: duration, speed, precision, persistence of changes from a generative baseline and lots more.
In other words, it depends entirely on your purpose for the simulation.
It is gratifying to know that the mathematician Georg Cantor demolished AI some hundred odd years before any engineer had thought seriously of it.
Citation needed. Btw nothing in this thread has anything to do with AI. Perhaps you are using a very unusual definition of AI that programmers don't use? Or maybe I just fell for satire. It's really hard to tell on this site.
It's also a resolution of Zeno's Paradox.
That can't possibly be true, because then there would be no point to space and time. If a single location could hold an infinite amount of information, then the rest of reality would be redundant.
>hence there is an extemely firm footing on which to reject [the Matrix]
This seems to be the opposite of the conclusion that your premise implies. I'm the one that doesn't believe in matryoshka simulated universes, but infinite information density is what would make it possible, no?
If we lived in a non-discrete universe, why would computation be unable to exploit it?
It is quite likely (rather obvious, really) that what we call reality is continuous in design, infinite in abstractions, relationships, and descriptions, yet at the same time cohesive in a single entity.
Those types of systems are not representable in computers.
You or I may be wrong about any specific aspect of reality, or fail to be aware of it.
However, we can be sure there is such a thing.
"Reality is that which, when you stop believing in it, doesn't go away." (Philip K. Dick)
It's kind of like the concept of agnosticism.
> Each letter of the message is represented in order by the natural order of prime numbers—that is, the first letter is represented by the base 2, the second by the base 3, the third by the base 5, then by 7, 11, 13, 17, etc. The identity of the letter occupying that position in the message is given by the exponent, simply: the exponent 1 meaning that the letter in that position is an A, the exponent 2 meaning that it is a B, 3 a C, 4 a D, up to 26 as the exponent for a Z. The message as a whole is then rendered as the product of all the bases and exponents. Examples. The word 'cab' can thus be represented as 2^3 x 3^1 x 5^2, or 600.
Excerpt From: Frederik Pohl. “Starburst.”
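The scheme in the excerpt is just Gödel numbering. A minimal sketch with a naive prime generator:

```python
def primes():
    # naive unbounded prime generator (fine for short messages)
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def encode(word):
    # letter k of the message becomes (k-th prime) ** (alphabet position)
    result = 1
    for p, ch in zip(primes(), word.lower()):
        result *= p ** (ord(ch) - ord("a") + 1)
    return result

def decode(number):
    # recover the letters by factoring out each prime in order
    word = []
    for p in primes():
        if number == 1:
            break
        exponent = 0
        while number % p == 0:
            number //= p
            exponent += 1
        word.append(chr(ord("a") + exponent - 1))
    return "".join(word)

assert encode("cab") == 600        # 2^3 * 3^1 * 5^2
assert decode(600) == "cab"
```

Decoding works because exponents are always at least 1, so no prime in the message is ever skipped.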
If you arrange your symbols and contexts carefully, you can even use this as a technique for progressive or lossy compression -- i.e. the more accurately you specify the ratio, the higher fidelity your result.
The specific one I'm thinking of just spits out a scrollable numeric string of pi, and makes the user scroll until they find the digits of pi that match their phone number.
My (7 digit) phone number occurs after digit position 9_000_000, and occurs 21 times in the first 200_000_000 digits.
(Assuming that phone numbers are less than, say, 100 digits.)
But it would be an interesting exercise to do, I guess.
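A sketch of that exercise, using Gibbons' streaming spigot algorithm so no digit file is needed (searching 200 million digits this way would be far too slow, but it illustrates the idea):

```python
from itertools import islice

def pi_digit_stream():
    # Gibbons' streaming spigot algorithm for the decimal digits of pi
    q, r, t, j = 1, 180, 60, 2
    while True:
        u, y = 3 * (3 * j + 1) * (3 * j + 2), (q * (27 * j - 12) + 5 * r) // (5 * t)
        yield y
        q, r, t, j = 10 * q * j * (2 * j - 1), 10 * u * (q * (5 * j - 2) + r - y * t), t * u, j + 1

def find_in_pi(target, max_digits):
    digits = "".join(str(d) for d in islice(pi_digit_stream(), max_digits))
    # skip the leading "3", so the returned index equals the decimal position
    pos = digits.find(target, 1)
    return pos if pos >= 0 else None

# the Feynman point: six consecutive 9s starting at decimal position 762
print(find_in_pi("999999", 1000))
```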
EDIT: I replaced "transcendental" with "normal" after reading Scarblac's comment below: https://news.ycombinator.com/item?id=28699622 -- many important transcendental numbers, including π (Pi), are thought to be (but have not been proven to be!) normal.
It hasn't even been proven that pi contains all number sequences.
See https://math.stackexchange.com/questions/216343/does-pi-cont... for more details.
E.g. I can't imagine that pi with all the '1' digits replaced by '2' isn't transcendental, but it clearly doesn't contain every sequence of digits.
I think you mean normal numbers.
IIRC, most real numbers are normal, and many important transcendental numbers are thought to be (but have not been proven to be) normal.
I edited my comment. Thank you for the correction!
Which means that most real numbers violate all present and future copyrights.
Second, you could just as easily embed an infinitely large dataset into an infinitely long binary string and say that you've reproduced your dataset with a 'single' parameter! That's sort of what this is doing, with some extra steps.
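That point can be made concrete: concatenating fixed-width values into one big integer already gives you a dataset "encoded in a single parameter" (a sketch, with hypothetical 8-bit values):

```python
def pack(values, bits=8):
    # concatenate fixed-width values into one big integer "parameter"
    n = 0
    for v in values:
        assert 0 <= v < (1 << bits)
        n = (n << bits) | v
    return n

def unpack(n, count, bits=8):
    # peel the values back off, least-significant first, then restore order
    values = []
    for _ in range(count):
        values.append(n & ((1 << bits) - 1))
        n >>= bits
    return values[::-1]

data = [3, 141, 59, 26, 53]
assert unpack(pack(data), len(data)) == data
```

The "parameter" simply grows without bound as the dataset does, which is exactly the sleight of hand in question.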
>  “I’m not very impressed with what you’ve been doing.” As recounted by the famous physicist Freeman Dyson himself, this is how Nobel laureate Enrico Fermi started their 1953 meeting. “Well, what do you think of the numerical agreement?”, Dyson countered. To which Fermi replied, “You know, Johnny von Neumann always used to say: With four parameters I can fit an elephant, and with five I can make him wiggle his trunk. So I don’t find the numerical agreement impressive either.”
> In addition to casting doubts on the validity of parameter-counting methods and highlighting the importance of complexity bounds based on Occam’s razor, such as minimum description length (that trade off goodness-of-fit with expressive power), we hope that fα may also serve as entertainment for curious data scientists
So... use a PRNG?
AIC, BIC, et al. need to be reformulated for each model in which they're used, and not all parameters can be treated as fungible.
Haven't read the posted article, but it sounds like the same idea and motivation.
-  πfs - the data-free filesystem
So I'm sad that when I tried to recreate the elephant scatter plot, I wasn't able to. Has anyone found exact parameters that work for tau and alpha?
Just wish they'd given a complete list of the decimal places for those animals. It would've been something to plot them yourself.
Every ML tutorial starts with splitting your data into a training set and a validation set.
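For completeness, a minimal hand-rolled version of that split (most tutorials use a library helper such as scikit-learn's train_test_split instead):

```python
import random

def train_val_split(data, val_fraction=0.2, seed=0):
    # shuffle a copy of the data, then cut off the validation tail
    rng = random.Random(seed)
    shuffled = list(data)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - val_fraction))
    return shuffled[:cut], shuffled[cut:]

train, val = train_val_split(range(100))
assert len(train) == 80 and len(val) == 20
```

Fitting parameters on `train` and scoring on `val` is precisely what exposes a "model" like f_alpha, which can interpolate anything but generalizes to nothing.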
oh, uncountable infinities.