I met Steve for dinner once and we had a great chat about how badly the first Mars life-detection experiments were designed.
Why this reminds me of color isn't so much the weird part, but the fact that color is a continuous surface. What would adding new colors do to the color space? Does the space remain two-dimensional or do new colors start blending a third dimension into the topology? I wish those mystics had access to spectrometers in Heaven.
In humans (and most other animals) the visual system represents color using a relative system: is something redder than it is green? Bluer than it is yellow? To create this, neurons receive excitatory input from (e.g.) a red cone and inhibitory input from nearby green cones. This representation makes sense, given the cones’ spectral sensitivity, but it also makes some colors “impossible”. Since each color is essentially a point along these two axes, something can’t be blue and yellow at the same time, reddish green, or even some shades of “hyper-green”[1,2].
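The two-axis idea can be sketched in a few lines of toy Python. The formulas and numbers here are illustrative only, not real cone physiology:

```python
# Toy sketch of opponent color coding. Cone responses L (long/"red"),
# M (medium/"green"), S (short/"blue") are mapped onto two opponent axes;
# each axis can only signal one pole at a time.

def opponent_channels(L, M, S):
    red_green = L - M              # positive = "redder", negative = "greener"
    blue_yellow = S - (L + M) / 2  # positive = "bluer", negative = "yellower"
    return red_green, blue_yellow

# A stimulus driving L and M equally produces zero red-green signal,
# so "reddish green" simply has no point on this axis:
rg, by = opponent_channels(L=1.0, M=1.0, S=0.0)
```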
The mantis shrimp, near as we can tell, doesn’t have this sort of representation. The anatomical pathways don’t seem to be there, and some behavioral work with trained(!) mantis shrimp also suggests that they have independent color channels, and, as a result, their color sensitivity is actually not amazing. Interestingly, they may do something to “fake” color opponency: different receptor types are in different parts of the eye, and the shrimp ‘drags’ its eye across the scene to produce a sort of temporal context.
That’s more than you probably wanted to know but...shrimp are neat.
 To a first approximation, anyway.
 There is a place called Reddish Green, which I assume is not invisible, but I’ve never been to Stockport, England.
 More seriously, there are some tricks you can play to (briefly) perceive some of these impossible colors. The general approach is that you stare at something of one color, then quickly switch to looking at something of the opponent color. This “fatigues” (adapts) cells that signal the first color, so they provide less inhibitory input in response to the second color.
My personal favorite impossible color is Stygian Blue. Something is so energizing about seeing a color you know shouldn't exist.
I’m red-green colorblind.
As of a few years ago, they had a much wider wavelength range (400-700 nm) than the best man-made ones, which were only achromatic over ~100 nm.
But a different kind of rod altogether, firing off some completely different signal, who knows what the experience would be like. Psychophysics is a mysterious junction indeed.
Although I have my doubts that this is what the mystics you're referring to are talking about.
Generally through measurement. Colors are just differences in electromagnetic wave frequency; change the frequency enough and you are out of “visible light rays” and into other types of rays (radio, gamma, X, etc.). Until people start “seeing” those other non-visible light rays, we have some indication we are all seeing the same things (light waves).
As to the “colors” of the various frequencies within the visible light spectrum, we may receive/interpret them differently (i.e., color blindness), but the wave frequencies are the same and it is we who work differently.
That’s why we can have other objective factors and measurements (primary colors, mixing of colors, etc.) that suggest for the most part we are all seeing the visible light spectrum similarly. In other words, if you and I saw the visible light waves differently, it should be obvious once we start mixing colors to produce new colors (if you saw black and white where I saw blue and yellow, then you couldn’t see green; it would be black). Still, your use of “experience” of colors is likely inherently true: we probably all do experience the colors differently (some are my favorites, those don’t need to be your favorites; some may trigger certain emotions in me and not in you), but that doesn’t change the fact that the electromagnetic wave is the same for both observers.
But more importantly, I think it's like arguing whether all computers are big-endian, or whether some computers store [R,G,B] values in memory as [G,B,R]. It's not actually mysterious.
If you point a measurement device at a particular point, you can get a histogram of light intensity over several frequency bands. A histogram with many narrow bands gives a more accurate profile of the measured light.
The human eye usually has only three frequency bands: blue, green, and red. They overlap. They are not always the same width, or centered on the same frequency. The width and center of the red bar is defined on the X chromosome, so some people with two X chromosomes from lineages that see red differently have a fourth histogram bar, and therefore have better ability to distinguish between similar color profiles at the red end of the visible spectrum. Some people with only one X chromosome have a defective copy of the gene encoding for the red histogram bar, and thus only have two bars in their histogram.
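Here is a toy Python sketch of that histogram view. The sensitivity curves are made-up Gaussians, not measured cone fundamentals:

```python
# The eye as a three-bin "histogram" of the light spectrum.
import math

def band(center, width):
    """A made-up Gaussian sensitivity curve centered on a wavelength (nm)."""
    return lambda wl: math.exp(-((wl - center) / width) ** 2)

cones = {"S": band(440, 30), "M": band(540, 40), "L": band(565, 40)}

def sample(spectrum):
    """Collapse a {wavelength_nm: intensity} spectrum into three numbers."""
    return {name: sum(i * sens(wl) for wl, i in spectrum.items())
            for name, sens in cones.items()}

# A fourth band (as in tetrachromats) would simply be another key here;
# losing the L band (red-green color blindness) would leave only two.
response = sample({450: 1.0, 620: 0.5})
```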
So while color is objectively the same, different eyes--connected to different visual cortexes--quantize and encode color information differently. Each brain may interpret the data in the color channels differently to establish an individual's world model.
What I see as "red" is unique to me. If someone were to bridge my brain to someone else's, my "red" would not match someone else's "blue". My color channels would be as their octarine, smaudre, and refulgine: three completely different color channels that they have never experienced before. But if our optic nerves were linked, rather than somewhere deeper in the brain, I would see their red as my red, even if their eye reads reds slightly differently because of a minor difference in their X chromosomes. The translation layer between raw visual data and personal world model is not guaranteed, or even likely, to produce similar results.
Because the words "red" and "blue" can be calibrated by reference to external objects. You and I both call stop signs "red" and the sky on a clear sunny day "blue". So if, for example, somebody invented a device that could "translate" your experiences into my brain, and the device "translated" your experience of looking at a stop sign into something that my brain decoded as "blue", I wouldn't conclude that your experience of colors was different from mine: I would conclude that the translator device was broken.
Sure, you can be taught that when you hear a deafening sound to say "wow that's really soft" but that doesn't change the fact that it's a loud sound and you're experiencing a loud sound.
You know that red is red, but you don't know that your red is anyone else's - which may be why I like green and you like red, we may have two totally different experiences of the colours.
However, I have rationalized away this theory with a thought experiment. Imagine someone messed with your brain and remixed your colors. So now maybe what you once saw as red you now see as green. However, since your brain has no other reference, you will not even be aware this change has happened.
Now if we put a device in your brain that changes your colors every second or so, you still won’t even know. You’ll never know! Because you have no way to compare a previous representation with a new one.
Imagine looking out now into the world around you, and being told every second your colors are being rewired. Since you can only have one dictionary of colors in your mind, you see nothing change, because that would imply there is an even deeper meta-representation of colors you can compare with.
Ultimately, colors do not exist. This is why my favorite color is black, as it is the only “true” color, and not to mention everything looks good in black.
So am I.
Some people lack nerve 'analyzers' and do not "feel" pain/heat/cold etc, just pressure:
Likewise some people lack emotional analyzers and do not "feel" emotions:
I don't think there's any way for me to tell which (if either!) is the more "correct" perception, eh?
What aspects of reality are off limits to us due to our limited space of possible experiences? A rat will never have the experience of understanding prime numbers. What are we missing?
Color on the other hand is measurable and regular across humans to a far greater extent, such that we can measure with a fairly high degree of accuracy just how color-blind a person is. That's what's seductive about it.
You don't have that with other aspects of sensory perception.
You've experienced it already... by shining a black light on ultraviolet pigment.
It's the same principle behind color blind test patterns: https://cdna.allaboutvision.com/i/eye-exam-2017/color-blind-...
A color blind person sees one color - a normal vision person sees two.
There is infinitely more wavelength info even in the visible light of nature than the crude three dimensional mapping our eyes can present to our brains.
One simple example: We can't distinguish green light from mixed blue and yellow light.
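This kind of indistinguishability (metamerism) is easy to demonstrate with toy numbers. The sensitivity table below is contrived so the match works out exactly; real cone fundamentals differ, but the collapse-to-three-numbers principle is the same:

```python
# Toy metamer demo: two physically different spectra, identical cone output.
SENS = {            #   S    M    L  (made-up cone sensitivities)
    450: (1.0, 0.0, 0.0),  # "blue"
    530: (0.5, 1.0, 1.0),  # "green"
    580: (0.0, 1.0, 1.0),  # "yellow"
}

def cone_response(spectrum):
    """Collapse a {wavelength_nm: intensity} spectrum to an (S, M, L) triple."""
    return tuple(sum(i * SENS[wl][c] for wl, i in spectrum.items())
                 for c in range(3))

# Pure green light and a blue+yellow mixture yield the same triple,
# so everything downstream of the cones cannot tell them apart:
pure_green = cone_response({530: 1.0})
blue_plus_yellow = cone_response({450: 0.5, 580: 1.0})
```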
So my point, which I know I'm annoyingly slow to get to, is that we're nowhere near seeing "the colors of nature".
For sound we do much better.
Scroll down a bit and look at the Notes section. Colors seem to have different shapes. Red occupies a corner of the cube. Cyan a whole edge. Trying to flatten the space by removing the lights and darks just doesn't work.
Color is weird.
Without seeing the cube viz in my link, I'm likely to look at CIELAB and say, "oh, sure, but you can just take out the brightness and then it's all flat again." The cube makes it clear what happens when you do that.
The whole collection deserves a content warning, but if you enjoy Black Mirror then you'll enjoy these.
I'm not sure what you mean by that. Who needs to be warned and of what?
Thank you for the recommendation though, there's never too much sci-fi to read as far as I'm concerned.
I don't know that specific series, but Greg Egan thinks ideas further than is usual in entertainment. He usually ends on a note that's neither down nor up, but nondescript and even nihilistic toward normal human values, like setting yourself into an endless loop of a trivial emotional experience with no more than a few seconds of content. The author often plays with minds like that. Depending on your make-up, you could interpret the described personas as tortured people, or free, or... whatever. Somewhat Camus-like, just with hard sci-fi.
It's not a book to read to cheer yourself up. It's interesting and clever, but I don't feel comfortable unreservedly recommending it to people I don't know. Hence the warning, it's so people who know that they're sensitive to difficult topics, either in general or right now, don't go ploughing in unawares.
I’m also glad the article stressed the importance of polymerase. For the uninitiated, if this molecule cannot be replicated with polymerase, then it severely constrains its applicability. Most research labs do not synthesize their own DNA - they replicate it in cells or with PCR.
Expanded genetic systems are most likely to work with natural enzymes if the added nucleotides pair with geometries that are similar to those displayed by standard duplex DNA. Here, we present crystal structures of 16-mer duplexes showing this to be the case with two nonstandard nucleobases (Z and P)
> Can any of the new letters be methylated?
Z has an amine group where the methylation would go on cytosine, and P has a ketone group instead of the amine group where the methylation would go on adenine, so presumably not, or at least not in the same way.
But there are pragmatic considerations that make trinary difficult. Since most computation currently happens with voltage, a one-dimensional quantity, we would have to divide it into 3 levels to discriminate digits. This requires more sensitivity and precision than merely two levels. (Edit: although maybe not; some historical computers were trinary and reportedly more efficient. Time will tell, I suppose.)
It would be easier to achieve trinary if we had two axes with which to encode, but that intrinsically yields quaternary; maybe that's how DNA operates.
 It is via radix economy: https://en.wikipedia.org/wiki/Radix_economy#Radix_economy_of...
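The radix-economy argument is short enough to sketch in Python, using the standard asymptotic cost model (hardware per digit times digits needed, proportional to b/ln b):

```python
# Radix economy sketch: cost per represented value is ~ b / ln(b),
# minimized at b = e ≈ 2.718, so among integers base 3 wins.
import math

def asymptotic_economy(b):
    """Asymptotic cost of base b: digit fan-out times digit count."""
    return b / math.log(b)

costs = {b: asymptotic_economy(b) for b in (2, 3, 4, 10)}
# A curious consequence of this model: base 2 and base 4 cost exactly
# the same, and base 3 beats both.
```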
There are some tricks to transform the 20 amino acids into other amino acids after they are in the proteins, which increases the number of amino acids used a little: https://en.wikipedia.org/wiki/Non-proteinogenic_amino_acids And there are small variations of the genetic code that include other amino acids: https://en.wikipedia.org/wiki/Genetic_code#Alternative_genet.... So the total number of amino acids used in the wild is approximately 30.
With 4 bases and 3 bases per codon, you can encode up to 64 values, so there is some redundancy and room for a few new amino acids.
You don't want too much redundancy, because you have to synthesize the tRNAs, and in principle you need a tRNA for each possible codon, so you want to minimize the number of codons. (Actually, the genetic code has some patterns, and instead of 63 tRNAs the cells have at most 41 tRNAs: https://en.wikipedia.org/wiki/Transfer_RNA)
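The codon arithmetic above is easy to check in Python (Z and P here stand in for the nonstandard bases mentioned earlier in the thread):

```python
# 4 bases, 3 per codon -> 4^3 = 64 codons for ~20 amino acids plus stops.
from itertools import product

bases = "ACGT"
codons = ["".join(c) for c in product(bases, repeat=3)]

# Adding two more bases grows the codon space to 6^3 = 216,
# leaving far more room for new amino acids (or more redundancy).
expanded = ["".join(c) for c in product(bases + "ZP", repeat=3)]
```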
I guess that increasing the number of bases makes it easier to make mistakes, and more difficult for the enzymes to distinguish them.
2-amino-8-(2-thienyl)purine and pyridine-2-one
7-(2-thienyl)imidazo[4,5-b]pyridine and pyrrole-2-carbaldehyde
7-(2-thienyl)imidazo[4,5-b]pyridine and 4-[3-(6-aminohexanamido)-1-propynyl]-2-nitropyrrole
2-(2-Deoxy-β-D-erythro-pentofuranosyl)-6-methyl-1(2H)-isoquinolinethione and (1R)-1,4-Anhydro-2-deoxy-1-(3-methoxy-2-naphthyl)-D-erythro-pentitol
These all work fine when copying DNA sequences using existing cellular mechanisms and PCR. As far as I know, it remains to be seen whether they can encode for proteins.
RNA transcription is complicated by "wobble pairs" with uracil, inosine, and uridine variants, occurring in RNA, with the four bases present in DNA, and with each other. There isn't a 1-to-1 correspondence from DNA base to RNA base. It may be that our DNA uses only the four specific bases guanine, cytosine, adenine, and thymine because wobble pairings provided additional mutation resistance, or offered additional structural options for transcribed proteins.
Actually, you don't even need new DNA letters to do that. DNA codons can encode 64 different amino acids (63, as one codon must encode the end of the sequence), but only 20 amino acids are actually used.
Adding another amino acid is theoretically possible, but this would require rewriting the whole genome to reencode, say, leucine from CTG to another codon, so that CTG could be assigned to some other acid.
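A toy Python sketch of why that recoding is so invasive (tiny made-up codon table, not the full genetic code):

```python
# Freeing a codon for a new amino acid means recoding every existing use
# of it to a synonym, genome-wide, before the reassignment is safe.
CODE = {"CTG": "Leu", "CTC": "Leu", "TTA": "Leu", "AAA": "Lys"}

def recode(genome_codons, freed="CTG", synonym="CTC"):
    """Replace every occurrence of the freed codon with a synonymous one."""
    return [synonym if c == freed else c for c in genome_codons]

# After recoding, CTG no longer appears anywhere and can be reassigned:
recoded = recode(["CTG", "AAA", "CTG"])
```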
More likely, the new bases can be used as a higher-density data storage medium for those folks interested in making biological data stores.
Biochemistries listed here are denoted as bonding atom/solvent pairs.
I think for now we are still a ways off from this being interesting at the protein level. I would think you would need new tRNAs to recognize the new bases in order to really utilize them at the protein level, and those tRNAs would need to bind to different amino acids than we currently have for there to be any new protein function that we can't already accomplish with ATCG.
That being said, you can still do a lot of interesting stuff with nucleic acids like DNA and RNA; more and more research these days shows they can do more than just encode information for proteins.
Possibly the RNA could have some secondary function in the folding of the protein or as a complex inside it...
In other words, evolution never does whole system redesign. It's all legacy code from the beginning with incremental optimization steps in response to selection pressure at the moment.
Genes and amino acids are very close to the starting point, so it's unlikely that they are optimal outside the starting environment. 4 bases and 20 amino acids were enough to get things going; inserting new amino acids later would require redesigning all the machinery from scratch. Evolution can't do that, but humans might be able to insert new stuff.
E.g., the evolution of flight. Flying is a huge advantage, but evolving toward a body capable of flight, without yet being able to fly, would seem to require millions of years of disadvantages before the flying advantage arrives. And incremental optimization plus survival of the fittest contradicts such a slow process over several disadvantaged steps.
Environmental niches filled by squirrels, mice, small dogs, foxes, coyotes, raccoons, hawks, eagles, and numerous others would be left wide open. Those rats would start evolving into any empty niches, that were close enough. Over time/generations the further niches would become available. Smaller rats would take advantage of food sources that the mice used to eat. Better climbers would evolve to take advantages of what squirrels used to eat. Rats that specialize in eating insects would start to specialize.
Some rats would even start to specialize in eating other rats. Other rats would specialize in not getting eaten and fill the niche left by rabbits.
The rats that evolved into squirrel-like mammals might specialize in jumping ever further to avoid the ground where the rat-dogs roam looking for rat-squirrels to eat. Said jumping might even evolve into flying-squirrel-like rats. Given enough time, active flight would evolve. Rats would even start to populate the oceans, much like whales did.
Evolving into a bird requires efficient lungs, light weight (things like hollow bones), and not wasting weight on things like powerful legs, thick skin, etc. But every step of the way would be better for some niches.
That's what I have trouble with. E.g. the climbing rat would be able to get all these higher food sources on trees. But evolving wings loses the ability to climb. And now it's competing with faster and more sturdy rats on the ground again.
I can only think about the disadvantages being no real disadvantages, because there is abundance of food and no serious competition/predators.
Evolution will try everything. If there's sturdy carnivorous rats around, the rest of the rats will try their niche. Some will try faster, some will try playing dead, others will dig holes to hide in, or climb trees. Some might grow thicker skin, or looser skin, or venom, or just being poisonous to eat.
These empty niches can be filled pretty quickly. After removing top predators like wolves for instance, the average size of coyotes has been steadily increasing over the last 50 years.
A side note: plants figured out (in the evolutionary sense) that birds were great seed spreaders compared to non-birds. Some evolved impressively strong spice (Capsicum) that birds are impervious to (up to several percent by weight), which eliminated non-bird consumption of their berries. So to protect bird feed from squirrels, add Capsicum.
Any bird like feature will help with a particular niche. Feathers, beaks, dexterous claws, bird songs, gliding, and of course powered flight. So it's not like you need a huge jump from land based mammal to full flight before you have any evolutionary advantage for a particular niche.
I think you are skipping steps as you try to imagine the process. Wings are at the extreme; there are intermediate features. Check out a video of "flying squirrels" and you will notice that they are more like "gliding squirrels".
As for why are there 20 encoded amino acids (+ selenocysteine and pyrrolysine)?
I've read that codon similarities hint at an original more compact alphabet of two base codons (allowing for up to 15 distinct amino acids and a stop).
I've also read that the "more recent" amino acids, like cysteine, are more readily oxidized and thus provided some advantage as atmospheric oxygen levels increased.
But all of these are guesses about the world before LUCA, which is really lost to us.
Well, I'm sure that if you evolved life for long enough it might find a way to switch bases. But this looks far harder than the sorts of changes that take a billion years, like photosynthesis or eukaryotes. So I wouldn't expect it to happen before the Sun boils off life on Earth in another billion.
An interesting corollary would be that if we find them outside Earth, then they probably didn't evolve on Earth.
https://en.wikipedia.org/wiki/Meteorite#Meteorite_chemistry, which cites https://www.nasa.gov/content/nasa-ames-reproduces-the-buildi...
6P3 vs 4P3;
Artificial bases might help us understand why life evolved the way it did. Maybe there is an interaction between the bases and the way DNA folds? Or were ACTG just an accident?
Artificial bases might be useful as part of other techniques, perhaps for tagging DNA sequences. Maybe they can one day be used to disrupt or alter certain biological processes.
I am not a biologist, so I have no idea if that makes any sense, but the research is interesting.
"No one will ever need a 100 megabyte drive!" "No one could ever use a 1 megabit network connection!" "No one will ever use more than 256 megabytes of RAM!"
But I don't know enough biology to understand if that's the case.
As for increasing data density, again, do you suggest converting computers to ternary to do the same?