Yeah it's kinda contradictory. It's part critique and part self-reflection on an ideology that I have mixed feelings towards. In the same vein as the Programming Language Checklist for language designers.
"Below" in this context means "directly below and connected by a line" (the lines are the edges of the cube). So you can have a blue vertex that is vertically below a white vertex, so long as they are not connected by an edge. The first time this can happen is for a 3 dimensional cube. You can have blue at the top, then 2 blue and 1 white below that, and then 1 blue (under and between the 2 blues in the layer above) and 2 white in the layer below that, and then white for the bottom vertex. This configuration can be rotated 3 ways and this takes us from 17 to 20.
The post assumes a 2% annual rate of growth in energy consumption. So, due to the nature of exponential functions, most of the energy loss would be concentrated towards the end of the 1000 years, as energy consumption approaches 400 million times present-day usage. The first two centuries of use would not have a noticeable impact.
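A quick sanity check on those numbers (the 400-million figure, and how lopsided the cumulative total is):

```python
# 2% annual growth compounded over 1000 years.
growth = 1.02 ** 1000
print(f"{growth:.3g}")  # ~3.98e+08, i.e. ~400 million times today's usage

# Modeling consumption in year t as 1.02**t, compare the final century's
# share of the cumulative 1000-year total against the first two centuries'.
total = sum(1.02 ** t for t in range(1000))
last_century = sum(1.02 ** t for t in range(900, 1000))
first_two = sum(1.02 ** t for t in range(200))
print(f"last century:      {last_century / total:.1%}")  # ~86%
print(f"first 2 centuries: {first_two / total:.2%}")
```

So the last century alone accounts for roughly 86% of all energy used over the millennium, while the first two centuries are a rounding error.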
So I'm not a moral relativist, like, at all. But in this case, it seems like we westerners have constructed one particular set of norms for encouraging innovation, where we decide that it's possible for ideas to be owned. It's not like there's anything intrinsically wrong with copy-pasting code, it's just that we have a legal framework where we've traded away the right to freely copy-paste code so that we can grant a temporary monopoly to its author. We do this in the hope that more useful code will be written than otherwise. But if the people of China decide that that's not a trade-off they want to make, then I don't think we westerners get to say that they've committed a moral wrong in making that decision. It's just that they have a different way of doing things.
Like I said, I'm not a moral relativist at all. Murder is still wrong in China, imprisoning people not convicted of any crime is still wrong in China, lying is still wrong in China. But I just don't see how copyright infringement is universally an immoral act.
OK, no, it's not intrinsically or uniquely western. Any person who has an idea has no obligation to share it; that's universal. This revolves around the expectation set at the point of sharing, under which they say, effectively, "Here's something I came up with. I'm sharing it with you in exchange for your agreement that you will attribute it to me and not take the idea and publish it as your own." The ownership and control under closed source could (somehow) be argued in the way you're suggesting, but for open source it's a matter of blatant disrespect and refusal to adhere to any of the requirements set by those who shared their ideas in the first place.
Naw... it's just the hypocrisy. If any other company/country were doing it, they should be called out too.
If you actually stop doing the wrong thing and then, decades later, complain about others still doing it (the way the Western world can complain about slavery), that's moving up and on. But complaining that TikTok might be banned, when FB/Google/Twitter and TikTok itself are banned in your own country?
Same with copyright... it's not like China's government doesn't issue and enforce copyrights; it's just that there's not really rule of law, since enforcement is so haphazard. There's little plan beyond individuals asking "what can I get right now?"
Getting downvoted by the group-think majority. Just know I tend to agree with you. Our country does not get to dictate how the world operates no matter what our beliefs are. And the idea of intellectual property is just a belief, and a bad one at that.
Operating under a different set of rules doesn't change the fact that they violate ours. They can be simultaneously right under their own standards and wrong under ours. We can and should judge them under our moral standards.
> Hinton and other billionaires are making sensational headlines predicting all sorts of science fiction.
Geoff Hinton is not a billionaire! And the field of AI is much wider than LLMs, despite what it may seem like from news headlines. E.g., the sub-field of reinforcement learning focuses on building agents, which are capable of acting autonomously.
Pretty sure the angst is about the AGI killing everyone. What's the connection between not killing people and enslavement? I don't kill people, yet I don't consider myself enslaved. The entire point of worrying about this at all is that a sufficiently smart AI is going to be free to do whatever it wants, so we had better design it so it wants a future where people are still around. Like, the idea is: enslavement, besides being hugely immoral, obviously isn't going to work on this thing, so we'd better figure out how to make it intrinsically good!
It's much easier than that! Living cells already have ribosomes that construct proteins, along with all the other molecular machinery needed to go from DNA sequence to assembled protein. You can order a DNA sequence online and put it into E. coli or yeast cells, and those cells will make that protein for you.
That’s like saying anyone who has a computer can hack into the NSA. In principle yes, but the amount of know-how and troubleshooting is being underplayed here. Not to mention the question of what you do with the protein once you produce it.
Tissue Nanotransfection reprograms e.g. fibroblasts into neurons and endothelial cells (for ischemia) using electric charge. Which different proteins are then expressed? Which are the really useful targets?
> The delivered cargo then transforms the affected cells into a desired cell type without first transforming them to stem cells. TNT is a novel technique and has been used on mice models to successfully transfect fibroblasts into neuron-like cells along with rescue of ischemia in mice models with induced vasculature and perfusion
> [...] This chip is then connected to an electrical source capable of delivering an electrical field to drive the factors from the reservoir into the nanochannels, and onto the contacted tissue
> In a paper published today in Nature, researchers report refashioning Photorhabdus’s syringe—called a contractile injection system—so that it can attach to human cells and inject large proteins into them. The work could provide a way to deliver various therapeutic proteins into any type of cell, including proteins that can “edit” the cell’s DNA. “It’s a very interesting approach,” says Mark Kay, a gene therapy researcher at Stanford University who was not involved in the study. “Where I think it could be very useful is when you want to express proteins that can do genome editing” to correct or knock out a gene that is mutated in a genetic disorder, he says.
> The nano injector could provide a critical tool for scientists interested in tweaking genes. “Delivery is probably the biggest unsolved problem for gene editing,” says study investigator Feng Zhang, a molecular biologist at the McGovern Institute for Brain Research at the Massachusetts Institute of Technology and the Broad Institute of M.I.T. and Harvard. Zhang is known for his work developing the gene editing system CRISPR-Cas9. Existing technology can insert the editing machinery “into a few tissues, blood and liver and the eye, but we don’t have a good way to get to anywhere else,” such as the brain, heart, lung or kidney, Zhang says. The syringe technology also holds promise for treating cancer because it can be engineered to attach to receptors on certain cancer cells.
> "I’m skeptical that biological systems will ever serve as a basis for ML nets in practice"
>> First of all, ML engineers need to stop being so brainphiliacs, caring only about the 'neural networks' of the brain or brain-like systems. Lacrymaria olor has more intelligence, in terms of adapting to exploring/exploiting a given environment, than all our artificial neural networks combined and it has no neurons because it is merely a single-cell organism [1].
https://www.youtube.com/watch?v=BYRTvoZ3Rho