I know little to nothing about GA, but... it only makes sense that this takes time if the procedures are still done somewhat manually, like speaking with human operators. If true, the dependence on fallible human attention for something that can be automated is, right there, a safety hazard to me.
That is not (quite) true. The lifeless bodies of flora and fauna won't rot, but they will decay. Macro-molecules (like proteins), the basis of all life, are fragile and break down easily from a range of physical causes, the most common of which is simply non-cryogenic temperature. Smaller organic molecules are sturdier and will last longer, but in the long term they'll end up under heavy layers of sediment and turn into fossil fuel (coal & oil).
The most relevant (practical) questions to me are: how does this capacitor behave long-term? How does it fare over a large number of charge/discharge cycles? I'd like to assume that, since it's not a battery, and thus not based on a chemical process for energy storage, it will retain its initial performance for a long time, but the question is: how long? A human generation (i.e. 25-30 years)? Or longer, say at least ten generations? That would directly affect the demand for long-term investments in the construction sector. And even if things sound rosy on the storage-capacity side, if it gets used for combined structural and energy-storage duty, what long-term impact will this dual use have on its structural properties?
"keep in mind that silicon wafers are still quite expensive and probably too expensive for human scale objects"
That is wrong. The expensive part of silicon chip production is the process. It is simply that hard to make working circuitry close to the molecular scale, and a lot of sophisticated technology had to be developed for it (i.e. a lot of R&D money invested), hence today's high production cost. Here, however, there is close to no fabrication process by comparison, and the purity of the raw material involved won't matter that much either.
Silicon wafers come from a large single crystal that has to be grown. It's not the trivial engineering challenge you suggest. There's a supply shortage across the entire semiconductor industry that is driving up prices.
"Don’t allow arbitrary abbreviations of subcommands. [...] you allowed them to type any non-ambiguous prefix, like mycmd ins, or even just mycmd i, and have it be an alias for mycmd install. Now you’re stuck: you can’t add any more commands beginning with i, because there are scripts out there that assume i means install."
Please avoid the use of short arguments in scripts. It makes the least sense there. The short arguments (along with aliases, abbreviations, and whatnot) are a convenience for human usage, to reduce the amount of manual typing. In scripts you can be explicit with minimal cost (and you also should, considering the ratio of writes vs. reads).
Think about what it would mean to embark on a project to revive an extinct species. Not a bacterium or some worm, but one of the most complex organisms as the target. By all accounts this is a monumental, never-done-before kind of task; expect pioneering bio-engineering technology to be developed along the way, and billions' worth of tried-and-true expertise gained in the process. Then open your eyes and look again at how you are being told, with a straight face, "we're doing it for... the dodo", and don't raise an eyebrow now. Well, this, more than anything, looks to me like a masterpiece of grooming a project's public image. If I ever planned to develop daring things that might scare people, and I also wanted to avoid the costs required for total secrecy, I'd remember the dodo, alright.
I don't think splicing dodo genes into pigeons is quite the scientific and engineering moonshot you think it is. It would be a very big deal, but it would not be the precursor to the genepocalypse. Biochem and MCB undergrads do cloning experiments.
Not the OP, but in my experience with interviews, the improvement work wasn't so much on the technical side as on "soft skills", which had a bigger impact on the interview outcome than I was previously willing to admit. But even if the study is technical, that's OK. There are a lot of topics that rarely come up outside academia and hiring interviews, and after years you'd want to freshen things up, even if it's just to improve recall and answer times during the interview.
Besides what I've already said about Rust in the past¹, the only thing I understand Rust to have over C++ is additional cross-checks, performed at every compilation. In C++ these kinds of additional expensive operations are covered by optional third-party tools, in this particular case static code analyzers. There is a good (i.e. useful) reason to engineer compilation as such separate steps. Of course, having the entire batch of operations tightly integrated has benefits (as well as drawbacks), but that's about it.
Are you kidding? Just one very obvious example: Rust has had good interfaces for std::optional- and std::expected-style types (Option and Result) since v1. C++ is finally getting them in C++23 and C++26 (maybe), respectively.
Moreover, you only have to learn Rust once, because the language doesn't have nearly the same number of weird limitations and radical overhauls every 3 years. I attend the conferences and write C++ regularly. I still run into issues when reviewing other teams' code where I have to consult the standard because I'm not sure some weird construct is valid, because everyone has their own subset of C++ they're comfortable with. God forbid you have to onboard someone new to the language and explain all the different kinds of initialization, a topic that's utterly trivial in literally any other language.
As an aside, static code analyzers are not able to provide the same kind of guarantees rustc does to arbitrary C++.
"As an aside, static code analyzers are not able to provide the same kind of guarantees rustc does to arbitrary C++."
Yes, and now the question becomes: is this due to hard limitations of C++ as a language (in the sense that providing such guarantees is simply impossible for static code analyzers, no matter what), or is it that static code analyzers just haven't provided them so far (but can, and most likely will)?
I personally recognize, and am thankful for, Rust's effort to raise general awareness of the safety aspect. This awareness-raising would indeed have been hard to do without forcing safety into the language's core design, although I don't see it working well for Rust in the long run (once people start having quality alternatives and no longer have to pay the "safe by default" price that Rust asks of them).
C and C++ are both very mature languages at this point, and there have been 30-40 years of fairly intense efforts to retrofit memory safety onto them with analyzers, language extensions, and runtimes. I wouldn't bet money that it's provably impossible, but I think it's safe to say that we're unlikely to see a general solution in the near future.
Rust has a more capable type inference system, not just inferring the LHS with `auto` as in C++. Plus overall newer language features, a more modern module system, and a nicer standard library IMO.
Also, the robot voice. A while ago, SF movies projected robot voices easily distinguishable from human ones. I had made peace with the idea that I'd hear something along the lines of those vocal timbres when the time came, but nowadays phone chatbots (and other cases of assisted human-robot interaction) all imitate human voices. I find it jarring; it's like figuring out that some (human) stranger was trying to trick me by impersonating someone I know.