I'm convinced that naming things is equivalent to choosing the right abstraction, and that caching things is equivalent to creating a correct "view" from given normalized data.
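To make the "caching" half concrete, here is a minimal sketch (hypothetical data and names): the normalized list of posts is the source of truth, and the cache is a view derived from it rather than a structure you mutate by hand.

```python
# Sketch: caching as a derived "view" over normalized data.
posts = [  # normalized source of truth (hypothetical data)
    {"id": 1, "author": "ada", "title": "On Engines"},
    {"id": 2, "author": "ada", "title": "Notes"},
    {"id": 3, "author": "alan", "title": "Machines"},
]

def build_posts_by_author(posts):
    """Derive the cached view from the normalized data.

    Correctness comes from always rebuilding the view from the
    source of truth, never from patching the view in place."""
    view = {}
    for post in posts:
        view.setdefault(post["author"], []).append(post)
    return view

posts_by_author = build_posts_by_author(posts)
# On any change to `posts`, invalidate and rebuild the view.
```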
The greatest travesty of modern science is that fraud is not illegal.
In every other industry that I can imagine, purposely committing fraud has been made illegal. That is not the case in modern science, and in my opinion this is the primary driver of things like the replication crisis and the root of all the other problems plaguing academia at the moment.
It's hard to prove when it isn't investigated. How many of the debunked psychology professors took federal funding? How many have been criminally investigated?
My own institution launched an internal investigation into a professor who I know for a fact committed fraud, and it was "unable to prove intentional wrongdoing". Academic institutions have taken the "this never happens because we are morally pure" approach, which we all know is a load of baloney; they are perversely incentivized to never admit fraud.
The witness who reported it, a friend of mine, was directly instructed by this professor to falsify data to cast results in a more positive light in order to impress grant funders. Multiple people were in attendance at that meeting, but even that was not enough to bring about any disciplinary action.
Duke also has a notorious reputation as a fraud mill.
Note that both of those guys were found guilty of taking government money under false pretenses (pretenses tied to fake science), not of doing fake science itself, which is further supporting evidence that fake science is legal.
If you think about the training data, e.g. SO, GitHub, etc., then you have a human asking about or describing a problem, followed by the code as the solution. So I suspect current-gen LLMs are still following this model, which means that for the foreseeable future a human-like language prompt will still be the best.
Until such time, of course, as LLMs are eating their own dogfood, at which point they (as has already happened) create their own language, evolve dramatically, and cue Skynet.
More indirection in the sense that there's a layer between you and the code, sure. Less in that the code doesn't really matter as such and you're not having to think hard about the minutiae of programming in order to make something you want. It's very possible that "AI-oriented" programming languages will become the standard eventually (at least for new projects).
One benefit of conventional code is that it expresses logic in an unambiguous way. Much of "the minutiae" is deciding what happens in edge cases. It's even harder to express that in a human language than in computer languages. For some domains it probably doesn't matter.
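For a contrived example (hypothetical function, not from any real codebase): "split the bill evenly among the diners" sounds complete in English, yet the code still has to pick an answer for every edge case.

```python
def split_bill(total_cents: int, diners: int) -> list[int]:
    """Split a bill "evenly" -- decisions the English phrasing leaves open:
    - zero (or negative) diners: here, an error rather than an empty split
    - leftover cents: here, the first few diners each pay one cent more
    """
    if diners <= 0:
        raise ValueError("need at least one diner")
    share, remainder = divmod(total_cents, diners)
    return [share + 1 if i < remainder else share for i in range(diners)]

print(split_bill(1000, 3))  # [334, 333, 333]
```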
> Some were fast but modeled the spec loosely, making it hard to build correct tooling on top. Others were closer to the spec but used untyped maps everywhere, which made large refactors and static analysis painful.
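A rough sketch of that trade-off (hypothetical names; Python standing in for whichever language the comparison was about): untyped maps bury the spec's structure in string keys, while typed models give refactors and static analysis something to check against.

```python
from dataclasses import dataclass
from typing import Any

# Untyped: every access is a stringly-typed guess; if the spec
# renames "port", nothing fails until this line actually runs.
def port_untyped(cfg: dict[str, Any]) -> int:
    return cfg["server"]["port"]

# Typed: the same spec modeled explicitly; a rename becomes a
# type error that mypy or an IDE flags across the whole codebase.
@dataclass
class Server:
    host: str
    port: int

@dataclass
class Config:
    server: Server

def port_typed(cfg: Config) -> int:
    return cfg.server.port
```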
> The brain has intrinsic understanding of the world engraved in our DNA.
This is not correct. The DNA encodes learning mechanisms shaped by evolution, but there is no "Wikipedia" about the world in the DNA. The DNA is shaped by the process of evolution; it is not "filled" with arbitrary information about the world.
I think std::rvalue would be the least confusing name.