Hard to quantify, but as an opinionated developer I've found that AI systems with too much leash often head off in directions I don't appreciate and have to undo. That freedom is desirable in areas where I have less knowledge or want more velocity, but a tighter cycle with smaller steps lets me preserve more of my taste, because I get to make concrete decisions rather than merely point in a direction.
It is the copyleft-next project itself that has been restarted:
> We excitedly announce that the two of us (Richard and Bradley) have been in discussions [1] for a few months to restart, revitalize, and relaunch copyleft-next!

> Today, GPLv3 turns exactly 18 years old. This month, GPLv2 turned 34 years old. These are both great licenses and we love them. Nevertheless, at least once in a generation, FOSS needs a new approach to strong copyleft.
I'm not a reverse engineer or a white hat hacker, but I like reading about it. Most malware is written for Windows because of its enormous market share.
The majority of my information about Windows malware comes from the research blogs of big computer security companies, such as:
Most of the research comes down to analyzing a sample's capabilities: persistence mechanisms, anti-VM techniques, and anti-debugging techniques.
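As one concrete illustration of the kind of anti-debugging trick such write-ups catalog (not taken from any particular report), here is a minimal sketch of two classic checks: asking Windows directly via IsDebuggerPresent, and timing a trivial loop to spot the slowdown a single-stepping debugger introduces. The threshold and loop size are arbitrary assumptions.

```python
# Minimal sketch of two classic anti-debugging checks (the first is Windows-only).
import ctypes
import time

def debugger_via_api() -> bool:
    # kernel32!IsDebuggerPresent reads the PEB flag set when a debugger attaches.
    return bool(ctypes.windll.kernel32.IsDebuggerPresent())

def debugger_via_timing(threshold_s: float = 0.05) -> bool:
    # A tight loop that normally finishes in a few milliseconds; single-stepping
    # or heavy breakpoint handling inflates the elapsed time past the threshold.
    start = time.perf_counter()
    total = 0
    for i in range(100_000):
        total += i
    return (time.perf_counter() - start) > threshold_s

if debugger_via_api() or debugger_via_timing():
    print("debugger suspected")  # real malware would change behavior or exit here
else:
    print("no debugger detected")
```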
Here, for example, is a good compilation of malware anti-debugging and anti-VM techniques:
Before optimizing, I always weigh the time I'll need to code the optimization against the time I (or the users of my code) will effectively gain once the optimization is in place (that is, real-life time, not CPU time).
If I need three weeks to optimize code that runs for 2 hours per month, it's not worth it.
But by not optimizing, you don't grow your profiling and optimizing skills, and you miss out on a reduction in how long optimization takes you on future work. As a result, ever more code will fail to meet the threshold, and your skills may not grow for a long time.
You couldn't have known, but my job is 50% about optimizing computational workloads. Still, many times when questioning my users, it turns out they want an optimization for code that will only run 2 or 3 times. So even though they'll have to wait a week for the computation to run, it would take me just as long to get the optimization working :-)
But if the code happens to be run 10 times a week and takes a day or two per run, it's a no-brainer: spending a month optimizing for a 10% speed increase is worth it!
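A rough way to make that trade-off explicit is to compare the developer time invested against the cumulative runtime saved. The sketch below just plugs in the numbers from this thread; the exact figures (36 hours for "a day or two" of runtime, 160 hours for "a month" of work) are assumptions, not measurements.

```python
# Back-of-envelope break-even check for an optimization effort.
# All constants are the hypothetical numbers from the comment above.

runs_per_week = 10
hours_per_run = 36          # "a day or two" of wall-clock time per run
speedup_fraction = 0.10     # 10% faster after the optimization
dev_hours_invested = 160    # roughly a month of working time

hours_saved_per_week = runs_per_week * hours_per_run * speedup_fraction
weeks_to_break_even = dev_hours_invested / hours_saved_per_week

print(f"Saved per week: {hours_saved_per_week:.0f} h")      # ~36 h
print(f"Break-even after: {weeks_to_break_even:.1f} weeks")  # ~4.4 weeks
```

By the same arithmetic, the three-weeks-for-two-hours-a-month case above takes years to pay back even with a 100% speedup, which is the point.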
The one question that needs to be asked is: would the users run it more often if it didn't take so long? There is nothing a computer can do that a room full of clerks in 1800 couldn't do, but the runtime would be so slow (or the cost in clerks so high) that nobody dared ask those questions.
Exercise for the reader: given an unlimited budget to hire 1800s clerks, how many FPS could you achieve running Doom? (Obviously the number is too low to make the game playable.)
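For fun, here is one way to bound that exercise. Every constant below is a loose assumption (operations per rendered frame, how fast a clerk can do one pencil-and-paper step, perfect parallelism with no coordination cost), so treat the result as an order-of-magnitude guess at best.

```python
# Order-of-magnitude estimate: clerks as a human computer running Doom.
# Every constant here is an assumption, not a measured figure.

ops_per_frame = 1e7        # assume ~10 million arithmetic ops per rendered frame
seconds_per_op = 30        # assume ~30 s per pencil-and-paper operation
clerks = 1_000_000         # "unlimited budget": hire a million clerks
parallel_fraction = 1.0    # assume the work parallelizes perfectly (it would not)

clerk_ops_per_second = clerks * parallel_fraction / seconds_per_op
fps = clerk_ops_per_second / ops_per_frame

print(f"{fps:.5f} frames per second")  # ~0.003 FPS: one frame every ~5 minutes
```

Which backs up the parenthetical: even with a million clerks and wildly generous assumptions, you are waiting minutes per frame.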
From what I understand, alignment and interpretability were rewarded as part of the optimization function. I think it is prudent that we bake in these "guardrails" early on.