

> the code base is now 100% (unsafe) Rust

> ... It’s a hobby project. Like gardening, but with more segfaults.

From now on, it's like gardening, but in hell :-)


> which might not be the ones you want

And in your experience, how often is that?


Hard to quantify, but as an opinionated developer I've found that AI systems given too much leash will often head off in directions I don't appreciate, which I then have to undo. That can be desirable in areas where I have less knowledge or want more velocity, but a tighter cycle with smaller steps lets me keep more of my taste in the result, since I'm making concrete decisions rather than merely pointing in a direction.

If being self-aware consists of saying "I am self-aware", well, dunno...

Release date: 2016-04-29

It is the copyleft-next project itself that has been restarted:

> We excitedly announce that the two of us (Richard and Bradley) have been in discussions [1] for a few months to restart, revitalize, and relaunch copyleft-next!

> Today, GPLv3 turns exactly 18 years old. This month, GPLv2 turned 34 years old. These are both great licenses and we love them. Nevertheless, at least once in a generation, FOSS needs a new approach to strong copyleft.

https://lists.copyleft.org/pipermail/next/2025q2/000000.html


Any good read on how good they are nowadays? (My background is cracking games 35 years ago :-))

I'm not a reverse engineer or a white-hat hacker, but I like reading about it. Most malware is written for Windows because of its enormous market share.

I get the majority of my information about Windows malware from big computer security companies' research blogs, like:

https://www.trendmicro.com/en_us/research.html

https://www.proofpoint.com/us/blog

https://research.checkpoint.com/

https://blog.talosintelligence.com/

https://www.welivesecurity.com/en/

Microsoft also has a good security research blog: https://www.microsoft.com/en-us/security/blog/

The majority of the research comes down to analyzing a sample's capabilities: persistence mechanisms, anti-VM techniques, and anti-debugging techniques.

Here, for example, is a good compilation of malware's anti-debugging and anti-VM techniques (a toy sketch of one such check follows the links):

https://anti-debug.checkpoint.com/

https://github.com/CheckPointSW/Evasions
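
To make the idea concrete, here's a minimal Python sketch of two classic checks from those compilations: the real Win32 IsDebuggerPresent() call plus a generic timing heuristic. It's a toy illustration of the technique, not how production malware is written:

    import ctypes
    import sys
    import time

    def seems_debugged():
        # Win32 API: IsDebuggerPresent() returns nonzero when a
        # user-mode debugger is attached (Windows only).
        if sys.platform == "win32":
            if ctypes.windll.kernel32.IsDebuggerPresent():
                return True
        # Timing heuristic: single-stepping or heavy instrumentation
        # inflates the wall-clock time of a trivial loop.
        start = time.perf_counter()
        for _ in range(100_000):
            pass
        return time.perf_counter() - start > 0.05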


Malware targeting Macs is booming, and, IMO, the most interesting malware targets iOS.

https://taomm.org/

https://citizenlab.ca/

https://objective-see.org/blog.html


A lower bound tells you how much it's worth to improve on the SOTA. It gives you a hint that you can do better.

So it's more like a pole star. Maybe not directly practical, but it will lead tons of people in the right direction.


Thanks. I think the title is quite misleading then? I would have expected better from Scientific American.

Before optimizing, I always weigh the time I'll need to code the optimization against the time I (or the users of my code) will actually gain once it's there (that is, real-life time, not CPU time).

If I need three weeks to optimize code that will run for 2 hours per month, it's not worth it.


But by not optimizing, you don't grow your profiling/optimizing skills, and you miss out on reducing how long optimizing takes you in future work. As a result, many more codes will never meet the threshold, and your skills may not grow for a long time.

You couldn't know, but my job is 50% about optimizing computational workloads. Many times, though, when questioning my users, it turns out they want an optimisation for some code that will run only 2 or 3 times. So even though they'll have to wait a week for the computation to run, it would take me just as long to make the optimisation work :-)

But if the code happens to be used 10 times a week and takes about a day or two to run, it's a no-brainer: spending a month optimizing for a 10% speed increase is worth it!
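
A quick back-of-the-envelope check of that last scenario, treating engineer time and users' waiting time as interchangeable the way I did above (a Python sketch; the numbers are the ones quoted, rounded):

    runs_per_week = 10
    run_hours = 36            # "a day or two" ~ 1.5 days
    speedup = 0.10            # 10% faster
    dev_hours = 4 * 40        # "a month" of engineering time

    saved_per_week = runs_per_week * run_hours * speedup  # 36 h/week
    break_even = dev_hours / saved_per_week               # ~4.4 weeks
    print(f"saves {saved_per_week:.0f} h/week, "
          f"breaks even in {break_even:.1f} weeks")

So after roughly a month the optimization has paid for itself, and everything after that is pure gain.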


The one question that needs to be asked is: would the users run it more often if it didn't take so long? There is nothing a computer can do that a room full of clerks in the 1800s couldn't do, but the runtime would be so slow (or the cost in clerks so high) that nobody dared ask those questions.

Exercise for the reader: given an unlimited budget to hire 1800s clerks, how many FPS could you achieve running Doom? (Obviously the number is too low to make the game playable.)
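
A hedged stab at the exercise, where every figure is an order-of-magnitude guess rather than a sourced number:

    ops_per_frame = 1_000_000    # guess: simple operations per Doom frame
    clerk_ops_per_sec = 0.1      # one hand calculation every ~10 seconds
    clerks = 1_000_000           # "unlimited budget": say, a million clerks

    # Assumes perfect parallelism; real coordination overhead and the
    # sequential parts of the renderer would make it far worse.
    fps = clerks * clerk_ops_per_sec / ops_per_frame
    print(f"~{fps} frames per second")   # ~0.1 FPS: one frame every 10 s

So even a million clerks manage roughly one frame every ten seconds, which is indeed far too slow to be playable.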


From the article abstract: "All experiments were done with safety precautions (e.g., sandboxing, human oversight)."

Do the authors really believe "safety" is necessary, that is, that there is a risk that something goes wrong? What kind of risk?
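
"Sandboxing" in papers like this usually just means isolating whatever code the agents produce. A minimal sketch of the idea, assuming a Python setup (the filename and limits here are made up; the paper doesn't describe its actual configuration):

    import subprocess
    import sys

    # Run model-generated code in a separate, isolated interpreter
    # with a hard timeout and an empty environment.
    # "agent_output.py" is a hypothetical file of generated code.
    result = subprocess.run(
        [sys.executable, "-I", "agent_output.py"],  # -I: isolated mode
        env={},               # don't leak the parent's environment
        timeout=30,           # raises TimeoutExpired on runaway code
        capture_output=True,  # keep stdout/stderr for human review
    )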


From what I understand, alignment and interpretability were rewarded as part of the optimization function. I think it is prudent that we bake in these "guardrails" early on.

as a regular human he may just have hallucinated :-)
