
https://github.com/GoogleChromeLabs/critters might be a good starting point. It’s designed to inline the critical CSS afterward, so it’s more focused on extracting used CSS than on removing unused CSS.


Slurm [1] is an open-source HPC scheduler that's a good example. It's featureful and (surprisingly) easy to run if no user accounting (i.e., accounting for individual computation budgets) is needed.

[1]: https://slurm.schedmd.com/documentation.html


It’s also really good when accounting is needed; it’s just more complicated to set up. :)


Yes, that was poorly worded. It has a very neat accounting system! It's just a little more complex to set up and keep in sync.


The Evolutionary Computation Bestiary [1] lists a wide variety of animal-behavior-inspired heuristics.

The foreword includes this great disclaimer: "While we personally believe that the literature could do with more mathematics and less marsupials, and that we, as a community, should grow past this metaphor-rich phase in our field’s history (a bit like chemistry outgrew alchemy), please note that this list makes no claims about the scientific quality of the papers listed."

[1]: https://fcampelo.github.io/EC-Bestiary/


The entire field of metaheuristics is in dire need of a shakeup. Many of the newer publications are not actually novel [0, 1, 2, 3, 4, 5]; the metaphors used to describe these methods only disguise their inner workings and their similarities to and differences from existing approaches, and shouldn't justify their publication [6, 7]. The set of benchmarks used to verify the excellent performance of these methods is small and biased [8, 9]. The metaphors don't match the given algorithms [10], the given algorithms don't match the implementation [11], and the results don't match the implementation [12].

It's junk science with the goal of increasing the authors' citation count. One of the most prolific authors of papers on "bio-inspired metaheuristics" (Seyedali Mirjalili) manages to publish several dozen papers every year, some gathering thousands if not tens of thousands of citations.

[0]: https://doi.org/10.4018/jamc.2010040104

[1]: https://doi.org/10.1016/j.ins.2010.12.006

[2]: https://doi.org/10.1016/j.ins.2014.01.026

[3]: https://doi.org/10.1007/s11721-019-00165-y

[4]: https://doi.org/10.1007/978-3-030-60376-2_10

[5]: https://doi.org/10.1016/j.cor.2022.105747

[6]: https://doi.org/10.1111/itor.12001

[7]: https://doi.org/10.1007/s11721-021-00202-9

[8]: https://doi.org/10.1038/s42256-022-00579-0

[9]: https://doi.org/10.48550/arXiv.2301.01984

[10]: https://doi.org/10.1007/s11047-012-9322-0

[11]: https://doi.org/10.1016/j.eswa.2021.116029

[12]: https://doi.org/10.1111/itor.12443


Related: MS Office 2010 was/is available as a "Click-to-Run" edition, based on a virtualized streaming file system.

https://learn.microsoft.com/en-us/office/troubleshoot/office...

The tech/product behind it has its roots in game distribution: https://en.wikipedia.org/wiki/Microsoft_App-V


Is that the same "Office Click-to-Run" that I've had to disable running at start-up 15 times?


My god, is that thing genuinely cancerous. The Autoruns utility helped me get rid of it for good.


The real problem here isn't Office CTR; it's the fact that there are 10 different ways to run a third-party program at start-up, many of which are not only opt-out but opt-out somewhere deep in the mysteries of the regisphere.


This is a good example of an interactive non-SPA: JS is used only for cosmetics, and all state is fully stored in the URLs. The main pages could even be statically generated (858,705 combinations × 3 views × 8 KB ≈ 20 GB).


https://www.wikifolio.com/ does something similar for the German speaking market. One part is a social portfolio sharing / tracking component. The other is that they create securities from popular portfolios under a revenue share model with the creators.


I find the costs to be prohibitively high though…


I think the following minor optimizations can further decrease the runtime:

* There is no use in first pop()ing a candidate from the "reacts" list and then potentially re-pushing it later. Getting a pointer to the last element, and removing it only if it is no longer needed, is more efficient.

* Reordering the character equality condition could help a tiny bit since the inequality check should be faster than the ignore case check.
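As a sketch of the first suggestion (assuming the usual stack-based "polymer reaction" reduction the surrounding benchmark hints at; the function name and inputs are my own illustration, not the original code):

```rust
// Reduce a "polymer": two adjacent characters react (annihilate)
// when they are the same letter in opposite case, e.g. "aA" or "Bb".
fn reduce(input: &str) -> String {
    let mut stack: Vec<u8> = Vec::with_capacity(input.len());
    for &c in input.as_bytes() {
        match stack.last() {
            // Peek at the candidate first; pop() it only when it
            // actually reacts, instead of popping unconditionally
            // and re-pushing in the common non-reacting case.
            Some(&top) if top != c && top.eq_ignore_ascii_case(&c) => {
                stack.pop();
            }
            _ => stack.push(c),
        }
    }
    String::from_utf8(stack).unwrap()
}
```

The behavior is unchanged; the peek just avoids a pop+push pair for every non-reacting character.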


> Reordering the character equality condition could help a tiny bit since the inequality check should be faster than the ignore case check.

I think the given ordering is better: in a conjunction, you typically put the less likely condition first, not the cheaper one.
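A toy illustration of the trade-off (function names are mine, not from the benchmark): with a short-circuiting `&&`, the first condition decides how often the second one runs at all, so ordering trades cost against selectivity.

```rust
// Both predicates are equivalent; && short-circuits, so the first
// condition controls how often the second one is evaluated.
fn reacts_cheap_first(a: u8, b: u8) -> bool {
    // Cheap check first: a single comparison, but it is usually
    // true, so the case-insensitive check still runs most of the time.
    a != b && a.eq_ignore_ascii_case(&b)
}

fn reacts_selective_first(a: u8, b: u8) -> bool {
    // Selective check first: rarely true for random input, so the
    // second comparison is usually skipped entirely.
    a.eq_ignore_ascii_case(&b) && a != b
}
```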


Good point. Here it is premature optimization anyway and too dependent on the input. In general, I think a balance between likelihood and cost should be found.

Out of curiosity, I just did a small benchmark (criterion) using the input.txt provided, and reordering results in a 10% difference:

  Old Order time: [307.28 us 309.12 us 311.17 us]
  Reversed  time: [270.82 us 271.18 us 271.54 us]

This feels weird since the input.txt contains few pairs of equal characters, so there is likely something else going on. The inequality is very inexpensive, but the ignore-case compare should not be that expensive either.


What if you add PGO to the mix? How fast is it going to be?


Weirdly enough, PGO equalizes and increases the runtime: both methods take approximately 330 us. (I simply applied PGO to the full benchmark harness; no idea if this is proper.) Them being equal sounds more reasonable, but the increase in runtime doesn't feel right.

Micro benchmarking remains a fickle beast :)


I remember the talk where two different users measured different performance due to the length of their usernames.


Btw: the Apple Notes application has this functionality built in if you are just interested in scanning.


And in the Files app, long-pressing gives the option to scan a document, which works great. I’m trying to retain fewer pieces of paper, and this is super convenient.


Wohooo! And: What is it with Apple and not documenting features?


Could you elaborate?

I'm working on a project where the evaluation of a 4th-degree polynomial is on the hot path. My micro-benchmark (evaluating two polynomials plus a little bit of addition) shows that using simple Horner's rule is already somewhat faster, and implementing Horner's rule with fused multiply-add is more than twice as fast.
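For illustration, a fused-multiply-add Horner evaluation of a 4th-degree polynomial might look like this (a minimal sketch; the name and coefficient layout are mine, not the project's actual code):

```rust
/// Evaluate c[0] + c[1]*x + c[2]*x^2 + c[3]*x^3 + c[4]*x^4 using
/// Horner's rule. f64::mul_add computes a*x + b with a single
/// rounding, and compiles to one FMA instruction on targets that
/// support it.
fn horner_fma(c: &[f64; 5], x: f64) -> f64 {
    c[4].mul_add(x, c[3])
        .mul_add(x, c[2])
        .mul_add(x, c[1])
        .mul_add(x, c[0])
}
```

Note that on targets without a hardware FMA, `mul_add` falls back to a slower software implementation, so the speedup is not universal.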


If you do not need the more granular firewall configuration options, there is also classic port knocking (https://en.wikipedia.org/wiki/Port_knocking), where the daemon sits behind the firewall, so all ports can be closed by default.

