https://github.com/GoogleChromeLabs/critters might be a good starting point. It's designed to inline the CSS afterward, so it's more focused on extracting used CSS than on removing unused CSS.
Slurm [1] is an open-source HPC scheduler that's a good example. It's featureful and (surprisingly) easy to run if no user accounting (i.e., accounting for individual computation budgets) is needed.
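To give a sense of what day-to-day use looks like once the daemons are running, a minimal batch script might be (a sketch; the job name, task count, and time limit are illustrative, and it assumes a working slurmctld/slurmd setup):

```shell
#!/bin/bash
#SBATCH --job-name=hello
#SBATCH --ntasks=1
#SBATCH --time=00:05:00

# Run one task on whatever node the scheduler picks.
srun hostname
```

Submitted with `sbatch hello.sh`; `squeue` then shows it in the queue. This is a scheduler-dependent fragment, not a standalone program.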
The Evolutionary Computation Bestiary [1] lists a wide variety of animal-behavior-inspired heuristics.
The foreword includes this great disclaimer:
"While we personally believe that the literature could do with more mathematics and less marsupials, and that we, as a community, should grow past this metaphor-rich phase in our field’s history (a bit like chemistry outgrew alchemy), please note that this list makes no claims about the scientific quality of the papers listed."
The entire field of metaheuristics is in dire need of a shakeup. Many of the newer publications are not actually novel [0, 1, 2, 3, 4, 5], and the metaphors used to describe these methods only disguise their inner workings and their similarities to and differences from existing approaches, which shouldn't justify their publication [6, 7]. The set of benchmarks used to verify the excellent performance of these methods is small and biased [8, 9]. The metaphors don't match the given algorithms [10], the given algorithms don't match the implementation [11], and the results don't match the implementation [12].
It's junk science with the goal of increasing the authors' citation counts. One of the most prolific authors of papers on "bioinspired metaheuristics" (Seyedali Mirjalili) manages to publish several dozen papers every year, some gathering thousands if not tens of thousands of citations.
The real problem here isn't office ctr; it's that there are 10 different ways to run a third-party program at startup, many of which are not only opt-out, but opt-out somewhere deep in the mysteries of the regisphere.
This is a good example of an interactive non-SPA: JS is used only for cosmetics, and all state is fully stored in the URLs. The main pages could even be statically generated (858,705 combinations * 3 views * 8 KB ≈ 20 GB).
https://www.wikifolio.com/ does something similar for the German-speaking market. One part is a social portfolio sharing/tracking component. The other is that they create securities from popular portfolios under a revenue-share model with the creators.
I think the following minor optimizations can further decrease the runtime:
* There is no use in first pop()ing a candidate from the "reacts" list and then potentially re-pushing it later. Getting a pointer to the last element, and only removing it when it actually reacts, is more efficient.
* Reordering the character-equality condition could help a tiny bit, since the inequality check should be faster than the ignore-case check.
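Both suggestions together might look like this in a stack-based reduction loop (a hedged sketch, not the original code; the names and the opposite-case-pairs-annihilate rule are my assumptions about the problem being discussed):

```rust
// Cheap byte inequality first, then the slightly costlier
// case-insensitive comparison (short-circuits on unequal bytes).
fn reacts(a: u8, b: u8) -> bool {
    a != b && a.eq_ignore_ascii_case(&b)
}

fn reduce(input: &str) -> usize {
    let mut stack: Vec<u8> = Vec::with_capacity(input.len());
    for c in input.bytes() {
        // Peek at the last element instead of pop()ing and re-pushing:
        match stack.last() {
            Some(&top) if reacts(top, c) => {
                stack.pop();
            }
            _ => stack.push(c),
        }
    }
    stack.len()
}

fn main() {
    // 16-character input reduces to 10 surviving characters.
    assert_eq!(reduce("dabAcCaCBAcCcaDA"), 10);
}
```

The `stack.last()` peek avoids the move/re-push in the common no-reaction case, at the cost of a branch either way.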
Good point. Here it would be premature optimization anyway, and too dependent on the input. In general, I think a balance between the likelihood of each check and its cost should be found.
Out of curiosity, I just ran a small benchmark (criterion) using the input.txt provided, and reordering results in a 10% difference:
  Old Order  time: [307.28 us 309.12 us 311.17 us]
  Reversed   time: [270.82 us 271.18 us 271.54 us]
This feels weird, since the input.txt contains few pairs of equal characters, so there is likely something else going on. The inequality check is very inexpensive, but the ignore-case comparison should not be that expensive either.
Weirdly enough, PGO equalizes and increases the runtimes: both methods take approximately 330 us. (I simply applied PGO to the full benchmark harness; no idea if this is proper.) Them being equal sounds more reasonable, but the increase in runtime doesn't feel right.
And in the Files app, long-tapping gives the option to scan a document, which works great. I'm trying to retain fewer pieces of paper, and this is super convenient.
I'm working on a project where the evaluation of a 4th-degree polynomial is on the hot path. My micro-benchmark (evaluating two polynomials plus a little bit of addition) shows that simple Horner's rule is already somewhat faster, and implementing Horner's rule with fused multiply-add is more than twice as fast.
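For reference, a sketch of the two variants in Rust (the coefficient layout and names are my assumptions, not the original code); `f64::mul_add` compiles to a single hardware FMA where the target supports it, and falls back to a libm call (often slower) where it doesn't:

```rust
// Evaluate c[0] + c[1]*x + ... + c[4]*x^4 via Horner's rule:
// ((((c4)*x + c3)*x + c2)*x + c1)*x + c0.
fn horner(c: &[f64; 5], x: f64) -> f64 {
    c.iter().rev().skip(1).fold(c[4], |acc, &ci| acc * x + ci)
}

// Same recurrence, but each step is one fused multiply-add
// (single rounding per step instead of two).
fn horner_fma(c: &[f64; 5], x: f64) -> f64 {
    c.iter().rev().skip(1).fold(c[4], |acc, &ci| acc.mul_add(x, ci))
}

fn main() {
    let c = [1.0, 0.0, 0.0, 0.0, 1.0]; // p(x) = x^4 + 1
    assert_eq!(horner(&c, 2.0), 17.0);
    assert_eq!(horner_fma(&c, 2.0), 17.0);
}
```

Note that the FMA version is not bit-identical to the plain one in general, since it rounds once per step instead of twice.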
If you do not need the more granular firewall configuration options, there is also classic port knocking (https://en.wikipedia.org/wiki/Port_knocking), where the daemon sits behind the firewall, so all ports can be closed by default.
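With knockd, the usual setup looks roughly like this (a sketch along the lines of the example in the knockd man page; the knock sequence, timeout, and iptables command are illustrative placeholders you would change):

```
[options]
    logfile = /var/log/knockd.log

[openSSH]
    # Hitting these three closed ports in order within 5 seconds
    # triggers the command, which opens SSH for the knocking IP.
    sequence    = 7000,8000,9000
    seq_timeout = 5
    tcpflags    = syn
    command     = /sbin/iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
```

The daemon sniffs the interface rather than listening on the knock ports, which is why the firewall can drop everything by default.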