
The author mentions MRI in the article.


I won't directly answer your questions, but consider this. We have a function F that accepts an argument of size N and returns the result after N² instructions. Its time complexity is O(N²).

Now consider F'. F' is like F but we limit N to a fixed constant -- for example we say that N must be ≤ 2³². In this case, _for every input_ F' returns the result after at most (2³²)² = 2⁶⁴ instructions. Its time complexity is O(1). Note that O(2⁶⁴) = O(2⁶⁴ * 1) = O(1).
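The argument can be sketched in code. (Names are mine, and a tiny cap stands in for 2³² so the demo actually runs; the principle is identical.)

```python
# CAP stands in for the 2^32 bound from the comment above;
# a tiny value is used here so the demo runs quickly.
CAP = 16

def f_steps(n):
    """F: performs n**2 'instructions' (modeled as loop iterations)."""
    steps = 0
    for _ in range(n * n):
        steps += 1
    return steps

def f_prime_steps(n):
    """F': same as F, but the input is clamped to CAP, so the step count
    is bounded by CAP**2 regardless of n -- i.e. O(1)."""
    return f_steps(min(n, CAP))

print(f_steps(4))            # 16 steps: grows as n**2
print(f_prime_steps(10**6))  # 256 steps: never exceeds CAP**2
```

F' is O(1) by the definition, yet with CAP = 2³² it would still take 2⁶⁴ "instructions" in the worst case — the whole point being that the constant hidden by O(1) can be astronomically large.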

---

Technically, all implementable-on-real-machines algorithms either return after O(1) instructions, or loop after O(1) instructions. But being O(1) doesn't imply being fast in practice.

You might be also interested in reading about galactic algorithms: https://en.wikipedia.org/wiki/Galactic_algorithm.


At a previous job we had a service that handled around 1 query per second.

I crafted a one-liner that `tail -f`'d the logs and played a note for each response. I believe there were different notes for different HTTP status codes, but it was years ago, so the details escape me.
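The original one-liner's details are lost, but the core of the idea is just a mapping from status code to note. A rough reconstruction (the mapping itself is hypothetical):

```python
# Each HTTP status class gets its own note; a real version would pipe
# `tail -f access.log` into this and hand the note to a MIDI or beep
# player instead of printing it.
NOTES = {2: "C5", 3: "E5", 4: "G4", 5: "C3"}  # 2xx..5xx -> note name

def note_for(line):
    """Pull a status code out of a log line like 'GET /x 200' and map
    its class to a note; lines without one get a rest."""
    for token in line.split():
        if token.isdigit() and 100 <= int(token) <= 599:
            return NOTES.get(int(token) // 100, "rest")
    return "rest"

print(note_for("GET /api/users 200"))   # C5
print(note_for("POST /login 503"))      # C3
```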


Interesting. You could set the "bad" response codes to be in a minor key and the "good" ones in a major key, then feed that into a generative player. Listen to the stream, and if it starts sounding moody and broody, check monitoring.


A lot of shady scrapers run on residential IP addresses.


Let's forget about the small shady scrapers. What about OpenAI? My bet is that they run their scrapers in Azure/AWS/GCP. Let's ban the big cloud providers.


> there are about 4 different ways to implement object moving

What are they?


0. no moves. This is very often needed for FFI callbacks, among others.

1. trivial copies. This is similar to 2, but means you do not have to do `swap`-like things for non-destructive moves (which are, in fact, also important; they just shouldn't be the only kind of move. Note that supporting destructive moves likely implies supporting destructuring).

2. trivial moves. You can just use `memcpy`, `realloc`, etc. This is the only kind of move supported by Rust. C++ can introspect for it but with severe limitations. Note that "trivial" does NOT mean "cheap"; memory can be large.

3. moves with retroactive fixup (old address considered dead). What you do here is call `realloc` or whatever, then pass the old address (or the delta?) to a fixup function so that it (possibly with offset) can be replaced with the new address. Great caution is required to avoid UB by C's rules (the delta approach may be safer?). The compiler needs to be able to optimize this into the preceding when the fixup turns out to be empty (since generic wrapper classes may not know if their fields are trivially-movable or not).

4. full moves (both old and new address ranges are valid at the same time). C++ is the only language I know that supports this (though it is limited to non-destructive moves). One major use for this is maintaining "peer pointers" without forcing an extra allocation. Note that this can be simulated on top of "no moves" with some extra verbosity (C++98 anyone?).

Related to this, it really is essential for allocators to provide a "reallocate in place if possible, else fail" function, to avoid unnecessary move-constructor calls. Unfortunately, real-world allocators do not actually avoid copies if you use `malloc_usable_size` + `realloc`. If emitting C, note that you must avoid `__attribute__((malloc))` etc. to avoid setting off that bug-laden piece of crap, `__builtin_dynamic_object_size`.

Random reminder that "conditionally insert and move my object(s) into a container (usually a map), but keep my object alive if an equal one was already there" is important, and most languages do it pretty badly.
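For what it's worth, Python sidesteps the move problem entirely (everything is a reference), but `dict.setdefault` does express the desired contract — insert if absent, otherwise the container keeps its existing entry and the caller still holds the object it tried to insert:

```python
cache = {}

mine = ["expensive", "object"]
winner = cache.setdefault("key", mine)
assert winner is mine            # key absent: container took our object

theirs = ["another", "object"]
winner = cache.setdefault("key", theirs)
assert winner is mine            # key present: the existing entry wins
assert theirs == ["another", "object"]  # ours is still alive, untouched
```

In C++, `std::map::try_emplace` is the closest native analogue: if the key already exists, the arguments are not moved from, so the caller's object survives.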

Linear types are related but it's all Blub to me.


Arithmetic underflow, obviously.


Searching for "all articles are available offline and without an internet connection" (with the quotes) in your favorite search engine might help.


It's the third result for me, after Rachel By the Bay and... your comment.


Can you share what your typical flashcard for the book looks like?


There's a bunch of different subtypes I've used. But one surprisingly high ROI example has been private vs public IP flashcard examples.

E.g. "10.1.0.0 - private or public?" "Private - everything in 10.0.0.0/8 is considered private". "192.169.0.0?" "Public".
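Those flashcard answers can be checked against the stdlib: `ipaddress.ip_address(...).is_private` covers the RFC 1918 blocks (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16), among other reserved ranges:

```python
import ipaddress

# 192.169.0.0 is the classic trap: one octet past 192.168/16, so public.
for addr in ["10.1.0.0", "192.169.0.0", "192.168.0.0", "172.16.0.1"]:
    kind = "private" if ipaddress.ip_address(addr).is_private else "public"
    print(addr, "->", kind)
```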


I run fishnet[0] to help Lichess[1] analyze chess games.

[0]: https://github.com/lichess-org/fishnet [1]: https://lichess.org/


This is super cool. Lichess deserves all its praise and more, what a great idea.


That is pretty cool. Thank you for the recommendation!


> which of problematic in K8s where the visible CPUs may change while your process runs

This is new to me. What is this… behavior? What keywords should I use to find any details about it?

The only thing that rings a bell is requests/limit parameters of a pod but you can't change them on an existing pod AFAIK.


If you have one pod that has Burstable QoS, perhaps because it has a request and not a limit, its CPU mask will be populated by every CPU on the box, less one for the Kubelet and other node services, less all the CPUs requested by pods with Guaranteed QoS. Pods with Guaranteed QoS will have exactly the number of CPUs they asked for, no more or less, and consequently their GOMAXPROCS is consistent. Everyone else will see fewer or more CPUs as Guaranteed pods arrive and depart from the node.


If by "CPU mask" you refer to the `sched_getaffinity` syscall, I can't reproduce this behavior.

What I tried: I created a "Burstable" Pod and ran `nproc` [0] on it. It returned N CPUs (N > 1).

Then I created a "Guaranteed QoS" Pod with both requests and limit set to 1 CPU. `nproc` returned N CPUs on it.

I went back to the "Burstable" Pod. It returned N.

I created a fresh "Burstable" Pod and ran `nproc` on it, got N again. Please note that the "Guaranteed QoS" Pod is still running.

> Pods with Guaranteed QoS will have exactly the number of CPUs they asked for, no more or less

Well, in my case I asked for 1 CPU and got more, i.e. N CPUs.

Also, please note that Pods might ask for fractional CPUs.

[0]: coreutils `nproc` program uses `sched_getaffinity` syscall under the hood, at least on my system. I've just checked it with `strace` to be sure.
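For reference, the same check can be done from Python on Linux, since `os.sched_getaffinity` wraps the same syscall that `nproc` and `taskset -p` use:

```python
import os

# 0 = the calling process; returns the set of CPU indices in its
# affinity mask (Linux-only).
visible = os.sched_getaffinity(0)
print(f"{len(visible)} CPUs visible: {sorted(visible)}")

# os.cpu_count() is the machine total; inside a pinned container it can
# be larger than the affinity count.
print("machine total:", os.cpu_count())
```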


I don't know what nproc does. Consider `taskset`.


I re-did the experiment with `taskset` and got the same results, i.e. the mask is independent of the creation of the "Guaranteed QoS" Pod.

FWIW, `taskset` uses the same syscall as `nproc` (according to `strace`).


Perhaps it is an artifact of your and my respective container runtimes. For me, `taskset` shows just 1 visible CPU in a Guaranteed QoS pod with limit=request=1.

  # taskset -c -p 1
  pid 1's current affinity list: 1

  # nproc
  1
I honestly do not see how it can work otherwise.


After reading https://kubernetes.io/docs/tasks/administer-cluster/cpu-mana..., I think we have different policies set for the CPU Manager.

In my case it's `"cpuManagerPolicy": "none"` and I suppose you're using `"static"` policy.

Well, TIL. Thanks!


TIL also. The difference between guaranteed and burstable seems meaningless without this setting.


Even way back in the day (1996) it was possible to hot-swap a CPU. Used to have this Sequent box, 96 Pentiums in there, 6 on a card. Could do some magic, pull the card and swap a new one in. Wild. And no processes died. Not sure if a process could lose a CPU then discover the new set.


