Although using a low-discrepancy sequence (e.g. Halton, Sobol, R2) might be a good starting point, these sequences are designed to spread points apart under the assumption that the left and right edges wrap around, as do the top and bottom.
Since it seems the OP does not require this wrapping, I would presume the better solutions involve lots of jumping back and forth between the far left and the far right, etc.
(Ordered dither matrices might offer another good starting point.)
This is a fun problem to play with. Like others, I've been trying the general approach of generating random configurations, then iterating by swapping pairs of elements (each time choosing the pair that yields the best local improvement).
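For anyone who wants to play along, the descent looks roughly like this in Python. It's only a sketch: the 6x4 grid shape and the inverse-distance score() are stand-in assumptions rather than the article's actual advent.py code, so the numbers it prints won't match the scores posted here.

    import math
    import random

    W, H = 6, 4  # assumed grid shape; adjust to the article's actual layout

    def cell(i):
        """(x, y) coordinates of the 0-based grid index i."""
        return i % W, i // W

    def score(layout):
        """Stand-in objective, NOT the article's score(): sum of inverse
        Euclidean distances between consecutive numbers. Lower is better."""
        pos = {n: cell(i) for i, n in enumerate(layout)}
        total = 0.0
        for n in range(1, 24):
            (x1, y1), (x2, y2) = pos[n], pos[n + 1]
            total += 1.0 / math.hypot(x1 - x2, y1 - y2)
        return total

    def best_swap_descent(layout):
        """Repeatedly apply the single pair swap that improves the score
        the most, until no swap helps (a local minimum)."""
        layout = layout[:]
        current = score(layout)
        while True:
            best, best_pair = current, None
            for i in range(len(layout)):
                for j in range(i + 1, len(layout)):
                    layout[i], layout[j] = layout[j], layout[i]
                    s = score(layout)
                    layout[i], layout[j] = layout[j], layout[i]
                    if s < best:
                        best, best_pair = s, (i, j)
            if best_pair is None:
                return layout, current
            i, j = best_pair
            layout[i], layout[j] = layout[j], layout[i]
            current = best

    layout = list(range(1, 25))
    random.shuffle(layout)
    print(best_swap_descent(layout)[1])

Swapping in the real score() from advent.py should reproduce the posted behaviour; the descent itself is the same either way.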
Lowest value found so far; it has vertical symmetry (notice how (1,24), (2,23), (3,22), (4,21) and so on are paired).
I noticed that my best was almost horizontally symmetrical apart from two slots - yet swapping those two slots changes the score from 375.99867 to 376.19293. Most perplexing.
This is good, but it bugs me that the "perfectly ordered" calendar doesn't have the worst score. Maybe consider the calendars tiled on an infinite grid? That way 5 will be one right and one down from 4, instead of four left and one down. Maybe even shift the next line up, so 4 is next to 5...
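If anyone wants to experiment with that, one reading of the suggestion is to measure each distance to the nearest translated copy of the other door, i.e. a torus-style metric. A sketch (the 6x4 grid shape is assumed, and the "shift the next line up" variant would need an extra row offset on the copies):

    import math

    W, H = 6, 4  # assumed grid shape

    def tiled_distance(a, b):
        """Distance between cells a=(x1, y1) and b=(x2, y2) when the
        calendar is tiled on an infinite grid: measure to the nearest
        translated copy of b."""
        (x1, y1), (x2, y2) = a, b
        return min(
            math.hypot(x1 - (x2 + dx * W), y1 - (y2 + dy * H))
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
        )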
Just a quick note to say thanks for posting this: I've had great fun exploring it. It's a simple puzzle to understand, yet harder than it first appears, and it has a low barrier to entry thanks to your posted code. It's been genuinely interesting seeing the symmetry appear in the low-scoring solutions, and there's an element of friendly competition and shared discovery as we post our scores. Cheers!
The funny thing about this one is that, while it mostly satisfies the constraint of not having consecutive numbers too close, it has a very obvious pattern to it. A nice example of overfitting, perhaps ;)
What additional constraint could you add to shake this up? Perhaps a rule where, as well as penalizing consecutive numbers that sit too close together, you also penalize neighbouring jumps of similar length?
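One way to formalize that second rule (purely hypothetical, and just one reading of "neighbouring jumps of similar length"): runs of equal-length jumps are exactly what produces the visible pattern, so charge more the closer two successive jump lengths are to each other. A sketch, assuming a grid width of 6, to be added onto whatever base score you use:

    import math

    W = 6  # assumed grid width

    def jump_lengths(layout):
        """Euclidean length of each consecutive step 1->2, 2->3, ..., 23->24."""
        pos = {n: (i % W, i // W) for i, n in enumerate(layout)}
        return [math.hypot(pos[n][0] - pos[n + 1][0],
                           pos[n][1] - pos[n + 1][1])
                for n in range(1, 24)]

    def pattern_penalty(layout, weight=1.0):
        """Hypothetical extra term: the closer two successive jump
        lengths are to each other, the larger the charge."""
        d = jump_lengths(layout)
        return weight * sum(1.0 / (abs(a - b) + 0.1) for a, b in zip(d, d[1:]))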
That's a lot of swaps! I played around with a simulated annealing approach in my lunch break which consistently gets down to 383.xx with about 10000 swaps, or 379.xx in 100000 swaps. I've yet to tweak the annealing parameters to see if I can get down to (or beat) the 378 example in the article.
It's basically the same as the swap() function in advent.py, except you sometimes accept swaps that result in worse scores, with a probability governed by an ever-decreasing "temperature". The added controlled randomness lets you break out of local minima at the start and home in on the local optimum towards the end.
EDIT: after a little tweaking of the temperature schedule I got 377.59 after 10K swaps.
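For anyone following along, the acceptance rule described above looks roughly like this. A sketch only: the geometric schedule and the t0/t1 values are guesses, and score() is whatever objective you plug in (e.g. the one from advent.py).

    import math
    import random

    def anneal(layout, score, steps=10_000, t0=2.0, t1=0.01):
        """Random pair swaps with a Metropolis acceptance test: worse
        swaps are accepted with probability exp(-delta / T), while T
        decays geometrically from t0 down to t1."""
        layout = layout[:]
        current = score(layout)
        best, best_score = layout[:], current
        decay = (t1 / t0) ** (1.0 / steps)
        t = t0
        for _ in range(steps):
            i, j = random.sample(range(len(layout)), 2)
            layout[i], layout[j] = layout[j], layout[i]
            s = score(layout)
            if s < current or random.random() < math.exp((current - s) / t):
                current = s
                if s < best_score:
                    best, best_score = layout[:], s
            else:
                layout[i], layout[j] = layout[j], layout[i]  # undo the swap
            t *= decay
        return best, best_score

The key line is the exp((current - s) / t) test: early on (large t) almost anything is accepted; by the end it behaves like plain hill climbing.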
gets a score of 376.9629. This came from starting with a random matrix, trying all possible swaps, taking the best one, and iterating until a local minimum is reached.
The score() function is a bit slow at the moment. If I get some time later, I might see if I can speed it up a bit, which will allow for some faster experimentation.
EDIT: replaced code with pastebin link to save space in comments
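On speeding up score(): if it decomposes as a sum of one term per consecutive step (which I'm assuming, possibly wrongly, about the advent.py version), then a swap only disturbs the handful of steps that involve the two moved numbers, so you can compute the score change without re-summing all 23 terms. A sketch, with a stand-in inverse-distance step term:

    import math

    W = 6  # assumed grid width

    def step_term(n, pos):
        """Stand-in per-step term: inverse distance between n and n+1."""
        (x1, y1), (x2, y2) = pos[n], pos[n + 1]
        return 1.0 / math.hypot(x1 - x2, y1 - y2)

    def swap_delta(layout, pos, i, j):
        """Score change from swapping cells i and j, recomputing only the
        steps that involve the two moved numbers (at most four of the 23)."""
        a, b = layout[i], layout[j]
        steps = {n for n in (a - 1, a, b - 1, b) if 1 <= n <= 23}
        before = sum(step_term(n, pos) for n in steps)
        pos[a], pos[b] = pos[b], pos[a]
        after = sum(step_term(n, pos) for n in steps)
        pos[a], pos[b] = pos[b], pos[a]  # restore
        return after - before

Build pos = {n: (i % W, i // W) for i, n in enumerate(layout)} once, keep it in sync with accepted swaps, and each candidate swap then costs at most 8 step evaluations instead of a full 23-term rescore.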
Found by increasing the number of iterations to 100000.
Simulated annealing followed by an exhaustive pair-swap search to find the local minimum would have found this more efficiently. Possibly there are some further refinements left -- I haven't run the exhaustive pair search on the above!
Added to https://rjp.is/calendars/topthree.png (which is now four but nvm) to show the exceptional symmetry. I'm thinking we might be right at the lower limit here - there doesn't seem to be much room for more symmetricalising.
I'm wondering if vertical symmetry might be involved (and a way of optimising future efforts) - plotting the journeys of the top three on this post definitely seems to imply that.
...only to realise it's the same as yours, but with the rows reversed! Rather surprised, but I now wonder if there is only a small number of very-low-scoring solutions. So perhaps this is less of a coincidence than it first appears.
I'm now using a much more aggressive temperature drop-off to find decent candidates early, followed by a tempering phase to search for nearby solutions, and a final cool-off to refine the answer. I'm still using only random pair swaps in Python, so I'm probably wasting a lot of cycles, but I'm quite surprised how quickly it converges to some pretty decent scores. Beyond that, I'm just going to try lots of random starting layouts.
I'm interested to see if my method can find any of the other posted solutions or (fingers crossed!) any new ones, but I may need to crunch through a lot more candidates... I will have to translate from Python into something faster to up my game!
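In case it's useful to anyone, the three-phase schedule I described is just a function from step number to temperature, something like this (all the constants are hand-tuned guesses):

    def three_phase_schedule(step, steps):
        """Temperature as a function of progress: a fast exponential
        drop to find a decent basin, a flat 'tempering' plateau to
        wander among nearby layouts, then a final cool-off to settle."""
        f = step / steps
        if f < 0.2:       # aggressive drop-off
            return 5.0 * (0.05 / 5.0) ** (f / 0.2)
        elif f < 0.8:     # tempering plateau
            return 0.05
        else:             # final cool-off
            return 0.05 * (0.001 / 0.05) ** ((f - 0.8) / 0.2)

Plug it into the annealing loop as t = three_phase_schedule(step, steps) in place of a plain geometric decay.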
Not entirely scientific, but starting from the trivial 1-24 board, I did 100M random swaps (in Go, keeping all of them; takes about 4 minutes) and counted how many scores fell into each bucket of 10 (i.e. 370-379.99, 380-389.99, etc.). Having done this a few times and never got anything in the 370s, I'm thinking the number of solutions below 380 must be relatively small, yeah.
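For reference, the same survey looks like the sketch below in Python (far slower than the Go version, so expect to drop the swap count by a couple of orders of magnitude):

    import random
    from collections import Counter

    def bucket_survey(layout, score, swaps=100_000):
        """Random-walk over layouts, keeping every swap (as in the Go
        experiment) and tallying scores into buckets of width 10, so
        376.96 lands in bucket 370, 383.2 in bucket 380, etc."""
        layout = layout[:]
        buckets = Counter()
        for _ in range(swaps):
            i, j = random.sample(range(len(layout)), 2)
            layout[i], layout[j] = layout[j], layout[i]
            buckets[int(score(layout) // 10) * 10] += 1
        return buckets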
The best known solution uses the plastic number. I expect you could use something like that here rather than brute force.
http://extremelearning.com.au/unreasonable-effectiveness-of-...
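To make that concrete: the R2 sequence from the linked article is generated from the plastic number, and one (hand-wavy, untested against the actual score) way to turn it into a candidate calendar is to snap its first 24 points onto the grid:

    # The R2 sequence from the linked article: its generator comes from
    # the plastic number, the real root of x^3 = x + 1.
    PLASTIC = 1.32471795724474602596
    A1, A2 = 1.0 / PLASTIC, 1.0 / PLASTIC ** 2

    def r2(n):
        """n-th point of the R2 sequence in the unit square."""
        return (n * A1) % 1.0, (n * A2) % 1.0

    # Hypothetical snapping step: give each day the nearest still-free
    # cell of an (assumed) 6x4 grid.
    W, H = 6, 4
    free = {(x, y) for x in range(W) for y in range(H)}
    grid = {}
    for day in range(1, 25):
        px, py = r2(day)
        px, py = px * W, py * H
        c = min(free, key=lambda c: (c[0] + 0.5 - px) ** 2 + (c[1] + 0.5 - py) ** 2)
        grid[c] = day
        free.remove(c)

Per the comment at the top of the thread, though, R2 optimizes for a wrapped square, so this would at best be a starting layout to refine with one of the swap searches above.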