Speeding up my Advent of Code solution with Dijkstra's Algorithm (siraben.dev)
94 points by siraben on Jan 1, 2022 | 42 comments



Nice to see an implementation of priority queues in Haskell, although I think most people would say this is already an integral part of Dijkstra's algorithm?

I haven't yet attempted this year's AOC, but I suspect this is one where working backwards from the bottom right and dynamically storing the best value at each cell would work much better than Dijkstra's.


That assumes that the path only ever goes right and down. There’s no basis for that assumption in the general case, it’s entirely possible that the shortest path has to take a “detour” to avoid some high cost sections. This is a shortest-path problem, not a DP problem.


And in fact, there were detours in pretty much all the solutions I've seen posted online for part 2


Ah, I see what you mean. The shortest path could snake around a lot. Not quite as easy as I thought!


Yep, I wasted a few hours trying to force the DP solution to work as well before giving up and just using Dijkstra.


Yeah. Priority queue backing is the most common way of introducing the algorithm, although a Fibonacci heap achieves better asymptotic performance in the general case.

Here the number of edges is linear in the number of vertices (at most 4x) so it doesn't make a difference (relative simplicity of priority queue probably wins out here in practice) but it's even more efficient to solve via memoized recursion as briefly mentioned in the article, assuming that you don't blow your stack.


The naive dynamic programming solution doesn't work because it's possible to 'backtrack'. But I used Bellman-Ford which is almost as simple and very fast.


I worked on a lot of previous AoC problems in November and then did AoC 2021 in December. I should do a little write-up because there are definitely patterns that repeat, but one thing I noticed was that if you're doing them in Python then the "heapq" module is sometimes so useful it feels like cheating. There are many problems where as long as you have a simple brute force algorithm and a way to rank potential solutions, you're mostly done.
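
For anyone who hasn't used it, the core pattern is only a few lines. A sketch (`neighbors` and `is_goal` are hypothetical stand-ins for whatever the puzzle defines; states are assumed to be comparable tuples so ties on the heap break cleanly):

    import heapq

    def cheapest(start, neighbors, is_goal):
        # neighbors(state) yields (step_cost, next_state) pairs;
        # the heap always hands back the cheapest candidate first.
        pq = [(0, start)]
        seen = set()
        while pq:
            cost, state = heapq.heappop(pq)
            if state in seen:
                continue
            seen.add(state)
            if is_goal(state):
                return cost
            for step, nxt in neighbors(state):
                if nxt not in seen:
                    heapq.heappush(pq, (cost + step, nxt))

As long as you can rank candidates, the heap does the rest.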


If it works, it works. Priority queues are an extremely powerful data structure if you leverage them right.

I find C++'s set and map and related data structures are also very useful for solving competitive programming problems.


oh yeah it goes without saying that Python dicts and sets are my AOC bread and butter :D


Yeah, that's how we were taught it anyway. Nodes are added to a priority queue as they're discovered while being popped off the queue in order. From the title I was expecting something like going from naive Dijkstra via A* to an optimized jump point search or something.

That said, the title aside, it's still a worthwhile post. It's sometimes easy to accidentally nuke your algorithm's performance with an innocent-looking function call or operation that's secretly turning it quadratic or worse.


I approached this task with, "oh, I need to implement a-star".

And I initially implemented it naively with the wrong data structures, which of course resulted in an implementation too slow for everything except the first toy example.

Then I gradually optimized it bit by bit. I eventually landed on a permanently sorted array of nodes to evaluate, maintained by simple insertion (with linear search), and it became blazing fast.

Along the way I learned the hard way that the heuristic in the A* algorithm should never return a value higher than the minimal cost that will actually be assigned to the path to the destination. That is, if you want the algorithm to produce an optimal path. Because somehow I missed that when reading about A*.
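
For a grid like this one, where every step costs at least 1, plain Manhattan distance to the goal never overestimates, so it's a safe (admissible) choice. A sketch:

    def heuristic(x, y, goal_x, goal_y):
        # Admissible: the true remaining cost is at least the
        # remaining number of steps, and each step costs >= 1.
        return abs(goal_x - x) + abs(goal_y - y)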

I had a blast. Advent of Code is an absolutely wonderful thing.


> Along the way I learned the hard way that the heuristic in the A* algorithm should never return a value higher than the minimal cost that will actually be assigned to the path to the destination. That is, if you want the algorithm to produce an optimal path.

That's true, but if you don't mind sacrificing a bit of optimality, then overstating the heuristic can speed up the calculation. This is called "overdo" in 'Fast shortest path computation in time-dependent traffic networks' (Lefebvre and Balmer).


Even cooler, the factor by which you overstate the heuristic bounds how far from optimal your result can be: with a weight of w on the heuristic, the path found costs at most w times the optimum. It's a really nice performance slider to have on problems that don't require optimal results (i.e. most engineering problems).


You're not wrong, that's really cool. Thanks!


Overstating the heuristic was sadly not an option here though, as sacrificing optimality doesn't give the right answer.


It’s interesting to see the implementation of a priority queue, but the title is misleading: the author is speeding up a slow implementation of Dijkstra by a factor of 2700.


Thanks, the title has been changed on HN and the article.


This title is arguably disingenuous because the original algorithm wasn't Dijkstra's.

Dijkstra's algorithm is the algorithm as specified with the runtime characteristics as specified. He wrote a different algorithm which can solve the same problem and works a lot like Dijkstra's but is much slower. Then he finally actually implemented Dijkstra's and the result was the speedup.

The title led me to believe that he found some way to gain big speedups in Dijkstra's algorithm.

This is like me writing an article about prime factorization, doing naive trial division, and then making claims like this when I instead chose to use a sieve.


This should be further up. The title is definitely wrong and misleading. "Improving my shortest path algorithm with Dijkstra's algorithm by a factor of 2700!" would be more accurate.

The author is an undergraduate student so probably just missed that part about Dijkstra's algorithm, but the title should be changed nonetheless.


I used to teach Dijkstra to young students a long time ago. I usually started explaining the "naive" version, because IIRC the priority queue was not in the standard library, or they didn't know how to use it.

Going from the obvious backtracking algorithm to the naive Dijkstra algorithm reduces the complexity from exponential to quadratic; that is a huge difference and is visible even on small boards. Going from quadratic to log-linear is a nice improvement, and it's easier if you have already understood the other part.

Perhaps he was taught the naive version and rediscovered the full version. Perhaps he was taught both and should have been more clear about that. I agree that this is not a new groundbreaking discovery, but I think it's a nice post anyway.


Same, I was excited to read about it as I'm doing Dijkstra maps for a roguelike and that kind of speedup would be an incredible gain. This does go back to the "why is the web so slow?" question from the other day, though: lots of code can naively solve a problem without being efficient, and a lot of development is done on trivial-sized data quantities for which the difference is not as noticeable.


Also, even ignoring what's Dijkstra's and what isn't, the speedup isn't a constant 2700x factor but a reduction in complexity.


I went down a similar path, but only saw about a 770x speedup. Mine went from 36 minutes to 2.8 seconds.

I was working in Idris. I knew better, but figured the data was small enough that I could just do a linear scan for the next node instead of implementing a heap. In the fast version I ended up using a SortedMap (Int,(Int,Int)) () for the queue.

I only added points to the queue as I discovered them (neighbors of points that I visit), so it maxed out at 913 elements. It sounds like OP may have started out with all points in their queue?


In this particular problem, all edge weights are 1..=9, so you can have a O(1) priority queue (fixed-bound bucket queue) if you make use of that fact.

(example: https://github.com/aldanor/aoc-2021/blob/master/src/day15/mo...)
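
A Python sketch of the same idea (Dial's algorithm; the names are made up for illustration, not taken from the linked code): since Dijkstra pops distances in non-decreasing order and every pushed distance is within 9 of the current minimum, ten rotating buckets are enough.

    from collections import deque

    class BucketQueue:
        def __init__(self, max_weight=9):
            self.buckets = [deque() for _ in range(max_weight + 1)]
            self.cur = 0      # distance at the front of the queue
            self.size = 0

        def push(self, dist, node):
            self.buckets[dist % len(self.buckets)].append(node)
            self.size += 1

        def pop(self):
            # Advance to the next non-empty bucket; within the active
            # window each bucket holds exactly one distance value.
            while not self.buckets[self.cur % len(self.buckets)]:
                self.cur += 1
            self.size -= 1
            return self.cur, self.buckets[self.cur % len(self.buckets)].popleft()

Stale entries still have to be skipped when popped, exactly as with a binary heap.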


Actually you don't even need a priority queue. Two lists: you read from one and push to the second; when you are done, you swap them.


I solved it without using Dijkstra's algorithm. I simply used a matrix and updated the minimum distance by looking at the four neighbours over and over again, until no updates were made. (I have no idea how to do this in Haskell.) It is a rather dumb algorithm and doesn't look very efficient, but it does not require you to keep a set of unvisited nodes and their smallest tentative distances.
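
In Python the same fixpoint idea would look something like this (a sketch):

    INF = float("inf")

    def min_total_risk(grid):
        h, w = len(grid), len(grid[0])
        dist = [[INF] * w for _ in range(h)]
        dist[0][0] = 0    # the starting cell's cost is not counted
        changed = True
        while changed:
            changed = False
            for y in range(h):
                for x in range(w):
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            d = dist[ny][nx] + grid[y][x]
                            if d < dist[y][x]:
                                dist[y][x] = d
                                changed = True
        return dist[h - 1][w - 1]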


This approach is pretty well-known. See Bellman-Ford [1] and Floyd-Warshall [2] algorithms.

[1] https://en.wikipedia.org/wiki/Bellman%E2%80%93Ford_algorithm

[2] https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorit...


There is a mistake in the article's statement of the problem. It says

> the possible moves are going right, down or up by one

but the actual problem (https://adventofcode.com/2021/day/15) allows moving in all four directions.

The difference is significant — the naïve attempt described in the article does not work on the actual problem.

The problem as stated in the article is actually quite interesting. The case where the only possible moves are down and right has a standard dynamic programming solution, and a variation of it can be used when moving up is also allowed. The time complexity is O(n^2) where n is the grid size, which is faster than using Dijkstra's algorithm.
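
For reference, the down/right-only case is a single one-pass fill, since each cell depends only on the cell above it and the cell to its left. A sketch:

    INF = float("inf")

    def min_cost_down_right(grid):
        h, w = len(grid), len(grid[0])
        dp = [[INF] * w for _ in range(h)]
        dp[0][0] = 0    # the starting cell's cost is not counted
        for y in range(h):
            for x in range(w):
                if (y, x) == (0, 0):
                    continue
                best = min(dp[y - 1][x] if y > 0 else INF,
                           dp[y][x - 1] if x > 0 else INF)
                dp[y][x] = best + grid[y][x]
        return dp[h - 1][w - 1]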

And regarding the title, I am glad you changed it, but I'd prefer something more similar to the article's new title, such as "Speeding up an Advent of Code solution with Dijkstra's". (At the time of writing, the HN title is "Finding an optimal shortest path algorithm", which doesn't provide much information, and the article's title is "Speeding up my AoC solution by a factor of 2700 with Dijkstra's".)


> The difference is significant — the naïve attempt described in the article does not work on the actual problem.

Thanks! I'll update the article to say this.

> And regarding the title, I am glad you changed it...

Hopefully dang sees this. After a certain amount of time, the title of submissions cannot be changed (I didn't make the most recent changes.)




> I imagined, “what if we were able to just find the next vertex of minimum distance from the source in constant time?”

> [...]

> even if you have the right algorithm, the choice of how you represent auxiliary data in the algorithm matters.

I think this is a very useful insight and the core of the piece. It just goes to show how important the layout of data is with respect to the algorithm using it.


This is true in most cases. It is all about the data and algorithm in concert, rarely about either one. Badly laid out data can kill your cache. Well laid out data with a crappy algorithm may outperform badly laid out data with a fantastic algorithm.

The ability to visualize the impact of your data layout on caching is the fastest pay-off you'll ever have as a programmer; just knowing the sizes of your caches can already make a huge difference.

For instance: if you have a bunch of data and you need to do operations on that data, it is much faster to pull the data in piecemeal and do all the operations on it bit by bit before moving on to the next bit of data, than to do several sequential passes over all the data, storing intermediate results.

The difference in speed can be multiple orders of magnitude.
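
Schematically (f, g and h below are hypothetical stand-ins for real per-element work, and the cache effect is really only visible in a compiled language, but the access pattern is the point):

    f = lambda x: x * 3    # stand-ins for real per-element operations
    g = lambda x: x + 1
    h = lambda x: x * x

    def multi_pass(data):
        # three full sweeps: each intermediate list streams through
        # the cache before it's needed again
        a = [f(x) for x in data]
        b = [g(x) for x in a]
        return [h(x) for x in b]

    def fused(data):
        # one sweep: each element stays hot through f, g and h
        return [h(g(f(x))) for x in data]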


Actually for grid navigation there's no need for a priority queue.

All steps are of length 1, so you can keep two arrays: in the first, all the things that you are currently visiting; into the second, push the neighbors that you'll visit in the next cycle. Once you are done, swap the arrays and start over.
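
A sketch, assuming every step really does cost the same (i.e. plain breadth-first search with a frontier swap):

    def bfs_steps(h, w, start, goal):
        # Read from the current frontier, push newly discovered
        # cells to the next one, then swap each round.
        seen = {start}
        frontier, steps = [start], 0
        while frontier:
            nxt = []
            for (y, x) in frontier:
                if (y, x) == goal:
                    return steps
                for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                    if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen:
                        seen.add((ny, nx))
                        nxt.append((ny, nx))
            frontier, steps = nxt, steps + 1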


All steps of the grid didn't have the same cost in this problem though. In this case you'd need 11 different arrays to rotate around.


It's a grid, but with variable costs for entering each cell.


This is what is great about AOC: there are so many ways to do it. I'm pretty sure I just did the standard DP solution with two passes, once down-and-right and then up-and-left, which would be O(n) in the number of nodes, or O(n^2) in the grid dimension.


> the lowest sum possible that results from traversing a grid from one corner to the other, where the possible moves are going right, down or up by one.

But you just need the sum, not the path. There's no need for a shortest-path strategy.


Priority queues aren't constant time. They're log time.


Yes, though sometimes it is possible to replace a fully general priority queue with a faster structure that exploits the structure of the problem at hand. For example, this problem (AoC day 15) only ever has 11 or fewer unique priorities in the queue at once. That allows a queue implementation to be https://github.com/yxhuvud/aoc21/blob/main/day15.cr#L27-L51 , which is amortized O(1) in both insertion and deletion. This pushed the runtime down another order of magnitude for me, the whole day running in 0.014s.


IntMaps in Haskell actually have time complexity O(min(n, W)) [0], which is O(1) for this problem, for insertion and for getting the minimal element, where n is the number of entries in the map (which is 9 here) and W is the word size in bits. Another way to think of it: it's just a fixed-length array where the entries are lists of vertices.

[0] https://hackage.haskell.org/package/containers-0.6.5.1/docs/...


A Fibonacci heap is a bit special in that insertion is constant time but extraction is logarithmic.

A* is very insertion-heavy, in easy cases by a factor of 5, so using a Fibonacci heap gives a significant boost in performance.



