A ride that takes 10^20k years to complete in Roller Coaster Tycoon 2 [video] (youtube.com)
785 points by Taek on Aug 3, 2020 | 131 comments



Less than 48 hours after that video was published, OpenRCT2 patched their pathfinding algorithm to not prefer some directions more than others: https://www.youtube.com/watch?v=b-5aX2oLOgU.

The GitHub commit message says it was just to mess with MarcelVos, the videomaker.



And this comment both explains why it's the way it is, and proposes an improvement:

For solving mazes, here's the most popular algorithm:

Touch the wall or hedge with the hand nearest to it, left or right. Keep that same hand touching the wall and keep walking. This may take you on a horribly long route, but it will eventually get you out.

[...]

Rather than doing complete randomness when choosing directions, I think it would actually be better to assign a strategy to each player (once they enter the attraction), setting their default direction to "always clockwise" (as it was before this PR) or "always counter-clockwise".

https://github.com/OpenRCT2/OpenRCT2/pull/12546#issuecomment...


It is a popular algorithm, especially because it's stateless, but it only works if the maze is simply connected, so a simple loop can cause it to fail.

A simple mechanism that would also make it more interesting would be for wall following to flip handedness at random intervals. That is, the guest would start off by following the wall with his left hand, and then after some random number of steps, start following with his right hand.
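As a sketch of that mechanism (the adjacency-map maze representation here is my own toy model, not the game's data structures):

```python
import random

# Directions ordered clockwise, so turning left/right is an index shift.
DIRS = [(0, -1), (1, 0), (0, 1), (-1, 0)]  # N, E, S, W

def wall_follow(maze, start, exit_cell, hand='left', flip_prob=0.0,
                max_steps=10_000, rng=random):
    """Wall-follow with the given hand, optionally flipping hands at random."""
    pos, facing = start, 0  # arbitrarily start facing north
    for step in range(max_steps):
        if pos == exit_cell:
            return step
        if rng.random() < flip_prob:
            hand = 'right' if hand == 'left' else 'left'
        # Preference order relative to the current facing:
        # left hand: turn left, straight, right, back; right hand mirrored.
        for t in ((-1, 0, 1, 2) if hand == 'left' else (1, 0, -1, 2)):
            d = (facing + t) % 4
            nxt = (pos[0] + DIRS[d][0], pos[1] + DIRS[d][1])
            if nxt in maze[pos]:
                pos, facing = nxt, d
                break
    return None  # gave up

# A tiny simply connected maze (a corridor with one dead-end branch),
# stored as cell -> set of neighbours reachable through an opening.
maze = {}
for a, b in [((0, 0), (0, 1)), ((0, 1), (0, 2)),
             ((0, 1), (1, 1)), ((1, 1), (2, 1))]:
    maze.setdefault(a, set()).add(b)
    maze.setdefault(b, set()).add(a)

print(wall_follow(maze, (0, 0), (0, 2)))               # plain left-hand rule
print(wall_follow(maze, (0, 0), (0, 2), flip_prob=0.3,
                  rng=random.Random(42)))              # with random hand flips
```

In a simply connected maze both runs terminate; the flipping variant just takes a randomized route.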


> It is a popular algorithm, especially because it's stateless, but it only works if the maze is simply connected, so a simple loop can cause it to fail.

If we’re talking about mazes where the entrance and exit are both on the perimeter wall, those points must be connected by that wall and the algorithm is guaranteed to be successful.


I think GP meant if there's an island in the middle, it won't work following that. But if you start from the entrance, it's fine.


Also, the exit or entrance shouldn't be on that island in the middle. Bridges/tunnels connecting different points will also mess it up.

It's a good algorithm within some very specific constraints.


All of those things are conveniently excluded by the architectural constraint of using live hedges as the construction medium: the maze must be planar, and the entrance and exit need to be on the outside to allow access. You can have secondary destinations in the interior that aren’t reachable this way, though.


The hedge maze needn't be planar though, that's a faulty assumption. See Honey Pot Hill Orchard's hedge maze for a counterexample: https://www.honeypothill.com/

The trick used in the GitHub issue cited by the other reply is a similar idea, but the change in y-axis value is applied to the exit rather than throughout the maze, via a tunnel from the center.


I'm not sure exactly what you're saying here, but if you're saying that the proposed algorithm will work for all mazes in RCT2, I think you're wrong: https://github.com/OpenRCT2/OpenRCT2/pull/12546#issuecomment...


If the entrance were on the island, the exit necessarily would be too; so it would work. (The island would be a corner of the maze: you'd just walk in, then left/right (according to hand) and out.)


How does one get to the island by keeping one hand on a wall? Wouldn't an island just be equivalent to a big empty area?


As I said 'if you start from the entrance, it's fine'. The comment up thread that I interpreted that way (it wasn't my own) was in reply to a comment that sounded like you could start anywhere (e.g. attempt the maze in the right spirit and then give up and put your hand on any wall), not just at the entrance.


Very true and thanks for clarifying.


There is exactly an example of that in the GitHub comment thread.


I remember when I was a kid, going to my dad's office. They had Wolfenstein 3D installed, and that would keep me occupied for the rest of the day. I remember my dad coming by on one occasion and telling me exactly that, to make any progress if I was lost or blocked in one of the mazes.


That would make guests follow some very strict paths, which would be much less realistic and would severely hamper any regular gameplay.


That's a trope I always find irritating because it only applies to one type of maze: those where there are no freestanding paths, walls, or barriers.


I think it will still work in those cases, as long as both entrance and exit are at the perimeter wall? It's the placement of those that determines whether left-touching is a viable option. For instance, it doesn't matter if there's a freestanding wall/loop, as one wouldn't reach it anyway.


Canonical hedge mazes, like the ones in the games, typically don't have any of those.

FWIW, I taught my kids to use "right-following" when exploring caves in Minecraft, and it works pretty well even though its randomly generated "mazes" (caves) can and do loop in on each other, etc.


My Minecraft cave process was "torches to the right on the way in" so it's easy to backtrack


This is why I always design 3-dimensional mazes.


It's always so wholesome when devs interact positively with 'exploit hunter' types. I wonder why it feels like that.


Probably something to do with the "status games" definition of friendship from Impro. Friendly ribbing in a faux-antagonistic relationship may also come off as really healthy-looking, comfortable, and pleasant, for similar reasons, namely that you can't do that safely with people you aren't on good and (at least in some confined sense) equal terms with. So from the outside it's like vicariously being part of the everyday intimacy of a friendship.


I like this perspective. It acknowledges that they are equal at least in that they both care about the game and express that by paying attention to all of its detail, even small defects.


Maybe because "exploit hunter" types are in fact doing QA for the developer?


It actually is funny.


Indeed!


I was kind of hoping they'd change it to have an easter egg: if it determines it's likely in a left-only hedge maze, the pathfinding cheats and just walks to the exit.


According to Youtube recommending a newer video by the same author, you can still build impossible mazes:

https://www.youtube.com/watch?v=b-5aX2oLOgU


That video is not correct: Marcel incorrectly assumes the relationship is still exponential, but it's an unbiased random walk now, which has an expected quadratic solve time.
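To see where the quadratic comes from, here is a toy model (assumed: an unbiased ±1 walk with a reflecting barrier at the entrance, which is not exactly the game's movement rules). The expected time to first reach position n works out to n²:

```python
def hitting_time(n, iters=100_000):
    # Expected steps E_k to reach position n from k, for an unbiased +/-1 walk
    # that reflects at 0: E_k = 1 + (E_{k-1} + E_{k+1})/2, with E_0 = 1 + E_1
    # and E_n = 0. Solved by simple value iteration; the closed form is n^2.
    E = [0.0] * (n + 1)
    for _ in range(iters):
        new = [0.0] * (n + 1)
        new[0] = 1 + E[1]
        for k in range(1, n):
            new[k] = 1 + (E[k - 1] + E[k + 1]) / 2
        E = new
    return E[0]

print([hitting_time(n) for n in (2, 5, 10)])  # ~4, ~25, ~100: quadratic in n
```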


For a proper fix they would probably want the guest to remember places they had been previously and bias against those places.


Real-life mazes are sorta known for not working like that: https://youtu.be/u9xvrbfyKGQ?t=425


Well, you'd generally remember the last few turns you'd taken. The longer the loop, the harder it is to detect. So the algorithm could do something similar, and it would also help keep the memory bound sane.


There's a suggested fix for just this on the Github discussion. https://github.com/OpenRCT2/OpenRCT2/pull/12546#issuecomment...

According to their quick test, this will definitely result in guests solving mazes a lot faster.


This approach appears to do something different, which is allow the guests to cheat - it relies on information a real maze solver would not have, including how close a tile is to the exit.


Not going to break immersion for me if guests are casually cheating. It's nice to see:

1. guests wandering around randomly - every part of the maze is regularly visited if the maze is busy

2. guests being not hopeless at actually solving mazes

If the guest makes progress opaquely by sometimes cheating, I'm really not going to notice as a player.

On the other hand, some of the other solutions presented here (like using the left hand rule) would make the mazes a lot less interesting to watch.


Oh, I agree with that, I especially wouldn't want to see a left hand rule solution, but my point was just that it's not just

> remember places they had been previously and bias against those places

Which is something a real maze solving algorithm could do. The linked solution might do that, but mainly it works by cheating (effectively, implementing "gravity" pulling guests towards squares that it knows are closer to the end). That might be just fine for the purposes of making a video game, but it doesn't seem like the sort of approach you were talking about.


That change still leaves a small bias in the case where there are 3 directions to choose from. The effect is presumably small (I don't know how many bits are produced by scenario_rand()), but it would be amusing to try to figure out the maximum maze bias that it's possible to extract from this.


Oh, I didn't realize this was OpenRCT2, rather than the original RCT2. Do we know if the original game has this same oddity?


I didn’t even know OpenRCT2 was a thing. I wish I’d discovered that at the start of the pandemic!

I loved Transport Tycoon and have clocked up nearly as many hours on OpenTTD as I had on the original DOS game. I did also have RCT but sadly never had the time to play it as much when it was new.


According to the next video from the YouTuber, yes it does.


It would be interesting maybe to design a system where people deviate a random amount from the optimal path.


In the video, Marcel Vos essentially runs a Monte Carlo simulation to try to figure out the growth rate of adding more elements to the maze. Though it is true to life, it's also very computationally inefficient.

Based on the logic presented in the video, I wrote a program that does a more efficient simulation, so that we could get a better estimate of the growth rate. Note that my simulation assumes a guest always takes exactly the same amount of time to make one step, and there may be incorrect edge cases in my simulation.

In my simulation, I attempt each size of maze 50,000 times and record the average number of steps it took to complete a maze of each size 1-25.

https://play.golang.org/p/FglwpqbsPPr

Results:

<output truncated for ease of viewing>

20 size: 9966 steps (1.397 growth)

21 size: 14057 steps (1.410 growth)

22 size: 19738 steps (1.404 growth)

23 size: 27372 steps (1.387 growth)

24 size: 38660 steps (1.412 growth)

25 size: 54266 steps (1.404 growth)

Marcel estimated a growth rate of 1.424 per added indent, and my larger sample simulation estimates a growth rate of about 1.4, slightly smaller but very much in a similar ballpark.


This is probably a solvable number, and these 1.4 numbers are suspiciously close to the square root of two, 1.41421...


That was my thought as well, but I couldn't justify it being root 2 so I went with the number I got.


For the simplest case you have a system of transitions

    Start → Mid(1, ↑)
      prob: 1, time: 1?
    Mid(1,↑) → Start
      prob: ⅜, time: 3
    Mid(1,↑) → End
      prob: ¼, time: 1
      prob: ⅜, time: 3
(not 100% sure about those times; here they are in units of the timesteps that a person spends going from one square to another, so the 3 is “go to the left square, go to the middle square but coming from the left, go from here up/down having the normal orientation again”).

There seem to be two ways to do this. First you could determine the average time to leave the park by pumping 1 person into the park every timestep until it reached a steady-state where one person was leaving the park, then just count how many people there are in the park. Assuming steady state, you do not need to use travel times to calculate that exactly 1.6 people occupy Start and 1.6 occupy Mid, with 0.6 making the journey back and 0.4 making the fast journey out and 0.6 making the slow journey out. These two slow journeys taking time 3 can be viewed as having (3–1)•0.6 occupation, or 1.2 people in each, so the total is 1.2 + 1.2 + 1.6 + 1.6 = 5.6 people in the park, so this should be the number of timesteps.

Or if you prefer a straight calculation it is S where

    S = 1 + T
    T = ⅜(3+S) + ¼(1) + ⅜(3)
S being the average time from Start → End and T being the average time from Mid → End. This can be solved similarly to find S = 28/5 = 5.6 timesteps.
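A quick mechanical check of that substitution, using exact rationals:

```python
from fractions import Fraction as F

# T = (3/8)(3 + S) + (1/4)(1) + (3/8)(3) and S = 1 + T.
# Substituting: S = 1 + (3/8)S + 20/8, so S(1 - 3/8) = 1 + 20/8.
t_const = F(3, 8) * 3 + F(1, 4) * 1 + F(3, 8) * 3  # S-independent part of T
S = (1 + t_const) / (1 - F(3, 8))
print(S)  # 28/5, i.e. 5.6 timesteps
```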

I think the former approach is going to be theoretically easier to understand when you are asking, “I want to convert End to Mid(n+1, ↑) and introduce a new End and also a node Mid(n, ↓), how does adding these three nodes change the system?”

In fact I think to solve it you will want to always calculate three different flows. I have only given you one of them, a steady state where one person arrives in Start and then they leave out of End. The other one is that they arrive in Mid(n–1, ↑) at a steady rate and then leave out of End at that same rate. And finally there is the flow where they arrive in Mid(n–1, ↓) at a steady rate and then leave at the same rate. [I think Mid(0, _) is just Start here.]

If I have those three populations/average travel times then I have a way to add those 3 nodes to the system at each stage and this gives me a recurrence which can calculate the thing exactly.

I would need to think a bit about how to program all of that.


Seems like you could come up with a recursive formula to compute the expected number of steps to reach hedge N. Along the lines of how you compute the expected number of coin flips to get K heads in a row. I'm too lazy to figure it out right now though, so there may be some nuance I'm missing.
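For reference, the coin-flip analogue mentioned here does have a clean recurrence along exactly those lines (a sketch):

```python
def expected_flips(k):
    # E_K = expected fair-coin flips to see K heads in a row.
    # Recurrence: E_K = E_{K-1} + 1 + (1/2) E_K (a tail restarts the run),
    # which rearranges to E_K = 2 (E_{K-1} + 1), closed form 2^(K+1) - 2.
    e = 0.0
    for _ in range(k):
        e = 2 * (e + 1)
    return e

print([expected_flips(k) for k in (1, 2, 3)])  # [2.0, 6.0, 14.0]
```

The maze version would need a similar recurrence per indent, with the turn-around probabilities in place of the fair coin.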


Isn’t this just a Markov process? Shouldn’t we be able to calculate the exact waiting time?


Yep, this is just a standard Markov process which should be easily solvable.


Instead of running random simulations you could deterministically transport probabilities around an array, (or just set up a linear system whose solution gives the expected time).


Doing a more sophisticated simulation, I think it's trending towards 1.4 exactly (42 to 43 is 1.400003). Idk how to prove it though.


This assumes the code works exactly as described in the video. For old games, this has historically been false; it’s very, very hard to reimplement most game mechanics faithfully, often due to small corner cases that matter. Many gameplay mechanics are an emergent effect of bugs.

The small discrepancy of 1.424 vs your 1.41 is worth digging into. It might have a surprising reason. Or you might be right. Games are simulations, and simulations have emergent behaviors when you throw in a time dimension.


The discrepancy is fully explained by variance. Some of the results that went into computing 1.424 had fewer than 20 samples, which is not enough to get a precise answer for such a high-variance task.

That doesn't necessarily mean that my simulation is perfect (and another commenter has made an improvement on it!), but it does mean that this evidence alone is not enough to suspect the simulation.


Small nitpick with your code, you define a step to be both a forward/backward movement and going into the corner and coming back (which I would define as 2 steps because the person has to spend twice as long completing that). Also, you didn't use floats for the expected times. I have made both changes and added them here:

https://play.golang.org/p/dnCuu05Dupc


Thanks for the changes! Interestingly enough, it seems to have had very little / no effect on the growth rate. Nice to have the precision in the outputs.

Run with 5 million iterations per size:

15 size: 2514.4287536 steps (1.413 growth)

16 size: 3543.3930332 steps (1.409 growth)

17 size: 4986.7099864 steps (1.407 growth)

18 size: 7008.892906 steps (1.406 growth)

19 size: 9834.1058896 steps (1.403 growth)

20 size: 13806.3093036 steps (1.404 growth)

Also seems like it converges to a value below sqrt2, which doesn't surprise me. The function that drives the behavior just doesn't feel like it would end up on an irrational number for a growth rate.


I've seen this guy's videos before, and I'd imagine they'd have a fair bit of appeal to HN users who are familiar with Roller Coaster Tycoon. He manages to really get into the details and determine why things work the way they do.

If you are nostalgic for RCT2, I'd suggest trying Parkitect on Steam. It feels like an updated version of RCT2. It sorta lets you play the game with rose tinted glasses on.

There are a few things I think it does better:

* A lot of modernization stuff: better OS support, keyboard layout support, clamping of input for multi-monitor users, steam workshop support, fewer obvious pathfinding issues, removing a few abusable strategies, etc.

* The hauler system. Shops no longer conjure product from the ether. A new worker type, haulers, have to deliver it to the back of the store.

* Janitors need to drop off garbage in trash chutes.

* Guests don't like seeing park infrastructure (eg haulers, employee paths, break rooms). It's not hard to hide this, if you don't feel like putting effort in.

* An option to copy a decoration item under the cursor. It is nice if you want to design a new building to match an existing one.

* Live previews of how your ride will work as you are building it.


OpenRCT2, the open-source project that improves RCT2, has some of these features as well. It has a scenery picker and it also has a ghost train feature for rides under construction. It also makes it run much smoother on modern systems and introduces a keyboard shortcut for almost everything, although vanilla RCT2 also had quite a few already.


Also, there's a basic line-of-sight system for scenery ratings, with lower ratings for haulers, employee-only paths, and employee-only buildings, so you actually have a reason to build walls and fake scenery to block views of the 'backstage' areas that haulers pass through.


Planet Coaster is also great. It's more of a modern take on the theme park sim idea than a faithful recreation of the original. Kind of the Cities: Skylines to RCT's SimCity.


The fact that this is done on a computer with a PRNG is significant here (unless the game sources entropy from the environment, which is unlikely).

The PRNG has finite state. This means that it must be cyclical. This cycle of states will manifest either in a guest who loops infinitely without progressing into the maze, or in a guest who progresses a fixed, finite number of steps into the maze before the state of the PRNG cycles.

Therefore the guest either never completes the maze, or the guest completes the maze in time O(n * 2^k) where k is the number of bits of state in the PRNG and n is the number of inlets.
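The pigeonhole argument can be seen with a toy generator (an 8-bit LCG with made-up parameters, nothing like the game's actual PRNG):

```python
def cycle_length(seed, modulus=256):
    # Iterate a tiny LCG until a state repeats; with only `modulus` possible
    # states, the sequence must enter a loop within `modulus` steps.
    seen = {}
    s, i = seed, 0
    while s not in seen:
        seen[s] = i
        s = (5 * s + 3) % modulus  # hypothetical parameters
        i += 1
    return i - seen[s]  # length of the loop the state falls into

print(cycle_length(1))  # some value <= 256, by pigeonhole
```

Any decisions driven by such a generator (like a guest's turn choices) repeat with the same period.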


I've not looked at the source, but I doubt that each guest has a unique PRNG. It's more likely that there's one PRNG for the whole game, and so the behavior will depend as much on the number of guests in the park as anything else.


This is true, and it'll extend the size of the loop past the naive "once around the PRNG". However, 10^20k years is so much larger that it's safe to guess that even if you do your best to extend the size of the loop based on the behavior I saw there (the park puts in a certain number of people which stays fairly constant), you still end up with too few states reachable to ever prevent looping.

Assuming a roughly 4 Hz update rate, 10^20k years is around 10^2.5 trillion iterations.

The PRNG has fewer than 2048 bits (being very conservative), and with (let's round way up) 100 patrons exploring the same, let's say, 256 slots or so [1], you can encode an additional byte per patron, but you're still a long, long way from the size required. That gets you a very generous 2^4608 (approx 10^1387) reachable states, which isn't anywhere near the number of random iterations we can guess this will take. So it's safe to say the simulation as a whole will still loop, even with the state beyond the PRNG itself.

Note you have room to add a looooooooot more state in to my estimates before it's even remotely close. There's nowhere near enough bits available to encode things and avoid loops.

[1]: The number they can explore is not relevant, it's the number they will explore. I'm also naively assuming each of the 256 is equally likely, which is false, but is also a best case for this analysis. In reality the patrons tend to encode much less than that as they are much more likely to be towards the front of the maze.


That's a pretty funny thought.

Even a Mersenne twister won't be enough to represent 10^20,000+ states that'd be needed to complete just one maze run.

At a minimum, you'd need a PRNG with 100,000+ bits of state... or a 12.5kB+ state machine for your PRNG, before you had a decent chance of one NPC finishing the maze. I don't think any modern PRNG has such state (most states cap out at 64 or 128 bits these days, because there's no feasible simulation that'd use that many random numbers; the Mersenne Twister was probably the last "old style" PRNG algorithm that tried to make a huge cycle length with a large state).


Could be solved by re-seeding the PRNG once in a while. Probably after N samples were taken.


Or the game could just use the OS equivalent of urandom, which adds environmental randomness to the pool when available. I'm pretty sure there's something similar on Windows...


The way to get a closed form solution for the expected time is to note that there are 2N + 1 states: one state for each notch in each direction, plus one "final" absorbing state for the end of the maze. You can write a matrix A where the transition probability between states i and j is given by Aij. Then A^n gives the distribution over states after n steps.

It's been a while since I've done this, but I think the solution is something like the last index in the vector

   E[steps] = (I - A)^-1 * (1, 1, 1, 1 ...)
(I - A)^-1, with A restricted to the transient states, is called the "fundamental matrix" of the Markov chain described by Aij.

EDIT I wrote some numpy to verify that this solution matches the empirical growth rate of 1.4 found in the video: https://gist.github.com/mgraczyk/7917a3322f65fe60fa333da7936...


> Then A^n gives the distribution over states after n steps.

Because it's an A^n kind of problem, is this a good time to use eigenvectors?

I'm kinda bad with my Linear Algebra. Just kinda checking myself if I'm following the concept.


A^n can be computed many ways, and it is not necessary to actually do any matrix multiplications. I think what you're referring to is diagonalizing the matrix A = PDP^-1, then computing the power via the diagonal matrix: A^n = (PDP^-1)^n = P D^n P^-1.

I just let numpy figure it out for me, but if you want an analytical solution something like that may be necessary.


Fans will recognize that RCT is a descendant of Transport Tycoon, which was created by Chris Sawyer, who ported Frontier Elite 2 from m68k assembly to x86 assembly (and inserted an advert, visible in some spaceports, for “Chris Sawyer’s Transport Game”).

Frontier used a “Scaled Word” datatype to represent the vast interstellar scale of the game, and according to [0] this had a range of 2^65536 ≈ 10^20k.

So there’s something relevant to compare the size of the maze solving time to!

[0] http://jongware.com/galaxy4.html


This comment reveals the insight that 10^20k is equivalent to the number of different possible 64kB files


8kB files, actually.

2^65536 counts all 65536-bit strings, and 65536 bits is 64 kilobits, not kilobytes.


If I’ve calculated correctly, the expected number of steps to pass n indents is

75((7/5)^n − 1)/4 − 7n/2 for left indents,

7n/2 − 29(1 − (5/7)^n)/4 for right indents,

2n²/3 + 2n for unbiased indents.

The constants vary depending on exactly how exactly you count steps and account for edge effects. (I’ve assumed that each indent starts at the middle of a 1 unit section of the main path, and goes 1 unit to the side and 1 unit back.) But the asymptotics are definitely Θ((7/5)^n), Θ(n), and Θ(n²), respectively.


Short version: Roller Coaster Tycoon has a random walk type maze solver, and you can create a maze which takes O(2^N) time to solve.


Is it really O(2^N)? If so, how'd you determine that so quickly? I'd like that intuition.

I do ok at big-O reasoning, but in this case turning probabilities into big-O eludes me.

In the video, there's an opposite maze which biases the guests to walk towards the exit, making it much easier to solve. What's the big-O of that version?


Let's pose the problem mathematically. Initial state is (0, forwards).

State (0, *) transitions to (1, forwards)

For k ≥ 1, state (k, forwards) transitions to (k+1, forwards) with probability 5/8 and to (k-1, backwards) with probability 3/8.

For k ≥ 1, state (k, backwards) transitions to (k+1, forwards) with probability 1/8 and to (k-1, backwards) with probability 7/8.

My intuition is that this very much resembles a biased walk so it should take exponential time. But there is the "momentum" to take into account.

Edit: It looks like we can use https://en.wikipedia.org/wiki/Absorbing_Markov_chain#Expecte...
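Under the assumption that those transition probabilities are right, the expected solve time falls out of the absorbing-chain machinery directly (a numpy sketch; the state layout is my own):

```python
import numpy as np

def expected_steps(n, p_fwd=5/8, p_back=7/8):
    # Transient states: the entrance, plus (k, fwd) and (k, back) for
    # k = 1..n-1; reaching k = n (the exit) is absorbing, so transitions
    # into it simply leave the corresponding row of Q sub-stochastic.
    idx = {('start', 0): 0}
    for k in range(1, n):
        for d in ('fwd', 'back'):
            idx[(d, k)] = len(idx)
    m = len(idx)
    Q = np.zeros((m, m))

    def add(src, k_dst, d_dst, p):
        if k_dst < n:
            Q[idx[src], idx[(d_dst, k_dst) if k_dst else ('start', 0)]] += p

    add(('start', 0), 1, 'fwd', 1.0)
    for k in range(1, n):
        add(('fwd', k), k + 1, 'fwd', p_fwd)        # keep going forward
        add(('fwd', k), k - 1, 'back', 1 - p_fwd)   # turn around
        add(('back', k), k - 1, 'back', p_back)     # keep going backward
        add(('back', k), k + 1, 'fwd', 1 - p_back)  # turn around again
    # Expected absorption times: t = (I - Q)^-1 * 1.
    t = np.linalg.solve(np.eye(m) - Q, np.ones(m))
    return t[idx[('start', 0)]]

print(expected_steps(40) / expected_steps(39))  # growth factor, close to 1.4
```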


Came here for this - I knew there must be an analytical solution involving Markov chains.


I'm pretty sure this can be modeled with markov chains.

Your vertices would be (x, y, f), f being facing.

Edges are along the lines of (x, y, 0) - (x+1, y, {0,1,2,3}\1); (x, y, 1) - (x-1, y, {0,1,2,3}\0); (x, y, 2) - (x, y+1, {0,1,2,3}\3); (x, y, 3) - (x, y-1, {0,1,2,3}\2).

Probabilities are modeled in the video and need to be rotated 4 times for the four facings.

After that, I just don't recall my eigenvector calculation anymore. It feels like a good markov exercise, though.


I'm also absolutely not an expert on this (M.Sc. CS).

I would argue using Big O here is wrong, and Landau symbols are not well defined on Markov chains/probabilistic Turing machines. The point of Landau symbols is to give asymptotic bounds of functions with N approaching a certain value (usually positive infinity). My intuition says that the runtime of a probabilistic Turing machine isn't necessarily a function, though (i.e. a deterministic association between every N and a time(N)).

Google wasn't really helpful on complexity theory of Markov chains and based on the research papers that pop up it seems far enough away that I would avoid using Big O in this scenario.

Probabilistic turing machines even seem to have their own complexity classes (https://en.m.wikipedia.org/wiki/Probabilistic_Turing_machine).


Sure you can use Big O, just look at it as the expected runtime. For example, QuickSort is O(n^2) in the worst case, but expected O(n log n).


For probabilistic algorithms, we can look at the worst-case-w.h.p. runtime: that is, a randomized quicksort, although technically still having an "absolute" worst case of O(n^2), is actually considered to have a worst case of O(n log n) with probability 1 - 1/(n^c) when bounded by Chernoff, as shown here: https://web.cs.hacettepe.edu.tr/~ozkahya/classes/bbm402/Lect...


Note that they likely messed up slightly: it'll be more or less exponential in some base, but possibly not 2. However, big O notation does require you to get the base correct (or at least provide an upper bound); you can't ignore it like you can ignore a multiplicative or additive constant.


I think the key insights are:

- only the most significant factor(s) (in terms of growth) matter for big O (i.e. if you have a linear factor, the constant-time factor is ignored)

- You have to determine whether you are talking worst case or average case or best case. Assuming average case I’d guess the opposite maze is O(n) because the time expands linearly with number of segments.

(Disclaimer: I am not formally educated in computer science, so admittedly I am not up to snuff on the mathematical side of things. Hopefully I am close enough here.)

edit: Oops, I failed to actually cover any time complexity calculation at all. I do not know if my approach is correct, but here's how I view the original:

    def walk_path(n):
        // probability of _not_ recursing is (1/n^2)
        // probability of _recursing_ is (1 - 1/n^2)
        // likelihood of recursing doubles with n
        // therefore, the average case of this loop is O(2^n)
        for (i : 0 -> n)
            if (random() < 0.5)
                walk(); // = O(1)
            else
                // Ignoring the actual walking on the branch, we just pretend we turn around directly.
                for (j : i -> 0) // = O(n/2)
                    walk();
                return walk_path(n); 
and the opposite is straightforward, since the branches are 'constant time', so it's more like a straight loop.


In this instance, you're only concerned with the average case.

Worst case is infinite: you just always take the path back in the direction of the entrance, and never even get past the first inlet.

Best case is O(n): you always take the path forward.


Yep, sorry, it quickly became apparent after rereading this that the original analysis was not worst-case. I have revised my comment to also include how I view calculating the actual time complexity since I didn't actually do that initially.


I'm not 100% confident in my explanation below, but maybe it's something like this, if we don't have prior knowledge of the asymptotic behavior of random walks:

The measured values are empirically exponential with a factor of 1.424 per indentation. If we assume that as an upper bound (which I don't think is quite right), then we have O(1.424^N) where N is the number of indentations.

Any exponent can be written in terms of any other base by multiplying by a constant. Since we often use base 2, we can choose to rewrite it in base 2 as O(2^(K*N)), where K is approximately 0.51.

Constant terms can be ignored in big O notation, so we can just drop the K to get O(2^N) (I am less sure about dropping a constant within the exponent).


You can't drop constant multiplicative terms in an exponent: O(2^n) is not equivalent to O(2^(2n)), as O(4^n) ≠ O(2^n) (the first one grows a lot faster).


You cannot drop constant terms from exponents in big-O notation.


Note that calling it 2^O(N) (or Omega) would be more accurate, unless you stumbled upon a really lucky coincidence.


Interesting, I'd never seen this kind of thing.

Related: https://en.wikipedia.org/wiki/Big_O_notation#Multiple_uses


Are there any functions f(n) ∈ O(n) such that 2^(f(n)) ∉ O(2^(n))? If not, then either statement is accurate.


Yes, for example 2^(2n) ∉ O(2^(n)), as is any function with a different linear coefficient in the exponent. Big-O notation represents a bound up to a constant factor (by modifying the exponent's multiplier you get an exponential factor).


Thanks, you're right of course.


It's actually worse than a pure random walk - it had a clockwise bias which could be exploited to create travel asymmetries.

The devs have since updated the game to remove the clockwise bias.


Why does he take the fourth root to find the multiplier per indent? That was the only part where he lost me math-wise.

I wanted to ask if RCT2 was Turing complete, but looks like the same person made a video demonstrating this: https://www.youtube.com/watch?v=RQGa0DPwes0

Would love to see the Game of Life implemented in it!

As a fan of RCT2 in my youth, absolutely loved this. This is some of my favorite type of content to find on HN.


The multiplier of 4.11 was per tile of length for the test mazes, and one tile of length is actually 2 more tiles of maze. Every tile has 2 indents, so one tile of length has 2*2 = 4 indents. Therefore we need to take the fourth root to go from the multiplier per tile of length to the multiplier per indent.
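Numerically (4.11 being the per-tile-of-length multiplier measured in the video):

```python
# One tile of maze length spans 2 tiles of maze, each contributing 2 indents,
# so the per-tile-of-length multiplier is the per-indent multiplier ^ 4.
per_tile_of_length = 4.11
per_indent = per_tile_of_length ** (1 / 4)
print(round(per_indent, 3))  # 1.424
```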


Thanks! Great work here :)


Making the maze longer by just one tile increases the number of indents by four. For his calculation he used the number of indents (it is valid to do it as a function of the number of tiles instead, if you want), but the conversion rate (4:1) ends up in the exponent because it's an exponential process.


"If you want a new way to torture your guests, this is an excellent method".


Hell is on Earth, and it's disguised as a children's park-building game.


I WANT TO GET OFF MR BONES WILD RIDE


I wonder what simple improvements one could make to the pathfinding algorithm to make it perform better.

I appreciate that a "proper" pathfinding algorithm like A* would not be feasible given the number of guests but maybe taking into account what the guest sees ahead of them would already make it quite a bit more realistic?

You could weight the probabilities for each direction by the square root of the length of the path, or something like that. And if you can see the entrance set p=0; if you can see the exit set p=1.

I'm sure it would still be easy to fool though - just build very long indents and make sure the entry/exit are not visible until the last moment.
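A rough sketch of that weighting scheme (all names here are hypothetical; `options` bundles what the guest can see down each open direction):

```python
import math
import random

def choose_direction(options):
    """options: list of (direction, visible_path_length, sees_entrance, sees_exit)."""
    # Seeing the exit wins outright (p = 1).
    for d, _, _, sees_exit in options:
        if sees_exit:
            return d
    # Directions where the entrance is visible are ruled out (p = 0);
    # the rest are weighted by sqrt(visible path length).
    viable = [(d, math.sqrt(n)) for d, n, sees_entrance, _ in options
              if not sees_entrance]
    if not viable:  # boxed in: fall back to a uniform choice
        return random.choice([opt[0] for opt in options])
    dirs, weights = zip(*viable)
    return random.choices(dirs, weights=weights)[0]
```

So a guest always walks to a visible exit, never walks straight back toward a visible entrance, and otherwise prefers long corridors only mildly, exactly the kind of rule long fake indents would still fool.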


With a one-off precomputation you can do a lot. If you store the distance to the exit in each cell you can introduce a slight bias towards the exit. Easy to implement, but not very realistic since it assumes things the guests cannot know.

More realistic would be observing that the entry and exit are on the outer edge and the maze is simply connected, so a trivial left-hand rule [1] will solve the maze in linear time. On its own that's pretty boring, so you could add some randomness to make it interesting. But would it still have linear expected solving time if you do that?

[1]: https://en.wikipedia.org/wiki/Maze_solving_algorithm#Wall_fo...
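For reference, the left-hand rule itself is only a few lines on a grid. A minimal sketch (the maze-as-set-of-open-cells representation is invented for illustration, not taken from the game):

```python
# Left-hand wall following: try left of the current heading first, then
# straight, then right, and finally back. Only guaranteed to reach the exit
# when the maze is simply connected, as noted above.
DIRS = [(0, -1), (1, 0), (0, 1), (-1, 0)]  # N, E, S, W (y grows downward)

def wall_follow(open_cells, start, exit_cell, facing=1, max_steps=10_000):
    pos, steps = start, 0
    while pos != exit_cell and steps < max_steps:
        for turn in (-1, 0, 1, 2):  # left, straight, right, back
            d = (facing + turn) % 4
            nxt = (pos[0] + DIRS[d][0], pos[1] + DIRS[d][1])
            if nxt in open_cells:
                pos, facing = nxt, d
                break
        steps += 1
    return pos == exit_cell, steps
```

On an L-shaped corridor like {(0,0), (1,0), (2,0), (2,1), (2,2)} it walks straight to the exit; a loop in the maze is what can trap it.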


Might also give the guests a slight tendency to get away from each other so they tend to spread all over the maze. That might look nicer to the player. Not too strong though otherwise every guest stays in their own dead end.


Tracking what tiles the guest has been to and disallowing travel in a direction that does not lead to any unvisited tiles while keeping behavior otherwise random would be my preference.


The most efficient way is probably to "cheat" by having all the guests share the path data. Perhaps when a guest enters the maze, she only chooses a winning path 10% of the time, but as she spends more time in the maze she chooses winning paths more frequently, until she is nearly guaranteed to walk right to the exit.


There should be a way to do a precomputation, starting at the last square before the exit and putting a 'signpost' value that points towards the exit. Imagine planting a sign that points to the exit in the square adjacent to the exit. Then iterate over all squares reachable from that square and have them point to the square next to the exit and repeat, building flows that all point to some path to the exit until all connected maze tiles have been reached.

If you still want randomness, give the people a chance to follow the signpost or move randomly, but bias it towards following the signs the longer they've been in the maze.

This should scale pretty well, at the expense of requiring a one-time recomputation whenever the maze is altered.
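That precomputation is essentially a breadth-first flood fill outward from the exit. A minimal sketch (the set-of-open-cells maze representation is made up for illustration):

```python
from collections import deque

def build_signposts(open_cells, exit_cell):
    """Map each reachable cell to the neighbouring cell one step closer to the exit."""
    signpost = {exit_cell: exit_cell}
    queue = deque([exit_cell])
    while queue:
        x, y = queue.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in open_cells and nxt not in signpost:
                signpost[nxt] = (x, y)  # from nxt, step towards (x, y)
                queue.append(nxt)
    return signpost
```

Each guest would then follow `signpost[pos]` with some probability that grows with time spent in the maze, and the table only needs rebuilding when the maze is edited.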


Well, the classic algorithm for solving a planar maze with single entry/exit (on the outside) is to always follow the right wall (or always the left wall). That's what the algorithm in the game was apparently originally modelling, with some randomness thrown in so that not all guests do exactly the same thing.


I assume they are simply talking about fixing the bias.

Instead of randomly picking from the 4 sides, pick one from the set of legal moves.
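A toy illustration of the difference (this is a guess at the shape of the old bias, not the actual OpenRCT2 code):

```python
import random

DIRECTIONS = ["N", "E", "S", "W"]

def biased_pick(legal):
    # Biased: pick a random compass direction, then scan clockwise for the
    # first legal one. Directions sitting just after a run of walls soak up
    # the probability mass of those walls.
    start = random.randrange(4)
    for i in range(4):
        d = DIRECTIONS[(start + i) % 4]
        if d in legal:
            return d

def unbiased_pick(legal):
    # Unbiased: sample uniformly from the legal set directly.
    return random.choice(sorted(legal))
```

With legal = {"N", "E"}, biased_pick returns N about 3/4 of the time (starting at N, S or W all scan around to N), while unbiased_pick is 50/50.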


A maze to surpass Mr. Bones' Wild Ride


"I want off Mr. Bones' Wild Ride."


Marcel has another video with a deterministic ride that lasts much longer than Mr Bones' Wild Ride. Highly recommended! https://m.youtube.com/watch?v=QotjNlDr0WU


A precise analysis of how many steps it takes to solve this maze. I am assuming that each step takes exactly as long as any other, and that the character instantaneously turns at the start of its step. Analyzing the video this seems about right.

For each 'dead end unit' there are four states the character can be in after having just taken a step, s = {forward, backward, inside, facewall}. Let us look at the Markov chain transition probabilities if we are in state (n,s).

  - (n,forward) has a 3/4 chance of going to (n,inside) and 1/4 chance of going to (n+1,forward).
  - (n,backward) has a 1/4 chance of going to (n,inside) and a 3/4 chance of going to (n-1,backward).  
  - (n,inside) has a 100% chance to go to (n,facewall).  
  - (n,facewall) has a 50/50 chance of going to (n+1,forward) and (n-1,backward).
Assuming that big N indicates how many 'dead end units' we have, there are two exceptions:

  - (0,s) always goes to (1,forward) for any s.  
  - (N+1,forward) always goes to itself, it is an absorbing state.
Without going into details too much, we can now apply some results from Markov chain theory (see here for more: https://www.dartmouth.edu/~chance/teaching_aids/books_articl...).

If we build up the transition matrix of the above, ensuring that any absorbing state is at the end (we only have one absorbing state), we can take matrix Q as the rows/columns that don't interact with the absorbing state. Then the sum of the first row of (I - Q)^(-1) gives us our mean absorption times.

The code: https://gist.github.com/orlp/0c5fc7264f02e3d211d42da643163bb...

The results (in format N, expected time to solve, ratio with last):

    1 5.6 5.6
    2 13.439999999999998 2.4
    3 25.816000000000003 1.9208333333333338
    4 44.542400000000015 1.725379609544469
    5 72.15936000000002 1.6200150867488055
    6 112.22310400000002 1.555212019618799
    7 169.71234560000005 1.5122763455197248
    8 251.59728384000005 1.4824925255172479
    9 367.63619737600015 1.4612089278745692
    10 531.4906763264004 1.445697350043086
    11 762.2869468569604 1.4342433100158898
    12 1086.8017255997447 1.4257121023530763
    13 1542.5224158396427 1.4193227518003906
    14 2181.9313821754995 1.4145216690337734
    15 3078.503935045701 1.4109077674002157
    16 4335.105509063981 1.408185794311686
    17 6095.747712689574 1.406135952157193
    18 8562.046797765406 1.404593365952747
    19 12016.265516871565 1.4034337583867995
    20 16853.571723620193 1.4025631923626152
    21 23627.200413068273 1.4019105742407632
    22 33111.68057829557 1.4014220897699503
    23 46391.35280961381 1.401057028800373
    24 64984.29393345935 1.400784628983539
    25 91015.8115068431 1.4005816790136818
    26 127461.33610958033 1.400430694396402
    27 178486.4705534125 1.4003185279649442
    28 249923.05877477748 1.4002353119531679
    29 349935.6822846886 1.4001736534444356
    30 489954.75519856386 1.4001280235262301
    31 685982.8572779896 1.4000942944210868
    32 960423.6001891854 1.400069389489103
    33 1344642.0402648596 1.4000510191544548
    34 1882549.2563708038 1.400037482094484
    35 2635620.7589191245 1.400027515880301
    36 3689922.2624867745 1.4000201849980958
    37 5165945.767481486 1.4000147970596988
    38 7232380.0744740805 1.4000108402222013
    39 10125389.504263714 1.4000079365298022
    40 14175604.1059692 1.4000058071840076
    41 19845905.94835688 1.4000042467325944
    42 27784329.92769964 1.4000031039147403
    43 38898124.898779504 1.4000022674651564
    44 54457439.25829131 1.4000016556067978
    45 76240480.76160783 1.400001208283035
    46 106736740.266251 1.4000008814215148
    47 149431504.9727514 1.400000642702783
    48 209204176.96185198 1.4000004684420466
    49 292885919.1465928 1.4000003412933768
    50 410040359.6052299 1.4000002485609422

This makes me conjecture that modulo small effects at the start/end (and I've completely ignored the long middle piece), the number of steps taken to solve an N 'dead end unit' maze is Θ(1.4^N). I've asked a question on the math stackexchange to see if someone can prove it formally: https://math.stackexchange.com/questions/3779234/mean-first-....
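The transition rules above can also be cross-checked with a direct linear solve, t = (I - Q)^(-1)·1 (a sketch; the state indexing here is mine, not the gist's):

```python
import numpy as np

def expected_steps(N):
    # Transient states: the entrance (index 0) plus, for each dead end
    # unit n = 1..N, the four states forward/backward/inside/facewall.
    # The absorbing state (N+1, forward) is simply left out of Q.
    idx = {"entrance": 0}
    for n in range(1, N + 1):
        for j, s in enumerate("fbiw"):
            idx[(n, s)] = 1 + 4 * (n - 1) + j
    size = 1 + 4 * N
    Q = np.zeros((size, size))

    def fwd(n):  # (n, forward); None means absorbed
        return None if n == N + 1 else idx[(n, "f")]

    def bwd(n):  # (n, backward); unit 0 funnels back through the entrance
        return idx["entrance"] if n == 0 else idx[(n, "b")]

    Q[idx["entrance"], idx[(1, "f")]] = 1.0  # (0, s) always goes to (1, forward)
    for n in range(1, N + 1):
        f, b, i, w = (idx[(n, s)] for s in "fbiw")
        Q[f, i] = 0.75
        if fwd(n + 1) is not None:
            Q[f, fwd(n + 1)] = 0.25
        Q[b, i] = 0.25
        Q[b, bwd(n - 1)] = 0.75
        Q[i, w] = 1.0
        if fwd(n + 1) is not None:
            Q[w, fwd(n + 1)] = 0.5
        Q[w, bwd(n - 1)] = 0.5
    # Mean absorption times: t = (I - Q)^{-1} 1, read off at the entrance.
    t = np.linalg.solve(np.eye(size) - Q, np.ones(size))
    return t[idx["entrance"]]

print(round(expected_steps(1), 6), round(expected_steps(2), 6))  # 5.6 13.44
```

This reproduces the table above (5.6 for N=1, 13.44 for N=2, ratio converging to 1.4).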


When I was a kid I remember creating mazes with a straight line from entrance to exit and calling the attractions "Scam 1", "Scam 2", etc. I'd then set the price low enough for guests to enter and I'd snicker when there were alerts like "Scam 1 has broken down".


Maybe I'm missing something here, but there doesn't seem to be anything that guarantees the same number of guests enter each maze and the longer mazes seem to be placed farther from the entrance. Wouldn't this bias the results? (Of course the conclusion that the maze gives the pathfinding algorithm a hard time is the same.)


The guest capacity of each maze is capped, and it looks like the park is being artificially stuffed with enough guests to fill each maze. The time it takes guests to percolate from the entrance to the back of the park is probably insignificant and for almost all of the experiment's run time, all mazes will be full with a standing queue.


Thanks for clarifying. That makes sense :)


There are microfluidic rectifiers that work kind of like this. (e.g. https://pubmed.ncbi.nlm.nih.gov/15089471/ )

It would be neat to have a ride that somehow makes use of these diodes to do something interesting beyond delaying people.


The original RollerCoaster Tycoon (1) also had mazes. I wonder if the same algorithm problem applies to it as well? / Edit: The answer turns out to be yes. https://youtu.be/b-5aX2oLOgU?t=61


I love this guy. He's got a lot of great strategies for working with the tougher scenarios, like Fungus Woods, and his general tips for park management have ensured that I don't run out of money so damn quick.


Imagine walking around the visible universe, taking a step every age of the universe, then every time you completed a circle around the universe, picking up one atom... until you pick up the universe.

Then you have begun your journey.


I'm always amazed by what people can think of, given the tools and a field to play with.


I came here to see someone actually solve this rather than say "it's a Markov process, it has a solution". Here is my progress towards something:

First, using Taek's notation for the maze with the correction that entering and leaving an indent results in 2 steps rather than 1. The resulting set of equations is:

X_{0,f} = X_{1,f} + 2.5

X_{0,b} = X_{1,f} + 3

X_{k,f} = 0.625 X_{k+1,f} + 0.375 X_{k-1,b} + 2.5

X_{k,b} = 0.875X_{k-1,b} + 0.125 X_{k+1,f} + 1.5

X_{n,f} = 0.375 X_{n-1,b} + 2.5

Where the first index is the index of the current indent. Thus, n must be at least 1 but the first indent occurs at entry 0. The second index is the direction the maze goer is traveling in.

From here, we can easily get the growth rate of 1.4 which is 0.875/0.625 or more intuitively the ratio of momentum pushing you backwards vs. pushing you forwards. This is the root of the exponential growth.

Finally, I believe there is no "closed form" solution, but this will be the solution of the linear system X = AX + b for the A and b shown above. I am able to convince myself that there is no nice solution because having the two dimensions (position and direction of travel) makes it impossible to rearrange the above equations and make a clever change of variable to make the telescoping sum look nice. To see this, realize that each term relies on 2 things and one of those things doesn't rely on the first term.

Here is some Python code that implements this:

    import numpy as np

    n = 25

    A = np.zeros((2*n+1, 2*n+1))
    b = np.zeros((2*n+1, 1))

    # Setup A: the first n+1 entries are the forward states and
    # the last n entries are the backward states
    b_shift = n+1

    ## X_0
    A[0,1] = 1
    A[b_shift,1] = 1

    ## X_k
    for k in range(1,n):
        # forward movement
        A[k,k+1] = 0.625
        A[k,k-1+b_shift] = 0.375
        # backward movement
        A[k+b_shift,k+1] = 0.125
        A[k+b_shift,k-1+b_shift] = 0.875

    ## X_n
    A[n,n-1+b_shift] = 0.375

    # Setup b
    ## X_0
    b[0,0] = 2.5
    b[b_shift,0] = 3

    ## X_k
    for k in range(1,n):
        b[k,0] = 2.5
        b[k+b_shift,0] = 1.5

    ## X_n
    b[n,0] = 2.5

    X = np.linalg.inv(np.identity(2*n+1) - A) @ b

    print(f"The number of steps from the beginning for {n} cutouts: {X[0,0]}")


For comparison, there are only 10^80 atoms in the universe.


How can that be when California alone has 10^100?


I want to get off of Mr.Bones wild ride


This title spoils the video... :(


It's aMAZEing... Ok I'll see myself out ;-)


Hopefully the probability of you finding the exit is high!


That's what I named my first maze ride in every park!


I hope you consistently named the second one bMAZEing and so forth.



