Rather than brute forcing all possible permutations, we start from all possible final states and BFS backwards towards starting states. There are probably tons of opportunities for further optimization, but this takes ~800 ms on my desktop, and it only uses one core. We don't need to track the number of turns required to end each state, since the frontier will always consist of games which require the same number of turns to end.
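The game itself isn't shown here, but the backward-search idea can be sketched generically: seed a BFS with every terminal state, then expand predecessor layers one at a time, so each frontier holds exactly the states that are the same number of turns from ending. The integer state encoding and the `predecessors` function below are hypothetical stand-ins for the real game's state and reverse-move logic.

```java
import java.util.*;
import java.util.function.IntFunction;

public class BackwardBfs {
    // Returns turns-to-end for every reachable state, found by BFS
    // backwards from the terminal states. Because BFS expands one layer
    // at a time, every state in a given frontier ends in the same number
    // of turns -- no per-state turn counter is needed.
    static Map<Integer, Integer> solve(Collection<Integer> terminals,
                                       IntFunction<int[]> predecessors) {
        Map<Integer, Integer> turns = new HashMap<>();
        Deque<Integer> frontier = new ArrayDeque<>(terminals);
        for (int s : terminals) turns.put(s, 0);
        int depth = 0;
        while (!frontier.isEmpty()) {
            depth++;
            Deque<Integer> next = new ArrayDeque<>();
            for (int s : frontier) {
                for (int p : predecessors.apply(s)) {
                    if (!turns.containsKey(p)) { // first visit = fewest turns
                        turns.put(p, depth);
                        next.add(p);
                    }
                }
            }
            frontier = next;
        }
        return turns;
    }

    public static void main(String[] args) {
        // Toy "game": state n moves to n-1 each turn and ends at 0,
        // so state n needs exactly n turns to finish.
        Map<Integer, Integer> t = solve(List.of(0),
                s -> s < 10 ? new int[]{s + 1} : new int[]{});
        System.out.println(t.get(7)); // 7
    }
}
```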
Edit: Shrinking my data structures dropped runtime by 50%, and easy & cheap parallelism dropped runtime by another 75% on my four-core desktop. New runtime: 110ms (Intel i5 6600k). The key to the parallelism is that since the total amount of money never changes, we can assign a different total amount of money to each thread, and they'll never have to search the same states.
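The partitioning trick generalizes: when a conserved quantity splits the state space into disjoint classes, each class can be searched on its own thread with no locking, because no two tasks can ever visit the same state. A minimal sketch, where `countEndStates` is a hypothetical placeholder for the real per-total search:

```java
import java.util.*;
import java.util.concurrent.*;

public class PartitionedSearch {
    // Hypothetical stand-in for the real search over all states with a
    // given conserved total amount of money.
    static long countEndStates(int totalMoney) {
        return totalMoney; // placeholder work
    }

    public static void main(String[] args) throws Exception {
        int maxTotal = 255;
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        // One task per conserved total: tasks never touch the same
        // states, so no synchronization is needed between them.
        List<Future<Long>> futures = new ArrayList<>();
        for (int total = 1; total <= maxTotal; total++) {
            final int t = total;
            futures.add(pool.submit(() -> countEndStates(t)));
        }
        long grand = 0;
        for (Future<Long> f : futures) grand += f.get();
        pool.shutdown();
        System.out.println(grand); // 32640 with the placeholder work
    }
}
```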
I added this at the end with a link to your solution and your post. Honestly I never expected this post to make it to the front page so thank you so much for taking some time out to do so.
Edit: Is it because each solution must take into account every possible permutation of gameplay?
> EDIT - Thanks to Dolda2000 I have modified the Java version. It is now about the same speed as the GoLang version. Indeed, the issue was the way the games were created, which forced the Java version to simulate more games to determine whether a game went long enough. With the changes it is running in about ~6 seconds now and has restored my faith in Java.
The golden age of this stuff was when I was writing stuff for the Atari ST and Amiga. Fixed hardware, manuals that came with assembly instructions and timings, etc.
My greatest moment came when rewriting the Atari ST text output function. For those who don't remember, the ST had an interleaved bitmap and no hardware blitter, so everything was done with the CPU, which made writing text to the screen hard. I wrote a program that sped up the C code from the OS by about 3x; then Darek Mihocka, with Quick ST, beat my speedup. I was absolutely aggrieved about this and spent weeks trying to work out how he did it, without any luck.
In the end the solution genuinely came to me while I was dreaming: the Motorola 68000 move.p instruction, which worked quite well with the interleaved bitmap. It gave me nearly a 2x speedup, and combined with the rest of my optimisations I beat his code.
I do miss those days of a level playing field where the outcome was the result of your skills, rather than quibbling over the middleware, OS, graphics driver version used, etc etc.
You need to pivot in this line of work, especially as most of us will be working for 40+ years. I'm getting into the security stuff now, which I think is the new hotness. The only thing bringing me back to programming is golang, which is absolutely superb for server-side stuff.
As a tech lead of a team that is largely junior, I keep running into places where a lack of sysadmin-type knowledge seriously harms the ability of a CS degree holder (anywhere from BSc to PhD) to do real-world software engineering work unless they have either been self-taught in sysadmin sort of things or have had a couple years experience that has forced that sort of experience on them.
To give an example: a fresh-out-of-school CS degree holder who never played with Linux servers for fun in their own time might be able to implement some software and then easily learn to Dockerize it, but may be stuck for three days getting it to talk to other services, both locally and in a test environment, because of essentially zero exposure to networking concepts. Yet the fix takes about five minutes for someone who knows how networking works. ("What's a bridge?")
Another example comes from general familiarity with the server/client model: for instance, the rule of thumb of capping the number of worker threads at twice the available cores unless there is a reason to re-evaluate that.
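That rule of thumb is a one-liner in practice. A minimal sketch (the 2x multiplier is just the heuristic from the comment, not a universal constant; I/O-heavy or CPU-bound workloads may want different ratios):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class WorkerPool {
    public static void main(String[] args) {
        // Heuristic: default the worker count to 2x the available cores,
        // and only revisit the number when measurement says to.
        int cores = Runtime.getRuntime().availableProcessors();
        int workers = 2 * cores;
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        System.out.println(workers == 2 * cores); // true
        pool.shutdown();
    }
}
```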
All that said, I would still favor sticking with the CS degree as well. Basic practical familiarity with sysadmin concerns can be more or less picked up over time via personal projects and then as needed on the job, but the formal knowledge accumulated over a CS degree would be much harder to do that with. I would only suggest to make sure to spend time on personal projects that, for lack of a better term, are full-stack.
However, if you haven't had networking experience in college, you went to a terrible program. I had exposure to networking (at Cal) in 3-4 classes.
(I used to work as an SRE, before I made my way back into functional programming and finance recently.)
I make as much as my developer counterparts, for what it's worth.
I don't think it's "hard" per se, but it does require intentional effort. I can also see how, with a difficult work schedule, or family commitments, it may become hard or even impossible to carve out the time.
I'm about 23 years into this journey, and although I still remember plenty of esoteric trivia that was useful for programming in the '90s and is useless now, there still appears to be room to learn new things every day.
>Everybody has to do everything.
Everybody needs to have _at least some degree of familiarity_ with everything, yes. But that has always been the case. There is not and there never has been a job posting for someone to design algorithms and do nothing else, nor to deploy HTTP servers and nothing else.
I expect that if you pick any part of any software/hardware stack, anyone with a reasonable amount of experience would be able to give real-world examples where familiarity with several (and sometimes all) other parts of that stack is not only beneficial but sometimes necessary. I also expect that illustrations could be found from any point in the history of computing, up to and including now.
Next switch is likely starting a business.
37 y.o., still not tired of this!
I can see how someone could get bitter if they stayed in the same role/position most of their career, but in my experience this has tended to be people with a negative outlook/attitude/chip on shoulder in general... It really is what you make of it, people are people wherever you go, some are dicks, some are not.
Just accept the reality of the game and deal with it.
You don't need to spread your aura of doom everywhere you go.
Also, try running multiple java processes on a machine as opposed to multiple go programs.
No GC I know is happy without a heap several times larger than the program's requirements.
OTOH, Java programs tend to be built using frameworks that soak up memory like crazy. So there's that.
What will happen is (after the JVM starts), there will be (by default) 10000 interpreted iterations, then shortly after that a stop-the-world pause to replace slow interpreted code with its native counterpart JITed by a compiler thread, and probably a few more of those either optimising other, less hot parts of the code, or applying more aggressive optimizations on already JITed sections.
Go comes with AOT compilation and, as far as I know, doesn't have much initial overhead. For a test that takes a few seconds to complete, I'd say this is not how you want to compare performance.
In Java 8, tiered compilation became the default. After a first threshold is reached, a method or loop is compiled as a first approximation with further record keeping. At the second threshold, a more aggressively optimized version without record keeping is put in place. This effectively combines the old C1 (client) and C2 (server) compilers to help code get up to speed more quickly.
When you want to measure this stuff you use JMH, the Java Microbenchmark Harness, an amazing piece of work that warms up the JVM before executing performance tests. Be very careful writing your own Java timing code; for the most part, rely on JMH.
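JMH is the right tool, but the warmup idea it automates can be shown with plain Java. The sketch below is a poor man's substitute, not a JMH benchmark: it runs the hot method many times so the JIT has a chance to compile it before the timed phase begins. The workload (`work`) and iteration counts are arbitrary illustrations.

```java
public class WarmupTiming {
    // Hot method under test: sums the integers 0..n-1.
    static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) sum += i;
        return sum;
    }

    public static void main(String[] args) {
        // Warmup phase: give the JIT time to compile work() before
        // measuring. Accumulating into sink keeps the calls live so
        // the optimizer cannot eliminate them as dead code.
        long sink = 0;
        for (int i = 0; i < 20_000; i++) sink += work(1_000);

        // Measurement phase -- only meaningful after warmup.
        long start = System.nanoTime();
        for (int i = 0; i < 1_000; i++) sink += work(1_000);
        long elapsed = System.nanoTime() - start;

        System.out.println(work(1_000)); // 499500
        System.out.println(elapsed > 0); // true
        if (sink == 42) System.out.println(sink); // keep sink observable
    }
}
```

JMH additionally handles forking, dead-code elimination, and statistical reporting, which is why hand-rolled timing like this should only ever be a rough sanity check.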
Full native code compilation for a hot method can be triggered and completed in a fraction of a second after startup. The net result is that for algorithmic problems that finish in seconds rather than milliseconds, the JVM startup cost may not be an issue. This will only be true if, like this problem, you're loading a small number of classes. Most people's experience with crappy JVM startup time probably has a lot to do with web application containers like Tomcat that load thousands of classes in the first five seconds. A good test to assess real-world performance would probably be something that does file-system operations and employs some parsing libraries, rather than just building data structures in memory and using System.out.
You're correct that if you want to compare peak performance, you should run the tests differently, so that you're not measuring the JVM startup cost.
Speed comparison between two languages, botched the Java algorithm... and ...?
In fact, I have asked a number of people under the impression that Java is slow to explain why they think it is slow. The answers have two flavors:
1. Its high level abstractions are computationally expensive. Sometimes this idea is supported by saying that garbage collection is very slow, sometimes by saying that the inability to directly manage memory means that it is done inefficiently, perhaps even inescapably so.
2. Compiling a program to an intermediary language that is not assembly but must be instead executed by something else (in this case, the JVM) must be slow.
Until then there are the IBM and Azul extensions, off heap memory, or optimizing in native code that 1% after using the profiler, where it really matters.
1) Controlling memory layout is not as big a deal as people think. The JVM is highly optimized (really it's probably the most optimized system ever devised) for allocating and deallocating objects on the heap.
2) It is in fact possible to control memory layout today because Java has access to offheap memory where you can have all the control you want.
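One standard route to off-heap memory is a direct `ByteBuffer`: its backing storage lives outside the GC-managed heap, and absolute puts and gets give you a flat, explicitly laid-out region with no per-value objects. A minimal sketch (the offsets and values are arbitrary):

```java
import java.nio.ByteBuffer;

public class OffHeap {
    public static void main(String[] args) {
        // Direct buffers are allocated outside the Java heap, so the GC
        // never moves or scans their contents.
        ByteBuffer buf = ByteBuffer.allocateDirect(1024);
        buf.putLong(0, 123456789L); // absolute put: no object allocation
        buf.putInt(8, 42);
        System.out.println(buf.getLong(0)); // 123456789
        System.out.println(buf.getInt(8));  // 42
        System.out.println(buf.isDirect()); // true
    }
}
```

Libraries that need finer control (e.g. serialization or database engines) historically reached for `sun.misc.Unsafe` instead, at the cost of portability.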
This has actually been the case for nearly half a decade but for some reason a lot of people are convinced that "big arrays" can knock over the JVM.
In any case, bytecode as executable format, goes back to the early days when CPU were microprogrammed.
It was also a common approach on mainframes, with OS/400 being the best example.
High startup costs you say? Then don't use OpenJDK.
Get an AOT compiler to native code like Excelsior Jet or switch to a JVM that supports both AOT compilation to native code and JIT code caching like IBM J9 or Azul.
Class scanning? Again, stop using OpenJDK and switch to a JVM that supports class sharing across VM instances, like IBM J9.
For those who don't ever want to use anything other than OpenJDK, some of those features might eventually arrive there, like the newly introduced support for AOT compilation of the base module on Linux x64, with everything else planned for Java 10.
Bloated libraries? Well, there are bloated libraries in any programming language.
And if by any chance you are thinking of JEE, I doubt you had the pleasure of using C or C++ alongside DCOM, CORBA, and SUN RPC at enterprise level, to fully understand the relief it was to change to JEE.
Also as mentioned elsewhere in the thread, trading systems don't suffer from frequent cold restarts, so hotspot work amortizes extremely well. So Java is much more optimal there than e.g. a command line tool.
But on perceptions: naive perceptions don't justify a false statement. So why draw focus to them, why excuse them? Facts trump perceptions.
Also another example is those that perceive the JVM as slow due to using it with Clojure, when the slowness in this case is caused by the Clojure implementation itself.
I use Java since the very early days, and am well aware of them, as are my fellow Java developers, when I work in Java projects.
Even in more enlightened scenarios a REPL on jruby is just eye-wateringly painful unless you've all the "secret speedups" enabled.
In many more modern development cultures such secrets are generally looked upon as "Yak Shaving".
Not as fast as dietrichepp's solution, but not terrible either, and probably the winner in terms of memory use. Since there's no state to keep track of, adding parallelism with OpenMP was trivial, too.
Find all random games for those proportions which end after 12 turns, then change the scaling factor to make it less than 255?
I made this point in more detail upthread, but: for some CLI tasks that need to run in a few seconds, JVM performance will be adequate. For others, it will suffer.
My subjective impression is that this is the first time the author has come across this sort of consideration in implementation and wanted to write about it. There is nothing wrong with that - in fact, I think it's great! It's just that the title leads one to expect, well, quite a bit more.
The title was just taken from the Stack Overflow post, which I agree in this context is a little clickbaity.
I never expected it to get this much attention though, as stated in the first sentence it was just something I did for myself more than anything.
The problem that I see with Prolog in this case is that a simple solution is simple only because it turns the search over to Prolog's built-in constraint solver, which knows nothing of the problem. This often leads to unacceptable performance with Prolog; performance problems can sometimes be addressed in practice by appropriate placement of the cut operator within the logical specification of the constraints. This, at least in my experience, makes it very difficult to understand what the program is doing.
Generally, I view Prolog (along with languages like Smalltalk) as a language appropriate for some problem domains that have come to the end of their lifetime because mainline, more generally useful, languages have adopted their features or incorporated them in libraries.
It's also not a given that golang is strictly faster than well-tuned Java VMs.