I don't know what it is, but I love implementing Solitaire to learn new languages and frameworks. It's always a fun exercise, and the engine gets better with every iteration. My latest attempt is https://solitaire.gg - it's a Scala/Scala.js WebGL/websocket Phaser web/native app with hundreds of games.
Kyle, this is brilliantly done. Honestly, I didn't expect the level of polish and fluidity that I experienced when I tapped that link on my iPad. The slight "lean" when cards are dragged around, butter-smooth animations... Well done.
I'd love to see some of your notes behind wiring up the pseudo-physics of the cards. Or was that provided by a library?
EDIT: I need to read more carefully - looks like it's part of http://www.phaser.io, and now I've just wrecked my productivity for the day! Great job on the solitaire game.
Thanks! Phaser.io has a great tweening/animation system I use for some things, but the momentum animation was just a few lines of custom code. You basically just set the card's angle to a smoothed function of the distance travelled in the last 500ms.
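In sketch form, the idea is roughly this (a simplified Scala sketch, not the actual game code; the names, the 30-degree clamp, and the scaling constant are made up):

    import scala.collection.mutable

    // Simplified sketch: track where the pointer has been over the last 500ms
    // and lean the card toward the direction of travel.
    object CardLean {
      private case class Sample(x: Double, timeMs: Double)
      private val samples = mutable.Queue.empty[Sample]

      // Call every frame while a card is dragged; returns a lean angle in degrees.
      def leanAngle(pointerX: Double, nowMs: Double): Double = {
        samples.enqueue(Sample(pointerX, nowMs))
        while (samples.nonEmpty && nowMs - samples.head.timeMs > 500) samples.dequeue()
        val dx = pointerX - samples.head.x          // signed distance travelled in the window
        math.max(-30.0, math.min(30.0, dx * 0.1))   // scale and clamp the lean
      }
    }

The 500ms window is what gives it the smoothed, momentum-like feel; on release you'd just ease the angle back to zero.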
No problem! Of course the really awesome feature would be the ability to drag piles of cards and have it automatically determine whether there is enough space up top to do the swaps (i.e. if you have 3 spaces open at the top-left, you can drag a 6-5-4-3 run onto a 7 in one step). Similarly, automatically putting cards up on the top right once they're exposed and there are no lower-value cards left. Obviously those would be rather more work, though!
I use http://phaser.io for scene management (animated resizing of the game, tweening cards to predefined points on the table, etc) and the particle effects, as well as dynamically building the card images. I've been really impressed with Phaser.
Just gotta say, you bastard, what a lovely comment. If I were the developer of any project and got a note like this it would make my day (so I'm a little envious of kyle_u), whether he used a library that does whatever it was that impressed you or not. :P
I think what strikes me most about a program, and what tends to speak to many people, are the little touches that are a delight to discover. Especially when you're not expecting them. So many comments have said "check this thing I made" and the thing proves to be well done, but somewhat utilitarian. Now, I like utilitarian as much as the next gold tunic wearing Terran, but these little touches created by the personal choice of a developer make a random human-computer interaction seem, even if by proxy, a little more human-human.
You're welcome. I appreciate what you're saying. I am not sure that I feel quite as strongly about the importance of the personal, loving little touches as you do, but the idea resonates with me. I actually might agree with you completely, given time to fully digest the idea, but it brings up a similar but maybe tangential point:
Over the last few years - probably most clearly after I was introduced to and started working with Bootstrap to create half-decent looking webpages (it's an HTML/JS/CSS UI toolkit/library/framework that makes it possible for web devs without a whole lot of design chops to create nice looking pages - https://getbootstrap.com/) - I have been coming around to the idea that making something aesthetically pleasing and pleasant to visually/tactilely experience for the first time is not just beneficial, or a small-but-helpful part of designing a well-liked webapp, but a key part of the experience. I really feel that if I have a good looking site versus an ugly one where the clickity-click part of the UX is identical, the good looking one will just be inherently more comfortable, appealing, and embraceable for people. And not just for other people - for me too, the dev who has seen behind the curtain and knows where all the ugly parts of the code or UX are. If this is really true, it's quite possibly obvious to loads of people, but it has been a slow realization for me, someone who used to get haircuts ridiculously infrequently, not shave very often, etc., in some cases thinking "well, I eat well and exercise - what's inside is good stuff! Anybody who doesn't appreciate that is <adjective-indicating-poor-appraisal-skills>."
I guess what I'm saying is, spending time on quality presentation is a key part of appreciating your user IMO, and in terms of spurring project motivation I think it can be really useful and inspiring to get a static webpage with dummy UI that just looks and feels good.
I really do feel a little silly saying this because in hindsight it seems like it is one of those things that might be obvious to everyone else, but it was not to me, and maybe there are others out there like me :)
Hey that's nice. I run http://greenfelt.net in my spare time—it came about because my friend and I wanted to learn javascript after seeing how cool google maps was and so we implemented Sea Haven Towers, a solitaire game he had played on his Sun, back in the day. Solitaire is nice because it's easy but still challenging enough to not be completely straightforward. It's fun to work on occasionally and it's rewarding to see random people play your games.
Our games don't work as well as yours does on mobile, but I think they're still very nice on computers.
I'm guessing this app looks cool; you might like to know that older systems remix it into glitch art like this: http://imgur.com/a/o3YVy
I thought I'd give it a whirl on the old ThinkPad T43 I'm using at the moment. I never really expect much from its tiny Mobility Radeon X300, but this was especially hilarious, and a most unexpected, refreshing change from the usual "oh, the disk LED is solid... again..." that I usually get. :P
FWIW, the http://aframe.io/ examples all work perfectly - I was actually slightly gobsmacked when I saw them, and I can say with a fair amount of confidence that the simpler demos run as fast on my system as they do on yours. So this GPU isn't a total lost cause with WebGL, and if it's moderately trivial to fix this, you'll net a small extra niche in your userbase (along with mobile users in 3rd-world countries using really entry-level smartphones and tablets).
If the fault lies with Phaser.io, and you feel like poking their ticket thingy, I'm fine with you passing the screenshots along. I'm also fine with testing potential fixes, if there's any room to explore there.
If your build environment can split out 32-bit Linux binaries without too much effort, I'd be happy to test that too.
That "glitch art" looks like a video-card driver problem to me. It's also possible that the solitaire site uses more texture space than your card/driver can handle causing issues.
The Aframe examples you linked seem to be simple geometric shapes with no textures so you may just be getting lucky there.
Also, Phaser (the library the GP is using) uses Pixi.js for the WebGL drawing code, if I remember correctly, and Pixi is supposed to fall back to plain canvas rendering if WebGL isn't supported -- so disabling WebGL may make the solitaire site work.
Looks like you're right, the author responded saying exactly the same thing :)
I tried killing webGL, which technically worked - I see a card deck! - but it didn't practically work: said card deck interacted at about 0.75-0.9fps. XD
It looks like I exceeded your texture memory. I use pretty high resolutions (targeting 4K displays), and dynamically build the card textures at runtime based on your screen size. Let me know if the native app improves the situation.
Another commenter said exactly the same thing about texture size, so I think you've pretty much nailed it.
The native build sadly didn't fix much; Electron === Chrome at this level of the stack pretty much. Thanks for taking the time to build it though :)
Incidentally, I tested the 32-bit build on my T43, and also tried the x64 build on my ever so slightly newer T60; both seem to have exactly the same issue: I'm getting pages and pages of
r300: CS space validation failed. (not enough memory?) Skipping rendering.
on both machines' stdouts when I run ./solitaire. One uses (as I mentioned) an ATI X300, the other an X1400.
Behold what 2005-2007 had to offer...! xD
The T43 says:
radeon 0000:01:00.0: VRAM: 128M 0x00000000C0000000 - 0x00000000C7FFFFFF (64M used)
radeon 0000:01:00.0: GTT: 512M 0x00000000A0000000 - 0x00000000BFFFFFFF
[drm] Detected VRAM RAM=128M, BAR=128M
[drm] RAM width 64bits DDR
[TTM] Zone kernel: Available graphics memory: 440758 kiB
[TTM] Zone highmem: Available graphics memory: 509818 kiB
[drm] radeon: 64M of VRAM memory ready
[drm] radeon: 512M of GTT memory ready.
[drm] PCIE GART of 512M enabled (table at 0x00000000C0040000).
And the T60 says:
radeon 0000:01:00.0: VRAM: 128M 0x0000000000000000 - 0x0000000007FFFFFF (128M used)
radeon 0000:01:00.0: GTT: 512M 0x0000000008000000 - 0x0000000027FFFFFF
[drm] Detected VRAM RAM=128M, BAR=128M
[drm] RAM width 128bits DDR
[TTM] Zone kernel: Available graphics memory: 1534580 kiB
[drm] radeon: 128M of VRAM memory ready
[drm] radeon: 512M of GTT memory ready.
[drm] PCIE GART of 512M enabled (table at 0x0000000000040000).
Can confirm, am firmly entrenched in old-hardware club. Ah well.
I'll try and check back here to see if you've thought of anything else; my email's in my profile.
For a few years now Google has had a little canvas benchmark in search results pages to figure out whether to enable endless scrolling. The game might springboard off of a technique like that to decide whether or not to use lower-resolution textures. (IMO, I'd target the detected screen resolution myself, I think that'd work...?)
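Something as simple as this would probably get most of the win (a rough Scala.js sketch using scala-js-dom; the 1920px baseline and the clamp range are arbitrary):

    import org.scalajs.dom

    // Rough sketch, not the actual solitaire.gg code: derive a texture scale from
    // the detected screen instead of always building 4K-sized card textures.
    object TextureScale {
      def cardScale(): Double = {
        val physicalWidth = dom.window.innerWidth * dom.window.devicePixelRatio
        math.max(0.5, math.min(2.0, physicalWidth / 1920.0))
      }
    }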
I have one concern about framerate: if there's no need to maintain 60fps 100% of the time, Phaser should have a way to lower or pause the fps. Sadly, there isn't, so there's no way to avoid the battery drain.
A little off topic, but is it possible to truly randomize across the 80,658,175,170,943,878,571,660,636,856,403,766,975,289,505,440,883,277,824,000,000,000,000 shuffle combinations?
Edit: also, when I play solitaire via an iPad app there is always a 'top score' by someone for the hand I was dealt. I don't understand how this is probabilistically possible.
You can easily seed your PRNG with a cryptographic entropy source which can certainly provide enough entropy.
But in reality you don't care. You could seed the PRNG with the current system time in nanoseconds or something, and every game you'll ever play is different. Even if the PRNG has limited state, you'll never notice it as a human player.
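For example, in Scala (just a sketch; scala.util.Random will happily wrap a java.security.SecureRandom):

    import java.security.SecureRandom
    import scala.util.Random

    // SecureRandom is a java.util.Random, so scala.util.Random can wrap it and
    // the shuffle is then driven by the OS entropy source.
    val rng  = new Random(new SecureRandom())
    val deck = rng.shuffle((1 to 52).toList)   // cards as 1..52 for brevity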
I'd like to elaborate. A cryptographic PRNG has lots of entropy, and it probably happens to be enough. You need ~226 bits, and most cryptographic PRNGs will have more state than that—but not all! Information-theoretic entropy is actually a stronger requirement, and it's why we use /dev/urandom (which is secure and fast) instead of /dev/random (which is information-theoretically secure, but slow).
So it might be possible that your secure PRNG still cannot generate all permutations.
Quarrel? Here, let me clarify. A PRNG has state with a certain amount of potential entropy, call this X. Uniformly shuffling a deck of cards requires a certain amount of entropy, Y. If X > Y then theoretically you can generate all possible shufflings with your PRNG, assuming it's seeded properly. If X < Y, then you can't.
The key difference is that your PRNG could be cryptographically secure even though X < Y. It turns out that Y ≈ 226 bits, and so it's plausible that your cryptographically secure RNG would have X = 128 bits, for example, even though I'd typically expect a larger number.
People often conflate entropy with security. A random number source could have high amounts of entropy and be insecure, while a random number source with less entropy could be very secure. Or let me put it this way: entropy is necessary but not sufficient for security.
It only takes 226 bits of entropy if all shuffles should be equally likely. If one shuffle has probability sufficiently close to 1, and the rest are equally likely, you could make do with less than a bit of entropy.
I agree with your argument on a pigeonhole-principle level (if you only observe fewer than 128 bits of state and then shuffle cards in a deterministic way based on a summary of your observations, you can't get all possible shuffles out), and I think that's a nice observation. 52! is surprisingly huge, and determinism is deterministic. :-) The issue is just about what /dev/urandom and /dev/random do.
There's a lot of misinformation and cargo-cult crypto programming out there, but the basic summary is that /dev/random's information-theoretic security is unnecessary for most applications, but necessary for being able to know that you can produce every possible permutation of a deck of cards.
/dev/random and /dev/urandom are the same CSPRNG construction; /dev/random just provides some extra (almost always counterproductive) checks on the rate at which it's rekeyed.
It is not the case that using /dev/random somehow gets you an information-theoretic security level that /dev/urandom doesn't.
Yes, they both use the same PRNG construction, that's in the article I linked to. But the blocking behavior of /dev/random does give it information-theoretic security, because an attacker with unlimited computational power can't predict its output, assuming they don't know the outputs of the entropy sources you are using, assuming that your entropy estimates are accurate or conservative. And we use /dev/urandom because information-theoretic security in our RNG does not improve actual system security.
Right, I give up. What I should have said is, "/dev/random is a misguided attempt at information-theoretic security." Everything in its design makes it clear that its goal is information-theoretic security.
I was thinking, "Oh, the mixing is good enough for practical use." But that's not what information-theoretic means.
I was thinking, "Oh, assuming the entropy estimation works." But we know it doesn't.
It was a stupid argument. But then again, /dev/random is stupid. Nobody gives a damn about information-theoretic security, and we don't have it anyway.
Well, beyond what a human player would notice: to be able to be dealt any possible hand in solitaire, a random or pseudo-random number generator would need log2(52!) ≈ 226 bits of state, which I think is the point the commenter was making, compared to the 32 or 64 bits of seed a built-in function would take.
Something like a properly seeded 256-bit xorshift might satisfy this as a random number generator. But even that setup may not be sufficient.
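For reference, the ~226-bit figure is easy to check by summing logs (a quick Scala sketch):

    // log2(52!) computed as a sum of logs, to avoid evaluating 52! itself
    val bits = (1 to 52).map(k => math.log(k) / math.log(2)).sum
    println(f"$bits%.2f")   // ~225.58, so at least 226 bits of state are needed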
Top score: if all players have the same seed for the RNG, they'll be playing the same sequence of games - so unless you've played more games than any other player, there'll already be a high score. You need to keep track of RNG state between sessions, of course.
The Mersenne Twister algorithm (https://en.wikipedia.org/wiki/Mersenne_Twister) has a period of 2^19937 - 1, way more than the 52! ≈ 2^226 possible shuffles, and is fairly widely used. I know for a fact it is used by the Python standard library.
It's not a matter of the size of the period, but of the size of the seed. To be able to generate all shuffles, you need to seed with at least 226 bits.
In practice, for my game, I use a single positive signed Int as the initial seed for the Scala/Scala.js PRNG (amazingly, they implement the same algorithm), meaning only two billion or so possible games. If I re-seeded the PRNG every time I generated a card ordering, the full 52! shuffles could be reached. I only use the initial seed so that a given seed will always generate the same shuffle/deal order.
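In sketch form it's roughly this (heavily simplified, not the real deal logic):

    import scala.util.Random

    // Heavily simplified sketch: the same Int seed always produces the same deal,
    // on both the JVM and Scala.js, since both Randoms implement the same algorithm.
    final case class Card(rank: Int, suit: Int)

    def deal(seed: Int): List[Card] = {
      val deck = for { suit <- (0 until 4).toList; rank <- (1 to 13).toList } yield Card(rank, suit)
      new Random(seed).shuffle(deck)
    }

    // deal(12345) returns the identical ordering every time, which is what lets a
    // given game number be replayed or shared.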
Just one thought though... The starter screen is a bit complicated. (Talking only about the web version, haven't tried the phone apps)
I was just thinking my parents would love this game as Klondike and FreeCell are pretty much bare requirements when they buy a new computer or when I set one up for them. And your game works great, looks good and has all these variants. Except... They wouldn't have a clue which one to pick!
I don't know what the best way to showcase the games for selection would be. A carousel showing an animation of each of them? Or a preview split screen that shows that animation when you hover on one? Or directly small animations for each button?
No clue, but if you can find a way to make it easier (and to then pin your "favorite" games alongside the "popular" picks) you've got yourself a super winner.
That's really nice! Plain ol' Russian Solitaire isn't linked from the front page, though (it's my wife's go-to solitaire game). I found it by looking in the help for Russian Cell and seeing it linked there.
There's really no reason for it to hog a CPU core at all times, not to mention the battery drain. It's not a real-time game; you can refresh the screen or play the animations only when needed.
WebGL is terribly different across browsers. I run a small game company and we write our games in Unity3D. Exporting them for the web is an exercise in self-flagellation. Starting with Chrome, browsers are killing plugin support of any kind before a viable alternative is ready - and WebGL is certainly not ready. We still support the Unity plugin on Firefox, but can't do a WebGL fallback yet because the tech just isn't there yet.
(Yes, Unity's WebGL exporter is also in the mix, but browser rendering and support for the exported game is still wildly different across browsers.)
Holy crap, I've been waiting for someone to ask me that (I need more coding friends). The entire game is client/server. The client is pure JavaScript, and communicates with the server using only a few messages (select card, move card, etc). Normally, it connects via WebSocket to my Play Framework/Scala app, and the server processes moves and sends the results and a list of potential moves back to the client.
If you download any of the native apps (iOS, Android, Linux, Win, OS X), you can play offline. This is the magic of Scala.js. The same code that my server uses gets compiled down to JS, and the client calls it directly, forgoing the socket connection. You can also see offline mode in the browser by replacing the word 'play' with 'offline'.
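The message vocabulary itself is tiny; roughly this shape (illustrative names, not the actual case classes):

    // Simplified sketch of the wire protocol, not the actual solitaire.gg classes.
    // Each message is serialized and sent over the WebSocket, or handed straight
    // to the shared rules engine when running offline.
    sealed trait GameMessage
    final case class SelectCard(cardId: Int)                         extends GameMessage
    final case class MoveCards(cardIds: Seq[Int], targetPileId: Int) extends GameMessage
    case object Undo                                                  extends GameMessage

    sealed trait ServerResponse
    final case class MoveResult(accepted: Boolean, movedCardIds: Seq[Int]) extends ServerResponse
    final case class PossibleMoves(moves: Seq[MoveCards])                  extends ServerResponse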
Hm. This might be why I was getting severe lag while playing. I'm on a relatively laggy connection (from Brazil), so every move had a noticeable delay. I'd suggest doing client-side prediction, even in the online mode.
Yep, that was it. Just tried offline mode and it's super snappy.
That sounds AWESOME from a technical perspective. Is there a reason that you're doing a client/server thing instead of just having "offline mode" be the default, though?
For a solely single-player game, it seems to me like being able to host everything off of a static fileserver instead of a Scala server seems like a scalability win for you and a bandwidth win for users. I don't doubt your choice and your tech stack (you know way more about your project and its needs than I do!), but I'd be super-curious to hear more about the reasoning and process.
Having it client/server eliminates the possibility of cheating, lets me get exact timings for analytics and data mining and whatnot, and lets me implement cool things like "Observe Game" (coming soon!).
After the feedback from this thread, I'm going to move to a "hybrid" model where the model logic all happens locally, but still fires websocket events to the server. Best of both worlds.
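Rough shape of that hybrid mode (illustrative names only, not the real code):

    import org.scalajs.dom

    // Run the move through the locally-compiled rules engine for instant feedback,
    // then fire the same event at the server for analytics and anti-cheat checks.
    trait RulesEngine { def apply(messageJson: String): Unit }

    def onMove(messageJson: String, engine: RulesEngine, socket: dom.WebSocket): Unit = {
      engine.apply(messageJson)   // local and synchronous: no round-trip lag
      socket.send(messageJson)    // the server still sees every move
    }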
I'll probably open source it in the future; there's a lot of code there, and not much money to be made in Solitaire. I did extract some of the best general parts of the app (the admin, authentication, etc.) into a project template I call Boilerplay (https://github.com/KyleU/boilerplay). If anyone's interested, I could also publish my pipeline to turn the Scala.js webapp into native apps for OS X, Windows, Linux, iOS, and Android.
For anyone who comes here without clicking the link, it links to a comment by the intern who actually wrote it. He provides some neat context as well. Worth the read if you don't mind being on Reddit at work.
Find out what type of firewall you're using. Find the vendor's website on how to request a url/domain re-categorization, and submit a change request. Most firewalls subscribe to a live database hosted by the vendor, and its request forms are public most of the time.
I used to, for years! Visual Studio and Borland's C++ Builder always had white backgrounds.
I found the dark screens and themes fashionable in Linuxland very hard to look at for a long time. Weirdly, now I code with the Monokai theme in OS X and it is very enjoyable.
Perhaps the white background was easier when you weren't typing on two 24" screens and only had a 12" screen to peep into.
Bright backgrounds behind the screen require bright content on the screen to be easy on the eyes. A typical office needs a light theme, a shaded bedroom needs a dark theme.
He mentioned "KlondGmProc" and "DefColProc" as names of message passing routines, so google finds a single result [1] from win2ksrc.rar > klond.c, I think that might be the actual source code.
Likely - "Klondike" is the technically-correct name for the game popularly known as "Solitaire". Probably it was called Klondike originally but packaged as Solitaire by a PM on the Windows 2.1 team. https://en.wikipedia.org/wiki/Klondike_(solitaire)
But this klond.c is not the original source, judging by the inclusion of comments referencing Win 3.1
Where are you seeing that Solitaire was bundled in Windows 2.1? Both the wiki page you reference and Wes Cherry's linked comment suggest that it was originally included in 3.0.
That's what Wes wrote on Reddit: "A little clarification on Solitaire history. I wrote it for Windows 2.1 in my own time while an intern at Microsoft during the summer of 1988". It may have been a typo.
EDIT: re-reading I see that he later on says it was eventually bundled in 3.0. So he wrote it for 2.1 and it didn't get picked up until 3.0. In either case, source referencing 3.1 in a comment is clearly not the original intern-developed source from 1988, although it's likely pretty close.
I wonder if you've shed a little light on some dishonesty here.
What if Wes, the intern, wrote Solitaire for Win 2.1, and then it got redone for Win 3.1 et al. by someone else? That would make all of this somewhat morally underhanded.
I'm pretty sure that his name appeared in the credits in Solitaire at least through Windows 2000. I say this because my last name is Cherry and I noticed when I was a kid that Solitaire was credited to someone with the same last name as me. Without Wikipedia etc I doubt there's anywhere else I could have come across that info.
I've said it before, and I'll say it again: an unnervingly large amount of the work I've done over the years has had this property: the long-term value is inversely proportional to the time put into it. Learning from this correlation remains a big priority for me.
Same here: my best projects are the ones that were banged out over at most a week or so (then possibly supported with minor fixes over time). Still, every shitty cancerous hell project before that plays its part in training your mind and preparing you for the moment when you can finally produce your opus.
I have heard this explained as: stuff that needs to be done yesterday almost always also has the property that it cannot be disrupted for any reason afterwards (which is why it was so urgent to begin with). So your quick hack that stops the world from exploding can't afford to be meddled with, lest it cause the world to explode in the process....
It's similar to the "worse is better" philosophy: low quality wins because it can be churned out quickly, and it stays in place because the value proposition of replacing it is unclear or insufficient.
When it comes to critical things, it's very important that changing them has a definite, demonstrable purpose in terms of outward effects, not just itch-scratching. Aesthetic concerns matter little, because the potential for disruptive effects is a near certainty while the upside is typically much more nebulous (the devil you know, etc.).
I've observed a similar effect in my own work. I wonder if it's as simple as being able to produce many more small projects than large ones. I'm curious whether you produced a lot of smaller projects before the valuable ones appeared.
I remember playing 'spider' on my Sun system back in the day (it was a form of solitaire), and once Don Woods (who was also working at Sun at the time) walked past my office and said, "Oh, you like that? I wrote it." To which I could only reply that he was responsible for two major time wasters in my early career :-)
You used to be able to underflow the score in MS Solitaire by repeatedly dealing new hands (each deal would subtract 52 points or so). I believe that, in Win95 at least, the score was a 16-bit signed variable, so you could underflow it with a mere 1261 deals! I wonder if the modern one is 64 bits, and if you can still underflow it...
For Pinball, there was a bug with the collision detector in the 64-bit version that they could not fix.
"Two of us tried to debug the program to figure out what was going on, but given that this was code written several years earlier by an outside company, and that nobody at Microsoft ever understood how the code worked (much less still understood it), and that most of the code was completely uncommented, we simply couldn’t figure out why the collision detector was not working. Heck, we couldn’t even find the collision detector!"
I wonder how they did collision detection? I know pinball had a perspective view, but surely items were mapped on a 2D grid and detection done by comparing ball.xy versus object.xy ?
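i.e. I'd have expected something as naive as this (pure speculation, sketched in Scala rather than the original C):

    // Pure speculation about what a simple 2D check might look like; nothing to do
    // with the actual Pinball source.
    final case class Ball(x: Double, y: Double, r: Double)
    final case class Bumper(x: Double, y: Double, r: Double)

    def collides(ball: Ball, bumper: Bumper): Boolean = {
      val dx = ball.x - bumper.x
      val dy = ball.y - bumper.y
      dx * dx + dy * dy <= (ball.r + bumper.r) * (ball.r + bumper.r)
    }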
I once played a game of Solitaire on (IIRC) Win95 that had/allowed a red card placed on another red card. Pretty sure I still have the screenshot around somewhere (though that would be easy to fake).
No, that was "Mousing around" for the original Macintosh. See https://www.youtube.com/watch?v=1pwammW5syw (Mac boots at around a minute in; the audio is from one of the two audio cassettes that shipped with each Mac)
A little clarification on Solitaire history. I wrote it for Windows 2.1 in my own time while an intern at Microsoft during the summer of 1988... A program manager on the Windows team saw it and decided to include it in Windows 3.0. It was made clear that they wouldn't pay me other than supplying me with an IBM XT to fix some bugs during the school year - I was perfectly fine with it and I am to this day.
Royalties are a thing in the game industry, not universal or even the norm, but they do exist.
I mean, they do everything they can to screw employees out of them using Hollywood-style accounting, but it is incorrect to say "No employee at a company ever received royalty for the code they write for the company".
> I eventually negotiated the final agreement with Guy Kawasaki, where, in addition to the $100,000, I managed to get a 10% royalty of the wholesale price if Apple sold Switcher separately, which Steve swore they would never do, but eventually the royalty delivered another $50,000.
Not quite. The intern is only responsible for the difference between solitaire and the time-wasting activity it substitutes for.
I learned this when my parents tried to blame video games for why I wasn't doing homework. They didn't want to blame books when I substituted activities.
I wrote a clone of Solitaire in Turbo Pascal a long time ago. I still remember the satisfaction of coming up with a recursive algo for uncovering safe tiles. The joy of understanding recursion, that was nice, ha ha!