This seems like it might be interesting to me if I already had some understanding of neural networks. Unfortunately for me, I can't even complete the RNN because there's nothing to even suggest what I'm missing when I connect the dots in the only way that the UI suggests I can.
Yeah, that was one of my concerns launching this v1. So in v2 I plan to add a little explanation for each neural network, perhaps along with an animated video showing what the final architecture looks like. Thanks for the feedback.
At the moment it doesn't feel like a game but a demo or tech preview of the UI. For a game I'd expect to have rules or a goal, and then be guided through it as the complexity increases. If the goal of the game is to learn, this would be a great medium for it. Good luck with v2, I hope to remember and see it when I can more actively enjoy it.
The goal is to build the network in the fewest clicks. I agree and have noted many of the comments about adding more explanations for those unfamiliar with the particular network.
Uh, just to be clear - having an example network is one thing. But more importantly, explaining the blocks is what is being asked for.
If you play any game around building computers from logic gates, or any factory optimization game, the idea is to start with components, understand thoroughly their tiny single function, then begin to combine them in different ways.
So yeah - seeing a RNN would allow me to draw the connections, but what I want to understand (what would help me learn from this game) is to know what h(x) means. Before we even construct a network, we should have static inputs to play with those blocks to see what they do. Ideally, we should be asked to construct those blocks from other constituent parts (logistic functions? I dunno).
Yeah - you aren't "learning" anything. You're guessing-and-checking until it lets you go. No idea what blocks do what, or why you're connecting them - which would be the basis for learning.
I had the same experience. A better description would be a tool to test your understanding of neural network architectures, not a tool to teach you about neural network architectures.
> There are two outputs, two inputs, and three edges.
No, that's something you've inferred from your domain knowledge.
There is a set of dots labeled "xt" in blue, a set of dots labelled "ht" in purple, and a set of dots labelled "yt" in green. Additionally there's a scoreboard with "0 clicks" in blue, "3 edges remaining" in red, and "0 extra edges" in green.
With a bit of color matching I might assume "yt" maps to "extra edges," but that could be a red herring, because I don't see how "clicks" maps to "xt" or where red and purple come in.
It could also help if "RNN" had been defined, but it wasn't...
There is a help button in the top right that shows you need to focus on the circle node connectors to "solve" the problem.
At least for the first example:
You have a blue box labeled xt with a single node connector at the top.
You have a purple box labeled ht with a node connector at the top and bottom.
You have a green box labeled yt with a node connector at the bottom.
The game tells you at the top you have 3 edges remaining.
Dragging a line from one node to another, releasing, and it turning green means you have placed a "correct" connection.
i.e. xt -> ht [bottom] will give you a green line.
Repeat until you have all edges solved for.
It's not spelling it out for you, but once you complete the "game" you'll at a very high level understand the moving pieces within the network, and the "flow" of data.
I guess "decades of clicking things" is a domain one can be knowledgeable in? Usually boxes with draggable things on the top are inputs, on the bottom are outputs.
Hmm, my decades of clicking things lead me to assume a flow from top to bottom, so something with a connector on the bottom is a source that will output data through that connector, and something with a connector on the top is a sink that will accept input data through that connector.
It flows in the reverse direction of what I’d expect (out is at the top, in is at the bottom, the opposite of any visual programming or diagram I’ve ever seen). It’s also represented in a way I’ve personally never seen ANN’s drawn. I thought you had to connect the dots in the middle and thought “huh? It mustn’t work on mobile” until I read the comments and tried again. And this is with decades of clicking things domain knowledge, and a small bit of neural networks knowledge.
Even knowing what the R in RNN stands for requires some pre-existing knowledge of neural networks. Which isn't something that's helping _me_ learn about them, particularly.
I have always found neural network diagrams like the RNN one here to be very vague and even slightly misleading. What does it mean that h_t loops onto itself? While I know that it means "take as input h_{t-1} also", the diagram itself does not illustrate the concept to the primary person looking at such a diagram, i.e. someone wanting to learn about the architecture.
I came to post the same comment. I was confused by the lack of "t+1" or "t-1" nodes, and then it took me a while to realize I had to connect the "ht" node to itself.
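For anyone else puzzled by the self-loop: it just means the hidden state feeds back into itself at the next time step, i.e. h_t is computed from both x_t and h_{t-1}. A minimal stdlib-only Python sketch of one vanilla RNN step (weight names and shapes are illustrative, not from the game):

```python
import math

def rnn_step(x_t, h_prev, W_xh, W_hh, W_hy):
    """One step of a vanilla RNN. The 'self-loop' in the diagram is the
    W_hh @ h_prev term: the new hidden state depends on the previous one."""
    d_h = len(h_prev)
    h_t = [
        math.tanh(
            sum(W_xh[i][j] * x_t[j] for j in range(len(x_t)))
            + sum(W_hh[i][j] * h_prev[j] for j in range(d_h))
        )
        for i in range(d_h)
    ]
    # Output y_t is read off the current hidden state.
    y_t = [sum(W_hy[k][i] * h_t[i] for i in range(d_h)) for k in range(len(W_hy))]
    return h_t, y_t
```

Unrolled over a sequence, you'd call this in a loop, threading h_t back in as h_prev - which is exactly what the "t-1" / "t+1" nodes in other diagrams make explicit.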
agreed, I find it pretty useful to check if I remember where LSTM stuff connects
I remember reading the original paper a while ago but always forget (pun intended) where to connect stuff
then I realized that memorizing it visually is not the best approach, it's better to think about it in this sorta loose fashion -- I remember there is forget gate, well it forgets previous stuff so there is probably some hadamard product somewhere, it probably needs some inputs and previous hidden states...there was some -1,1 forcing in candidate memory so probably needs tanh instead of sigmoid...and then piece by piece i can reconstruct it pretty closely
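That reconstruct-from-the-pieces approach maps almost line-for-line onto code. A sketch of one LSTM step with scalar per-unit state for readability (the weight dict and its key names are made up for illustration):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w):
    # w: dict of scalar weights, one (input, recurrent) pair per gate.
    f = sigmoid(w["fx"] * x + w["fh"] * h_prev)          # forget gate
    i = sigmoid(w["ix"] * x + w["ih"] * h_prev)          # input gate
    c_tilde = math.tanh(w["cx"] * x + w["ch"] * h_prev)  # candidate memory, forced into (-1, 1) by tanh
    c = f * c_prev + i * c_tilde                         # the Hadamard products (elementwise, scalar here)
    o = sigmoid(w["ox"] * x + w["oh"] * h_prev)          # output gate
    h = o * math.tanh(c)
    return h, c
```

Each piece of the comment's mnemonic is there: the forget gate multiplying the previous cell state, the tanh on the candidate, and sigmoids on everything that acts as a gate.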
Good idea, needs a bit more text explaining what is happening, and how to make the connections. I gave up because I couldn't figure out just the UI on how to click.
I joined up some dots, I learnt something, and I got rick-rolled. Awesome game.
It did really get me a bit OCD that the deep RNN had the inputs at the top and the outputs at the bottom initially. The inputs have to be connected at the top edge so need to be at the bottom!! :)