(Amazingly detailed) info: http://tasvideos.org/5243S.html
Edit: I found one that has an example (I think) around 7:38 http://www.nicovideo.jp.am/watch/sm13963118 - the collision detection pushes Mega Man into the wall and he jumps between different sections. Very interesting indeed!
Also, you might be interested in the Final Fantasy 6 memory-overwrite bug discovered in 2016, which uses the Window Color menu settings as its data reference.
Btw regarding the Megaman 2 RTA, there's an even more ridiculous collision bug being used around 11:30 http://www.nicovideo.jp/watch/sm28321223
One thing I'd suggest is exploring a reward function, instead of using only pre-recorded training data. That is, give the AI a goal to complete (in this case, finish the race) and let it learn by itself!
EDIT: to clarify: what should I google for?
OpenAI's example Universe agent. Remember that while their goal is an agent that works in any and all environments (read: games), you could certainly optimize yours just for MarioKart.
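To make the reward-function idea concrete, here's a toy sketch. Everything here is hypothetical (the game-state inputs and the constants are made up), but it shows the shape of the thing: reward progress toward the goal, bonus for finishing, penalty for failure.

```python
def race_reward(prev_progress, progress, finished, crashed):
    """Toy reward signal for a racing agent: forward progress is good,
    finishing is great, crashing is bad. All inputs are hypothetical
    game-state values, not anything read from a real emulator."""
    reward = progress - prev_progress   # reward forward movement along the track
    if finished:
        reward += 100.0                 # large bonus for completing the race
    if crashed:
        reward -= 10.0                  # penalty for going off-track
    return reward
```

An RL algorithm (e.g. Q-learning or policy gradients) would then maximize the sum of these rewards over an episode, instead of imitating pre-recorded play.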
For the project I am currently working on, I'm using a slightly modified form of LeNet - which isn't too different from the TF MNIST tutorial; after all, recognizing traffic signs isn't much different than recognizing hand-written numbers...
...but "driving" a course? That seems radically different to my less-than-expert-at-TensorFlow understanding, but that is only due to my ignorance.
I'm glad that these examples and demos are being investigated and made public for others - especially people learning like myself - to look at and learn from.
> Later, I switched to use Nvidia’s Autopilot...
So I guess he didn't use the MNIST CNN model.
You can see that it follows much the same pattern as LeNet CNN for MNIST - a few (ok, more than a few!) convolutional layers followed by a few fully connected layers.
Maybe you could call it a "follow-on", or perhaps just a common ANN pattern:
Conv -> Conv -> Reshape/Flatten -> FC -> FC -> FC
(disregarding activation and such)
...which is really the lesson of the LeNet MNIST CNN - at least, that's my takeaway.
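As a concrete trace of that pattern, here's how the shapes flow through a LeNet-style stack, assuming a 32x32 single-channel input and "valid" 5x5 convolutions with stride 1 (filter counts are LeNet's classic choices; pooling and activations omitted, as above):

```python
def conv2d_out(h, w, c_out, k=5):
    """Output shape of a 'valid' k x k convolution with stride 1."""
    return h - k + 1, w - k + 1, c_out

# Conv -> Conv -> Reshape/Flatten -> FC -> FC -> FC
h, w, c = conv2d_out(32, 32, 6)   # Conv 5x5, 6 filters  -> 28x28x6
h, w, c = conv2d_out(h, w, 16)    # Conv 5x5, 16 filters -> 24x24x16
flat = h * w * c                  # Flatten -> 9216 values
fc_sizes = [120, 84, 10]          # FC -> FC -> FC (last one is the output layer)
```

Swap the final layer's size for your number of classes (or a single steering value) and the same skeleton carries over.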
So far, yes - but that has a few caveats:
See - I have some background prior to this, and I think it biases me a bit. First, I was one of the cohort that took the Stanford-sponsored ML Class (Andrew Ng) and AI Class (Thrun/Norvig), in 2011. While I wasn't able to complete the AI Class (due to personal reasons), I did complete the ML Class.
Both of these courses are now offered by Udacity (AI Class) and Coursera (ML Class):
If you have never done any of this before, I encourage you to look into these courses first. IIRC, they are both free and self-paced online. I honestly found the ML Class to be easier than the AI class when I took them - but that was before the founding of these two MOOC-focused companies, so the content may have changed or been made more understandable since then.
In fact, now that I think about it, I might try taking those courses again myself as a refresher!
After that (and kicking myself for dropping out of the AI Class - but I didn't have a real choice at the time), Udacity started in 2012. For whatever reason they couldn't offer the AI Class as a course (while Coursera could offer the ML Class - there must have been licensing issues or something), so instead they offered their CS373 course in 2012 (at the time titled something like "How to Build Your Own Self-Driving Vehicle" - quite a lofty title):
I jumped at it - and completed it as well. I found it to be a great course, and while difficult, it was very enlightening on several fronts (for the first time, it clearly explained to me exactly how a Kalman filter and a PID controller work!).
So - I have that background, plus everything else I have read before then or since (AI/ML has been a side interest of mine since I was a child - I'm 43 now).
My suggestion, if you are just starting, would be to take the courses in roughly this order - and only after you are fairly comfortable with both linear algebra (mainly vector/matrix math - dot products and the like) and stats/probability. To a certain extent (as I have found out in this current Udacity course), knowing some basic calculus concepts (mainly derivatives) will help too. So far, despite that minor handicap, I've been OK without the deeper knowledge - but I do intend to learn it:
1. Coursera ML Class
2. Udacity AI Class
3. Udacity CS373 course
4. Udacity Self-Driving Car Engineer Nanodegree
> Do you think the course prepares you enough to find a self-driving developer job?
I honestly think it will - but I also have over 25 years under my belt as a professional software developer/engineer. Ultimately, it - along with the other courses I took - will help (and already has helped) by giving me other tools and ideas to bring to bear on problems. Also, realize that this knowledge applies to multiple domains - not just vehicles. Marketing, robotics, design - heck, you name it - all currently need, or soon will need, people who understand machine learning techniques.
> Would you learn enough to compete/work along side people who got their Masters/PhD in Machine Learning?
I believe you could, depending on your prior background. That said, don't think that these courses could ever substitute for a graduate degree in ML - but I do think they could be a great stepping stone. I am actually planning on looking into getting my BA and then (hopefully) a Master's in Comp Sci after completing this course. It's something I should have done long ago, but better late than never, I guess! All I currently have is an associate's from a tech school (worth almost nothing) and my high school diploma - but that, plus my willingness to constantly learn and stay ahead in my skills, has never let me down career-wise! So I think having this ML experience will ultimately be a plus.
Worst-case scenario: I can use what I have learned in the development of a homebrew UGV (unmanned ground vehicle) I've been working on, off and on, for the past few years (mostly "off" - lol).
> Appreciate your input.
No problem, I hope my thoughts help - if you have other questions, PM me...
That looks like a great course by the way, thanks for sharing.
If you're strapped for cash and don't want to pay the $800/term, you could definitely learn these things on your own using free online resources. If you don't mind the price, though, I've found this course worth the time+money+effort so far. [they're not paying me to say this :)]
I would recommend the Coursera course by Andrew Ng. I had an amazing time - the code stays out of your way, and he walks you through the algorithms and explains the theory very well.
I just started fast.ai by Jeremy Howard, and I have literally been blown away by the course. It is AMAZING! By lesson 3 I was able to build CNN models and score in the top 20% in Kaggle competitions. Not bad for a complete novice. HIGHLY RECOMMENDED.
Once I'm done with the fast.ai course I may look back around to Google's deep learning course. I think it may be easier for more experienced users to digest its info.
Edit: added fast.ai link
Personally, I found the Stanford DL courses (image classification, NLP) to be much more suitable for beginners.
Ideally, I want to work on self driving cars or AI in my day to day job, but I don't want to get my hopes up. Do you think that after you complete the nanodegree you will attempt to change your career to an SDC engineer or AI/ML engineer? Or is this just meant to fulfill a curiosity of yours?
So the author did a proper test of the model by scoring it on an unseen track to make sure it generalizes! This is very awesome!
NB: generalization should be one of the first things - if not the first thing - you think about and plan for in any ML project.
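One simple way to plan for it, in a project like this, is to hold out an entire track when splitting your data, rather than randomly shuffling frames (random shuffling leaks near-duplicate frames from the same track into both sets). A toy sketch with made-up track names and samples:

```python
# Hypothetical recordings keyed by track name; each sample is (frame, steering).
recordings = {
    "luigi_raceway":   [("frame_a", 0.10), ("frame_b", -0.20)],
    "kalimari_desert": [("frame_c", 0.00)],
    "moo_moo_farm":    [("frame_d", 0.30)],
}

held_out_track = "moo_moo_farm"   # this track is never seen during training

train_samples = [s for track, samples in recordings.items()
                 if track != held_out_track
                 for s in samples]
test_samples = recordings[held_out_track]
```

Scoring on `test_samples` then measures what the author tested: performance on a track the model has genuinely never seen.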
I feel like that could be a good next step - a ubiquitous neural net model that, after mapping inputs, will learn to play any video game that's on your screen.
Also, bravo on including the stupid little bugs that gave you trouble. When I'm stuck on a hard project, it always sustains me to know that a self-driving video game was once blocked by a missing newline in a C HTTP request. It makes me step back and laugh at the ridiculous complexity of what we take for granted in our day-to-day work.
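For anyone who hasn't been bitten by that one: an HTTP/1.1 header block must be terminated by an empty line, so the request ends with "\r\n\r\n". A minimal sketch (the original bug was in C, but the protocol rule is the same in any language):

```python
def build_request(host, path):
    # A raw HTTP/1.1 GET request. The header block MUST be terminated by
    # an empty line (i.e. the message ends with "\r\n\r\n"); leave it out
    # and the server just sits there waiting for more headers.
    return ("GET {} HTTP/1.1\r\n"
            "Host: {}\r\n"
            "Connection: close\r\n"
            "\r\n").format(path, host)
```

Drop that final `"\r\n"` and nothing errors out - the request simply never completes, which is exactly why this class of bug is so maddening to find.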
2. The logic is usually a bunch of (human-authored) scripts consisting of if-else spaghetti.
Wouldn't surprise me if they don't even "drive" in any sense while off-screen, and instead just increment some abstract position relative to the track length. But I don't know this for a fact.
Then again, maybe this is done by simply cheating with an RNG.
Tell me more!
It's a common enough term, though, there's even a page for the trope here:
It really shows how simple many control tasks actually are!
Coincidentally, one of the neural network components in AlphaGo did pretty much the same thing, i.e. it attempted to guess what a human player would usually play in a given situation, purely based on the board image and nothing else.
Just something to think about as a developer. I would imagine that on a local machine, using HTTP as the protocol might add latency.
Step 2 is to make AI figure out how to return to the ideal path through the course when other people are stealing your items or shelling you.
step 3 is to make the AI figure out how to counter attack to slow down the opponents.
Step 4 is OH GOD WE TAUGHT THE AI HOW TO ATTACK RUN FOR YOUR LIVES.
But yes - point being, self-driving is a feat of its own; competing with opponents is a whole different ballgame with its own set of challenges.
It seems like a lot of people doing reinforcement learning on video games get bogged down training on raw pixels only... it would take a tremendous amount of data for the driver to learn when and where to use certain power-ups. If you encoded this as a variable, though - wow, it could be really cool.
I believe this is fundamentally how we humans learn with so few examples. Other humans "encode features for our brain to track" by telling us how it should be done and what information to prioritise.
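One simple way to "encode this as a variable" is to concatenate a few hand-picked game-state values onto the CNN's image feature vector before the fully connected layers, so the network doesn't have to rediscover them from pixels. A numpy sketch (all the feature sizes and meanings are made up):

```python
import numpy as np

def combine_features(conv_features, game_state):
    """Concatenate learned image features with hand-encoded game variables
    (e.g. a one-hot of the held power-up) before the fully connected layers."""
    return np.concatenate([conv_features, game_state])

conv_features = np.zeros(256)                 # hypothetical CNN feature vector
game_state = np.array([0.0, 1.0, 0.0, 0.0])  # e.g. one-hot: currently holding a shell
combined = combine_features(conv_features, game_state)
```

The fully connected layers then see both what the screen looks like and what the human deemed important - which is roughly the "telling us what to prioritise" idea above.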
I'm working (albeit very slowly, as a beginner) on a similar project with Geometry Dash and Python. You're a great inspiration!