Show HN: Pattern Recognition Engine
4 points by felipemnoa on April 7, 2017 | 6 comments
This is a video of a pattern recognition engine I've been working on.

https://www.youtube.com/watch?v=WHNdIuBJHTo&feature=youtu.be

It is able to learn new patterns even when there is a lot of noise. Currently, because of limited computing power, it can only learn very simple patterns. With enough computing power I think I could use video to train it.

About the architecture: the pattern recognition engine is a hierarchy of nodes. Each node is responsible for a small field of view, similar to a neuron. Because the input image can be broken up into smaller pieces, it is easy to process those pieces in parallel across multiple computers, which should make it quite fast. Unfortunately I do not have the resources to build a cluster to test the speed, so right now this part is just a theory.
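
To give a rough idea of what I mean by breaking the image up per node, here is an illustrative Python sketch (not my actual engine; the per-node work is just a placeholder):

    from concurrent.futures import ProcessPoolExecutor
    import numpy as np

    TILE = 8  # illustrative field-of-view size for one node

    def split_into_fields(image, tile=TILE):
        """Break the input image into small patches, one per node."""
        h, w = image.shape
        return [image[r:r + tile, c:c + tile]
                for r in range(0, h, tile)
                for c in range(0, w, tile)]

    def node_work(patch):
        """Placeholder for whatever a single node does with its patch.
        Because each node only ever sees its own patch, the patches can
        be processed independently and therefore in parallel."""
        return patch.mean()

    if __name__ == "__main__":
        image = np.random.rand(64, 64)
        patches = split_into_fields(image)
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(node_work, patches))
        print(len(results), "patches processed independently")

The same split could be distributed across machines instead of local processes, which is the part I have not been able to test yet.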

I was going to submit this as part of my application to Y Combinator, but unfortunately I did not complete this prototype in time. I may apply for the next batch, or even do a Kickstarter if I can generate enough interest.

Please let me know what you guys think.




This hierarchy of nodes looks to me like a CNN that is trained on noisy image data to try to separate noise from signal. In what way are the nodes different from neurons?

It would be beneficial to highlight the differences from a NN, as I fail to recognize them.

Using neural-net software infrastructure like TensorFlow and the like, the parallelization of that task should be trivial. E.g. you could throw some money at a few AWS instances and see where it takes you.
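
For example, a minimal Keras sketch of what I mean: a tiny CNN trained on MNIST digits with Gaussian noise added (the dataset and noise level are just placeholders for illustration):

    import numpy as np
    from tensorflow import keras

    # Standard digit dataset, with noise added to simulate "noised image data".
    (x_train, y_train), _ = keras.datasets.mnist.load_data()
    x_train = x_train[..., None] / 255.0
    x_noisy = np.clip(x_train + 0.3 * np.random.randn(*x_train.shape), 0.0, 1.0)

    model = keras.Sequential([
        keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_noisy, y_train, epochs=1, batch_size=128)

Scaling that up is then mostly a matter of wrapping the model construction in a tf.distribute strategy scope, or just renting a bigger GPU instance, without changing the model code much.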


I suppose CNN means Cortical Neuron Network?

>> It would be beneficial to highlight the differences from a NN, as I fail to recognize them.

You make a good point. I initially tried to use traditional machine learning techniques for pattern recognition, but it really bothered me that the whole thing seemed like a black box, so I decided to write a pattern recognition engine from scratch without using any of the current machine learning techniques.

There is no linear algebra in my algorithm; I have not yet discovered why it would be needed. My algorithm is not a typical NN (Neural Network) because I'm not assigning weights to each node as is traditionally done. The only similarity to an NN is the hierarchical shape. Instead, each node is itself a pattern recognition engine. In fact, that is the main thing about my algorithm: I have a pattern recognition engine (PRE from now on), and this PRE is used recursively within each node.

A single PRE learns and recognizes patterns as a hierarchy of patterns within its field of view. If you watch the video, you will notice that it initially breaks the 'a' into multiple colors even though it does not yet know what an 'a' is, nor that an 'a' is there. That is because at the beginning it is just learning the smaller pieces of the 'a'. As it continues to learn, it realizes that all of those pieces form a bigger piece with the shape we humans recognize as an 'a'.

I decided to design the whole thing as a hierarchy of recursive PREs because during testing I realized that this architecture handled noise better and, as a huge bonus, made the whole thing much more parallelizable.
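
Since people asked how the nodes differ from neurons, here is a purely structural sketch of what I mean by a recursive hierarchy (hypothetical Python, not my real code; the leaf behavior is a stand-in):

    import numpy as np

    class PatternNode:
        """Illustrative only: one node in the hierarchy owns a small field
        of view and delegates sub-regions to child nodes recursively."""

        def __init__(self, size, min_size=4):
            self.size = size
            self.children = []
            if size > min_size:
                half = size // 2
                # Four children, each covering one quadrant of this node's view.
                self.children = [PatternNode(half, min_size) for _ in range(4)]

        def recognize(self, patch):
            if not self.children:
                # Leaf: stand-in for whatever a single PRE learns from raw pixels.
                return [patch.mean()]
            half = self.size // 2
            quadrants = [patch[:half, :half], patch[:half, half:],
                         patch[half:, :half], patch[half:, half:]]
            # Each child recognizes its quadrant; the parent combines the parts
            # into a larger pattern ("pieces forming a bigger piece").
            features = []
            for child, quad in zip(self.children, quadrants):
                features.extend(child.recognize(quad))
            return features

    root = PatternNode(size=32)
    print(len(root.recognize(np.random.rand(32, 32))))

The real engine obviously does much more at the leaves and when combining children, but this is the shape of it.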

By the way, this is just the first part. I have another program in the works that I call the Temporal Pattern Recognition Engine, which recognizes a pattern over a span of time as a single pattern; e.g. a person jumping up and down would be a temporal pattern. It uses the PRE for the initial recognition and then does its own work for the temporal part. It is still in the early stages though, and I will need massive computing power for real testing.


Cr0sh took the time to give you good advice. Follow it.


> I suppose CNN means Cortical Neuron Network?

CNN == Convolutional Neural Network

> I initially tried to use traditional machine learning techniques for pattern recognition, but it really bothered me that the whole thing seemed like a black box, so I decided to write a pattern recognition engine from scratch without using any of the current machine learning techniques.

That...is not a good approach.

Don't get me wrong - learning this stuff is hard, and implementing it (on your own - that is, a completely from-scratch not-using-any-libraries neural network) is even more non-trivial. You need to know and understand linear algebra for this; knowing a bit of calculus (derivatives mainly) can also help, but isn't absolutely necessary.

Trying to build it yourself, while admirable, will likely only lead you down paths already traveled and found wanting (if you are lucky). Once you understand the basics of neural networks and other machine-learning tools (gradient descent, support vector machines, etc) - and have implemented them yourself - you'll have a foundation on which to build with tools like TensorFlow, Keras, and others, without them feeling like such "black boxes".
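
To be concrete about "implemented them yourself": something as small as fitting a line by gradient descent, written from scratch, already shows you most of what the libraries are hiding. A toy sketch (the data and learning rate are arbitrary):

    import numpy as np

    # Fit y = w*x + b to noisy data by plain gradient descent on the MSE loss.
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 100)
    y = 3.0 * x + 0.5 + 0.05 * rng.standard_normal(100)

    w, b, lr = 0.0, 0.0, 0.1
    for _ in range(500):
        pred = w * x + b
        grad_w = 2 * np.mean((pred - y) * x)  # d(MSE)/dw
        grad_b = 2 * np.mean(pred - y)        # d(MSE)/db
        w -= lr * grad_w
        b -= lr * grad_b

    print(w, b)  # should end up near 3.0 and 0.5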

Remember: abstraction is a good thing. If you are doing any kind of programming that isn't hand-assembly or plugboards, you are using an abstraction over the machine. One thing I have learned from listening to people like Andrew Ng and Sebastian Thrun is that even they rely on and use those black-box abstractions, and don't have a "not-invented-here" attitude. There's nothing wrong with wanting to look under the hood for understanding, but don't let hubris convince you that you can do better without understanding what has come before.

You can take this advice or leave it - it's up to you, of course; but if you want a better foundation for how all of this works, my suggestion would be to first gain an understanding of linear algebra (just the basics - vectors, matrices, basic manipulation and calculation, etc.) - Khan Academy or other resources can help there.

Then go to Udacity, Coursera, and/or other MOOC providers, find a class or two on machine learning, and take them. Most take about 6-8 weeks (if studying seriously, and depending on your existing workload), but you can usually go at your own pace, so shorter or longer terms are possible. I'm partial to Coursera's and Udacity's courses, mainly because in 2011 I took Andrew Ng's "ML Class" MOOC and Norvig/Thrun's "AI Class", out of which Coursera and Udacity (respectively) were founded.

Later, in 2012, I took Udacity's CS373 course (it was titled differently than today's offering, but I highly recommend it regardless). Today, I am enrolled in and progressing through the Udacity Self-Driving Car Engineer Nanodegree (so I am a bit biased, I suppose).

The tough thing about learning this stuff on the internet, without going through a well-known MOOC or other vetted source, is knowing whether what you're learning is worth learning. There is a ton of information out there purporting to teach you about neural networks or other "AI" tools that is actually teaching out-of-date technologies and techniques - but without a grounding in the long history of machine learning (and AI) as a field, you wouldn't know this. It can be very confusing. There have been some good postings here on HN with details on good resources, so you might want to check those out. There's nothing wrong with learning historical methods (in fact, I encourage it - I believe one can't really understand a subject without understanding the paths taken in the past, whether they led to failure or to success for their period). But you don't want to learn something that is actually sub-optimal for the task.

Finally, a couple of other points: do you know what MNIST or ImageNet are? If not, you need to learn about datasets and where to get them. Also, for a real CNN, look into the history of character recognition using the MNIST dataset - eventually you will find a guy named "Yann LeCun", who is one of the original people who led us down this whole CNN and deep learning path (along with plenty of other names).
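
If you have Keras installed, getting MNIST is one call, which is a quick way to see what a benchmark dataset actually looks like:

    from tensorflow import keras

    # MNIST: 70,000 28x28 grayscale images of handwritten digits (0-9),
    # the classic benchmark for character recognition.
    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
    print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)
    print(x_test.shape, y_test.shape)    # (10000, 28, 28) (10000,)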

There is so much out there (much more than I can summarize here) - you have the interest, I can see that. Take the time to properly learn this stuff, and you'll be rewarded with some interesting results.


Thanks for the advice. It is very informative.


Unfortunately YouTube has removed my video, twice. The first time I thought it was because of the music I used; the second time I have no idea why. If I were paranoid, I'd think I'm on the right track toward artificial intelligence and some existing AI (like the one in Person of Interest) doesn't want me to publish this. :)

Edit: Never mind, it seems to be up now.





