>"Perceptrons are the simplest form of machine learning and lend themselves to somewhat easier hardware implementations compared to some of the other machine learning algorithms."
Can someone explain what it is about perceptrons that makes them easier to implement in hardware?
All you need for a perceptron is to sum all of the inputs after they’ve been multiplied by their weights. This would probably be done in parallel with a fused multiply-add circuit, which is fairly simple, plus some place to store the weights. When they change the perceptron, all they need to do is record the current weights and load in the new values.
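A minimal sketch of what that reduces to in software (the weights and inputs here are made up for illustration): the whole forward pass is one multiply-accumulate loop followed by a threshold, and every multiply-add is independent, which is why a hardware version can run them in parallel.

```python
def perceptron(inputs, weights, bias=0.0):
    # Weighted sum: one multiply-add per input, all independent of each
    # other, so a hardware implementation can compute them in parallel.
    acc = bias
    for x, w in zip(inputs, weights):
        acc += x * w
    # Threshold nonlinearity: fire if the weighted sum exceeds zero.
    return 1 if acc > 0 else 0

print(perceptron([1.0, 0.5, -0.25], [0.4, -0.2, 0.8]))  # -> 1
```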
Wait, how is k-NN harder to implement than a perceptron... even in hardware? What about linear regression? What about decision trees? What about naive Bayes?
A single-layer perceptron IS linear regression plus some nonlinearity on the output. I think in reality they could have said "logistic regression" but chose "perceptron" because it sounds better.
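To make the comparison concrete (illustrative values, not from the comment): both models compute the same linear score w·x + b and differ only in the output nonlinearity, a hard step for the perceptron versus a smooth sigmoid for logistic regression.

```python
import math

def score(x, w, b):
    # Shared linear part: w·x + b
    return sum(xi * wi for xi, wi in zip(x, w)) + b

def perceptron_out(x, w, b):
    return 1 if score(x, w, b) > 0 else 0        # hard step

def logistic_out(x, w, b):
    return 1 / (1 + math.exp(-score(x, w, b)))   # smooth sigmoid

x, w, b = [1.0, 2.0], [0.5, -0.1], 0.2
print(perceptron_out(x, w, b), logistic_out(x, w, b))  # -> 1 0.622...
```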
Performing a nearest-neighbor search is much more expensive than a simple multiply-add. I don’t know about the others; my guess is that a perceptron is much easier to implement in hardware/low-level operations.
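A rough sketch of why (with made-up data, k=1): a perceptron does a single dot product per query, while nearest-neighbor search must keep the entire training set around, compute a distance to every stored point, and track the minimum, i.e. O(n·d) work per query.

```python
def nearest_neighbor(query, train_points, train_labels):
    best_label, best_dist = None, float("inf")
    for point, label in zip(train_points, train_labels):
        # Squared Euclidean distance: d multiply-adds per stored point,
        # repeated for all n points, plus a compare -- O(n*d) per query.
        dist = sum((q - v) ** 2 for q, v in zip(query, point))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

points = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
labels = [0, 1, 1]
print(nearest_neighbor([0.9, 1.1], points, labels))  # -> 1
```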
>"Perceptrons are the simplest form of machine learning and lend themselves to somewhat easier hardware implementations compared to some of the other machine learning algorithms."
Can someone explain what is it about perceptrons that make them easier to implement in hardware?