Anyone here in the OMSA/OMSCS program?
https://poloclub.github.io/cse6242-2019fall-online/
EDIT: I'm wrong. x < 0 for some of the pixels. Specifically for the more red-ish channels.
Thanks for sharing this, great content. Made me think about old projects I have lying around.
Here are examples of equivalent notations that may make it more familiar:
y = 2⋅x + b or
y = a⋅x + b
- a (or the 2) would be the weight, often named 'w'.
- b would be the bias, often named 'b'.
- y is the output.
In neural network documentation this is often written as (often case sensitive, which may be confusing):
output = w⋅x + b (output = weight * input + bias).
I hope that explains it.
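To make the notation concrete, here is a minimal sketch in Python. The values of w and b are arbitrary, just for illustration:

```python
# output = weight * input + bias, in the notation above.
# w and b are illustrative values, not learned from anything.
def linear(x, w=2.0, b=0.5):
    return w * x + b

print(linear(3.0))  # 2.0 * 3.0 + 0.5 = 6.5
```

In a real network, w and b would be learned from data rather than chosen by hand.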
Parameter: an aspect of the model, a dial which is fixed by data (e.g., a)
Kernel (as used here): a subset of such parameters
Algorithm: procedure which accepts data and produces a model
Hyperparameter: an aspect of the algorithm, a dial which changes model production
Convolution: A convolution of image A and filter B describes to what degree A is "like" B. Here "filter B" is a kernel, i.e., a parameter set learned by the network.
The goal of a CNN is to produce a model whose parameters are image filters that describe the degree to which an image expresses various shapes. By learning the filters from an image set, the network is specialized to distinguish images in that set.
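A rough sketch of "convolution as similarity to a filter": the filter below is a hand-written vertical-edge detector, chosen by me for illustration; in a CNN its values would be learned parameters. (Like most deep-learning libraries, this actually computes cross-correlation, which is what "convolution" usually means in this context.)

```python
import numpy as np

# A tiny 4x4 image: black on the left, white on the right (a vertical edge).
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# Hand-written 3x3 vertical-edge filter (illustrative, not learned).
edge_filter = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
], dtype=float)

# "Valid" convolution: slide the filter over the image and sum
# the element-wise products at each position.
out = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        out[i, j] = np.sum(image[i:i+3, j:j+3] * edge_filter)

print(out)  # every position responds strongly, since the edge is everywhere in view
```

A large value in the output means that patch of the image is "like" the filter, i.e., contains a vertical edge.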
This seems a bit cryptic. The way I understand hyperparameters, they define how a model learns, e.g. you can set an alpha in gradient descent. Now when you compare them to "ordinary" parameters, hyperparameters do not define the relationship between data and output.
And you misunderstood my suggestion for the article as a request for your help. But thanks. I don’t doubt what you wrote is accurate and helpful in the same way that saying “a transom is a part of a building” is accurate and helpful.
Yes, both data and hyper-parameters are inputs to the algorithm.
I wasn't trying to offer anything more than a sketch of the terms for someone already semi-informed.
To "define" terms in a way that a person without any experience of the area could understand would require quite a long article.
My goal wasn't to answer you specifically but to take your observation as establishing a plausible interest in others for something like my comment.
Hyperparameters in ML are the tuning parameters on the shape and structure of the model, such as the number of features in linear regression above, the number of layers in a NN, or the number of neurons in each layer. Basically any tuning parameter besides theta can be considered a hyperparameter. The difference is that the theta parameters are learned while the hyperparameters are decided by humans. But you can also run experiments on different tuning parameters and compare the outcomes, so in a sense hyperparameters can be learned too.
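To make the parameter/hyperparameter split concrete, here's a toy gradient-descent sketch (function name and values are mine): w is the learned parameter (theta), while alpha and the step count are hyperparameters we pick by hand.

```python
# Toy model y = w * x, fit by gradient descent on mean squared error.
# w is learned from data; alpha and steps are hyperparameters.
def fit_w(xs, ys, alpha=0.01, steps=1000):
    w = 0.0
    for _ in range(steps):
        # gradient of MSE with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= alpha * grad
    return w

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # generated with the true w = 2
print(round(fit_w(xs, ys), 3))  # converges to about 2.0
```

Changing alpha or steps changes how (and whether) the fit converges, but the data alone decides what value w ends up at, which is exactly the distinction above.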
Convolution, well, the article is trying to explain it. It's like rolling up a portion of an image using a filter, e.g. making an image blurry by pixelizing it. The main purpose is to find high-level features of the image, e.g. applying a filter to find the edges of an object in the image.
Kernel is a small NxN matrix (3x3, 4x4, 16x16, etc.) used as a filter to convert the pixels in an image into high-level features. E.g. the mean-color-kernel takes 4x4 pixels and computes the average of their colors. Now apply the mean-color-kernel over all the 4x4 blocks of an image and you get one convolution.
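A sketch of that mean-color-kernel idea on a grayscale image (the function name and the 8x8 test image are mine, for illustration): average each non-overlapping 4x4 block, shrinking the image by a factor of 4 in each dimension.

```python
import numpy as np

# Apply a "mean-color" kernel over non-overlapping 4x4 blocks:
# each output pixel is the average of one 4x4 block of the input.
def mean_kernel_4x4(img):
    h, w = img.shape
    out = np.zeros((h // 4, w // 4))
    for i in range(0, h, 4):
        for j in range(0, w, 4):
            out[i // 4, j // 4] = img[i:i+4, j:j+4].mean()
    return out

# Illustrative 8x8 grayscale "image" with values 0..63.
image = np.arange(64, dtype=float).reshape(8, 8)
print(mean_kernel_4x4(image))  # a 2x2 result, one mean per block
```

This is the pixelating/blurring effect mentioned above: each 4x4 patch collapses to its average color.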