It's probably worth mentioning that there are a lot of ways to implement convolution with a kernel, and the kernel can be of any size, not just 3×3. The explanation here shows how to implement the output-side algorithm nonrecursively; http://www.dspguide.com/ch6/3.htm gives this for the one-dimensional case. But you can implement it on the input side instead (iterating over the input samples instead of the output samples), there are kernels that have a much more efficient recursive implementation (including zero-phase kernels using time-reversal), you can implement very large kernels if you can afford to do the convolution in the frequency domain, and there's a whole class of kernels that have efficient sparse filter cascade representations, including large Gaussians.
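A minimal sketch of the output-side form in 2D (naive nested loops, zero padding at the borders; the function name is illustrative, not from the article):

```python
import numpy as np

def convolve2d(image, kernel):
    # Output-side algorithm: for each output pixel, sum the products of
    # the (flipped) kernel with the corresponding input neighborhood.
    # Zero padding keeps the output the same size as the input.
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out
```

The input-side variant would instead scatter each input sample, weighted by the kernel, into the output; both give the same result.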
(To say nothing of convolutions over other rings.)
Curious: What would a negative pixel look like? What use would that have?
Applying this to the "outline" kernel, it seems like white bordered by black would show up just as white as black bordered by white, with homogeneous regions still showing up black.
The identity kernel is [0 0 0; 0 1 0; 0 0 0]. For every input pixel, the output is the original pixel value; it doesn't change the pixel value based on what is around it.
A simple blur kernel would be 1/9 * [1 1 1; 1 1 1; 1 1 1]. The output for each input pixel is the average of the origin's pixel value and all of its neighboring values, with an even weighting. A less dramatic blur weights the origin pixel more heavily than the neighbors, as a Gaussian blur does. This results in the output pixel being more similar to the origin pixel than to its neighbors.
Edge detection like [-1 -1 -1; -1 8 -1; -1 -1 -1] can be understood by multiplying the origin pixel value by 8 and subtracting off all of its 8 neighboring values. If the values are all fairly similar, let's say all gray, your output will be black: 8A - 8A = 0. So it "punishes" pixels that are similar to their neighbors. When a pixel is different from some or all of its neighbors, you will be left with some nonzero value at the output, which detects a change from its neighbors: an edge. Horizontal edge detection: [1 0 -1; 2 0 -2; 1 0 -1]. It ignores the pixels above, below, and center, but accentuates the differences between what is on the left and what is on the right.
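To see the 8A - 8A = 0 behaviour concretely, a small sketch: applying that edge kernel at the center of a uniform gray patch gives exactly zero (the kernel is symmetric, so flipping it doesn't matter):

```python
import numpy as np

edge = np.array([[-1, -1, -1],
                 [-1,  8, -1],
                 [-1, -1, -1]])

gray = np.full((3, 3), 100)  # a uniform gray neighborhood, A = 100

# Response at the origin pixel: 8*100 minus the 8 neighbours (100 each)
center = np.sum(gray * edge)
print(center)  # 0: similar neighbours are "punished" down to black
```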
For blur, you can do one of two things:
- generate a matrix in which every element is equal, normalized so they sum to 1 (i.e. averaging)
- generate a matrix such that it represents the Gaussian distribution (you can use a 2D Gauss function)
For edge detection, you essentially have a "derivative", i.e. a rate of change; the more abrupt the change, the brighter the resulting pixel, hence why edges are highlighted. A good convolution kernel for edge detection would be the Laplacian.
For sharpness, it's pretty much the Laplacian kernel + identity kernel.
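A sketch of that sum, using the common 4-neighbour Laplacian (the post's exact kernel may differ; subtracting the Laplacian adds the accentuated differences back onto the original):

```python
import numpy as np

identity = np.array([[0, 0, 0],
                     [0, 1, 0],
                     [0, 0, 0]])

laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]])

# identity minus Laplacian = the familiar 3x3 sharpen kernel
sharpen = identity - laplacian
print(sharpen)
# [[ 0 -1  0]
#  [-1  5 -1]
#  [ 0 -1  0]]
```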
...you might want to start by skimming chapters 23 and 24.
The values of the Gaussian kernel matrix are determined by taking a discrete sampling of the Gaussian function. You get to choose sigma (the Gaussian's standard deviation) and the kernel size (the spatial neighborhood of the kernel, i.e. how much of the surroundings the kernel will examine).
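A sketch of that sampling, assuming an odd kernel size and normalizing so the weights sum to 1 (the function name is illustrative):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # Sample the 2D Gaussian on an odd-sized grid centred at zero,
    # then normalise so the weights sum to 1 (brightness-preserving).
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return kernel / kernel.sum()
```

Larger sigma spreads weight toward the neighbours (more blur); smaller sigma concentrates it at the origin pixel, approaching the identity kernel.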
Another example is the Sobel operator, used to extract edges from images:
The kernel matrix is the result of composing a Gaussian smoothing with a spatial-differencing operation. Thus, the Sobel operator estimates edges from smoothed images.
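That composition is easy to check: the Sobel kernel is the outer product of a 1D smoothing tap and a 1D central-difference tap (a sketch):

```python
import numpy as np

smooth = np.array([1, 2, 1])   # binomial smoothing, a crude Gaussian
diff = np.array([1, 0, -1])    # central difference (the "derivative")

sobel_x = np.outer(smooth, diff)  # smooth vertically, differentiate horizontally
print(sobel_x)
# [[ 1  0 -1]
#  [ 2  0 -2]
#  [ 1  0 -1]]
```

This separability also means a Sobel pass can be applied as two cheap 1D convolutions instead of one 2D convolution.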
As for the sharpen kernel described in the post -- an intuitive explanation is that you want to accentuate differences in pixel intensities.
Look at FIR filters; they are essentially 1D image kernels.
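For intuition, a direct-form FIR filter is the same sum-of-weighted-neighbours operation in 1D (a sketch with zero initial conditions; a 3-tap moving average is the 1D analogue of the box blur above):

```python
import numpy as np

def fir(signal, taps):
    # Direct-form FIR: each output sample is a weighted sum of the
    # current and previous input samples (zero initial conditions).
    out = np.zeros(len(signal))
    for n in range(len(signal)):
        for k, h in enumerate(taps):
            if n - k >= 0:
                out[n] += h * signal[n - k]
    return out

y = fir(np.ones(6), [1/3, 1/3, 1/3])  # 3-tap moving average
```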
That is to say, there is more to cellular automata than the GOL, and one bit per cell.
A Survey on Two Dimensional Cellular Automata and Its Application in Image Processing
Parallel algorithms for solving image processing tasks are a highly demanded approach in the modern world. Cellular Automata (CA) are among the most common and simple models of parallel computation, so CA have been successfully used in the domain of image processing for the last couple of years. This paper provides a survey of the available literature on methodologies employed by different researchers to utilize cellular automata for solving some important problems of image processing. The survey includes important image processing tasks such as rotation, zooming, translation, segmentation, edge detection, compression and noise reduction of images. Finally, the experimental results of some methodologies are presented.
There's an active community searching-for/categorizing/discussing patterns in Conway's game-of-life (GOL) (and other cellular automata). Examples:
The common element you are picking up on, I think, is [comonadic computation](http://blog.sigfpe.com/2006/12/evaluating-cellular-automata-...).
All you need for a CA is a space in which to map your cells (in an arbitrary number of dimensions, with an arbitrary mapping), some state for each cell (maybe a real, this is arbitrary) and some transition function to compute the next state of each cell from its current state and that of its "neighborhood" (which again is arbitrarily defined.)
So nice and clear.
I've been working with images for 12 years and I was never sure exactly what 'sharpen' actually did ...