A digital image is nothing more than data -- numbers indicating variations of red, green, and blue at particular locations on a grid of pixels. Most of the time, we view these pixels as miniature rectangles sandwiched together on a computer screen. With a little creative thinking and some lower-level manipulation of pixels with code, however, we can display that information in myriad ways. This tutorial is dedicated to breaking out of simple shape drawing in Processing and using images (and their pixels) as the building blocks of Processing graphics.
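To make the "image as data" idea concrete, here is a minimal sketch (in Python rather than Processing, with made-up pixel values chosen purely for illustration) of an image as a grid of red, green, and blue numbers:

```python
# A tiny "image": a 2x2 grid of (red, green, blue) tuples, one per pixel.
# Each channel ranges from 0 to 255; the values here are arbitrary examples.
image = [
    [(255, 0, 0), (0, 255, 0)],       # row 0: a red pixel, a green pixel
    [(0, 0, 255), (255, 255, 255)],   # row 1: a blue pixel, a white pixel
]

def pixel_at(img, x, y):
    """Return the (r, g, b) tuple at column x, row y."""
    return img[y][x]

print(pixel_at(image, 0, 0))  # the red pixel: (255, 0, 0)
```

In Processing the same idea appears as the one-dimensional `pixels[]` array, indexed by `x + y * width`, but the underlying picture is the same: an image is just a grid of channel values.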
One very important area of application is image processing, in which algorithms are used to detect and isolate desired portions or shapes (features) of a digitized image or video stream. It is particularly important in the area of optical character recognition.
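One of the simplest ways to isolate a desired portion of an image is thresholding. The sketch below (an assumed illustration, not code from this tutorial) marks the bright pixels of a small grayscale grid as the "feature" of interest:

```python
# Minimal sketch: isolate bright "feature" pixels by thresholding.
# The grayscale values below are invented purely for illustration.
gray = [
    [10,  20, 200],
    [15, 220, 230],
    [12,  18,  25],
]

def threshold(img, cutoff):
    """Return a binary mask: 1 where brightness exceeds cutoff, else 0."""
    return [[1 if v > cutoff else 0 for v in row] for row in img]

print(threshold(gray, 128))  # [[0, 0, 1], [0, 1, 1], [0, 0, 0]]
```

Real feature detection is far more sophisticated, but the shape of the computation is the same: a per-pixel rule that separates the pixels you care about from the ones you don't.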
Edge Detection
Edge detection is the name for a set of mathematical methods that aim to identify points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. These points are typically organized into a set of curved line segments termed edges. The same problem of finding discontinuities in 1D signals is known as step detection, and the problem of finding signal discontinuities over time is known as change detection.
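The 1D case makes the core idea easy to see. A minimal sketch (with an invented signal) of step detection: an edge is simply a place where adjacent samples differ sharply.

```python
# Minimal sketch: find brightness discontinuities in a 1D signal by
# checking where adjacent samples jump sharply (step detection).
signal = [10, 11, 10, 12, 90, 91, 92, 90]  # a step between index 3 and 4

def step_edges(samples, min_jump):
    """Return indices i where |samples[i+1] - samples[i]| >= min_jump."""
    return [i for i in range(len(samples) - 1)
            if abs(samples[i + 1] - samples[i]) >= min_jump]

print(step_edges(signal, 30))  # [3]: the jump from 12 to 90
```

Edge detection in a 2D image generalizes this: instead of one difference per sample, you measure how brightness changes in both the horizontal and vertical directions at each pixel.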
Edge detection is a fundamental tool in image processing, machine vision and computer vision,
particularly in the areas of feature detection and feature extraction.
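A classic 2D approach is the Sobel operator: two small kernels estimate the horizontal and vertical rate of brightness change at a pixel, and a large gradient magnitude marks an edge. The sketch below (with an invented test image) shows the computation at a single interior pixel:

```python
# Minimal sketch: Sobel gradient magnitude at one interior pixel of a
# small grayscale image. The image values are invented for illustration.
import math

GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-change kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-change kernel

def sobel_magnitude(img, x, y):
    """Gradient magnitude at interior pixel (x, y)."""
    gx = gy = 0
    for j in range(3):
        for i in range(3):
            v = img[y + j - 1][x + i - 1]
            gx += GX[j][i] * v
            gy += GY[j][i] * v
    return math.hypot(gx, gy)

# A dark-to-bright vertical boundary: an edge runs down the middle.
img = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]
print(sobel_magnitude(img, 1, 1))  # 1020.0 -- this pixel sits on the edge
```

A full edge detector would sweep this over every interior pixel and threshold the result; in Processing, the same loop would read brightness values out of `pixels[]` and write the magnitudes back into a new image.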