Image Processing for Edge Detection

INTRODUCTION

Edge detection is a fundamental tool used in most image processing applications to obtain information from frames as a precursor to feature extraction and object segmentation. The process detects the outlines of an object and the boundaries between objects and the background in the image. An edge-detection filter can also be used to improve the appearance of blurred or anti-aliased image streams. The basic edge-detection operator is a matrix-area gradient operation that determines the level of variance between neighbouring pixels. The operator is evaluated by forming a matrix centered on the pixel under consideration; if the resulting value is above a given threshold, the center pixel is classified as an edge. Examples of gradient-based edge detectors are the Roberts, Prewitt, and Sobel operators. All gradient-based algorithms use kernel operators that calculate the strength of the slope in two mutually orthogonal directions, commonly vertical and horizontal. The contributions of the two slope components are then combined to give the total edge strength. The Prewitt operator measures two such components.

The vertical edge component is calculated with the kernel Kx and the horizontal edge component with the kernel Ky (the Prewitt vertical and horizontal operators); |Kx| + |Ky| gives an indication of the intensity of the gradient at the current pixel. Depending on the noise characteristics of the image, edge-detection results can vary. Gradient-based algorithms such as the Prewitt filter have the major drawback of being very sensitive to noise, and the size of the kernel and its coefficients are fixed and cannot be adapted to a given image.
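
As a minimal sketch of this idea (assuming NumPy and SciPy are available; the function name prewitt_edges and the threshold values are illustrative choices, not from the original text), the Prewitt kernels and the |Kx| + |Ky| edge strength can be applied like this:

```python
import numpy as np
from scipy.ndimage import convolve

# Prewitt kernels: Kx responds to vertical edges (horizontal intensity change),
# Ky responds to horizontal edges (vertical intensity change).
KX = np.array([[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]], dtype=float)
KY = np.array([[-1, -1, -1],
               [ 0,  0,  0],
               [ 1,  1,  1]], dtype=float)

def prewitt_edges(image, threshold=50.0):
    """Return a binary edge map: |Kx * I| + |Ky * I| compared against a threshold."""
    gx = convolve(image.astype(float), KX)   # vertical-edge component
    gy = convolve(image.astype(float), KY)   # horizontal-edge component
    magnitude = np.abs(gx) + np.abs(gy)      # combined edge strength
    return magnitude > threshold             # pixel is an edge if above threshold

# Example usage on a synthetic image containing a single vertical step edge.
img = np.zeros((8, 8))
img[:, 4:] = 255
edges = prewitt_edges(img, threshold=100.0)
```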

EXISTING SYSTEM

The approaches used to remove the noise in the existing system are:

  • If several copies of an image have been obtained from the source (some static scene), it may be possible to sum the values of each pixel across the copies and compute an average, as in the sketch following this list. This is not possible, however, if the image is from a moving source or there are other time or size restrictions.
  • If such averaging is not possible, or if it is insufficient, some form of convolution and edge-detection filtering may be required.
    • Enhancement filtering, which attempts to improve the (subjectively measured) quality of an image for human or machine interpretability. Enhancement filters are generally heuristic and problem-oriented.
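
As a minimal sketch of the frame-averaging idea above (assuming the frames are available as a list of equally sized NumPy arrays; the function name average_frames and the synthetic data are illustrative, not from the original text):

```python
import numpy as np

def average_frames(frames):
    """Average several registered copies of the same static scene to suppress noise.

    Assumes every frame has the same shape; averaging N frames reduces the
    standard deviation of zero-mean noise by roughly a factor of sqrt(N).
    """
    stack = np.stack([f.astype(float) for f in frames])  # shape: (N, H, W)
    return stack.mean(axis=0)

# Example usage: ten noisy observations of the same 64x64 static scene.
rng = np.random.default_rng(0)
clean = np.full((64, 64), 128.0)
noisy_frames = [clean + rng.normal(0, 20, clean.shape) for _ in range(10)]
denoised = average_frames(noisy_frames)
```
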
PROPOSED SYSTEM

There are numerous types of convolution (matrix) filters: smoothing, high-pass, edge detection, and so on. The main issue with matrix convolution is that it requires a very large number of computations. For example, if an 800×600 image is convolved with a 9×9 PSF (point spread function), nearly 39 million multiplications and additions are already needed (800×600×9×9 = 38,880,000). Several strategies are useful to reduce the execution time when computing matrix convolutions and edge detection:
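
To make that baseline cost concrete before considering such strategies, here is a small sketch of the operation count for direct 2-D convolution (the helper name convolution_op_count is illustrative, not from the original text):

```python
def convolution_op_count(image_w, image_h, kernel_w, kernel_h):
    """Multiply-add operations for direct 2-D convolution:
    one kernel-sized dot product per output pixel."""
    return image_w * image_h * kernel_w * kernel_h

# The example from the text: an 800x600 image convolved with a 9x9 PSF.
print(convolution_op_count(800, 600, 9, 9))  # 38,880,000 multiply-adds
```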
