DonNTU Master's student Daria Metelytsia

Faculty of Computer Science and Technology

Department of Software of Intelligent Systems

Speciality: Software Systems

Development and analysis of a contour extraction algorithm for grayscale images with low-contrast object boundaries

Scientific adviser: Ph.D. in Engineering, Associate Professor Elena Volchenko

ABSTRACT





Introduction


Automatic processing of visual information is one of the most important areas in the field of artificial intelligence. Interest in the problems of computer image processing is driven both by the extending capabilities of computer systems and by the development of new technologies for processing, analyzing and identifying different types of images. At the same time, to create effective technologies, the methods and algorithms being developed must satisfy a number of requirements for speed and accuracy. As a rule, each algorithm, having its own characteristics, specializes in a particular type of image. Therefore, vision systems need a combination of several methods that solve the same problem in different ways, providing the required speed and reliability [1].

One of the most difficult problems of visual information processing is edge detection, since contours are the most informative structural elements of objects. Nevertheless, the contours extracted from low-contrast, blurred images by the known methods have drawbacks such as gaps, missing contour lines, or false contours that do not correspond to real boundaries.

Most acquired images have low contrast and an uneven background, and may also contain noise. Therefore, to analyze such information it is necessary to ensure high visual quality and effective preprocessing of the test image, which can be achieved using advanced methods of edge and boundary detection. Such preprocessing improves the solution of a large number of problems [2].

To date, there is a great variety of edge detection methods. These methods are effective for processing images with a low noise level.

Thus, the development of methods for extracting low-contrast objects in images remains a topical problem.

1. Relevance of the topic


Object edge detection in an image is one of the topical tasks of digital signal processing. Research psychologists have found that, from the point of view of recognizing and analyzing objects in an image, the most informative feature is not the brightness of the objects but the characteristics of their boundaries.

Methods and algorithms of edge detection take a substantial share of image analysis and object recognition, since they considerably simplify work with the image. However, most of the currently existing algorithms cannot provide sufficiently precise object edge detection, because gaps and false boundaries always appear and greatly complicate further processing. This work is aimed at developing an algorithm that improves the extraction of image boundaries by combining and modifying existing image processing techniques.

2. Goal and tasks of the research


The aim of this work is the development of an optimal combination of processing algorithms for better edge detection.

The key task of the research is to extract the lines that run along the borders of homogeneous regions.

3. Subject and object of study


The subject of the research is the development of an algorithm for edge detection of low-contrast objects in an image on the basis of existing algorithms and the analysis of the results of their work.

The object of this study is the existing methods of image processing and edge detection.

4. Scientific novelty


The scientific novelty of this work is the development of an algorithm for edge detection in low-contrast grayscale images. The existing edge detection algorithms do not cope well with low contrast between object and background and are extremely sensitive to noise in the image. The developed algorithm will be aimed at detecting as many object boundaries as possible while suppressing false contours, which often arise from the presence of impulse noise.

5. Underlining of boundaries


Edges characterize object boundaries and are therefore of fundamental importance in image processing. Edge detection significantly reduces the amount of data and filters out useless information while preserving the important structural properties of an image [1].

Edge detection refers to the process of identifying and locating sharp discontinuities in an image. The discontinuities are abrupt changes in pixel intensity which characterize boundaries of objects in a scene.

Classical methods of edge detection involve convolving the image with an operator (a 2-D filter), which is constructed to be sensitive to large gradients in the image while returning values of zero in uniform regions.

There is an extremely large number of edge detection operators available, each designed to be sensitive to certain types of edges. Variables involved in the selection of an edge detection operator include edge orientation, noise environment and edge structure. The geometry of the operator determines the characteristic direction in which it is most sensitive to edges.

Second-order derivative edge detection techniques employ some form of spatial second-order differentiation to accentuate edges. An edge is marked if a significant spatial change occurs in the second derivative.

To compute the second-order derivative and superimpose the result on the image (high-frequency filtering), the three masks given in formulas 5.1–5.3 are used:


H1 = |  0  −1   0 |
     | −1   5  −1 |
     |  0  −1   0 |

(5.1)

H2 = | −1  −1  −1 |
     | −1   9  −1 |
     | −1  −1  −1 |

(5.2)

H3 = |  1  −2   1 |
     | −2   5  −2 |
     |  1  −2   1 |

(5.3)


One of the applications of high-frequency (high-boost) filtering arises when the source image is darker than required. In this case the coefficient U > 1 can be varied to increase the overall image brightness. The corresponding masks are given in formulas 5.4–5.6.


H4 = |  0    −1     0  |
     | −1    U+4   −1  |
     |  0    −1     0  |

(5.4)

H5 = | −1    −1    −1  |
     | −1    U+8   −1  |
     | −1    −1    −1  |

(5.5)

H6 = |  1    −2     1  |
     | −2    U+4   −2  |
     |  1    −2     1  |

(5.6)
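
As an illustration of how such a mask can be applied (this sketch is not part of the thesis; the function name and the use of numpy/scipy are assumptions), the high-boost mask H4 from formula 5.4 can be convolved with a grayscale image as follows:

    import numpy as np
    from scipy.ndimage import convolve

    def high_boost(image, u=1.2):
        # Mask H4 from formula 5.4; u > 1 raises the overall brightness.
        h4 = np.array([[ 0.0,  -1.0,  0.0],
                       [-1.0, u + 4, -1.0],
                       [ 0.0,  -1.0,  0.0]])
        out = convolve(image.astype(float), h4, mode='reflect')
        # Clip back to the valid 8-bit grayscale range.
        return np.clip(out, 0, 255).astype(np.uint8)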



6. Detection of lines and extraction of contours

6.1 General information


Operators can be optimized to look for horizontal, vertical, or diagonal edges. Edge detection is difficult in noisy images, since both the noise and the edges contain high-frequency content. Attempts to reduce the noise result in blurred and distorted edges. Operators used on noisy images are typically larger in scope, so they can average enough data to discount localized noisy pixels. This results in less accurate localization of the detected edges. Not all edges involve a step change in intensity.

Effects such as refraction or poor focus can result in objects with boundaries defined by a gradual change in intensity [5]. The operator needs to be chosen to be responsive to such a gradual change in those cases. So, there are problems of false edge detection, missing true edges, edge localization, high computational time and problems due to noise etc.

Regions where brightness changes rapidly from a low level to a high one in the one-dimensional and two-dimensional cases are shown in images 6.1 and 6.2. In the one-dimensional case a brightness step is characterized by its height, slope angle and the coordinate of the center of the slope.



Image 6.1 — One-dimensional case



Image 6.2 — Two-dimensional case


6.2 Linear methods


One of the simplest methods of finding brightness transitions is the computation of discrete differences [1, 5, 6]. The output image is obtained by the discrete differentiation given in formula 6.3.


g(i, j) = f(i, j) − f(i, j+1)

(6.3)


Horizontal transitions are underlined similarly by formula 6.4.


g(i, j) = f(i, j) − f(i+1, j)

(6.4)


Horizontal underlining can also be computed from the differences of brightness between elements along an image row, as in formulas 6.5 and 6.6.


g(i, j) = [f(i, j) − f(i, j−1)] − [f(i, j+1) − f(i, j)]

(6.5)

g(i, j) = 2·f(i, j) − f(i, j−1) − f(i, j+1)

(6.6)
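
A minimal sketch of formulas 6.3 and 6.6 in Python, assuming numpy is available (the function names are illustrative and not taken from the thesis):

    import numpy as np

    def difference_along_row(f):
        # Formula 6.3: g(i, j) = f(i, j) - f(i, j+1).
        f = f.astype(float)
        g = np.zeros_like(f)
        g[:, :-1] = f[:, :-1] - f[:, 1:]
        return g

    def second_difference_along_row(f):
        # Formula 6.6: g(i, j) = 2*f(i, j) - f(i, j-1) - f(i, j+1).
        f = f.astype(float)
        g = np.zeros_like(f)
        g[:, 1:-1] = 2 * f[:, 1:-1] - f[:, :-2] - f[:, 2:]
        return g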


Object boundaries in the image can also be found using the masks given in formulas 6.7–6.10.


H7 = | −1  −1  −1 |
     |  2   2   2 |
     | −1  −1  −1 |

(6.7)

H8 = | −1  −1   2 |
     | −1   2  −1 |
     |  2  −1  −1 |

(6.8)

H9 = | −1   2  −1 |
     | −1   2  −1 |
     | −1   2  −1 |

(6.9)

H10 = |  2  −1  −1 |
      | −1   2  −1 |
      | −1  −1   2 |

(6.10)
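
A short sketch of using these four masks (an assumption, not the author's implementation): convolve the image with each of H7–H10 and keep the strongest absolute response at every pixel, so that lines of any of the four orientations are underlined.

    import numpy as np
    from scipy.ndimage import convolve

    LINE_MASKS = [
        np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]], float),  # H7, horizontal lines
        np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]], float),  # H8, +45 degree lines
        np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]], float),  # H9, vertical lines
        np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]], float),  # H10, -45 degree lines
    ]

    def line_response(image):
        # Maximum absolute response over the four orientations.
        f = image.astype(float)
        responses = [np.abs(convolve(f, m, mode='reflect')) for m in LINE_MASKS]
        return np.max(responses, axis=0)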


The contrast of brightness transitions, regardless of their orientation, can be enhanced by convolving the image array with the Laplace operator, given in the form of mask 6.11.


L = |  0  −1   0 |
    | −1   4  −1 |
    |  0  −1   0 |

(6.11)


6.3 Non-linear methods


There are three basic types of gray-level discontinuities in a digital image: points, lines, and edges. The most common way to look for such discontinuities is to run a mask through the image. For a 3×3 mask, this procedure involves computing the sum of products of the mask coefficients with the gray levels contained in the region encompassed by the mask.

A number of non-linear contour extraction operators use the computation of the modulus of the brightness gradient, given in formula 6.12.


g(i, j) = |∇f| = √(X² + Y²)

(6.12)


The Roberts Cross operator performs a simple, quick to compute, 2-D spatial gradient measurement on an image. Pixel values at each point in the output represent the estimated absolute magnitude of the spatial gradient of the input image at that point.

The operator consists of a pair of 2×2 convolution kernels, one of which is simply the other rotated by 90°. The computation of brightness transitions by Roberts's method is given in formulas 6.13 and 6.14:


X = f(i, j) − f(i+1, j+1)

(6.13)

Y = f(i, j+1) − f(i+1, j)

(6.14)
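
A minimal sketch of formulas 6.12–6.14 in Python (the helper name is an assumption, not from the thesis):

    import numpy as np

    def roberts_gradient(f):
        f = f.astype(float)
        x = np.zeros_like(f)
        y = np.zeros_like(f)
        x[:-1, :-1] = f[:-1, :-1] - f[1:, 1:]   # X = f(i, j) - f(i+1, j+1), formula 6.13
        y[:-1, :-1] = f[:-1, 1:] - f[1:, :-1]   # Y = f(i, j+1) - f(i+1, j), formula 6.14
        return np.hypot(x, y)                   # sqrt(X^2 + Y^2), formula 6.12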


Image 6.3 — The Roberts algorithm with preliminary processing

(animation: 4 frames, 20 repetition cycles, 16.3 kilobytes)


The masks for obtaining the gradient components X and Y with the Roberts operator are given in formulas 6.15 and 6.16.


H11 = |  1   0 |
      |  0  −1 |

(6.15)

H12 = |  0   1 |
      | −1   0 |

(6.16)


The Prewitt operator is similar to the Sobel operator [8, 10] and is used for detecting vertical and horizontal edges in images (masks 6.17 and 6.18).


H13 = | −1   0   1 |
      | −1   0   1 |
      | −1   0   1 |

(6.17)

H14 = | −1  −1  −1 |
      |  0   0   0 |
      |  1   1   1 |

(6.18)


The Sobel operator consists of a pair of 3×3 convolution kernels, shown in formulas 6.19 and 6.20. One kernel is simply the other rotated by 90°.


H15 = | −1   0   1 |
      | −2   0   2 |
      | −1   0   1 |

(6.19)

H16 = | −1  −2  −1 |
      |  0   0   0 |
      |  1   2   1 |

(6.20)


These kernels are designed to respond maximally to edges running vertically and horizontally relative to the pixel grid, one kernel for each of the two perpendicular orientations. The kernels can be applied separately to the input image to produce separate measurements of the gradient component in each orientation (call these Gx and Gy). These can then be combined to find the absolute magnitude of the gradient at each point, as in formula 6.12, and the orientation of that gradient. A contour-underlining operator based on the logarithm of brightness (the Wallace operator) is given by formulas 6.21 and 6.22 [1].


g(i, j) = log f(i, j) − ¼·log f(i−1, j) − ¼·log f(i, j+1) − ¼·log f(i+1, j) − ¼·log f(i, j−1)

(6.21)

g(i, j) = ¼·log[ f(i, j)⁴ / (f(i−1, j) · f(i, j+1) · f(i+1, j) · f(i, j−1)) ]

(6.22)
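
A minimal sketch of combining the Sobel components Gx and Gy (the same scheme works for the Prewitt masks 6.17 and 6.18); the helper names and the use of scipy are assumptions:

    import numpy as np
    from scipy.ndimage import convolve

    SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)   # H15
    SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float)   # H16

    def sobel_gradient(image):
        f = image.astype(float)
        gx = convolve(f, SOBEL_X, mode='reflect')
        gy = convolve(f, SOBEL_Y, mode='reflect')
        magnitude = np.hypot(gx, gy)        # |G| = sqrt(Gx^2 + Gy^2)
        direction = np.arctan2(gy, gx)      # gradient orientation in radians
        return magnitude, direction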


To date, a large number of edge detection algorithms [4–18] have been proposed. One of the newest and most effective of them is the algorithm for edge detection in low-contrast blurred images based on fractal filtering. Fractal filters detect contours better than other operators, but they are aimed at processing low-contrast images as a whole rather than at extracting individual lines and boundaries [19].

The Laplacian of Gaussian (LoG) was proposed by Marr (1982). The LoG of an image f(x, y) is a second-order derivative defined as:


∇²f = ∂²f/∂x² + ∂²f/∂y²

(6.23)


Image 6.4 — The Laplacian of Gaussian algorithm with preliminary processing

(animation: 5 frames, 20 repetition cycles, 18.3 kilobytes)


It has two effects: it smoothes the image and it computes the Laplacian, which yields a double-edge image. Locating edges then consists of finding the zero crossings between the double edges. The digital implementation of the Laplacian function is usually made through the masks below:


H17 = |  0  −1   0 |
      | −1   4  −1 |
      |  0  −1   0 |

(6.24)

H18 = | −1  −1  −1 |
      | −1   8  −1 |
      | −1  −1  −1 |

(6.25)
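
A minimal sketch of the LoG scheme described above, assuming scipy's gaussian_laplace as the smoothing-plus-Laplacian step; edge candidates are the pixels where the filtered image changes sign between neighbours:

    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def log_edges(image, sigma=2.0, threshold=0.0):
        log_img = gaussian_laplace(image.astype(float), sigma=sigma)
        sign = log_img > 0
        edges = np.zeros(log_img.shape, dtype=bool)
        # Zero crossings between horizontal and vertical neighbours.
        edges[:, :-1] |= sign[:, :-1] != sign[:, 1:]
        edges[:-1, :] |= sign[:-1, :] != sign[1:, :]
        if threshold > 0:
            edges &= np.abs(log_img) > threshold   # drop weak crossings caused by noise
        return edges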


The Laplacian is generally used to determine whether a pixel lies on the dark or the light side of an edge.

The Canny algorithm uses an optimal edge detector based on a set of criteria: finding the most edges by minimizing the error rate, marking edges as closely as possible to the actual edges to maximize localization, and marking each edge only once when a single edge exists, for minimal response [20]. According to Canny, the optimal filter that meets all three criteria above can be efficiently approximated using the first derivative of a Gaussian function.


Algorithm:

  1. Smooth the image with a Gaussian filter to reduce noise.
  2. Compute the gradient magnitude and direction (for example, with the Sobel masks 6.19 and 6.20).
  3. Apply non-maximum suppression to thin the detected edges.
  4. Apply double thresholding to mark strong and weak edge pixels.
  5. Track edges by hysteresis, keeping weak pixels only if they are connected to strong ones.
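
A short usage sketch with an off-the-shelf implementation (the choice of OpenCV, the file names and the threshold values are assumptions, not prescribed by the thesis):

    import cv2

    image = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)   # hypothetical input file
    blurred = cv2.GaussianBlur(image, (5, 5), 1.4)           # step 1: Gaussian smoothing
    edges = cv2.Canny(blurred, 50, 150)                      # steps 2-5, hysteresis thresholds 50/150
    cv2.imwrite('edges.png', edges)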

Conclusion


As a result of this work, a number of methods and algorithms for improving image quality and enhancing image edges were reviewed and analyzed: low-pass filtering, contour underlining by various methods such as Roberts, Prewitt, Sobel and Wallace, histogram modification for contrast enhancement, and linear and non-linear methods for line detection and edge enhancement. It was found that the classical algorithms cope well enough with simple tasks, but they are not suitable for work with real low-contrast images: the result of processing such images is noisy and cluttered with excessive false contours. Boundary extraction can be improved by applying a predefined set of image preprocessing algorithms before the edge operators, as well as algorithms for restoring lost contour fragments.

In this work, two options for improving the results of edge detection, based on the bee colony and ant colony algorithms, are proposed. These variants can provide good results in image processing, but the task is complicated by the presence of noise in real images, which can lead to false contours and borders.


At the time of writing this abstract, the master's thesis has not been completed yet. The final completion date is December 2015. The full text of the work and materials on the topic can be obtained from the author or her scientific adviser after that date.


References


  1. Садыхов Р. Обработка изображений и идентификация объектов в системах технического зрения / Р.Х. Садыхов, А.А. Дудкин // Объединенный институт проблем информатики НАН Беларуси, Минск, Беларусь. — 2006 г. — № 3. — С. 10–11.
  2. Беленский Й. Метод выделения контура на слабоконтрастных размытых изображениях / Й.Й. Беленский, И.В. Микулка // Вестник Винницкого политехнического института. — 2012 г. — № 3. — С. 6–7.
  3. Алиев М.В. Выделение контуров на малоконтрастных и размытых изображениях с помощью фрактальной фильтрации / М.В. Алиев, А.Х. Панеш, М.С. Каспарьян // Вестник Адыгейского государственного университета. Серия 4: Естественно-математические и технические науки. 2011. №3. — С. 101–107.
  4. Алгоритмы выделения контуров изображения [Электронный ресурс]. — Режим доступа: http://habrahabr.ru/post/114452.
  5. Грузман И.С. Цифровая обработка изображений в информационных системах / И.С. Грузман, В.С. Киричук, В.П. Косых, Г.И. Перетягин, А.А. Спектр // Научное пособие. — Новосибирск: Изд-во НГТУ, 2002. — С. 125–139.
  6. Обработка изображений, цифровая обработка сигналов, распознавание образов [Электронный ресурс]. — Режим доступа: http://www.sati.archaeology.nsc.ru/gr /texts/image process/index.html.
  7. Гонсалес Р. Цифровая обработка изображений / Р. Гонсалес, Р. Вудс. — Москва: Техносфера, 2005. — С. 148–414.
  8. Анисимов Б.В. Распознавание и цифровая обработка изображений / Б.В. Анисимов, В.Д. Курганов, В.К. Злобин // Научное пособие для студентов вузов. — М.: Высшая школа, 1983. — С. 41–66.
  9. Бердников Ю. Распознавание и удаление субтитров / Научное пособие // Graphics Media Lab. — С. 13–24.
  10. Хуанг Т. Обработка изображений и цифровая фильтрация: Пер. с англ. Сороки Е.З. — М.: Мир, 1979. — С. 28–47.
  11. Блейхут Р. Быстрые алгоритмы цифровой обработки сигналов: Пер. с англ. Грушко И. — М.: Мир, 1989. — С. 50–61.
  12. Русин Б. Системы синтеза, обработки и распознавания сложно-структурированных изображений / Б.П. Русин. — Л.: Вертикаль. — 1997. — С. 264–268.
  13. Robinson G.S. Edge detection by compass gradient masks / G.S. Robinson // Comput. — Vision Graphics Image Process. — 1977. — № 6 — P. 492–501.
  14. Сойфер В. Методы компьютерной обработки изображений / В.А. Сойфер. — М.: ФИЗМАТЛИТ. — 2003. — С. 684–693.
  15. Форсайт Д. Компьютерное зрение. Современный подход: Пер. с англ. — М.: Издательский дом Вильямс, 2004. — С. 728–733.
  16. Прэтт У. Цифровая обработка изображений. / У. Прэтт // М: Мир, 1979. — С. 78–91.
  17. Калинкина Д. Проблема подавления шума на изображениях и видео и различные подходы к ее решению / Д. Калинкина, Д. Ватолин — Москва: Техносфера, 2007. — С. 118–128.
  18. Фисенко В.Т. Компьютерная обработка и распознавание изображений. / В.Т. Фисенко, Т.Ю. Фисенко // Санкт-Петербург 2008. — С. 192.
  19. Courtney P, Thacker N.A. (2001) Performance Characterization in Computer Vision: The Role of Statistics in Testing and Design, Chapter in: Imaging and Vision Systems: Theory, Assessment and Applications, Jacques Blanc-Talon and Dan Popescu (Eds.), NOVA Science Books.
  20. Muthukrishnan R., Radha M. // International Journal of Computer Science & Information Technology (IJCSIT). — 3(6). — P. 259–267.
  21. Алгоритм искусственной пчелиной колонии [Электронный ресурс]. — Режим доступа: http://www.slideshare.net/KirillNetreba/ss-6990901.
  22. Dorigo M. Ant System: Optimization by Colony of Cooperating Agents / M. Dorigo, V. Maniezzo, A. Colorni // IEEE Transaction Systems, Man and Cybernetics. — Part B. — 1996. — Vol. 26. — P. 29–41.