Source: "Informatics and Computer Technologies – 2010" / Electronic proceedings of the international scientific and technical conference of students, postgraduates and young scientists "Informatics and Computer Technologies – 2010". Donetsk: DonNTU, 2010.

 

 

 

UDC 004.932.2:004.627

Fractal compression algorithms in the context of medical image processing

 

Anastasova E.A., Belovodskiy V.N.

Donetsk National Technical University, Donetsk

E-mail: anastasova.k@gmail.com

 

Abstract

The topicality of medical image processing is covered. One of the most significant problems in the advancement, development and application of telemedicine is considered: the storage and transmission of graphics. Possible methods of image processing are discussed, and modifications of a compression algorithm that provide high compression efficiency and fairly small decompression error are analyzed.

Introduction

Images are widely used in many fields, both in everyday life and in narrow scientific domains. Medical images are almost exclusively grayscale, and it should be noted that usually only a part of the whole image is worth analyzing. Storing and qualitatively processing high-quality images requires large volumes of memory, which causes difficulties for real-time processing. For medical images, the desired characteristics of a compression algorithm can be stated as follows: the compression ratio should be high; compression time may vary, but decompression time should be minimized; and the output error must be minimal.

Problem statement

The object of this paper is the analysis of existing approaches and methods that provide a sufficiently low error after decompression to allow subsequent analysis, a high compression ratio, and a short decompression time.

Solutions to the problem

It is necessary to recall the milestones of the basic fractal algorithm. It partitions the original image into domain and range blocks. For each range block, the domains are then searched: each domain is shrunk, in every orientation, to the size of the range block, and the best values of the brightness and contrast coefficients of the mapping are found using the method of least squares.
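The least-squares step above can be sketched as follows. This is a minimal illustration in plain Python; the function name and the flat-list representation of blocks are assumptions for the example, not part of the algorithm as published:

```python
def fit_domain_to_range(domain, rng):
    """Least-squares fit of contrast s and brightness o so that
    s * d + o approximates the range block pixel-wise.
    domain and rng are flat lists of pixel values of equal length."""
    n = len(domain)
    sd = sum(domain)
    sr = sum(rng)
    sdd = sum(d * d for d in domain)
    sdr = sum(d * r for d, r in zip(domain, rng))
    # Closed-form solution of the 2-parameter least-squares problem
    denom = n * sdd - sd * sd
    s = 0.0 if denom == 0 else (n * sdr - sd * sr) / denom
    o = (sr - s * sd) / n
    # Residual sum of squares: how well this domain matches the range
    err = sum((s * d + o - r) ** 2 for d, r in zip(domain, rng))
    return s, o, err
```

For every range block the encoder stores only the index and orientation of the best domain together with s and o, which is the source of fractal compression's high ratios.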

Due to the processing and usage specifics of medical images, the rule "compress once, decompress many times" applies. In this connection the method proposed by Vatolin is fully applicable [1]. The author suggests allocating the significant areas of an image and using different compression ratios depending on the characteristics of its parts; semi-automatic systems are used to select these areas. For each range block, a domain is chosen from the region that approximates it sufficiently well and does not degrade much during compression. The mean distance from the correction is used as a measure of block optimality.

In medical images the main interest lies in certain fragments rather than in the whole image. It is proposed to compress only these fragments, taking their features into account, so as to preserve high quality in the important parts of the image while ensuring the highest possible overall degree of compression. To this end, the image is divided into disjoint parts according to their informational importance or morphological structure, and each fragment is then compressed with the algorithm preferable in terms of the trade-off between the two main characteristics, "compression rate" and "quality". To solve this problem, the concept of a mask is introduced: it marks the location of one or more areas of interest. Such an area can be allocated on the basis of structural features, i.e. the heterogeneity of the image: there are so-called constant regions, where all pixels have identical or close shades, and areas with many small details, where neighboring pixels differ in color. In this case, statistical characteristics defined by the luminance histogram are used. In particular, images can be analyzed based on the value of the average entropy:

 

 

H(X) = -Σ p(l) · log₂ p(l),  the sum running over l = 0, …, L − 1,

here L — quantity of gray-level gradations,

p(l) — normalized luminance histogram,

X — random variable that characterizes the luminance of the picture elements.

The entropy values for the whole image and for each area separately are calculated and compared. A sign of small details in a region is an entropy value higher than that of the entire image. Such a region (one or several) of the first level is further divided into areas of the second level [2].
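The entropy criterion above can be sketched as follows; `average_entropy` is a hypothetical helper name, and pixels are assumed to be integer gray levels:

```python
import math

def average_entropy(pixels, levels=256):
    """Shannon entropy of a grayscale region, computed from its
    luminance histogram: H = -sum p(l) * log2 p(l) over occupied levels."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    h = 0.0
    for count in hist:
        if count:
            prob = count / n          # normalized histogram value p(l)
            h -= prob * math.log2(prob)
    return h
```

A constant region yields entropy 0, while a region with many distinct shades yields a high value, which is exactly the sign used to decide whether a first-level region should be subdivided further.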

At the compression stage, different algorithms can be applied to the selected regions. We describe the most effective ones.

The most common modification of the basic fractal algorithm is the FE-algorithm. Comparing five characteristics that describe the domain and range blocks helps to reduce its computational cost; they are compared at the beginning of the search. These characteristics are: the standard deviation; the asymmetry (skewness); the inter-pixel contrast; a coefficient that characterizes the differences between the pixel values and the value of the central pixel; and the maximum gradient, i.e. the maximum of the horizontal and vertical gradients. The characteristic vector of a range block is calculated when the block is processed, and then the distance between it and the characteristic vector of each domain is computed. This selection procedure acts as a filter that significantly limits the number of domains to be examined [3]. Pearson's correlation coefficient can also be used to optimize the search for the best domain: the better the actual dependence between R and D is approximated by a linear one, the closer their correlation coefficient is to 1 in absolute value. Using this coefficient allows one to assess immediately how suitable the current domain is for a given range block, without calculating the contrast and brightness coefficients of the transform. These values are calculated once for each range block, after which only those domains are considered that satisfy

 

σ_D ≥ σ_R,

 

i.e. the contrast of the domain should not be lower than the contrast of the range block [4]. Algorithms of this kind can be efficiently implemented using unsupervised Kohonen maps [5].
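A minimal sketch of this domain prefiltering, assuming flat lists of pixel values and hypothetical helper names: domains are first filtered by the contrast condition, then ranked by the absolute value of Pearson's correlation coefficient:

```python
import math

def std_dev(block):
    """Standard deviation ("contrast") of a flat list of pixels."""
    n = len(block)
    mean = sum(block) / n
    return math.sqrt(sum((p - mean) ** 2 for p in block) / n)

def pearson(d, r):
    """Pearson correlation coefficient between domain and range pixels."""
    n = len(d)
    md = sum(d) / n
    mr = sum(r) / n
    cov = sum((x - md) * (y - mr) for x, y in zip(d, r))
    sd = math.sqrt(sum((x - md) ** 2 for x in d))
    sr = math.sqrt(sum((y - mr) ** 2 for y in r))
    if sd == 0 or sr == 0:
        return 0.0                       # a flat block carries no signal
    return cov / (sd * sr)

def best_domain(rng, domains):
    """Drop domains whose contrast is below that of the range block,
    then pick the one whose |Pearson r| is largest."""
    sigma_r = std_dev(rng)
    candidates = [d for d in domains if std_dev(d) >= sigma_r]
    if not candidates:                   # fall back if the filter empties the pool
        candidates = domains
    return max(candidates, key=lambda d: abs(pearson(d, rng)))
```

Because |r| close to 1 already guarantees a good linear fit, the expensive least-squares coefficients need to be computed only for the surviving domains.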

It is important to note that a separate place is occupied by algorithms based on the discrete pseudocosine transform (DPCT). Such algorithms show performance rather close to that of the JPEG method. Estimates of the computational costs show that the DPCT-based algorithm is not inferior to JPEG, including in its computational complexity [6].
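The DPCT itself is not specified in this paper; for orientation only, a sketch of the standard orthonormal DCT-II, which pseudocosine transforms are designed to approximate with cheaper arithmetic, is given below (1-D case):

```python
import math

def dct_ii(x):
    """Orthonormal 1-D DCT-II; image codecs apply it (or a cheap
    approximation such as a pseudocosine transform) along rows
    and columns of each block."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out
```

Being orthonormal, this transform preserves the signal energy and concentrates it in the low-frequency coefficients, which is what makes subsequent quantization effective.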

Conclusion
Thus, approaches and algorithms for image processing have been analyzed. The most interesting among them are the allocation of significant areas and the application of a specific compression algorithm to each of them. This allows significant compression ratios to be achieved while retaining all the important parts in satisfactory condition.

The aim of future work is to create applications based on the existing approaches, to test them on real images, and to improve the algorithms.

 

 

List of sources:

1. Vatolin, D.S. Increasing the degree of fractal image compression by specifying the quality of image areas / International Conference Graphicon 1999, Moscow, Russia, http://www.graphicon.ru/

2. Zulema, E.S. Adaptive image compression method / News of Khmelnytsky National University, No. 2, 2010.

3. Bublichenko, A.V. Algorithms for image compression: a comparative analysis and modification / A.V. Bublichenko, V.N. Belovodsky / Qualification Master's work, 2008.

4. Ilyushin, S.V. Fractal compression of telemedicine images / S.V. Ilyushin, S.D. Light / "Telecommunications", No. 4, 2009.

5. Prokhorov, V.G. Using Kohonen maps to accelerate fractal image compression / V.G. Prokhorov / Applied Software, No. 2, 2009, p. 7.

6. Umnyashkin, S.V. Mathematical methods and algorithms for digital image compression using orthogonal transformations / S.V. Umnyashkin / Abstract, 2001.