Index

Introduction
Digital Image and Information
Entropy of an Image
Entropy and Histogram
Histogram Equalization
Histogram Manipulations

Introduction

After the specific problem was discussed in Chapter 2 and the research investigation undertaken in the previous chapters, it is prudent now to discuss the modalities for solving the denoising problem. To this end, a new approach has been developed in this chapter using alignment of image information. The methodology uses pre- and post-filter banks to arrive at the solution, and an entropy minimization and maximization strategy is employed to validate the results for medical ultrasound image analysis.

Digital Image and Information

The most important object in this approach is the image. From the classical point of view, an image is defined as something that can be perceived by the visual system [1, R V K Reddy et al., 2016]. However, image processing concerns different classes of images, and not all images are perceived equally and directly by the human eye. A grayscale digital image can be modeled as a function on the discrete domain Ωd = [1, …, m] × [1, …, n] taking values in the discrete range [0, …, 255]; it is typically represented by a two-dimensional m × n array. Information means the knowledge that the user obtains from the image relating to facts or details on the topic of interest; it is the part of knowledge obtained by investigation or study. The research paper "A Mathematical Theory of Communication", written by Claude Shannon in 1948, is widely accepted as the birth of the field of information theory. Shannon used probability theory to model and describe sources of information.
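The discrete-domain model above can be illustrated with a minimal sketch (the array contents are arbitrary example data, not taken from the text):

```python
import numpy as np

# A grayscale digital image on the discrete domain
# Omega_d = [1, ..., m] x [1, ..., n], with intensities in [0, ..., 255]:
# in practice, simply an m x n array of 8-bit values.
m, n = 4, 5
img = np.random.default_rng(0).integers(0, 256, size=(m, n), dtype=np.uint8)
print(img.shape, img.dtype)  # (4, 5) uint8
```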
Dedicated works define an information source as data produced by a source, treated as a random variable. The amount of information depends on the number of possible outcomes. If two messages have lengths n1 and n2 and are drawn from alphabets of s1 and s2 symbols respectively, then the measure of information for a message of length n over s symbols is given by:

H = n log s = log s^n … Eq. (3.1.a)

The larger the number of possible messages, the greater the amount of information. If only a single message is possible from an event, that event is said to carry no information, since

log 1 = 0 … Eq. (3.1.b)

Claude Shannon also defined the entropy of a system: if m events e1, e2, …, em occur with probabilities of occurrence p1, p2, …, pm, then the entropy is

H = Σ pi log(1/pi) = −Σ pi log pi … Eq. (3.1.c)

where the information of each event is weighted by the probability of its occurrence.

Entropy of an Image

The Shannon entropy of an image is calculated from the distribution of its gray-level values, represented by the gray-level histogram; the probability of each gray value is the number of times it occurs in the image divided by the total number of occurrences. Researchers have found that an image consisting of a single intensity has a low entropy value and therefore contains the least information, whereas an image with a larger number of intensities has a higher entropy value and carries a large amount of information [10, J P W Pluim, 2003]. Shannon defines the entropy of an n-state system so that the information acquired from an event is inversely proportional to the probability of that event occurring. According to [11, N R Pal, 1991], for an image I with gray values I(x, y) at position (x, y), dimensions P × Q, and gray levels in the set {0, 1, …, L−1}, the gray-level frequencies Ni satisfy

Σ_{i=0}^{L−1} Ni = P·Q … Eq. (3.3.a)

and, if p(xi) is the probability of a sequence xi of gray levels of length l, the entropy is given by

H = (1/l) Σ p(xi) e^{1−p(xi)} … Eq. (3.3.b)

This entropy is called the global entropy. Therefore, the information present in any image is analyzed in terms of entropy, which provides a measure of uncertainty.

Entropy and Histogram

The histogram of an image is a graph with the intensity values on the x-axis and, for each intensity, the number of pixels having that intensity on the y-axis. If an image is 8-bit grayscale, there are 256 possible intensities, and all 256 are displayed on the histogram. For color images, a 3-D histogram with three different axes represents the R, G, and B channels. The entropy of the image is therefore calculated over the 256 quantization levels as

H(X) = −Σ_{i=0}^{255} pi log pi … Eq. (3.4.a)

pi = Ng / Np … Eq. (3.4.b)

where Ng is the number of pixels at gray level g, Np is the total number of pixels in the image, and pi is the probability of occurrence of each gray-level intensity. Since the information present in the image can be analyzed in terms of entropy, the entropy of an image decreases as the amount of information contained in the image decreases.

Histogram Equalization

Histogram equalization redistributes the statistical distribution of gray levels present in the image. It is a form of contrast enhancement used to increase the overall contrast of an image: pixel values are adjusted toward a more uniform distribution, which stretches the histogram of the given image.

Histogram Manipulations

Histogram equalization is one of the most popular conventional methods for image enhancement. The method redistributes the gray levels of an image's histogram, which can significantly change the image brightness.
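The histogram-based entropy H(X) = −Σ pi log pi and classical histogram equalization described above can be sketched as follows (a minimal illustration; the function names are mine, and a base-2 logarithm is assumed for the entropy):

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy of an 8-bit grayscale image from its histogram.

    pi = Ng / Np: count of pixels at each gray level over total pixels.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                      # log of zero-probability bins is skipped
    return float(-np.sum(p * np.log2(p)))

def equalize_histogram(img):
    """Classical histogram equalization via the cumulative distribution."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist) / img.size  # cumulative distribution in [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]                   # remap every pixel through the LUT
```

A single-intensity image yields entropy 0 (no information), while an image split evenly between two intensities yields exactly 1 bit, matching the behavior described in the text.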
This leads to the limitations of conventional methods, such as loss of image originality, loss of fine detail, and over-enhancement. Many researchers have worked on histogram equalization techniques and their manipulations; several such manipulations are surveyed in [5, E1, M. Kaur et al., 2013]. In brightness-preserving bi-histogram equalization (BBHE), the histogram of the input image is divided into two parts at the mean brightness XT, so that two different histograms are generated with ranges 0 to XT and XT+1 to XL−1; the two histograms are then equalized separately. In dualistic sub-image histogram equalization (DSIHE), the input image is divided into two sub-images of equal area, i.e., with the same number of pixels. The brightness of the output image is the average of the gray level that divides the histogram into equal areas and the middle gray level. The researchers also highlighted a disadvantage of the DSIHE method: it cannot produce a significant effect on image brightness. The minimum mean brightness error bi-histogram equalization method (MMBEBHE) follows the same thresholded approach as BBHE and DSIHE, but selects the threshold that minimizes the absolute mean brightness error between the input and output images.
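The BBHE splitting step can be sketched as below (a minimal sketch under my own naming, assuming the split point XT is the integer mean brightness; each sub-histogram is equalized only over its own gray-level range, which is what preserves brightness):

```python
import numpy as np

def bbhe(img):
    """Brightness-preserving bi-histogram equalization (BBHE) sketch.

    Splits the histogram at the mean brightness X_T, then equalizes the
    lower part over [0, X_T] and the upper part over [X_T+1, 255].
    """
    xt = int(img.mean())                       # split point X_T
    hist = np.bincount(img.ravel(), minlength=256)
    lut = np.arange(256, dtype=np.uint8)       # identity fallback for empty halves

    lo = hist[:xt + 1]                         # sub-histogram for levels 0..X_T
    if lo.sum():
        cdf = np.cumsum(lo) / lo.sum()
        lut[:xt + 1] = np.round(xt * cdf).astype(np.uint8)

    hi = hist[xt + 1:]                         # sub-histogram for levels X_T+1..255
    if hi.sum():
        cdf = np.cumsum(hi) / hi.sum()
        lut[xt + 1:] = (xt + 1 + np.round((254 - xt) * cdf)).astype(np.uint8)

    return lut[img]
```

Because each half is remapped only within its original range, dark pixels stay at or below XT and bright pixels stay above it, limiting the brightness shift that plain equalization would cause.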