Image Denoising Using Wavelets


— Wavelets & Time Frequency —

Raghuram Rangarajan, Ramji Venkataramanan, Siddharth Shah

December 16, 2002

Abstract

Wavelet transforms enable us to represent signals with a high degree of sparsity. This is the principle behind a non-linear wavelet-based signal estimation technique known as wavelet denoising. In this report we explore wavelet denoising of images using several thresholding techniques such as SureShrink, VisuShrink and BayesShrink. Further, we use a Gaussian-based model to perform combined denoising and compression for natural images and compare the performance of these methods.

Contents

1 Background and Motivation
  1.1 Introduction
  1.2 The concept of denoising

2 Thresholding
  2.1 Motivation for Wavelet thresholding
  2.2 Hard and soft thresholding
  2.3 Threshold determination
  2.4 Comparison with the Universal threshold

3 Image Denoising using Thresholding
  3.1 Introduction: Revisiting the underlying principle
  3.2 VisuShrink
  3.3 SureShrink
    3.3.1 What is SURE?
    3.3.2 Threshold Selection in Sparse Cases
    3.3.3 SURE applied to image denoising
  3.4 BayesShrink
    3.4.1 Parameter Estimation to determine the Threshold

4 Denoising and Compression using Gaussian-based MMSE Estimation
  4.1 Introduction
  4.2 Denoising using MMSE estimation
  4.3 Compression
  4.4 Results

5 Conclusions

"If you painted a picture with a sky, clouds, trees, and flowers, you would use a different size brush depending on the size of the features. Wavelets are like those brushes."
- Ingrid Daubechies

1 Background and Motivation

1.1 Introduction

From a historical point of view, wavelet analysis is a new method, though its mathematical underpinnings date back to the work of Joseph Fourier in the nineteenth century. Fourier laid the foundations with his theories of frequency analysis, which proved to be enormously important and influential. The attention of researchers gradually turned from frequency-based analysis to scale-based analysis when it started to become clear that an approach measuring average fluctuations at different scales might prove less sensitive to noise. The first recorded mention of what we now call a "wavelet" seems to be in 1909, in a thesis by Alfred Haar.

In the late nineteen-eighties, when Daubechies and Mallat first explored and popularized the ideas of wavelet transforms, skeptics described this new field as contributing additional useful tools to a growing toolbox of transforms. One particular wavelet technique, wavelet denoising, has been hailed as "offering all that we may desire of a technique, from optimality to generality" [6]. The inquiring skeptic, however, may be reluctant to accept these claims based on asymptotic theory without looking at real-world evidence. Fortunately, there is an increasing amount of literature now addressing these concerns that helps us appraise the utility of wavelet shrinkage more realistically.

Wavelet denoising attempts to remove the noise present in the signal while preserving the signal characteristics, regardless of its frequency content. It involves three steps: a linear forward wavelet transform, a nonlinear thresholding step, and a linear inverse wavelet transform. Wavelet denoising must not be confused with smoothing; smoothing only removes the high frequencies and retains the lower ones.

Wavelet shrinkage is a non-linear process, and this is what distinguishes it from entirely linear denoising techniques such as least squares. As will be explained later, wavelet shrinkage depends heavily on the choice of a thresholding parameter, and this choice determines, to a great extent, the efficacy of denoising. Researchers have developed various techniques for choosing denoising parameters, and so far there is no "best" universal threshold determination technique.

The aim of this project was to study various thresholding techniques such as SureShrink [1], VisuShrink [3] and BayesShrink [5] and determine the best one for image denoising. In the course of the project, we also aimed to use wavelet denoising as a means of compression, and were able to implement a compression technique based on a unified denoising and compression principle.

1.2 The concept of denoising

A more precise explanation of the wavelet denoising procedure can be given as follows. Assume that the observed data is

    X(t) = S(t) + N(t)

where S(t) is the uncorrupted signal and N(t) is additive noise. Let W(·) and W^{-1}(·) denote the forward and inverse wavelet transform operators, and let D(·, λ) denote the denoising operator with threshold λ. We intend to denoise X(t) to recover Ŝ(t) as an estimate of S(t). The procedure can be summarized in three steps:

    Y = W(X)
    Z = D(Y, λ)
    Ŝ = W^{-1}(Z)

with D(·, λ) being the thresholding operator and λ being the threshold.
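The three-step procedure above can be sketched in code. The following is a minimal illustration using a single-level Haar transform and soft thresholding; it is not the report's actual implementation, which used multilevel Daubechies wavelets (a real implementation would use a full multilevel DWT, e.g. from the PyWavelets package).

```python
import numpy as np

def haar_dwt(x):
    """Forward transform W: split x into approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # high-pass (detail)
    return a, d

def haar_idwt(a, d):
    """Inverse transform W^{-1}: interleave the reconstructed samples."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def denoise(x, lam):
    """S_hat = W^{-1}(D(W(X), lambda)), soft thresholding the detail band."""
    a, d = haar_dwt(x)                                  # Y = W(X)
    d = np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)   # Z = D(Y, lambda)
    return haar_idwt(a, d)                              # S_hat = W^{-1}(Z)

rng = np.random.default_rng(0)
s = np.sin(np.linspace(0.0, 4.0 * np.pi, 256))   # clean signal S(t)
x = s + 0.3 * rng.standard_normal(256)           # observed X(t) = S(t) + N(t)
s_hat = denoise(x, lam=0.5)
```

Because the transform is orthonormal, thresholding the (noise-dominated) detail band reduces the reconstruction error while leaving the smooth approximation intact.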

[Figure 1: A noisy signal in time domain (the original signal is superimposed). Figure 2: The same signal in wavelet domain; note the sparsity of coefficients. Figure 3: Hard thresholding. Figure 4: Soft thresholding.]

2 Thresholding

2.1 Motivation for Wavelet thresholding

The plot of wavelet coefficients in Fig 2 suggests that small coefficients are dominated by noise, while coefficients with a large absolute value carry more signal information than noise. Replacing the noisy coefficients (small coefficients below a certain threshold value) by zero and taking an inverse wavelet transform may lead to a reconstruction with less noise. Stated more precisely, we are motivated to this thresholding idea based on the following assumptions:

- The decorrelating property of a wavelet transform creates a sparse signal: most untouched coefficients are zero or close to zero.
- Noise is spread out equally along all coefficients.
- The noise level is not too high, so that we can distinguish the signal wavelet coefficients from the noisy ones.

As it turns out, this method is indeed effective: thresholding is a simple and efficient method for noise reduction. Further, inserting zeros creates more sparsity in the wavelet domain, and here we see a link between wavelet denoising and compression, which has been described in sources such as [5].

2.2 Hard and soft thresholding

Hard and soft thresholding with threshold λ are defined as follows. The hard thresholding operator is

    D(U, λ) = U  if |U| > λ,  0 otherwise.

The soft thresholding operator, on the other hand, is

    D(U, λ) = sgn(U) · max(|U| − λ, 0).

Hard thresholding is a "keep or kill" procedure and is more intuitively appealing; its transfer function is shown in Fig 3. The alternative, soft thresholding (whose transfer function is shown in Fig 4), shrinks coefficients above the threshold in absolute value. While at first sight hard thresholding may seem more natural, the continuity of soft thresholding has some advantages. It makes algorithms mathematically more tractable [3]. Moreover, hard thresholding does not even work with some algorithms such as the GCV procedure [4]. Sometimes, pure noise coefficients may pass the hard threshold and appear as annoying 'blips' in the output; soft thresholding shrinks these false structures.

2.3 Threshold determination

As one may observe, threshold determination is an important question when denoising. A small threshold may yield a result close to the input, but the result may still be noisy. A large threshold, on the other hand, produces a signal with a large number of zero coefficients; this leads to a smooth signal, but paying too much attention to smoothness destroys details and, in image processing, may cause blur and artifacts.

To investigate the effect of threshold selection, we performed wavelet denoising using hard and soft thresholds on four signals popular in wavelet literature: Blocks, Bumps, Doppler and Heavisine [2]. The setup is as follows:

- The original signals have length 2048.
- We step through the thresholds from 0 to 5 with steps of 0.2 and at each step denoise the four noisy signals by both hard and soft thresholding with that threshold.
- For each threshold, the MSE of the denoised signal is calculated.
- The above steps are repeated for different orthogonal bases, namely Haar, Daubechies 2, 4 and 8.

The results are tabulated in Table 1.

[Table 1: Best thresholds, empirically found with different denoising schemes, in terms of MSE. For Heavisine, for example, the best hard threshold is 1.4 and the best soft threshold is 1.6 for each of Haar, Db2, Db4 and Db8.]

2.4 Comparison with the Universal threshold

The threshold λ_UNIV = √(2 ln N) σ (N being the signal length, σ² being the noise variance) is well known in wavelet literature as the Universal threshold. It is optimal in the asymptotic sense and minimises the cost function of the difference between the function and the soft-thresholded version of the same in the L2 norm sense (i.e., it minimizes E‖Y_thresh − Y_orig‖²). In our case, N = 2048 and σ = 1, so theoretically

    λ_UNIV = √(2 ln 2048) ≈ 3.905.    (1)

As seen from the table, the best empirical thresholds for both hard and soft thresholding are much lower than this value, independent of the wavelet used. It therefore seems that the universal threshold is not useful for determining a threshold in practice. It is, however, useful for obtaining a starting value when nothing is known of the signal condition. One can surmise that the universal threshold may give a better estimate for the soft threshold if the number of samples is larger (since the threshold is optimal in the asymptotic sense).

3 Image Denoising using Thresholding

3.1 Introduction: Revisiting the underlying principle

An image is often corrupted by noise in its acquisition or transmission. The underlying concept of denoising in images is similar to the 1-D case: the goal is to remove the noise while retaining the important signal features as much as possible. The noisy image is represented as a two-dimensional matrix {x_ij}, i, j = 1, ..., N, and is modelled as

    y_ij = x_ij + n_ij,    i, j = 1, ..., N,

where the {n_ij} are iid N(0, σ²). We can use the same principles of thresholding and shrinkage to achieve denoising as in 1-D signals. The problem again boils down to finding an optimal threshold such that the mean squared error between the signal and its estimate is minimized.
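The hard and soft thresholding operators of Section 2.2, used throughout what follows, are one-liners in code. The sketch below (NumPy, illustrative only) makes the "keep or kill" versus "shrink" distinction concrete.

```python
import numpy as np

def hard_threshold(u, lam):
    """D(U, lambda) = U if |U| > lambda, 0 otherwise ("keep or kill")."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) > lam, u, 0.0)

def soft_threshold(u, lam):
    """D(U, lambda) = sgn(U) * max(|U| - lambda, 0) (shrink towards zero)."""
    u = np.asarray(u, dtype=float)
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

u = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])
kept = hard_threshold(u, 1.0)    # large coefficients pass through unchanged
shrunk = soft_threshold(u, 1.0)  # large coefficients are reduced by lambda
```

Note that the soft operator is continuous in u while the hard operator jumps at ±λ; this continuity is the tractability advantage mentioned above.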

[Figure 5: MSE vs. threshold for the four test signals (Blocks, Bumps, Doppler, Heavisine), hard and soft thresholding.]

The wavelet decomposition of an image is done as follows. In the first level of decomposition, the image is split into four subbands, namely the HH, HL, LH and LL subbands. The HH subband gives the diagonal details of the image; the HL subband gives the horizontal features, while the LH subband represents the vertical structures. The LL subband is the low-resolution residual consisting of low-frequency components, and it is this subband which is further split at higher levels of decomposition.

The different methods for denoising we investigate differ only in the selection of the threshold. The basic procedure remains the same:

- Calculate the DWT of the image.
- Threshold the wavelet coefficients (the threshold may be universal or subband adaptive).
- Compute the IDWT to get the denoised estimate.

Soft thresholding is used for all the algorithms due to the following reasons: soft thresholding has been shown to achieve near-minimax rate over a large number of Besov spaces [3], and it is also found to yield visually more pleasing images. Hard thresholding is found to introduce artifacts in the recovered images. We now study three thresholding techniques, VisuShrink, SureShrink and BayesShrink, and investigate their performance for denoising various standard images.

3.2 VisuShrink

VisuShrink is thresholding by applying the Universal threshold proposed by Donoho and Johnstone [2]. This threshold is given by σ√(2 log M), where σ is the noise standard deviation and M is the number of pixels in the image. It is proved in [2] that the maximum of any M values iid as N(0, σ²) will be smaller than the universal threshold with high probability, with the probability approaching 1 as M increases. Thus, with high probability, a pure noise signal is estimated as being identically zero.
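The VisuShrink threshold is a one-line computation. A hedged sketch, assuming σ is known (in practice it is estimated, e.g. by the median estimator of Section 3.4.1):

```python
import numpy as np

def universal_threshold(sigma, m):
    """VisuShrink / universal threshold: lambda = sigma * sqrt(2 ln M)."""
    return sigma * np.sqrt(2.0 * np.log(m))

# For the 1-D experiment of Section 2.4 (N = 2048, sigma = 1), this
# reproduces the value in equation (1), approximately 3.905.
lam = universal_threshold(1.0, 2048)
```

Note how the threshold grows with M: for a 512 x 512 image (M = 262144) it is already above 5σ, which is the over-smoothing effect discussed below.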

[Figure 6: Denoising using VisuShrink. (a) 512 × 512 image of 'Lena'; (b) noisy version of 'Lena'; (c) denoised using hard thresholding; (d) denoised using soft thresholding.]

However, for denoising images, VisuShrink is found to yield an overly smoothed estimate, as seen in Figure 6. This is because the universal threshold (UT) is derived under the constraint that, with high probability, the estimate should be at least as smooth as the signal. So the UT tends to be high for large values of M, killing many signal coefficients along with the noise. Thus, the threshold does not adapt well to discontinuities in the signal.

3.3 SureShrink

3.3.1 What is SURE?

Let μ = (μ_i : i = 1, ..., d) be a length-d vector, and let x = {x_i} (with x_i distributed as N(μ_i, 1)) be multivariate normal observations with mean vector μ. Let μ̂ = μ̂(x) be a fixed estimate of μ based on the observations x. SURE (Stein's Unbiased Risk Estimator) is a method for estimating the loss ‖μ̂ − μ‖² in an unbiased fashion.

In our case μ̂ is the soft threshold estimator μ̂_i^(t)(x) = η_t(x_i). We apply Stein's result [1] to get an unbiased estimate of the risk E‖μ̂^(t)(x) − μ‖²:

    SURE(t; x) = d − 2 · #{i : |x_i| ≤ t} + Σ_{i=1}^{d} min(|x_i|, t)².    (2)

For an observed vector x (in our problem, x is the set of noisy wavelet coefficients in a subband), we want to find the threshold t^S that minimizes SURE(t; x), i.e.,

    t^S = argmin_t SURE(t; x).    (3)

The above optimization problem is computationally straightforward. Without loss of generality, we can reorder x in order of increasing |x_i|. Then, on intervals of t that lie between two values of |x_i|, SURE(t) is strictly increasing. Therefore the minimizer t^S is one of the data values |x_i|. There are only d such values, and the threshold can be obtained using O(d log d) computations.
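Equations (2) and (3) translate directly into code. A hedged NumPy sketch, assuming unit noise variance as in the text (the candidate thresholds are zero plus the sorted |x_i|):

```python
import numpy as np

def sure(t, x):
    """SURE(t; x) = d - 2 * #{i : |x_i| <= t} + sum_i min(|x_i|, t)^2."""
    ax = np.abs(x)
    return x.size - 2.0 * np.count_nonzero(ax <= t) + np.sum(np.minimum(ax, t) ** 2)

def sure_threshold(x):
    """t_S = argmin_t SURE(t; x); the minimiser is 0 or one of the |x_i|."""
    candidates = np.concatenate(([0.0], np.sort(np.abs(x))))
    risks = np.array([sure(t, x) for t in candidates])
    return candidates[np.argmin(risks)]

rng = np.random.default_rng(1)
x = rng.standard_normal(512)   # pure noise: SURE favours a large threshold
t_s = sure_threshold(x)
```

As written this is O(d^2) for clarity; the O(d log d) version mentioned in the text evaluates the risks cumulatively in a single pass after sorting.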

3.3.2 Threshold Selection in Sparse Cases

The SURE principle has a drawback in situations of extreme sparsity of the wavelet coefficients. In such cases, the noise contributed to the SURE profile by the many coordinates at which the signal is zero swamps the information contributed by the few coordinates where the signal is nonzero. Consequently, SureShrink uses a hybrid scheme.

The idea behind this hybrid scheme is that the losses while using the universal threshold, t_d^F = √(2 log d), tend to be larger than SURE for dense situations, but much smaller for sparse cases. So the threshold is set to t_d^F in sparse situations and to t^S in dense situations. Thus the estimator in the hybrid method works as follows:

    μ̂_i(x) = η_{t_d^F}(x_i)  if s_d² ≤ γ_d,
              η_{t^S}(x_i)   if s_d² > γ_d,    (4)

where

    s_d² = (1/d) Σ_i (x_i² − 1),    γ_d = (log₂ d)^{3/2} / √d,    (5)

η being the thresholding operator.

3.3.3 SURE applied to image denoising

We first obtain the wavelet decomposition of the noisy image. The SURE threshold is determined for each subband using (2) and (3). We choose between this threshold and the universal threshold using (4). The expressions for s_d² and γ_d in (5), given for σ = 1, have to be suitably modified according to the noise variance σ² and the variance of the coefficients in the subband.

The results obtained for the image 'Lena' (512 × 512 pixels) using SureShrink are shown in Figure 7(c). The 'Db4' wavelet was used with 4 levels of decomposition. Clearly, the results are much better than VisuShrink: the sharp features of the image are retained and the MSE is considerably lower. This is because SureShrink is subband adaptive; a separate threshold is computed for each detail subband.

3.4 BayesShrink

In BayesShrink [5] we determine the threshold for each subband assuming a Generalized Gaussian Distribution (GGD). The GGD is given by

    GG_{σ_X, β}(x) = C(σ_X, β) exp{−[α(σ_X, β) |x|]^β},    (6)

for −∞ < x < ∞, σ_X > 0, β > 0, where

    α(σ_X, β) = σ_X^{-1} [Γ(3/β) / Γ(1/β)]^{1/2},
    C(σ_X, β) = β · α(σ_X, β) / (2 Γ(1/β)),

and Γ(t) = ∫₀^∞ e^{−u} u^{t−1} du.

The parameter σ_X is the standard deviation and β is the shape parameter. It has been observed [5] that a shape parameter β ranging from 0.5 to 1 can describe the distribution of coefficients in a subband for a large set of natural images. Assuming such a distribution for the wavelet coefficients, we empirically estimate β and σ_X for each subband and try to find the threshold T which minimizes the Bayesian risk, i.e., the expected value of the mean square error:

    τ(T) = E(X̂ − X)² = E_X E_{Y|X} (X̂ − X)²,    (7)

where X̂ = η_T(Y), Y|X ~ N(X, σ²) and X ~ GG_{σ_X, β}. The optimal threshold T* is then given by

    T*(σ_X, β) = argmin_T τ(T).    (8)

This is a function of the parameters σ_X and β. Since there is no closed-form solution for T*, numerical calculation is used to find its value. It is observed that the threshold value set by

    T_B(σ_X) = σ² / σ_X    (9)

is very close to T*. The estimated threshold T_B = σ²/σ_X is not only nearly optimal but also has an intuitive appeal.
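The rule (9) is simple to implement. A hedged NumPy sketch, using the noise and signal estimates described in Section 3.4.1 below (the robust median estimator for σ, and σ̂_X = √(max(σ̂_Y² − σ̂², 0)) for the signal):

```python
import numpy as np

def estimate_noise_sigma(hh1):
    """Robust median estimator: sigma_hat = Median(|Y_ij|) / 0.6745 (HH1 subband)."""
    return np.median(np.abs(hh1)) / 0.6745

def bayes_shrink_threshold(subband, sigma):
    """T_B = sigma^2 / sigma_X_hat, with sigma_X_hat from the zero-mean model."""
    var_y = np.mean(np.asarray(subband, dtype=float) ** 2)  # empirical variance of Y
    sigma_x = np.sqrt(max(var_y - sigma ** 2, 0.0))
    if sigma_x == 0.0:
        # Noise dominates the subband: T_B is infinite in theory, so in
        # practice return max|Y_ij| and kill all coefficients.
        return np.max(np.abs(subband))
    return sigma ** 2 / sigma_x

rng = np.random.default_rng(2)
hh1 = rng.standard_normal((64, 64))     # pure N(0, 1) noise subband
sigma_hat = estimate_noise_sigma(hh1)   # close to the true sigma = 1
```

The median-based estimate is robust because the HH1 subband of a natural image is dominated by noise, and the few large signal coefficients barely move the median.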

The normalized threshold T_B/σ is inversely proportional to σ_X, the standard deviation of X, and proportional to σ, the noise standard deviation. When σ/σ_X ≪ 1, the signal is much stronger than the noise, and T_B/σ is chosen to be small in order to preserve most of the signal and remove some of the noise; when σ/σ_X ≫ 1, the noise dominates, and the normalized threshold is chosen to be large to remove the noise, which has overwhelmed the signal. Thus, this threshold choice adapts to both the signal and the noise characteristics as reflected in the parameters σ and σ_X.

3.4.1 Parameter Estimation to determine the Threshold

The GGD parameters σ_X and β need to be estimated to compute T_B(σ_X). The noise variance σ² is estimated from the subband HH1 by the robust median estimator [5],

    σ̂ = Median(|Y_ij|) / 0.6745,    Y_ij ∈ subband HH1.    (10)

The parameter β does not explicitly enter into the expression for T_B(σ_X). Therefore it suffices to estimate directly the signal standard deviation σ_X. The observation model is Y = X + V, with X and V independent of each other; hence

    σ_Y² = σ_X² + σ²,    (11)

where σ_Y² is the variance of Y. Since Y is modelled as zero-mean, σ_Y² can be found empirically by

    σ̂_Y² = (1/n²) Σ_{i,j=1}^{n} Y_ij²,    (12)

where n × n is the size of the subband under consideration. Thus

    T̂_B(σ̂_X) = σ̂² / σ̂_X,    (13)

where

    σ̂_X = √(max(σ̂_Y² − σ̂², 0)).    (14)

In the case that σ̂² ≥ σ̂_Y², σ̂_X is taken to be zero, i.e., T̂_B(σ̂_X) is infinite; in practice, T̂_B(σ̂_X) = max(|Y_ij|), and all coefficients are set to zero.

To summarize, BayesShrink performs soft thresholding with the data-driven, subband-dependent threshold

    T̂_B(σ̂_X) = σ̂² / σ̂_X.

The results obtained by BayesShrink for the image 'Lena' (512 × 512 pixels) are shown in Figure 7(d). The 'Db4' wavelet was used with four levels of decomposition. We found that BayesShrink performs better than SureShrink in terms of MSE. The reconstruction using BayesShrink is smoother and more visually appealing than the one obtained using SureShrink. This not only validates the approximation of the wavelet coefficients by the GGD but also justifies approximating the threshold by a value independent of β.

4 Denoising and Compression using Gaussian-based MMSE Estimation

4.1 Introduction

The philosophy of compression is that a signal typically has structural redundancies that can be exploited to yield a concise representation. White noise, however, does not have correlation and is not easily compressible. Hence, a good compression method can provide a suitable means of distinguishing between signal and noise. So far, we have investigated wavelet thresholding techniques such as SureShrink and BayesShrink for denoising. We now use MMSE estimation based on a Gaussian prior and show that significant denoising can be achieved using this method. We then perform compression of the denoised coefficients based on their distribution and find that this can be done without introducing significant quantization error. Thus, we achieve simultaneous denoising and compression.

4.2 Denoising using MMSE estimation

As explained in the previous section, the Generalized Gaussian distribution (GGD) is a good model for the distribution of wavelet coefficients in each detail subband of the image. However, for most images, a Gaussian distribution is found to be a satisfactory approximation. Therefore, the model for the i-th detail subband becomes

    Y_j^i = X_j^i + N_j^i,    j = 1, 2, ..., M_i,    (15)

where M_i is the number of wavelet coefficients in the i-th detail subband. The coefficients {X_j^i} are independent and identically distributed as N(0, σ_{X_i}²) and are independent of {N_j^i}, which are iid draws from N(0, σ²). We want to get the best estimate of {X_j^i} based on the noisy observations {Y_j^i}. This is done through the following steps:

1. The noise variance σ² is estimated as described in the previous section.

2. The variance σ_{Y_i}² is calculated as

       σ̂_{Y_i}² = (1/M_i) Σ_{j=1}^{M_i} (Y_j^i)².

3. σ̂_{X_i} for subband i is estimated, as before, as

       σ̂_{X_i} = √(max(σ̂_{Y_i}² − σ̂², 0)).

   This comes about because σ̂_{Y_i}² = σ̂_{X_i}² + σ̂², and in the case that σ̂² ≥ σ̂_{Y_i}², σ̂_{X_i} is taken to be zero. This means that the noise is more dominant than the signal in the subband, and so the signal cannot be estimated from the noisy observations.

4. Based on (15), the MMSE estimate of X_j^i based on observing Y_j^i is

       X̂_j^i = E[X | Y] = (σ̂_{X_i}² / σ̂_{Y_i}²) · Y_j^i.    (16)

[Figure 7: Denoising by SureShrink and BayesShrink (σ = 30). (a) 512 × 512 image of 'Lena'; (b) noisy version of 'Lena'; (c) denoised using SureShrink; (d) denoised using BayesShrink.]
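Steps 1 through 4 above reduce, per subband, to a single linear shrinkage. A hedged sketch (the noise σ is assumed to be already estimated):

```python
import numpy as np

def mmse_denoise_subband(y, sigma):
    """Gaussian-prior MMSE shrinkage, eq. (16): X_hat = (var_X / var_Y) * Y."""
    y = np.asarray(y, dtype=float)
    var_y = np.mean(y ** 2)                # step 2: empirical subband variance
    var_x = max(var_y - sigma ** 2, 0.0)   # step 3: signal variance estimate
    if var_y == 0.0:
        return np.zeros_like(y)
    return (var_x / var_y) * y             # step 4: linear shrinkage toward zero

rng = np.random.default_rng(4)
x = rng.standard_normal(4096)              # "signal" coefficients, sigma_X = 1
y = x + rng.standard_normal(4096)          # noisy coefficients, sigma = 1
x_hat = mmse_denoise_subband(y, sigma=1.0) # shrinks by roughly a factor of 1/2
```

Unlike thresholding, this shrinks every coefficient by the same factor var_X/var_Y ≤ 1, which is why no coefficient is exactly zeroed unless the whole subband is noise-dominated.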

We observe the similarity of this step to wavelet shrinkage, since each coefficient Y_j^i is brought closer to zero in absolute value by multiplying by σ̂_{X_i}²/σ̂_{Y_i}² (≤ 1). This effect is similar to that of wavelet shrinkage with soft thresholding. Steps 2 through 4 are repeated for each detail subband i. Note that the coefficients in the low-resolution LL subband are kept unaltered.

The results obtained using this method for the 'Elaine' image with a Db4 wavelet and 4 levels are shown in the first three parts of Figure 8. The MSE comparison plot in Figure 9 shows that denoising by Gaussian estimation performs slightly better than SureShrink for the 'Clock' image. The slightly inferior performance relative to BayesShrink is to be expected, since a GGD prior is a more exact representation of the wavelet coefficients in a subband than the Gaussian prior.

4.3 Compression

We now introduce a quantization scheme for a concise representation of the denoised coefficients {X̂_j^i}. From (16), the {X̂_j^i} are iid with distribution N(0, σ̂_{X_i}⁴/σ̂_{Y_i}²). The number of bits used to encode each coefficient X̂_j^i is determined as follows. For simplicity of notation, we denote X̂_j^i as A_j, keeping in mind that A_j is part of subband i.

1. We first fix the maximum allowable distortion, say D, for each coefficient.

2. The variance of each coefficient A_j is found empirically by calculating the variance of a 3 × 3 block of coefficients centered at A_j. It is assumed that we have available a finite set of optimal Lloyd-Max quantizers for the N(0, 1) distribution. In our experiments, we took 5 quantizers with number of quantization levels M = 2, 4, 8, 16 and 32.

3. Each coefficient A_j is encoded using the quantizer with the least M such that (A_j − Â_j)² ≤ D. Note that both D and the quantizer levels, defined for N(0, 1), have to be scaled by σ_{A_j} for each coefficient A_j.

4. Steps 2 and 3 are repeated for all the coefficients A_j in a subband and for all the detail subbands.

5. The coefficients in the low-resolution subband are quantized assuming a uniform distribution [5]. This is motivated by the fact that the LL coefficients are essentially local averages of the image and are not characterized by a Gaussian distribution.
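The quantizer-selection loop of steps 2 and 3 can be sketched as follows. Optimal Lloyd-Max codebooks for N(0, 1) are normally taken from published tables; as an illustrative stand-in, the sketch trains approximate codebooks with Lloyd's algorithm on a Gaussian sample. How exactly D scales with σ_{A_j} is a modelling choice here (the report only says both D and the levels must be scaled); this sketch scales the squared-error budget by σ_{A_j}².

```python
import numpy as np

def lloyd_codebook(samples, m, iters=50):
    """Approximate Lloyd-Max codebook with m levels for the sample distribution."""
    levels = np.quantile(samples, (np.arange(m) + 0.5) / m)  # quantile initialisation
    for _ in range(iters):
        edges = (levels[:-1] + levels[1:]) / 2.0             # nearest-neighbour cell edges
        idx = np.searchsorted(edges, samples)                # assign samples to cells
        for k in range(m):
            cell = samples[idx == k]
            if cell.size:
                levels[k] = cell.mean()                      # centroid update
    return levels

def encode_coefficient(a, codebooks, sigma_a, d_max):
    """Encode a with the smallest codebook whose error meets the distortion budget.
    Levels (built for N(0, 1)) and the budget are scaled by sigma_a, per step 3."""
    for levels in codebooks:                                 # M = 2, 4, 8, 16, 32
        scaled = sigma_a * levels
        a_hat = scaled[np.argmin(np.abs(scaled - a))]
        if (a - a_hat) ** 2 <= d_max * sigma_a ** 2:
            return a_hat
    return a_hat                                             # largest codebook as fallback

rng = np.random.default_rng(3)
train = rng.standard_normal(20000)
codebooks = [lloyd_codebook(train, m) for m in (2, 4, 8, 16, 32)]
a_hat = encode_coefficient(0.5, codebooks, sigma_a=1.0, d_max=0.05)
```

Tightening d_max forces larger codebooks (more bits per coefficient), which is exactly the rate-distortion trade-off discussed in the Results section.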

[Figure 8: MMSE denoising and quantization. (a) 200 × 200 image of 'Elaine'; (b) noisy version of 'Elaine'; (c) denoised version of 'Elaine' (Db4, 4 levels); (d) quantized version of the denoised 'Elaine'.]

[Figure 9: Comparison of MSE of various denoising methods (VisuShrink, SureShrink, BayesShrink, MMSE estimation, and quantization) for the 'Clock' image, over a range of σ.]

4.4 Results

Figure 8 shows the results obtained when this denoising and compression scheme is applied to the image 'Elaine' with σ = 30. We used a Db4 discrete wavelet series with 4 levels of decomposition. We see that the denoised version has a much lower MSE (143.7 vs. σ² = 900) and better visual quality too. The compressed version looks very similar to the denoised image, with an additional MSE of around 20. It has been encoded using 1.52 bpp (distortion value D set at 0.1). The rate can be controlled by changing the distortion level D: if we fix a large distortion level, we get a low encoding rate, but pay the price of a larger quantization error. We choose to operate at a particular point on the rate-distortion curve based on the distortion we are prepared to tolerate.

The performance of the different denoising schemes is compared in Figure 9. A 200 × 200 image 'Clock' is considered, and the MSEs for different values of σ are compared. Clearly, VisuShrink is the least effective among the methods compared. This is due to the fact that it is based on a universal threshold and is not subband adaptive, unlike the other schemes. Among these, BayesShrink clearly performs the best. This is expected, since the GGD models the distribution of coefficients in a subband well. MMSE estimation based on a Gaussian distribution performs slightly worse than BayesShrink. We also see that a quantization error (approximately constant) is introduced due to compression. Among the subband adaptive schemes, SureShrink has the highest MSE. But it should be noted that SureShrink has the desirable property of adapting to the discontinuities in the signal. This is more evident in 1-D signals such as 'Blocks' than in images.

5 Conclusions

We have seen that wavelet thresholding is an effective method of denoising noisy signals. We first tested hard and soft thresholding on noisy versions of the standard 1-D signals and found the best threshold. We then investigated several soft thresholding schemes, viz. VisuShrink, SureShrink and BayesShrink, for denoising images. We found that subband adaptive thresholding performs better than universal thresholding. Among these, BayesShrink gave the best results. This validates the assumption that the GGD is a very good model for the wavelet coefficient distribution in a subband. By weakening the GGD assumption and taking the coefficients to be Gaussian distributed, we obtained a simple model that facilitated both denoising and compression.

An important point to note is that although SureShrink performed worse than BayesShrink and Gaussian-based MMSE denoising, it adapts well to sharp discontinuities in the signal.

This was not evident in the natural images we used for testing. It would be instructive to compare the performance of these algorithms on artificial images with discontinuities (such as medical images). It would also be interesting to try denoising (and compression) using other special cases of the GGD, such as the Laplacian (GGD with β = 1). Most images can be described by a GGD with shape parameter β ranging from 0.5 to 1, so a Laplacian prior may give better results than a Gaussian prior (β = 2), although it may not be as easy to work with.

References

[1] David L. Donoho and Iain M. Johnstone. Adapting to unknown smoothness via wavelet shrinkage. Journal of the American Statistical Association, 90(432):1200-1224, Dec 1995.

[2] David L. Donoho and Iain M. Johnstone. Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81(3):425-455, August 1994.

[3] David L. Donoho. De-noising by soft-thresholding. IEEE Transactions on Information Theory, 41(3):613-627, May 1995.

[4] Maarten Jansen. Noise Reduction by Wavelet Thresholding, volume 161. Springer-Verlag, 1st edition, 2001.

[5] S. Grace Chang, Bin Yu, and Martin Vetterli. Adaptive wavelet thresholding for image denoising and compression. IEEE Transactions on Image Processing, 9(9):1532-1546, Sep 2000.

[6] Carl Taswell. The what, how, and why of wavelet shrinkage denoising. Computing in Science and Engineering, pages 12-19, May/June 2000.

