Lecture 16: Image Deblurring By Frequency Domain Operations


Lecture 16: Image Deblurring by Frequency Domain Operations
Harvey Rhody
Chester F. Carlson Center for Imaging Science
Rochester Institute of Technology
rhody@cis.rit.edu
November 8, 2005

Abstract
Image restoration by reduction of blurring is an important application of linear filter techniques. These filtering techniques are most easily understood in the frequency domain. Wiener deblurring, constrained least-squares deblurring, and blind deconvolution are presented.

DIP Lecture 16

Image Blur Model

Image blur is a common problem. It may be due to the point spread function of the sensor, sensor motion, or other causes.

Linear model of the observation system:

    g(x, y) = f(x, y) * h(x, y) + η(x, y)

What deblurring filter will construct the best estimate f̂(x, y)?
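The observation model can be simulated directly. The following is a minimal numpy sketch (the function name and the 3x3 box PSF are our own choices, not from the lecture); by the convolution theorem, the blur is applied as circular convolution via the FFT:

```python
import numpy as np

def blur_observe(f, h, noise_sigma=0.0, rng=None):
    """Simulate g(x,y) = f(x,y) * h(x,y) + eta(x,y) with circular convolution."""
    rng = rng or np.random.default_rng(0)
    G = np.fft.fft2(f) * np.fft.fft2(h, s=f.shape)   # convolution theorem
    g = np.real(np.fft.ifft2(G))
    return g + noise_sigma * rng.standard_normal(f.shape)

# A point source blurred by a 3x3 box PSF spreads into a 3x3 patch of value 1/9.
f = np.zeros((8, 8)); f[4, 4] = 1.0
h = np.ones((3, 3)) / 9.0
g = blur_observe(f, h, noise_sigma=0.01)
```

Because the box PSF sums to one, the blur conserves total intensity; only the noise term perturbs it.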

Frequency Domain Model

The observation equation can also be expressed in the frequency domain as

    G(u, v) = F(u, v) H(u, v) + N(u, v)

We can construct an estimate of F(u, v) by filtering the observation G(u, v). Let T(u, v) be a linear shift-invariant reconstruction filter:

    F̂(u, v) = G(u, v) T(u, v)

Our task is to find a filter T(u, v) that provides a good estimate of the original image. The solution must balance noise reduction and sharpening of the image; these are conflicting goals.

Inverse Filtering

As a first attempt at a solution we can try the inverse filter. Assume that we know the sensor function H(u, v). Try

    T(u, v) = H^-1(u, v)

    F̂(u, v) = G(u, v) H^-1(u, v) = F(u, v) + N(u, v) H^-1(u, v)

The result is filtered noise added to the desired image. The problem is that the inverse filter typically has very high gain at certain frequencies, so the noise term completely dominates the result.
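A small experiment makes the failure mode concrete. This is our own construction, assuming a Gaussian PSF so that H has no exact zeros: dividing by H recovers the noiseless image almost perfectly, but a tiny amount of noise is amplified by 1/H into garbage.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
f = rng.random((N, N))

# Assumed Gaussian PSF, wrapped so that H is real and strictly positive.
x = np.minimum(np.arange(N), N - np.arange(N))
h = np.exp(-(x[:, None]**2 + x[None, :]**2) / (2 * 1.5**2))
h /= h.sum()
H = np.fft.fft2(h)

G = np.fft.fft2(f) * H                      # noiseless blurred spectrum
f_hat = np.real(np.fft.ifft2(G / H))        # inverse filter: near-perfect recovery

noise = 1e-3 * rng.standard_normal((N, N))  # a little sensor noise
G_noisy = G + np.fft.fft2(noise)
f_bad = np.real(np.fft.ifft2(G_noisy / H))  # N/H blows up where H is tiny

err_clean = np.abs(f_hat - f).max()
err_noisy = np.abs(f_bad - f).max()
```

At the corner frequencies H is on the order of 1e-10, so even noise three orders of magnitude below the signal dominates the restoration.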

Inverse Filter

[Figure: original image, blurred image, and restoration with H^-1(u, v).]

A small amount of noise saturates the inverse filter.

Inverse Filter

The inverse filter has very high gain at frequencies where H(u, v) is small. The amplified noise at these frequencies dominates the output.

[Figure: the frequency response of a very low-quality sensor MTF and of the inverse filter for it.]

Modified Inverse Filter

One can attempt to reduce the noise amplification by modifying the inverse filter to lower its high-frequency gain. Consider weighting H^-1 with a Butterworth response:

    B(u, v) = 1 / (1 + ((u^2 + v^2) / D^2)^n)

    T(u, v) = B(u, v) / H(u, v)

    F̂(u, v) = (F(u, v) H(u, v) + N(u, v)) T(u, v)
             = F(u, v) B(u, v) + N(u, v) B(u, v) / H(u, v)

The goal is to select B(u, v) so that the image is not distorted and the noise is still suppressed.
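A sketch of the modified inverse filter, with our own parameters and an assumed Gaussian OTF (the lecture's sensor function is not reproduced here). The Butterworth weight B rolls off the gain of 1/H at high frequencies while leaving low frequencies, where the test image lives, essentially untouched:

```python
import numpy as np

N = 64
u = np.fft.fftfreq(N) * N
r2 = u[:, None]**2 + u[None, :]**2

H = np.exp(-r2 / 208.0)                  # assumed smooth Gaussian OTF (no zeros)
D, n = 15.0, 2                           # Butterworth radius and order (our choice)
B = 1.0 / (1.0 + (r2 / D**2)**n)         # B(u,v) = 1 / (1 + ((u^2+v^2)/D^2)^n)

T_inv = 1.0 / H                          # plain inverse filter
T_mod = B / H                            # modified inverse filter

# Low-frequency test image: it passes through F*B almost undistorted.
x = np.arange(N)
f = np.cos(2 * np.pi * 3 * x / N)[None, :] * np.ones((N, 1))
g = np.real(np.fft.ifft2(np.fft.fft2(f) * H))
f_hat = np.real(np.fft.ifft2(np.fft.fft2(g) * T_mod))
```

The trade-off is exactly the F·B + N·B/H decomposition above: B must fall off fast enough to cap B/H where H is small, while staying near one over the image's passband.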

Modified Inverse Filter

[Figure: H(u, v)B(u, v) for R = 20, n = 1 and for R = 40, n = 1; the blurred image G(u, v); restorations using R = 20 and R = 40.]

Modified Inverse Filter

[Figure: H(u, v)B(u, v) for R = 90, n = 8; the inverse filter B/H; the original image G(u, v); the image blurred using R = 20; and the restoration.]

Wiener Filter

The Wiener filter minimizes the mean-squared error

    e^2 = E[(f(x, y) - f̂(x, y))^2]

The frequency-domain expression for the Wiener filter is

    T(u, v) = H*(u, v) / (|H(u, v)|^2 + S_η(u, v) / S_f(u, v))

where S_η(u, v) and S_f(u, v) are the power spectra of the noise and the original image.

When the power spectra are not known it is common to use a Wiener-inspired approximation

    T(u, v) = H*(u, v) / (|H(u, v)|^2 + K)

where K has a small positive value.
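The constant-K approximation is easy to implement. The sketch below is our own test setup with an assumed Gaussian OTF; note that the gain |T| can never exceed 1/(2·sqrt(K)), so unlike the inverse filter no frequency blows up:

```python
import numpy as np

def wiener_deblur(g, H, K):
    """Wiener-inspired restoration: T = H* / (|H|^2 + K)."""
    T = np.conj(H) / (np.abs(H)**2 + K)
    return np.real(np.fft.ifft2(np.fft.fft2(g) * T))

N = 64
u = np.fft.fftfreq(N) * N
H = np.exp(-(u[:, None]**2 + u[None, :]**2) / 208.0)   # assumed Gaussian OTF

rng = np.random.default_rng(2)
x = np.arange(N)
f = np.cos(2 * np.pi * 2 * x / N)[None, :] * np.ones((N, 1))
g = np.real(np.fft.ifft2(np.fft.fft2(f) * H)) + 0.01 * rng.standard_normal((N, N))

K = 1e-3
f_hat = wiener_deblur(g, H, K)
gain = np.abs(np.conj(H) / (np.abs(H)**2 + K))   # bounded by 1/(2 sqrt(K))
```

The gain bound follows from maximizing |H|/(|H|^2 + K) over |H|, which peaks at |H| = sqrt(K).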

Norbert Wiener (1894-1964)

- Born in 1894 in Columbia, Missouri
- 1913 – Ph.D. in Philosophy, Harvard, at age 18
- Instructor of Mathematics at MIT in 1919, Assistant Professor in 1929, Professor in 1931
- 1942 – "The Yellow Peril" (classified)
- 1948 – "Cybernetics, or Control and Communication in the Animal and the Machine"
- 1949 – "Extrapolation, Interpolation and Smoothing of Stationary Time Series with Engineering Applications"

[Photograph: Norbert Wiener]

Wiener Filter

[Figure: original image (left) and blurred image (right); restorations with Wiener filters with K = 0.01 (left) and K = 0.0001 (right).]

Wiener Filter Response

The frequency responses of Wiener filters with K = 0.01 and K = 0.0001 are shown below. Note that they do not suffer from catastrophic gain at any frequency.

[Figure: filter responses for K = 0.01 and K = 0.0001.]

Comparison

Restorations with the Wiener approximation and the Butterworth-modified inverse filter are similar.

[Figure: original; Wiener with K = 0.0001; inverse Butterworth with [R, n] = [90, 8].]

Motion Blur Model

An image can be blurred by motion while the camera shutter is open. If an image f(x, y) is moving with constant velocity and in T seconds moves to f(x - α, y - β), then the image registered by the sensor is

    g(x, y) = ∫_0^T f(x - αt/T, y - βt/T) dt

so that

    G(u, v) = F(u, v) H(u, v)

where the system transfer function is

    H(u, v) = T · (sin(π(αu + βv)) / (π(αu + βv))) · e^{-iπ(αu + βv)}
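The transfer function can be coded directly; the helper name below is ours. Note that numpy's sinc is the normalized sin(πz)/(πz), which matches the formula above:

```python
import numpy as np

def motion_blur_H(shape, alpha, beta, T=1.0):
    """H(u,v) = T * sin(pi(alpha*u + beta*v)) / (pi(alpha*u + beta*v)) * e^{-i pi(alpha*u + beta*v)}."""
    u = np.fft.fftfreq(shape[0]) * shape[0]
    v = np.fft.fftfreq(shape[1]) * shape[1]
    z = alpha * u[:, None] + beta * v[None, :]
    return T * np.sinc(z) * np.exp(-1j * np.pi * z)   # np.sinc(z) = sin(pi z)/(pi z)

H = motion_blur_H((64, 64), alpha=0.1, beta=0.1)
```

The sinc factor is zero wherever αu + βv is a nonzero integer, so motion blur has exact spectral zeros; those frequencies are unrecoverable by any inverse filter, which motivates the Wiener and CLS restorations that follow.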

Motion Blur Example

An image with motion blur with parameters α = β = 0.1 and T = 1 is shown below (original left, blurred right).

[Figure.]

Wiener Filter Restoration of Motion Blur

Shown below are three blurred images with different noise levels and the images recovered by Wiener filtering. The restoration filter was the Wiener filter approximation

    T(u, v) = H*(u, v) / (|H(u, v)|^2 + K)

with K chosen for best visual quality for each image by trial and error.

Wiener Restoration Results

[Figure: results for high, medium, and low noise.]

Constrained Least Squares Filtering

The Wiener filter uses the power spectra of the actual image and noise. The CLS method seeks to constrain the variation in the image due to noise without actual knowledge of either power spectrum.

The CLS reconstruction is

    F̂(u, v) = H*(u, v) G(u, v) / (|H(u, v)|^2 + γ |P(u, v)|^2)

where P(u, v) is the Fourier transform of the Laplacian filter

    p(x, y) =   0  -1   0
               -1   4  -1
                0  -1   0

The filter P(u, v) has a large amplitude at high frequencies, where the noise tends to be dominant. It modifies the denominator to reduce the noise effects at high frequencies.
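A sketch of the CLS reconstruction (our own test setup, again assuming a Gaussian OTF). The Laplacian kernel is zero-padded into the full image frame before taking its transform, so |P| is small at low frequencies and large near the Nyquist corner:

```python
import numpy as np

def cls_deblur(g, H, gamma):
    """Constrained least squares: F_hat = H* G / (|H|^2 + gamma |P|^2)."""
    p = np.zeros(g.shape)
    p[:3, :3] = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]   # Laplacian, zero-padded
    P = np.fft.fft2(p)
    G = np.fft.fft2(g)
    F_hat = np.conj(H) * G / (np.abs(H)**2 + gamma * np.abs(P)**2)
    return np.real(np.fft.ifft2(F_hat))

N = 64
u = np.fft.fftfreq(N) * N
H = np.exp(-(u[:, None]**2 + u[None, :]**2) / 208.0)   # assumed Gaussian OTF

x = np.arange(N)
f = np.cos(2 * np.pi * 2 * x / N)[None, :] * np.ones((N, 1))
g = np.real(np.fft.ifft2(np.fft.fft2(f) * H))           # noiseless blur
f_hat = cls_deblur(g, H, gamma=0.01)
```

Because γ|P|^2 is nearly zero over the low-frequency passband, a smooth image is restored almost exactly, while the same term keeps the high-frequency gain bounded when noise is present.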

CLS-Wiener Comparison

The reconstruction results are shown for CLS (top) and Wiener (bottom).

[Figure: high, medium, and low noise.]

Blind Deconvolution

If the blur function H(u, v) is not known, then recovery of F(u, v) from

    G(u, v) = F(u, v) H(u, v) + N(u, v)

is more difficult.

Blind deconvolution describes a class of techniques that estimate H(u, v) from one or more images and then use that estimate to recover an estimate of F(u, v). If nothing is known about the image or the blur function, the blind deconvolution problem cannot be solved. Blind deconvolution algorithms differ in what they assume about F(u, v) and H(u, v).

In developing a blind deconvolution heuristic we will ignore the additive noise. It may be possible to modify the process to reduce noise effects where that is necessary.

Blind Deconvolution

One practical algorithm¹ is based on the assumption that H(u, v) is a smooth function. This is reasonable for many sensor systems. The concept of the algorithm is to separate the image and blur functions by using smoothness. If we ignore noise, we have

    G(u, v) = F(u, v) H(u, v)

Express F(u, v) as the sum of a slowly varying component and a rapidly varying component:

    F(u, v) = [F(u, v)]_L + [F(u, v)]_H

Then

    G(u, v) = H(u, v) [F(u, v)]_L + H(u, v) [F(u, v)]_H

¹ Jae S. Lim, Two-Dimensional Signal and Image Processing, Prentice-Hall, 1990. Section 9.3.2, "Algorithms for Blind Deconvolution".

Blind Deconvolution

Substituting the decomposition F(u, v) = [F(u, v)]_L + [F(u, v)]_H into G(u, v) = F(u, v) H(u, v) gives

    G(u, v) = [F(u, v)]_L · H(u, v) + [F(u, v)]_H · H(u, v)

Blind Deconvolution

The low-frequency term in the expression

    G(u, v) = H(u, v) [F(u, v)]_L + H(u, v) [F(u, v)]_H

can be extracted with a suitable smoothing filter S(u, v):

    S(u, v) G(u, v) = S(u, v) [H(u, v) [F(u, v)]_L] + S(u, v) [H(u, v) [F(u, v)]_H]
                    ≈ H(u, v) [F(u, v)]_L

We can then estimate the blur function from

    Ĥ(u, v) = S(u, v) G(u, v) / [F(u, v)]_L

The numerator can be constructed from the image. The denominator can be estimated from similar images.
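As a toy illustration of the last step (our own construction, with noise ignored as above): if the numerator is formed by averaging observed magnitude spectra and the denominator by averaging the spectra of similar sharp images, the ratio recovers a smooth H. Here the "similar images" are random textures sharing the same statistics, and ensemble averaging plays the role of the smoothing filter S:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64
u = np.fft.fftfreq(N) * N
H = np.exp(-(u[:, None]**2 + u[None, :]**2) / 300.0)   # "unknown" smooth blur

def avg_mag_spectrum(images):
    """Average magnitude spectrum over an ensemble of images."""
    return np.mean([np.abs(np.fft.fft2(im)) for im in images], axis=0)

# Blurred observations and a separate ensemble of similar sharp images.
observed = [np.real(np.fft.ifft2(np.fft.fft2(rng.random((N, N))) * H))
            for _ in range(100)]
similar = [rng.random((N, N)) for _ in range(100)]

S_G = avg_mag_spectrum(observed)   # numerator: smoothed observed spectrum
F_L = avg_mag_spectrum(similar)    # denominator: prototype image spectrum
H_hat = S_G / F_L                  # estimate of the blur magnitude
```

The estimate is only as good as the assumption that the prototype images share the true image's spectral statistics; with real imagery the averages would be noisier and the estimate correspondingly rougher.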

Blind Deconvolution for Atmospheric Blur Removal

"A 1.6 meter ground-based telescope sensing green light, for example, should resolve features on the order of 10 inches when observing the Hubble Space Telescope in its 600 km orbit. Time-varying changes in the refractive index of Earth's atmosphere can, however, swell this resolution by a factor of 15, making meaningful inference about the satellite from ground-based pictures nearly impossible."²

[Figure: Hubble Space Telescope as acquired by a 1.6 m telescope at the Air Force Maui Optical Station.]

² Timothy J. Schulz, Michigan Technological University, .../research/blind deconvolution.html

Blind Deconvolution for Atmospheric Blur Removal

A maximum-likelihood estimation method has been considered for recovering fine-resolution imagery from a sequence of noisy, blurred images. A numerical technique based on the expectation-maximization (EM) procedure has been developed for solving this multiframe blind deblurring problem. A parallel implementation of this algorithm on an IBM SP2 computer at the Maui High Performance Computing Center has been used to restore the resolution of ground-based telescope imagery of the Hubble Space Telescope.

[Figure.]

Phase vs Magnitude

The Fourier transform of an image f(x, y) can be expressed in magnitude-phase form as

    F(u, v) = |F(u, v)| e^{iθ(u, v)}

A question is sometimes asked: "Which is more important, the magnitude spectrum or the phase spectrum?"

The magnitude spectrum describes the energy distribution as a function of frequency. We have seen that the mean-squared value of f(x, y) can be found by summing over |F(u, v)|^2.

However, the phase spectrum contains important information about image detail. One cannot expect to reconstruct f(x, y) from |F(u, v)| alone because the phase shift of each component is completely ambiguous. However, as shown on the next page, one can reconstruct a reasonable approximation from the phase spectrum.

Phase Substitution

The magnitude spectrum and phase spectrum are both important, but which carries more of the image structure? Let A and B be image arrays, and let

    A_F = F(A)            B_F = F(B)

    C_F = |A_F| e^{iφ_B}  D_F = |B_F| e^{iφ_A}

    C = F^-1(C_F)         D = F^-1(D_F)

The effect of interchanging the phase and magnitude spectra is illustrated here.

[Figure.]
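The interchange is a few lines of numpy (our own toy arrays; with real images, C tends to resemble B and D to resemble A, which is the classic demonstration that phase carries the structure). Because the swapped spectra are still Hermitian, the inverse transforms come out real:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((32, 32))
B = rng.random((32, 32))
AF, BF = np.fft.fft2(A), np.fft.fft2(B)

CF = np.abs(AF) * np.exp(1j * np.angle(BF))   # magnitude of A, phase of B
DF = np.abs(BF) * np.exp(1j * np.angle(AF))   # magnitude of B, phase of A
C = np.real(np.fft.ifft2(CF))
D = np.real(np.fft.ifft2(DF))
```

By construction C keeps A's magnitude spectrum exactly, and D keeps B's, so any resemblance of C to B is carried entirely by the phase.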
