Optimal Single Image Capture For Motion Deblurring


Amit Agrawal
Mitsubishi Electric Research Labs (MERL)
201 Broadway, Cambridge, MA, USA

Ramesh Raskar
MIT Media Lab
20 Ames St., Cambridge, MA, USA

Abstract

Deblurring images of moving objects captured from a traditional camera is an ill-posed problem due to the loss of high spatial frequencies in the captured images. Recent techniques have attempted to engineer the motion point spread function (PSF) by either making it invertible [16] using coded exposure, or invariant to motion [13] by moving the camera in a specific fashion.

We address the problem of the optimal single image capture strategy for the best deblurring performance. We formulate the problem of optimal capture as maximizing the signal to noise ratio (SNR) of the deconvolved image given a scene light level. As the exposure time increases, the sensor integrates more light, thereby increasing the SNR of the captured signal. However, for moving objects, a larger exposure time also results in more blur and hence more deconvolution noise. We compare the following three single image capture strategies: (a) traditional camera, (b) coded exposure camera, and (c) motion invariant photography, as well as the best exposure time for capture, by analyzing the rate of increase of deconvolution noise with exposure time. We analyze which strategy is optimal for known/unknown motion direction and speed and investigate how the performance degrades in other cases. We present real experimental results by simulating the above capture strategies using a high speed video camera.

1. Introduction

Consider the problem of capturing a sharp image of a moving object. If the exposure time can be made sufficiently small, a sharp image can be obtained. However, a small exposure time integrates less light, thereby increasing the noise in the captured image. As the exposure time increases, the SNR of the captured signal improves, but moving objects also result in increased motion blur. Motion deblurring attempts to obtain a sharp image by deconvolution, thereby resulting in increased deconvolution noise with exposure.

In this paper, we ask the following question: What is the best exposure time and capture strategy for capturing a single image of a moving object? We formulate the problem of optimal capture as follows: Maximize the SNR of the deconvolved image of the moving object, given a certain scene light level, while not degrading the image corresponding to the static parts of the scene¹.

To obtain the best deblurring performance, one needs to analyze the rate of increase of capture SNR versus deconvolution noise with the exposure time. For imaging sensors, the capture SNR increases proportionally to the square root of the exposure time (sub-linearly) due to the signal-dependent photon noise. It is well known that deblurring of images obtained from a traditional camera is highly ill-posed, due to the loss of high spatial frequencies in the captured image. We first show a simple but rather non-intuitive result: the deconvolution noise for 1-D motion blur using a (static) traditional camera increases faster than the capture SNR with the exposure time. Thus, increasing exposure time always decreases the SNR of the deconvolved moving object.
We then analyze recent advances in engineering the motion PSF that dramatically improve the deconvolution performance. Two prominent methods are (a) making the PSF invertible using a coded exposure camera [16], and (b) making the PSF invariant by moving the camera with non-zero acceleration [13].

A coded exposure camera [16] modulates the integration pattern of light by opening and closing the shutter within the exposure time using a carefully chosen pseudo-random code. The code is chosen so as to minimize the deconvolution noise assuming a specific amount of motion blur in the image. However, coded exposure also loses light. In [16], the chosen code was 50% on/off, thereby losing half the light compared to a traditional camera with the same exposure time. While [16] analyzed the improvement in deconvolution performance, it ignored the loss of light during image capture. We incorporate the loss of light in our analysis, and show that it is not necessary to have a 50% on/off code with signal-dependent noise; one has the flexibility of choosing other codes. Note that the PSF is made invertible for any object motion direction, while the motion magnitude is required for the optimal choice of code.

¹ Otherwise, a trivial capture strategy would be to move the camera with the same speed as the object if the motion direction is known.

Figure 1. Overview of single image capture techniques for motion deblurring (a table comparing traditional capture, coded exposure [16] and MIP [13] in terms of required camera motion, loss of light, PSF invariance, PSF invertibility, deconvolution noise, and noise on static scene parts). Coded exposure is optimal for deblurring for any motion direction, if the motion magnitude is known; but the motion PSF needs to be estimated for deblurring. MIP is optimal if the motion direction is known and the magnitude is within a range (which could be unknown), with the additional advantage that the motion PSF need not be estimated (it is invariant). However, the performance of coded exposure degrades gradually as the motion magnitude differs from the desired one, while MIP performance degrades sharply as the motion direction differs from the camera motion direction and the motion magnitude goes beyond the assumed range.

Motion invariant photography (MIP) [13] moves the camera with a constant acceleration while capturing the image. The key idea is to make the motion PSF invariant to object speed within a certain range. Thus, objects moving with different speeds within that range result in the same motion PSF. Note that MIP needs to know the direction of the object motion, since the camera must be moved accordingly, but knowledge of the motion magnitude is not required. Another disadvantage is that the static parts of the scene are also blurred during capture, leading to deconvolution noise on those scene parts. We compare the three techniques in terms of SNR of the deconvolved image and obtain optimal parameters given a scene light level and object velocity (or range of velocities). Given capture parameters for a scenario, we investigate how the performance degrades for different motion magnitudes and directions. An overview is shown in Figure 1.

1.1. Contributions

- We formulate the problem of optimal single image capture of a moving object as maximizing the SNR of the deconvolved image of the moving object.
- We show that for a traditional image capture using a static camera, the SNR of the deblurred moving object decreases with increasing exposure time.
- We investigate which capture strategy to choose, the choice of exposure time and associated parameters, and analyze the performance for different operating conditions such as known/unknown motion magnitude and direction.

1.2. Related work

Motion deblurring has been an active area of research over the last few decades. Blind deconvolution [9, 5] attempts to estimate the PSF from the given image itself. Since deblurring is typically ill-posed, regularization algorithms such as Richardson-Lucy [14, 18] are used to reduce noise. Recent papers [22, 20, 11, 10, 2] have shown promising results for PSF estimation and deblurring.

Manipulating PSF: By coding the exposure, [16] made the PSF invertible and easy to solve. Wavefront coding [3] modifies the defocus blur to become depth-independent using a cubic phase plate with the lens, while Nagahara et al. [15] move the sensor in the lateral direction during image capture to achieve the same. MIP [13] makes the motion PSF invariant for a range of speeds by moving the camera.
Coded apertures [4] have been used in astronomy with MURA codes for low deconvolution noise, with broadband codes for digital refocusing [21], and for depth estimation in [8, 12].

Improved capture strategies: In [6], optimal exposures were obtained to combine images for high dynamic range imaging. Hadamard multiplexing was used in [19] to increase the capture SNR in the presence of multiple light sources. The effect of photon noise and saturation was further included in [17] to obtain better codes.

2. Optimal single image capture

Consider an object moving with a velocity v m/sec. For simplicity, we assume that the object is moving horizontally in a single plane parallel to the image plane and that the object motion results in an image blur of v_i pixels/ms. Let i denote the captured blurred image at an exposure time of 1 ms, and let i_o and i_b be the average image intensity of the moving object and the static background in the captured image, respectively. Define a baseline exposure time t_0 = 1/v_i for which the blur is 1 pixel in the captured image. Let m be the size of the object in the image along the motion direction if it were static, and let SNR_0 be the minimum acceptable SNR for the object.

Image noise model: We use the affine noise model [17, 1, 7], where the noise η is described as the sum of a signal-independent term and a signal-dependent term. The signal-independent term is due to dark current, amplifier noise and the A/D quantizer. Let the gray level variance of this term be σ²_gray. Signal-dependent noise is related to the photon flux and the uncertainty of the electron-photon conversion process. The variance of the photon-generated electrons increases linearly with the measured signal, and hence with the exposure time t.

Figure 2. Comparison of capture strategies. (Left) A 1D object (blue) of length m blurs by k in x-t space, with integration lines corresponding to a traditional camera (solid brown), coded exposure (dotted) and MIP (yellow), and the resulting PSFs. Objects moving with speed v have energy along a single line in the frequency-domain f_x-f_t space. For traditional and coded exposure (static cameras), the captured image corresponds to the f_t = 0 slice after modulation by a sinc (red) and a broadband (blue) filter, respectively. Thus, for coded exposure, any velocity v results in non-zero energy on the f_t = 0 slice for all spatial frequencies. MIP optimally captures energy within the wedge given by [-v_r, v_r] [13] but performs poorly for v outside this range. (Right) The motion PSF for MIP becomes similar to a box function as the speed increases beyond the desired range (v_r = 3).

Thus, the photon noise variance can be written as Ct, where C is a camera-dependent constant. Thus, σ²_η = σ²_gray + Ct. Given this noise model, the SNR of the captured image is given by

    SNR_capture = i_o t / √(σ²_gray + Ct).    (1)

For long exposures, Ct ≫ σ²_gray, the photon noise dominates and SNR_capture ≈ i_o √(t/C) increases as the square root of the exposure time. When Ct ≪ σ²_gray, SNR_capture increases linearly with t.

Deconvolution noise: At exposure time t, the amount of blur is k = t v_i. The captured image i(x, y) is modeled as a convolution of the sharp image of the object s(x, y) with the motion PSF h(x), along with added noise

    i(x, y) = s(x, y) * h(x) + η(x, y),    (2)

where * denotes convolution. For 1D motion, the discrete equation for each motion line is given by i = A s + n, where A, of size (m + k - 1) × m, denotes the 1D circulant motion smear matrix, and s, i and n denote the vectors of sharp object, blurred object and noise intensities along each motion line. The estimated deblurred image is then given by

    ŝ = (AᵀA)⁻¹ Aᵀ i = s + (AᵀA)⁻¹ Aᵀ n.    (3)

The covariance matrix of the noise in the estimate ŝ - s is equal to

    Σ = (AᵀA)⁻¹ Aᵀ σ²_η A (AᵀA)⁻ᵀ = σ²_η (AᵀA)⁻¹.    (4)

The root mean square error (RMSE) increases by a factor f = √(trace((AᵀA)⁻¹)/m). Thus, the SNR² of the deconvolved object at exposure time t is given by

    SNR_d = i_o t / (f √(σ²_gray + Ct)),    (5)

where f denotes the deconvolution noise factor (DNF).

² We use 20 log₁₀(·) for decibels.

2.1. Traditional camera

For a traditional capture, the motion PSF is a box function whose width is equal to the blur size k:

    h(x) = 1/k  if 0 ≤ x < k,
           0    otherwise.    (6)

Figure 3 (left) shows plots of √t/f, which is proportional to SNR_d at high signal-dependent noise (Ct ≫ σ²_gray). Plots are shown for different object velocities assuming m = 300. Note that the SNR decreases as the exposure time is increased. Thus, for traditional capture, increasing the exposure time decreases the SNR of the deconvolved object. For a specific camera, the minimum exposure time that satisfies SNR_d ≥ SNR_0 would be optimal, if this condition could be satisfied.

Trivial capture: If the SNR at the baseline exposure t_0 is greater than SNR_0, then the optimal exposure time is t_0. For example, if there is enough light in the scene (bright daylight), a short exposure image will capture a sharp image of a moving object with good SNR.
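To make the dependence on exposure time concrete, the following sketch (Python/NumPy) builds the motion smear matrix A for the box PSF of Eq. (6), evaluates the DNF f of Eq. (5), and prints the capture and deconvolved SNRs for a few exposure times. All numeric constants (i_o, σ²_gray, C, the object size and blur speed) are made-up illustrative values, not numbers from this paper.

```python
import numpy as np

def smear_matrix(m, psf):
    """(m + k - 1) x m motion-smear matrix A such that A @ s is the 1-D
    motion blur of a sharp signal s of length m by the length-k PSF."""
    h = np.asarray(psf, dtype=float)
    h = h / h.sum()                      # PSF integrates to 1
    k = len(h)
    A = np.zeros((m + k - 1, m))
    for j in range(m):
        A[j:j + k, j] = h
    return A

def dnf(A):
    """Deconvolution noise factor f = sqrt(trace((A^T A)^-1) / m), Eq. (5)."""
    m = A.shape[1]
    return np.sqrt(np.trace(np.linalg.inv(A.T @ A)) / m)

# Hypothetical scene/camera constants, chosen only for illustration.
m = 300                # object size in pixels along the motion direction
v_i = 1.0              # blur speed in pixels/ms
i_o = 1.0              # average object intensity per ms of exposure
sigma_gray2 = 1.0      # variance of signal-independent (read) noise
C = 5.0                # photon-noise constant: photon variance = C * t

for t in (5, 10, 20, 30):                 # exposure time in ms
    k = max(1, int(round(t * v_i)))       # blur size k = t * v_i
    f = dnf(smear_matrix(m, np.ones(k)))  # box PSF of a traditional camera
    snr_cap = i_o * t / np.sqrt(sigma_gray2 + C * t)   # Eq. (1)
    snr_dec = snr_cap / f                              # Eq. (5)
    print(f"t={t:2d} ms  k={k:2d}  "
          f"capture SNR={20*np.log10(snr_cap):6.1f} dB  "
          f"deconvolved SNR={20*np.log10(snr_dec):6.1f} dB")
```

Changing C and σ²_gray moves the crossover between the read-noise and photon-noise regimes; the section's claim concerns the photon-noise-dominated regime (Ct ≫ σ²_gray), where the capture SNR grows only as √t while f grows with the blur size.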
3. Reducing deconvolution noise

Now we consider the following two approaches for reducing the deconvolution noise: (a) the coded exposure camera, and (b) MIP, and analyze the optimal capture strategy for known/unknown motion magnitudes and directions.

3.1. Coded exposure camera

In coded exposure, the PSF h is modified by modulating the integration pattern of the light without camera motion. Instead of keeping the shutter open for the entire exposure time, a coded exposure camera 'flutters' the shutter open and closed using a carefully chosen binary code. Let n be the code length and s be the number of ones in the code. Light is integrated when the code is 1 and is blocked when it is 0. This preserves high spatial frequencies in the captured blurred image at the expense of losing light. Note that s = 1 is equivalent to the short exposure image and s = n is equivalent to the traditional camera.
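As a rough illustration of why fluttering helps, the sketch below (Python/NumPy) compares the deconvolution noise factor of a box PSF with that of a fluttered PSF of the same length. The code is found by a tiny random search and is not the optimized code of [16]; n, s and m are arbitrary toy values. Light loss is deliberately ignored here, which is exactly the simplification the analysis below removes.

```python
import numpy as np

def dnf_of_code(code, m=100):
    """DNF f = sqrt(trace((A^T A)^-1)/m) for the smear matrix A built from a
    binary exposure code (the PSF is the code normalized to unit sum)."""
    h = np.asarray(code, dtype=float)
    h = h / h.sum()
    k = len(h)
    A = np.zeros((m + k - 1, m))
    for j in range(m):
        A[j:j + k, j] = h
    return np.sqrt(np.trace(np.linalg.inv(A.T @ A)) / m)

rng = np.random.default_rng(0)
n, s = 32, 16                    # illustrative code length and number of 'on' chips

# Box PSF: shutter open for all n chips (traditional camera, same blur size).
f_box = dnf_of_code(np.ones(n))

# Crude random search over binary codes with s ones; a stand-in for the
# optimized code search of [16], good enough to show the effect.
f_best = np.inf
for _ in range(500):
    code = np.zeros(n)
    code[rng.choice(n, size=s, replace=False)] = 1
    f_best = min(f_best, dnf_of_code(code))

print(f"box PSF (length {n})         DNF = {20*np.log10(f_box):5.1f} dB")
print(f"fluttered PSF (s={s} of {n})  DNF = {20*np.log10(f_best):5.1f} dB")
```

In practice [16] searches for codes with a near-flat frequency response; the random search above is only a placeholder for that optimization.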

Figure 3. Key idea: At large signal-dependent noise (Ct ≫ σ²_gray), SNR_d decreases as the exposure time is increased for the traditional camera, but not for coded exposure and MIP. The plots show SNR (dB) versus exposure time t (ms) for different object speeds v_1 ... v_5. For these plots, parameters depending on exposure time and object speed were used for both coded exposure and MIP.

Thus, coded exposure provides a tradeoff between the amount of light and the amount of deconvolution noise. The SNR of the deconvolved image for the coded exposure camera is given by

    SNR_d^CE = i_o t (s/n) / (f_CE √(σ²_gray + Ct s/n)),    (7)

since both the signal and the signal-dependent noise are reduced by a factor of s/n.

In [16], the light loss was kept equal to 50% (s = n/2). Note that [16] ignores the loss of light in its analysis of deconvolution noise and thus only minimizes f_CE when finding the best code, whereas one should maximize SNR_d^CE. We first evaluate the relationship between n and s for optimal code design incorporating the loss of light.

Code selection incorporating light loss: First, we analyze the choice of n for a fixed amount of light (same s). The same amount of light ensures that the capture noise is similar and one can directly compare DNFs for different n. Figure 4 (left) shows plots of DNF versus n for several values of s. For each plot, n is in the range [s, 3s]. Note that the DNF decreases sharply as n is increased, as expected. However, the 'knee' in the curves shows that increasing n beyond a certain point leads to similar performance. Since the knee occurs before 2s, this implies that a smaller code length can be used. Small codes are easier to search for and also lead to a larger on/off switching time in a practical implementation.

Next, we plot the SNR as s is increased from 1 to n for a fixed n. At low light levels, SNR ∝ (s/n)/f_CE. At low s, the increase in noise due to low light overwhelms the reduction in deconvolution noise. At high s, deconvolution noise dominates. Thus, there is a distinct valley for each curve and s = n/2 is a good choice, as shown in Figure 4 (middle). However, now consider the effect of signal-dependent noise at high light levels. In this case, SNR ∝ √(s/n)/f_CE, and the plots are shown in Figure 4 (right). Notice that for a given n, the performance is good for a range of s and not just for s = n/2. Thus, a code with smaller s can be used. Since the size of the search space is of the order of (n choose s), this leads to a faster search for small s. For example, for n = 40, (40 choose 20) ≈ 1.37 × 10¹¹, while (40 choose 8) ≈ 76.9 × 10⁶.

Choice of t: Now we analyze the performance of coded exposure for different exposure times t by considering a code depending on the blur size. However, in practice, the object speed (blur size) is not known a priori. The analysis of how a code generalizes to different object velocities is done in Section 4. Figure 3 (middle) plots SNR (∝ √(t s/n)/f_CE) for coded exposure versus t for different velocities v_i, where for every blur size k = t v_i the best code was used. Note that with signal-dependent noise, the SNR does not decrease with exposure time. Thus, the exposure time can be increased, which is useful since static parts of the scene can then be captured with higher SNR.
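The s-versus-n tradeoff described above can be mimicked numerically. The sketch below (Python/NumPy) evaluates the two proportionalities used in this subsection, (s/n)/f_CE for the read-noise-limited case and √(s/n)/f_CE for the photon-noise-limited case, as s is varied for a fixed n. The code length n is arbitrary and a cheap random search stands in for a proper code search; the intent is to reproduce the qualitative shape of Figure 4 (middle/right), not its numbers.

```python
import numpy as np

def dnf_of_code(code, m=100):
    """f = sqrt(trace((A^T A)^-1)/m) for the smear matrix of a binary code."""
    h = np.asarray(code, dtype=float)
    h = h / h.sum()
    k = len(h)
    A = np.zeros((m + k - 1, m))
    for j in range(m):
        A[j:j + k, j] = h
    return np.sqrt(np.trace(np.linalg.inv(A.T @ A)) / m)

def best_random_code_dnf(n, s, trials, rng):
    """Smallest DNF found over a few random binary codes with s ones."""
    best = np.inf
    for _ in range(trials):
        code = np.zeros(n)
        code[rng.choice(n, size=s, replace=False)] = 1
        best = min(best, dnf_of_code(code))
    return best

rng = np.random.default_rng(1)
n = 24                                   # fixed, illustrative code length
for s in (4, 8, 12, 16, 20, 24):
    f = best_random_code_dnf(n, s, trials=200, rng=rng)
    snr_low  = (s / n) / f               # low light: signal-independent noise dominates
    snr_high = np.sqrt(s / n) / f        # high light: photon noise dominates
    print(f"s={s:2d}  f={20*np.log10(f):5.1f} dB  "
          f"low-light SNR ∝ {snr_low:.3f}  high-light SNR ∝ {snr_high:.3f}")
```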
3.2. Motion invariant photography

In MIP, the camera is moved with a constant acceleration so that objects moving with a speed within [-v_r, v_r] in the camera motion direction result in the same motion PSF. Intuitively, for every object velocity v in [-v_r, v_r], the moving camera spends an equal amount of time moving with velocity v, thus capturing the object sharply for that time period. The PSF is thus peaked at zero and has low deconvolution noise compared to the box (flat) PSF of the traditional capture. Let the acceleration of the camera be a and let T = t/2. For a velocity range [-v_r, v_r] and exposure time t, a = v_r/(2T) = v_r/t [13] for good performance. The SNR of the deconvolved image for MIP is given by

    SNR_d^MIP = i_o t / (f_MIP √(σ²_gray + Ct)),    (8)

where f_MIP depends on the modified PSF due to the camera motion.

Choice of t: We first analyze the performance of MIP for a given velocity v and exposure time t using a = v/t.
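A rough way to see where the MIP PSF comes from is to integrate the object's position relative to the parabolically moving camera and histogram the time spent at each sensor position. The sketch below (Python/NumPy) does exactly that and reports the resulting DNF for several object speeds. The acceleration, exposure time and speeds are arbitrary toy values with a 1-unit-per-pixel grid; it only illustrates the mechanism, while the actual parameter choice and invariance analysis are given in [13].

```python
import numpy as np

def mip_psf(v, a, t, n_samples=200_000):
    """Motion PSF for an object with constant speed v, seen by a camera whose
    velocity sweeps linearly (a*tau) over the exposure [-t/2, t/2].

    Relative sensor position: r(tau) = v*tau - 0.5*a*tau**2; the PSF is the
    fraction of exposure time spent in each 1-pixel-wide bin of r."""
    tau = np.linspace(-t / 2, t / 2, n_samples)
    r = v * tau - 0.5 * a * tau**2
    edges = np.arange(np.floor(r.min()), np.ceil(r.max()) + 1.0)
    hist, _ = np.histogram(r, bins=edges)
    return hist / hist.sum()

def dnf_of_psf(psf, m=100):
    """f = sqrt(trace((A^T A)^-1)/m) for the smear matrix built from the PSF."""
    h = np.asarray(psf, dtype=float)
    k = len(h)
    A = np.zeros((m + k - 1, m))
    for j in range(m):
        A[j:j + k, j] = h
    return np.sqrt(np.trace(np.linalg.inv(A.T @ A)) / m)

t = 10.0          # exposure time (ms), toy value
a = 0.6           # camera acceleration (pixels/ms^2), toy value
# The relative motion r(tau) has its turning point inside the exposure only
# when |v| <= a*t/2; beyond that the PSF loses its central peak and tends
# toward a box, so the deconvolution noise rises.
for v in (0.0, 1.0, 2.0, 3.0, 5.0, 8.0):
    psf = mip_psf(v, a, t)
    print(f"v={v:3.1f}  blur={len(psf):3d} px  DNF={20*np.log10(dnf_of_psf(psf)):5.1f} dB")
```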

Figure 4. Choice of optimal n and s for coded exposure. (Left) DNF (dB) versus code length n for s = 8, 10, 12, 14: for a given s, the DNF decreases as n is increased and the 'knee' in each curve occurs before n = 2s (marked with a square), indicating that a code length smaller than 2s can be used. (Middle) SNR versus the number of ones s for n = 12, 20, 28, 36 with signal-independent noise: the SNR is maximized around s = n/2. (Right) However, for signal-dependent noise, a smaller s can be used.

4. Comparisons and performance analysis

First we compare the different capture strategies for the same amount of captured light. This ensures that the capture noise is similar for all three capture strategies, allowing direct comparison of the DNFs. Note that to keep the same light level, t is decreased by a factor of n/s for MIP and the traditional camera. This leads to more blur in the coded exposure image by the same factor. In [13], coded exposure deblurring was visually compared with MIP using synthetic data, but [13] does not state the code and blur size used for the comparisons. Thus, it is difficult to fairly evaluate the performance in [13]. In addition, the captured light level is not the same for the comparisons in [13].

DNF comparison: Figure 5 compares the DNFs versus t for different velocities. For coded exposure, the motion direction is not known but the speed was assumed to be known for computing the optimal code. In contrast, for MIP, the motion direction was assumed to be known and the maximum speed was set to v_r = 3. In addition, a = v_r/t was used separately for each t for the best performance. While at lower speeds (v < v_r) MIP gives low deconvolution noise, as the speed approaches v_r, coded exposure performs better than MIP.

Performance generalization for motion magnitude: The acceleration parameter a for MIP is set based on the desired velocity ...
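The equal-light bookkeeping described at the start of this section is easy to get wrong, so here is a minimal sketch (Python/NumPy) of how the comparison is set up: the coded camera keeps its full exposure and blur, while the traditional camera's exposure, and hence its blur, is scaled by s/n so that both collect the same light. The values of n and s are arbitrary and the code is an unoptimized random one, so the printed numbers only illustrate the setup, not the paper's conclusions.

```python
import numpy as np

def dnf_of_psf(psf, m=100):
    """f = sqrt(trace((A^T A)^-1)/m) for the smear matrix built from a PSF."""
    h = np.asarray(psf, dtype=float)
    h = h / h.sum()
    k = len(h)
    A = np.zeros((m + k - 1, m))
    for j in range(m):
        A[j:j + k, j] = h
    return np.sqrt(np.trace(np.linalg.inv(A.T @ A)) / m)

rng = np.random.default_rng(2)
n, s = 32, 16                               # coded exposure: blur of n pixels, s chips of light

code = np.zeros(n)                          # random binary code; not the optimized
code[rng.choice(n, size=s, replace=False)] = 1   # code of [16], illustration only

k_trad = s                                  # same light for the traditional camera:
box = np.ones(k_trad)                       # exposure and blur scaled by s/n, i.e. n*(s/n) = s pixels

# With equal captured light the capture noise matches, so the DNFs can be
# compared directly, which is the comparison Section 4 sets up.
print(f"coded exposure, blur {n} px  : DNF = {20*np.log10(dnf_of_psf(code)):5.1f} dB")
print(f"traditional,    blur {k_trad} px  : DNF = {20*np.log10(dnf_of_psf(box)):5.1f} dB")
```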

