
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 26, NO. 6, JUNE 2004

Motion-Based Motion Deblurring

Moshe Ben-Ezra and Shree K. Nayar, Member, IEEE

Abstract—Motion blur due to camera motion can significantly degrade the quality of an image. Since the path of the camera motion can be arbitrary, deblurring of motion blurred images is a hard problem. Previous methods to deal with this problem have included blind restoration of motion blurred images, optical correction using stabilized lenses, and special CMOS sensors that limit the exposure time in the presence of motion. In this paper, we exploit the fundamental trade off between spatial resolution and temporal resolution to construct a hybrid camera that can measure its own motion during image integration. The acquired motion information is used to compute a point spread function (PSF) that represents the path of the camera during integration. This PSF is then used to deblur the image. To verify the feasibility of hybrid imaging for motion deblurring, we have implemented a prototype hybrid camera. This prototype system was evaluated in different indoor and outdoor scenes using long exposures and complex camera motion paths. The results show that, with minimal resources, hybrid imaging outperforms previous approaches to the motion blur problem. We conclude with a brief discussion on how our ideas can be extended beyond the case of global camera motion to the case where individual objects in the scene move with different velocities.

Index Terms—Sharpening and deblurring, inverse filtering, motion, motion blur, point spread function, resolution, hybrid imaging.

1 INTRODUCTION

Motion blur is the result of the relative motion between the camera and the scene during the integration time of the image. Motion blur can be used for aesthetic purposes, such as emphasizing the dynamic nature of a scene. It has also been used to obtain motion and scene 3D structure information [39], [7], [6], [24], [9], [41], [25], [46], [33]. Motion blur has also been used in computer graphics to create more realistic images which are pleasing to the eye [5], [31], [42], [3], [26], [10], [19]. Several representations and models for motion blur in human and machine vision have been proposed [40], [4], [12], [13].

Very often, motion blur is simply an undesired effect. It has plagued photography since its early days and is still considered to be an effect that can significantly degrade image quality. Fig. 1 shows simulated examples of images that are blurred due to simple motions of the camera. In practice, due to the large space of possible motion paths, every motion blurred image tends to be uniquely blurred. This makes the problem of motion deblurring hard.

Fig. 1. Different camera motions lead to different motion blurs. Here, the unblurred scene shown in (a) is blurred using three different simulated camera rotations about the "X" and "Y" axes. These blurring functions are depth invariant and, for long focal lengths, also shift invariant. In (b) and (c), the scene is blurred by linear horizontal and vertical motions, respectively. In (d), the scene is blurred due to circular motion. In practice, the space of possible motion paths is very large, which makes the problem of motion deblurring, without prior knowledge of the motion, very hard to solve.

Motion blurred images can be restored (up to lost spatial frequencies) by image deconvolution [17], provided that the motion is shift-invariant, at least locally, and that the blur function (point spread function, or PSF) that caused the blur is known. As the PSF is not usually known, a considerable amount of research has been dedicated to the estimation of the PSF from the image itself. This is usually done using the method of blind image deconvolution [27], [37], [18], [2], [38], [46], [8], [43], [45], [44]. PSF estimation and motion deblurring have also been addressed in image sequence processing and in spatial super-resolution algorithms [36], [2], [20], [32], as well as in the context of temporal super-resolution [35].
The authors are with the Computer Science Department, Columbia University, 1214 Amsterdam Avenue, New York, NY 10027-7003. E-mail: {moshe, nayar}@cs.columbia.edu. Manuscript received 27 Apr. 2003; revised 18 Aug. 2003; accepted 21 Sept. 2003. Recommended for acceptance by S. Soatto.

Methods of blind image deconvolution generally assume that the motion that caused the blur can be parameterized by a specific and very simple motion model, such as constant velocity motion or linear harmonic motion. Since, in practice, camera motion paths are more complex, the applicability of this approach to real-world photography is very limited. Fig. 2 shows the result of applying MATLAB's blind image deconvolution to the image shown in Fig. 1b. The resulting image is clearly degraded by strong deconvolution artifacts.

Fig. 2. Blind image deconvolution applied to the motion blurred image shown in Fig. 1b. The strong deconvolution artifacts are the result of incorrect PSF estimation.
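To make the constant-velocity assumption above concrete, consider the PSF that this model implies: a normalized line segment. The following Python sketch (ours, not from the paper; the function name and parameters are illustrative) constructs such a kernel. Blind deconvolution methods of this kind search over low-dimensional families like this one, which is why they break down on complex, real camera shake paths.

```python
import numpy as np

# A minimal sketch (not from the paper) of the constant-velocity motion
# model assumed by many blind deconvolution methods: the PSF is a
# normalized line segment of a given length and angle.
def linear_motion_psf(length: int, angle_deg: float, size: int) -> np.ndarray:
    psf = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    for s in np.linspace(-length / 2.0, length / 2.0, 4 * length):
        x = int(round(c + s * np.cos(theta)))
        y = int(round(c + s * np.sin(theta)))
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] = 1.0
    return psf / psf.sum()  # energy conservation: the kernel sums to 1

# Example: a 15-pixel horizontal blur, as in Fig. 1b-style linear motion.
k = linear_motion_psf(15, 0.0, 31)
```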

Two hardware approaches to the motion blur problem, which are more general than the above methods, have recently been put forward. The first approach uses optically stabilized lenses for camera shake compensation [14], [15]. These lenses have an adaptive optical element, controlled by inertial sensors, that compensates for camera motion. As shown in Fig. 3, this method is effective only for relatively short exposures; images that are integrated over durations even as small as 1/15 of a second can exhibit noticeable motion blur due to system drift [30], [29]. The second approach uses specially designed CMOS sensors [11], [21]. These sensors prevent motion blur by selectively stopping the image integration in areas where motion is detected. This approach does not, however, solve the problem of motion blur due to camera shake during long exposures.

Fig. 3. The use of a stabilized lens for reducing motion blur. The image shown in (a) was taken by a hand-held camera using a 400mm stabilized Canon zoom lens at 1/250 of a second; we can see that the stabilization mechanism works very well for this speed, producing a sharp image. In contrast, when the exposure time is raised to 1/15 of a second, the stabilization mechanism drifts, resulting in the motion blurred image shown in (b). (Printed with permission of the photographer [29].)

In this paper, we present a novel approach to motion deblurring of an image. Our method estimates the continuous PSF that caused the blur from sparse real motion measurements that are taken during the integration time of the image, using energy constraints. This PSF is used to deblur the image by deconvolution.

In order to obtain the required motion information, we exploit the fundamental trade off between spatial resolution and temporal resolution by combining a high resolution imaging device (the primary detector) with a simple, low cost, and low resolution imaging device (the secondary detector) to form a novel hybrid imaging system. While the primary detector captures an image, the secondary detector obtains the required motion information for the PSF estimation.

We have conducted several simulations to verify the feasibility of hybrid imaging for motion deblurring. These simulations show that, with minimal resources, a secondary detector can provide motion (PSF) estimates with subpixel accuracy. Motivated by these results, we have implemented a prototype hybrid imaging system. We have conducted experiments with various indoor and outdoor scenes and complex motions of the camera during integration. The results show that hybrid imaging outperforms previous approaches to the motion blur problem.

Finally, we discuss the applicability of hybrid imaging to the deblurring of motion blur caused by moving objects. Moving objects present a much more complex blur problem due to their blending with the background during image integration. We show that hybrid imaging provides a partial, yet significant, step towards solving this problem.

2 FUNDAMENTAL RESOLUTION TRADE OFF

An image is formed when light energy is integrated by an image detector over a time interval. Let us assume that the total light energy received by a pixel during integration must be above a minimum level for the light to be detected. This minimum level is determined by the signal-to-noise characteristics of the detector. Therefore, given such a minimum level and an incident flux level, the exposure time required to ensure detection of the incident light is inversely proportional to the area of the pixel. In other words, exposure time is proportional to spatial resolution. When the detector is linear in its response, the above relationship between exposure and resolution is also linear. This is the fundamental trade off between the spatial resolution (number of pixels) and the temporal resolution (number of images per second).

This trade off is illustrated by the solid line in Fig. 4. The parameters of this line are determined by the characteristics of the materials used by the detector and by the incident flux. Different points on the line represent cameras with different spatio-temporal characteristics. For instance, a conventional video camera (shown as a white dot) has a typical temporal resolution of 30 fps and a spatial resolution of 720 × 480 pixels. Now, instead of relying on a single point on this trade off line, we can use two very different operating points on the line to simultaneously obtain very high spatial resolution with low temporal resolution and very high temporal resolution with low spatial resolution. This type of hybrid imaging system is illustrated by the two gray dots in Fig. 4. As we shall see, this type of hybrid imaging gives us the missing information needed to deblur images with minimal additional resources.

Fig. 4. The fundamental trade off between spatial resolution and temporal resolution of an imaging system. While a conventional video camera (white dot) is a single operating point on the trade off line, a hybrid imaging system uses two different operating points (gray dots) on the line, simultaneously. This feature enables a hybrid system to obtain the additional information needed to deblur images.
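As a back-of-the-envelope illustration of this trade off, the sketch below (our illustration; the reference numbers are assumptions, not from the paper) scales the achievable frame rate with pixel area, holding the incident flux and the minimum detectable energy per pixel fixed.

```python
# A back-of-the-envelope sketch of the resolution trade off (our
# illustration; the reference numbers are assumptions, not from the paper).
# With fixed incident flux and a fixed minimum energy per pixel, exposure
# time is inversely proportional to pixel area, so the achievable frame
# rate grows linearly with it.

def max_frame_rate(pixel_area, ref_area=1.0, ref_fps=30.0):
    """Frame rate achievable at a given pixel area, relative to a
    reference detector (e.g., a 30 fps conventional video camera)."""
    return ref_fps * (pixel_area / ref_area)

# Primary detector: pixels half the reference linear size (1/4 the area)
# -> 4x the exposure time is needed, i.e., 7.5 fps.
print(max_frame_rate(0.25))
# Secondary detector: 4x4 binned pixels (16x the area) -> 480 fps.
print(max_frame_rate(16.0))
```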

3 HYBRID IMAGING SYSTEMS

We now describe three conceptual designs for the hybrid imaging system. The simplest design, illustrated in Fig. 5a, uses a rigid rig of two cameras: a high-resolution still camera as the primary detector and a low-resolution video camera as the secondary detector. Note that this type of hybrid camera was exploited in a different way in [34] to generate high-resolution stereo pairs using an image-based rendering approach. In our case, the secondary detector is used for obtaining motion information. Note that it is advantageous to make the secondary detector black and white, since such a detector collects more light energy (broader spectrum) and, therefore, can have higher temporal resolution. Also note that the secondary detector is used only as a motion sensor; it has low resolution and high gain and is not suitable for super-resolution purposes [1]. While this is a very simple design, performing the geometric calibration between the primary and secondary detectors can be tricky, since the image of the primary detector can be blurred. Moreover, the primary detector's projection model will change when the lens is replaced or the zoom setting is varied. This problem is addressed by the following two designs.

The second design uses the same lens for both detectors by splitting the image with a beam splitter. This design, shown in Fig. 5b, requires less calibration than the previous one since the lens is shared and, hence, the image projection models are identical. An asymmetric beam splitter that passes most of the visible light to the primary detector and reflects nonvisible wavelengths toward the secondary detector, for example a "hot mirror" [28], would be preferred.

A third conceptual design, illustrated in Fig. 5c, uses a special chip layout that includes the primary and the secondary detectors on the same chip. This chip has a high resolution central area (the primary detector) and a low resolution periphery (the secondary detector). Clearly, in this case, the primary and the secondary detectors would not have the same field of view. This is possible since we assume that the motion is shift invariant. Note that such a chip can be implemented using binning technology now commonly found in CMOS (and CCD) sensors [16]. Binning allows the charge of a group of adjacent pixels to be combined before digitization, as sketched below. This enables the chip to switch between a normal full-resolution mode (when binning is off) and a hybrid primary-secondary detector mode (when binning is activated).

Fig. 5. Three conceptual designs of a hybrid camera. (a) The primary and secondary detectors are essentially two separate cameras. (b) The primary and secondary detectors share the same lens by using a beam splitter. (c) The primary and secondary detectors are located on the same chip with different resolutions (pixel sizes).
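As a rough illustration of the binning mode in the third design (our sketch, not the paper's implementation), the following function sums the charge of each b × b block of pixels before "digitization", trading spatial resolution for light gathered per output pixel:

```python
import numpy as np

# A rough illustration (ours, not the paper's implementation) of pixel
# binning: the charge of each b x b block of adjacent pixels is summed
# before digitization, trading spatial resolution for light per pixel.
def bin_pixels(charge: np.ndarray, b: int) -> np.ndarray:
    h, w = charge.shape
    h, w = h - h % b, w - w % b                    # crop to a multiple of b
    blocks = charge[:h, :w].reshape(h // b, b, w // b, b)
    return blocks.sum(axis=(1, 3))                 # one value per bin

# Example: a 480x640 sensor read out as a 120x160 secondary detector.
frame = np.random.poisson(5.0, size=(480, 640)).astype(float)
small = bin_pixels(frame, 4)   # each binned pixel collects 16x the charge
```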

4 COMPUTING MOTION

The secondary detector provides a sequence of images (frames) that are taken at fixed intervals during the exposure time. By computing the global motion between these frames, we obtain samples of the continuous motion path during the integration time. The motion between successive frames is limited to a global rigid transformation model. However, the path, which is the concatenation of the motions between successive frames, is not restricted and can be very complex. We compute the motion between successive frames using a multiresolution iterative algorithm that minimizes the following optical flow based error function [22]:

$$\arg\min_{(u,v)} \sum \left( \frac{\partial I}{\partial x}\,u + \frac{\partial I}{\partial y}\,v + \frac{\partial I}{\partial t} \right)^{2}, \qquad (1)$$

where $\partial I/\partial x$, $\partial I/\partial y$, $\partial I/\partial t$ are the spatial and temporal partial derivatives of the image, and $(u, v)$ is the instantaneous motion at time $t$. This motion between the two frames is defined by the following global rigid motion model:

$$\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & \Delta x \\ \sin\theta & \cos\theta & \Delta y \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}, \qquad (2)$$

where $(\Delta x, \Delta y)$ is the translation vector and $\theta$ is the rotation angle about the optical axis.

Note that the secondary detector, which has a short but nonzero integration time, may also experience some motion blur. This motion blur can violate the constant brightness assumption, which is used in the motion computation. We assume that the computed motion between two motion blurred frames is the center of gravity of the instantaneous displacements between these frames during their integration time. We refer to this as the motion centroid assumption.
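For concreteness, here is a simplified, single-scale version of this computation (our sketch; the paper's actual implementation is multiresolution and iterative). It linearizes the rigid model of Eq. (2) for small angles and solves Eq. (1) in least squares:

```python
import numpy as np

# A simplified, single-scale sketch of the global motion step (the paper
# uses a multiresolution iterative algorithm; this is one linearized
# least-squares iteration under a small-angle rigid model, our assumption):
#   u = dx - theta * y,   v = dy + theta * x
# minimizing sum (Ix*u + Iy*v + It)^2 over all pixels, as in Eq. (1).
def estimate_rigid_motion(frame0: np.ndarray, frame1: np.ndarray):
    Iy, Ix = np.gradient(frame0)           # spatial derivatives
    It = frame1 - frame0                   # temporal derivative
    h, w = frame0.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    y -= h / 2.0                           # rotate about the image center
    x -= w / 2.0
    # Columns of the design matrix: d/d(theta), d/d(dx), d/d(dy).
    A = np.stack([(Iy * x - Ix * y).ravel(), Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    theta, dx, dy = np.linalg.lstsq(A, b, rcond=None)[0]
    return theta, dx, dy                   # rotation (rad) and translation
```

In practice such an estimator would be wrapped in a coarse-to-fine pyramid with warping between iterations, which is what allows the large, complex displacements described above to be tracked.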
5 CONTINUOUS PSF ESTIMATION

The discrete motion samples that are obtained by the motion computation need to be converted into a continuous point spread function. To do that, we define the constraints that a motion blur PSF must satisfy and then use these constraints in the PSF estimation.

Any PSF is an energy distribution function, which can be represented by a convolution kernel k : (x, y) ↦ e, where (x, y) is a location and e is the energy level at that location. The kernel k must satisfy the following energy conservation constraint:

$$\iint k(x, y)\, dx\, dy = 1, \qquad (3)$$

which states that energy is neither lost nor gained by the blurring operation (k is a normalized kernel). In order to define additional constraints that apply to motion blur PSFs, we use a time parameterization of the PSF with a path function f : t ↦ (x, y) and an energy function h : t ↦ e. Note that the functions f and h define a curve which belongs to a subset of all possible PSFs. Due to physical speed and acceleration constraints, f(t) should be continuous and at least twice differentiable. By assuming that the scene radiance does not change during image integration, we get the additional constraint:

$$\int_{t}^{t+\delta t} h(t)\, dt = \frac{\delta t}{t_{end} - t_{start}}, \qquad \delta t \geq 0, \quad t_{start} \leq t \leq t_{end} - \delta t, \qquad (4)$$

where $[t_{start}, t_{end}]$ is the image integration interval. This constraint states that the amount of energy which is integrated over any time interval is proportional to the length of the interval.

Given these constraints and the motion centroid assumption from the previous section, we can estimate a continuous motion blur PSF from the discrete motion samples, as illustrated in Fig. 6. First, we estimate the path f(t) by spline interpolation, as shown in Figs. 6a and 6b; spline curves are used because of their smoothness and twice-differentiability properties, which satisfy the speed and acceleration constraints. In order to estimate the energy function h(t), we need to find the extent of each frame along the interpolated path. This is done, using the motion centroid assumption, by splitting the path f(t) into frames with a 1D Voronoi tessellation, as shown in Fig. 6b. Since the constant radiance assumption implies that frames with equal exposure times integrate equal amounts of energy, we can compute h(t) (up to scale) for each frame, as shown in Fig. 6c. Note that all the rectangles in this figure have equal areas. Finally, we smooth h(t) and normalize (scale) it to satisfy the energy conservation constraint. The resulting PSF is shown in Fig. 6d. The end result of the above procedure is a continuous motion blur PSF that can now be used for motion deblurring.

Fig. 6. The computation of the continuous PSF from the discrete motion vectors. (a) The discrete motion vectors, which are samples of the function f : t ↦ (x, y). (b) Interpolated path f(t) and its division into frames by Voronoi tessellation. (c) Energy estimation for each frame. (d) The computed PSF.
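A simplified Python sketch of this construction is given below (ours, not the authors' code; it omits the Voronoi splitting and the smoothing of h(t)). Sampling the spline uniformly in time and depositing equal energy per sample satisfies the constant radiance constraint of Eq. (4), and the final normalization enforces Eq. (3):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# A simplified sketch of the PSF construction (ours; the paper additionally
# splits the path into frames by a 1D Voronoi tessellation and smooths h(t)).
# Uniform dense sampling of the spline in t deposits equal energy per unit
# time, which satisfies Eq. (4); the normalization enforces Eq. (3).
def psf_from_motion_samples(xs, ys, size: int, n_dense: int = 10000):
    t = np.arange(len(xs), dtype=float)        # frames at fixed intervals
    path_x = CubicSpline(t, xs)                # twice-differentiable path,
    path_y = CubicSpline(t, ys)                # as required in Section 5
    td = np.linspace(t[0], t[-1], n_dense)     # uniform in time
    px = np.round(path_x(td) - np.mean(xs) + size / 2).astype(int)
    py = np.round(path_y(td) - np.mean(ys) + size / 2).astype(int)
    psf = np.zeros((size, size))
    ok = (px >= 0) & (px < size) & (py >= 0) & (py < size)
    np.add.at(psf, (py[ok], px[ok]), 1.0)      # equal energy per time sample
    return psf / psf.sum()                     # Eq. (3): kernel sums to 1
```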

6 IMAGE DECONVOLUTION

Given the estimated PSF, we can deblur the high resolution image that was captured by the primary detector using existing image deconvolution algorithms [17], [23]. Since this is the only step that involves high-resolution images, it dominates the time complexity of the method, which is usually the complexity of the FFT. The results reported in this paper were produced using the Richardson-Lucy iterative deconvolution algorithm [17], which is a nonlinear, ratio-based method that always produces nonnegative gray-level values and, hence, gives results that make better physical sense than linear methods [17]. This method maximizes the likelihood function of a Poisson-statistics image model, yielding the following iteration:

$$\hat{O}^{(k+1)}(x) = \hat{O}^{(k)}(x) \cdot \left[ \frac{I(x)}{(\hat{O}^{(k)} * S)(x)} * S(-x) \right], \qquad (5)$$

where I is the measured image, $\hat{O}^{(k)}$ is the kth estimate of the result, $\hat{O}^{(0)} = I$, and S is the convolution kernel (the PSF). Given that I and S are everywhere positive, $\hat{O}^{(k)}$ cannot be negative.
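A compact implementation of the iteration in Eq. (5) looks roughly as follows (our sketch, not the authors' code; the iteration count is an assumption and would be tuned in practice):

```python
import numpy as np
from scipy.signal import fftconvolve

# A compact sketch of Richardson-Lucy deconvolution per Eq. (5) (ours, not
# the authors' code). Assumes a float image with positive values; eps
# guards against division by zero.
def richardson_lucy(I: np.ndarray, S: np.ndarray, iters: int = 50):
    O = I.astype(float)                            # O^(0) = I
    S_flip = S[::-1, ::-1]                         # S(-x)
    eps = 1e-12
    for _ in range(iters):
        blurred = fftconvolve(O, S, mode="same")   # O^(k) * S
        ratio = I / (blurred + eps)                # I(x) / (O^(k) * S)(x)
        O *= fftconvolve(ratio, S_flip, mode="same")
    return O
```

Because every factor in the update is nonnegative, the estimate stays nonnegative throughout, which is the physical-plausibility property noted above.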

7 SIMULATION RESULTS

Prior to prototype implementation, two sets of simulation tests were done in order to validate the accuracy of the PSF estimation algorithm. The first set addresses the accuracy of the motion estimation as a function of frame resolution and gray level noise. The second set illustrates the accuracy of the computed path f(t) in the presence of motion blur. Both tests were conducted using a large set of images that were synthesized from the 16 images shown in Fig. 7.

7.1 Motion Estimation Accuracy Test

In this test, we computed the motion between an image and a displaced version of the same image (representing two frames) using four different resolutions and four different levels of Gaussian noise for each resolution. The displacement used in the test was (17, 17) pixels, and the noise level was varied between standard deviations of 3 to 81 gray levels. The computed displacements of the downscaled images were scaled back to the original scale and compared with the actual (ground truth) values. Table 1 shows the test results. We can see that subpixel motion accuracy was obtained for all tests except the test with [...]
