Nonlinear Image Estimation Using Piecewise and Local Image Models

IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 7, NO. 7, JULY 1998

Nonlinear Image Estimation Using Piecewise and Local Image Models

Scott T. Acton, Member, IEEE, and Alan C. Bovik, Fellow, IEEE

Abstract—We introduce a new approach to image estimation based on a flexible constraint framework that encapsulates meaningful structural image assumptions. Piecewise image models (PIM's) and local image models (LIM's) are defined and utilized to estimate noise-corrupted images. PIM's and LIM's are defined by image sets obeying certain piecewise or local image properties, such as piecewise linearity, or local monotonicity. By optimizing local image characteristics imposed by the models, image estimates are produced with respect to the characteristic sets defined by the models. Thus, we propose a new general formulation for nonlinear set-theoretic image estimation. Detailed image estimation algorithms and examples are given using two PIM's: piecewise constant (PICO) and piecewise linear (PILI) models, and two LIM's: locally monotonic (LOMO) and locally convex/concave (LOCO) models. These models define properties that hold over local image neighborhoods, and the corresponding image estimates may be inexpensively computed by iterative optimization algorithms. Forcing the model constraints to hold at every image coordinate of the solution defines a nonlinear regression problem that is generally nonconvex and combinatorial. However, approximate solutions may be computed in reasonable time using the novel generalized deterministic annealing (GDA) optimization technique, which is particularly well suited for locally constrained problems of this type. Results are given for corrupted imagery with signal-to-noise ratio (SNR) as low as 2 dB, demonstrating high quality image estimation as measured by local feature integrity, and improvement in SNR.

Index Terms—Image enhancement, image estimation.

Manuscript received November 30, 1995; revised September 10, 1997. This material is based on work supported in part by the U.S. Army Research Office under Grant DAAH04-95-1-0255. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Moncef Gabbouj. S. T. Acton is with the School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK 74078 USA (e-mail: sacton@master.ceat.okstate.edu). A. C. Bovik is with the Laboratory for Vision Systems, Center for Vision and Image Sciences, Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, TX 78712-1084 USA (e-mail: bovik@ece.utexas.edu). Publisher Item Identifier S 1057-7149(98)04371-1.

I. INTRODUCTION

ONE OF THE oldest ongoing problems in image processing is image estimation, which encompasses algorithms that attempt to recover images (usually digital) from observations. More specifically, it is generally desired to remove unwanted noise artifacts, which are often broadband, while simultaneously retaining significant high-frequency image features, such as edges, texture, and detail. In such a context, the problem is often referred to as image enhancement. The objectives of image estimation/enhancement are generally twofold, and conflicting: smoothing of image regions where the intensities vary slowly, and simultaneous preservation of sharply-varying, meaningful image structures.
The first main theme of the current paper is the development of image estimation algorithms that begin with a model for the image. The model used should, of course, be designed to capture meaningful image detail and structure for the application at hand. We explore several fairly general image models that are based on well-defined local image characteristics. The models that we study are divided into two classes: piecewise image models (PIM's), which model images as everywhere obeying a certain property (such as constancy or linearity) in a piecewise manner, and local image models (LIM's), which characterize images as obeying a certain property (such as monotonicity or convexity) over every subimage of specified geometry.

A second main theme of the paper is the casting of the estimation problem as an approximation to a nonlinear regression with respect to the characteristic set defining the image model. Estimation proceeds by encouraging adherence to the model properties while maintaining a semblance (a minimum distance) to the observed input image. The goal is to compute a solution image that approximates the desired image property and that also is at minimum distance (defined by a prescribed distance norm) from the observed image.

The approach to image estimation described here is generally quite new. Some related methods have been reported that attempt to preserve image smoothness in a more usual sense (small derivative or Sobolev image norm), while at the same time producing an output image that is "close" to the input image [3], [10], [11], [13]. In these constrained optimization or regularized methods, the smoothness constraint can be relaxed at image boundaries—identified via line processes [10]. The regions between the discontinuities can be modeled as weakly continuous surfaces, using a weak membrane model [4] or a two-dimensional (2-D) noncausal Gaussian Markov random field (GMRF) model [12], [23]. These approaches, while often effective, do suffer from some drawbacks. First, they do not fall within a flexible, unified framework that allows for the use of different image models demanded by different applications. Second, the implementation of smoothness constraints that decouple across intensity boundaries is somewhat difficult (since the estimation of line processes is a hard problem), whereas models such as local monotonicity and piecewise linearity naturally preserve boundaries between smooth regions. Finally, the computational cost of obtaining image estimation results using constrained combinatorial optimization is impractical for time-critical image processing applications. Here it is shown that approximate nonlinear regression with respect to PIM's and LIM's can be accomplished with relatively low computational complexity via the recently introduced generalized deterministic annealing (GDA) algorithm. GDA is a starting-state independent iterative optimization technique that is particularly well suited for locally constrained problems such as those studied here.

The paper is organized as follows. Section II outlines the nonlinear regression approach to nonlinear image estimation. Sections III and IV describe four image models (two PIM's and two LIM's) and the estimation procedure in each case. Computed image estimation examples are provided for each model. Section V briefly discusses the iterative optimization algorithm GDA, particularly those aspects that relate to the set-theoretic image estimation problem. The utilization of GDA leads to a nonheuristic implementation that is particularly efficient for the problem. The paper is concluded in Section VI.

II. NONLINEAR IMAGE ESTIMATION

A. Nonlinear Image Estimation and the Relationship to Nonlinear Regression

Consider the problem of estimating a discrete-space image $\mathbf{f}$ from an observed image $\mathbf{g}$, where

$$\mathbf{g} = \mathbf{f} + \mathbf{n}. \tag{1}$$

In (1), $\mathbf{n}$ represents additive independent, identically distributed (i.i.d.) noise.

The image estimation problem posed by (1) can be solved via nonlinear regression:

$$\hat{\mathbf{f}} = \arg\min_{\mathbf{z} \in C} \|\mathbf{z} - \mathbf{g}\|. \tag{2}$$

Here, the optimizing estimate $\hat{\mathbf{f}}$ is the (generally nonunique [18]) image closest to the observation $\mathbf{g}$, among all images that lie within the characteristic set $C$, i.e., images that strictly satisfy the image model (a PIM or LIM, in this case). The characteristic set $C$ defines the characteristic property of the regression, such as local monotonicity or piecewise constancy. The term $\|\mathbf{z} - \mathbf{g}\|$ is the distance between image $\mathbf{z}$ and the observed image $\mathbf{g}$, defined by an appropriate distance norm.

Solving (2) is generally an expensive combinatorial optimization for data sets approaching the size of images [18], [19]. Locally monotonic regression algorithms in [18] are of exponential complexity, although a recent algorithm that promises linear complexity when operating on signals from a finite alphabet has been proposed [22]. In the current paper, we take a different approach: we recast the problem by treating membership in $C$ as a soft constraint. This leads to a problem well suited to fast optimization algorithms.
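To make the combinatorial character of (2) concrete, here is a minimal Python sketch (our illustration, not an algorithm from the paper): it enumerates every candidate signal over a small finite alphabet, keeps those satisfying a piecewise-constant property of the kind defined in Section III, and returns the candidate closest to the observation under an absolute-difference distance. The helper names and the toy signal are hypothetical; the point is that the candidate count grows as (alphabet size)^(signal length), which is exactly why a soft-constraint reformulation is attractive.

```python
import itertools
import numpy as np

def is_pico(x, d=2):
    """True when every sample of the 1-D signal x lies in a constant run of length >= d."""
    runs, run = [], 1
    for k in range(1, len(x)):
        if x[k] == x[k - 1]:
            run += 1
        else:
            runs.append(run)
            run = 1
    runs.append(run)
    return min(runs) >= d

def brute_force_regression(g, alphabet, d=2, p=1):
    """Exact solution of (2) for a toy 1-D problem: the PICO-d signal nearest to g
    under the sum of p-th power absolute differences. The candidate count is
    len(alphabet) ** len(g), so this is only feasible for very short signals."""
    g = np.asarray(g, dtype=float)
    best, best_cost = None, np.inf
    for z in itertools.product(alphabet, repeat=len(g)):
        if not is_pico(z, d):
            continue
        cost = np.sum(np.abs(np.asarray(z, dtype=float) - g) ** p)
        if cost < best_cost:
            best, best_cost = np.array(z), cost
    return best, best_cost

# A noisy step edge restricted to a three-level alphabet.
print(brute_force_regression([0, 0, 1, 0, 2, 2, 1, 2], alphabet=(0, 1, 2)))
```

Ties among equally close candidates in this toy search also illustrate the generally nonunique character of the regression noted above.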
B. Existence and Statistical Optimality of Nonlinear Regression

Nonlinear regression of the form (2) always has at least one solution provided that the characteristic set $C$ is a closed set [18], as in all the cases considered here. Nonlinear regression also has an interpretation as projection of the signal to be regressed onto the characteristic set $C$. The projection is with respect to a semimetric [18]. The geometrical structure of the regression problem also admits a strong statistical optimality property. Indeed, if the additive noise in (1) consists of i.i.d. samples coming from a discrete version of the generalized exponential distribution function with density

$$h_p(x) = K_p \exp\left(-\left|\frac{x}{\sigma}\right|^p\right) \tag{3}$$

where $K_p$ is a normalizing constant and $\sigma$ is a scale parameter, then the solution to the nonlinear regression (2) is a maximum-likelihood (ML) estimate, provided that the distance norm used is the $L_p$-semimetric [20]

$$\|\mathbf{z} - \mathbf{g}\|_p = \sum_{(i,j)} |z(i,j) - g(i,j)|^p. \tag{4}$$

Thus, if the image noise can be modeled as i.i.d. and coming from the density (3) for some $p$, then the nonlinear regression problem can be formulated as maximum likelihood via selection of the norm (4).

The generalized exponential distribution includes three very common additive noise models that will be employed here. For $p = 1$, (3) is the Laplacian density, and the optimizing data constraint leading to an ML estimate is the $L_1$ norm. Laplacian noise is a common "heavy-tailed" or highly impulsive noise model, e.g., to model data containing outliers. For $p = 2$, (3) is the Gaussian density, and the ML estimate is under the $L_2$ norm. Finally, as $p \to \infty$, (3) becomes the uniform density, and the distance measure to use is the $L_\infty$ norm.

We can use the preceding observations to guide the selection of the norm in the construction of a cost (energy) functional for a regularized solution. The regularized solution encapsulates soft constraints for consistency with the sensed image and adherence to the characteristic property. Although the introduction of the soft model constraint to replace the hard constraint that the solution lie in $C$ changes the problem, if the solution is forced toward both the original image under the appropriate data constraint norm and also toward the characteristic set, an estimate of the optimal regression will be obtained which may be more physically sensible than the regression (2).
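To connect the assumed noise model with the data-constraint distance, the short sketch below (a hypothetical helper of ours, not code from the paper) evaluates the $L_p$ data term of (4) for the three cases above; omitting a $1/p$ root does not change where the minimum occurs, so the plain sum of $p$th-power differences is used.

```python
import numpy as np

def lp_data_term(z, g, p):
    """Data constraint of (4): L_p distance between a candidate estimate z
    and the observation g. p = 1 matches Laplacian noise, p = 2 Gaussian,
    and the limiting L_inf case matches uniform noise."""
    diff = np.abs(np.asarray(z, dtype=float) - np.asarray(g, dtype=float))
    if np.isinf(p):
        return diff.max()
    return np.sum(diff ** p)

# Choosing the ML-matched distance for an assumed noise model.
NOISE_TO_P = {"laplacian": 1, "gaussian": 2, "uniform": np.inf}

g_obs = np.array([[10.0, 12.0], [11.0, 30.0]])   # toy observed image
z_est = np.array([[10.0, 11.0], [11.0, 12.0]])   # toy candidate estimate
for noise, p in NOISE_TO_P.items():
    print(noise, lp_data_term(z_est, g_obs, p))
```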

C. Regularized Solution

In the regularized solution, the image estimate is found by minimizing an energy functional $E$ that combines a penalty for deviation from the observed image data with a penalty for local deviation from the characteristic image property—assumed to be a PIM or a LIM:

$$E(\mathbf{z}) = \|\mathbf{z} - \mathbf{g}\|_p + \lambda\,\Lambda(\mathbf{z}). \tag{5}$$

Thus

$$\hat{\mathbf{f}} = \arg\min_{\mathbf{z}} E(\mathbf{z}). \tag{6}$$

In (5), $\|\mathbf{z} - \mathbf{g}\|_p$ is the distance between image $\mathbf{z}$ and the observed image $\mathbf{g}$, as defined in (4). This term is called the data constraint. The distance norm is generally motivated by a priori information about the noise process $\mathbf{n}$, as described in Section II-B. In all of the simulations, additive noise from the general density (3) will be used for $\mathbf{n}$. In each case, the appropriate optimal $L_p$-norm or $L_p$-semimetric is used to define the data constraint.

The term $\lambda\,\Lambda(\mathbf{z})$ in (5), the model constraint, provides an energy penalty for local deviation from the characteristic property which defines the image model. The form of the model norm $\Lambda$ depends on the characteristic property. However, in general it will be written

$$\Lambda(\mathbf{z}) = \sum_{(i,j)} \phi(\mathbf{z}; i, j) \tag{7}$$

where $\phi(\mathbf{z}; i, j)$ is a local measure of error energy relative to the characteristic set.

The characteristic properties studied here will be defined by PIM's and LIM's. The model constraint is computed by summing, over all image coordinates, the absolute distance between $\mathbf{z}$ and the closest local solution to $\mathbf{z}$ that satisfies the characteristic property locally. Again, a suitable distance norm may be selected to define the model constraint according to some statistical or structural criterion.
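The structure of (5)-(7) can be sketched directly in code. The following minimal Python illustration is ours, under stated assumptions: the local error $\phi$ is a toy horizontal piecewise-constant penalty (the absolute distance from each length-$d$ segment through a pixel to its best constant fit), not the paper's actual PIM/LIM operators, and the function and parameter names are hypothetical.

```python
import numpy as np

def local_pico_error(z, i, j, d=2):
    """Toy local error phi(z; i, j): distance from the best length-d horizontal
    segment through pixel (i, j) to its closest constant fit (the median under
    an L1 measure). Zero when some segment through (i, j) is constant."""
    rows, cols = z.shape
    best = np.inf
    for start in range(max(0, j - d + 1), min(j, cols - d) + 1):
        seg = z[i, start:start + d]
        best = min(best, np.sum(np.abs(seg - np.median(seg))))
    return best

def energy(z, g, lam, p=2, d=2):
    """Regularized energy of (5): data constraint plus weighted model constraint (7)."""
    data = np.sum(np.abs(z - g) ** p)
    model = sum(local_pico_error(z, i, j, d)
                for i in range(z.shape[0]) for j in range(z.shape[1]))
    return data + lam * model

g = np.array([[10., 11., 20., 19.], [10., 10., 22., 20.]])   # toy noisy observation
z = np.array([[10., 10., 20., 20.], [10., 10., 20., 20.]])   # toy piecewise-constant candidate
print(energy(z, g, lam=5.0))   # smoother candidate: small data cost, zero model cost
print(energy(g, g, lam=5.0))   # observation itself: zero data cost, nonzero model cost
```

The two evaluations at the end show the tradeoff governed by the weighting: the smoother candidate pays a small data cost but no model cost, while the observation itself pays no data cost but a nonzero model cost.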
The regularized solution (6) is a more tractable approximation to the regression (2). However, aside from the issue of computational complexity, it can be argued that (6) may often present a more physically sensible solution than (2). Consider the case of (5), where $\lambda$ is taken to be large: the model constraint is thus given considerably greater weight than the data constraint. If $\lambda$ is taken sufficiently large, then the solution image will be forced to adhere to the characteristic property at nearly every, and possibly at all, image coordinates. In such a case, the solution may not adequately resemble the input image in some locations, owing to local deviations from the image model. It may therefore be argued that the nonlinear regression (2) yields solutions which may be numerically optimal, yet suboptimal in the important sense of image enhancement.

The regularization parameter $\lambda$ determines the degree to which the estimate will conform to the data constraint versus the model constraint. In [8], methods were explored for determining such relative weights for more usual (linear) smoothness operators. Generally, the estimation of $\lambda$ depends on a priori knowledge of the corruptive noise and is typically complex and time-consuming. Because the operators used to evaluate the characteristic properties (the PIM's and LIM's) are nonlinear (unlike the traditionally used Laplacian operator), the methods used in [8] are not applicable to this implementation. Instead, the regularization parameter may be selected via cross validation [17]. With this method, the image is first divided into an estimation set and a validation set. To evaluate the solution quality given a particular regularization parameter, the nonlinear image estimation is performed using (5) on the pixels in the estimation set. Simultaneously, image estimation is implemented for the pixels in the validation set, but with a cost functional that does not include the data constraint. So, the pixels in the validation set can be used to predict the estimation error [17]. The main drawback of using cross validation to select the regularization parameter is that the cost to evaluate a particular $\lambda$ is equivalent to the cost of performing image estimation itself.

Empirically, we have found the image estimation procedure to be quite robust with respect to the selection of $\lambda$; indeed, values of $\lambda$ that differ by one or two orders of magnitude (10 or 100) do not yield very different results than those obtained here. This is due to the fact that the constraints defined by the PIM's and LIM's used here are fully realizable. Meaningful image estimates can be computed that have zero cost penalties from the PIM and LIM constraints, in contrast to the Laplacian operator, which produces a zero-energy penalty only for an image without edges. The results demonstrate this—in every example given in the paper, over 90% of the pixels in the obtained image estimate obey the defining characteristic property. However, algorithms of this type appear to be somewhat sensitive to under-specification of $\lambda$—for values an order of magnitude smaller than unity (thus heavily weighting the data constraint relative to the model constraint), the solution quality begins to deteriorate.

Note that in the absence of a priori information concerning the original image structure, cross validation may also be applied to select the appropriate PIM or LIM for image estimation. With this approach, the validation error [17] (the predicted mean-squared error) is computed for each potential model using the corrupted image $\mathbf{g}$ as the input. Then, the model producing the lowest validation error is used for image estimation.

III. IMAGE ESTIMATION USING PIECEWISE IMAGE MODELS

Piecewise image models, or PIM's, describe images that obey an image property, such as constancy, linearity, polynomial behavior, or some other more abstract or specific property, on a piecewise basis over the entire image domain. The pieces over which the property holds form a proper partition of the image; each piece is constrained to be of some minimum size (specified by the model degree). The size of a piece may be defined in various ways, such as the minimum dimension along its minor axis. The piecewise model allows for sudden discontinuities in the image property that defines the PIM; there is no explicit discontinuity-detection mechanism, however; the region boundaries naturally evolve as the solution is found.

Two potentially useful piecewise image properties that define PIM's are studied here: piecewise constancy (PICO) and piecewise linearity (PILI). The associated regression problems defined by (2) are termed PICO regression and PILI regression, respectively. Both regressions are ill-posed combinatorial problems having nonunique solutions. The corresponding PICO or PILI image estimation problems (6) are easily configured for iterative solution. Naturally, other piecewise models can be defined, such as piecewise quadratic (PIQU) models or higher-order piecewise polynomial models, piecewise exponential (PIEX) models, etc. However, PICO and PILI afford meaningful and simple image descriptions that correspond to commonly encountered natural and synthetic image data, and that adequately demonstrate the framework of this theory. Of course, PICO images define a somewhat more restricted category of imagery; good examples include four-color artwork, printed matter, and binary image data. Another potentially useful application of PICO image estimation is as a preprocessing stage to intensity-based image segmentation. By first forming a PICO image (which defines a coarse segmentation), the segmentation problem is reduced to deciding whether to merge neighboring PICO regions.

The definitions of the PICO and PILI image properties are quite similar, and can be given together as follows.

Definition 1: A one-dimensional (1-D) signal is piecewise constant (piecewise linear) of degree $d$, or PICO-$d$ (PILI-$d$), if the length of the shortest constant (linear) subsequence in the signal is greater than or equal to $d$.

Thus, each sample is part of a constant (linear) segment of length greater than or equal to $d$. The lowest-degree 1-D PICO (PILI) regression of interest is PICO-2 (PILI-3), since all signals are PICO-1 (PILI-2).

In defining PILI we make a special dispensation for signals quantized to integer values: the definition is relaxed by allowing each sample to deviate from the nearest real-valued linear trend by no more than unity.

Although PICO and PILI have simple definitions in one dimension, for higher-dimensional signals there is quite a bit of latitude in the definition. The following one supplies an effective piecewise characterization that is also computationally convenient.

Definition 2: A two-dimensional (2-D) image is PICO-$d$ (PILI-$d$) if it is PICO-$d$ (PILI-$d$) (in the sense of Definition 1) on every 1-D path along a set of prescribed orientations.

We have experimented with two types of 2-D PICO/PILI definitions: a two-orientation version and a four-orientation version. The two-orientation PICO (PILI) definition enforces piecewise constancy (linearity) along image columns and rows (linear paths quantized along 90° intervals). The four-orientation definition includes the diagonal orientations (linear paths quantized along 45° intervals).
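As a concrete reading of Definitions 1 and 2, the following Python sketch (hypothetical helpers of ours, not the authors' implementation) tests whether a 1-D signal is PICO-$d$ and whether a 2-D image satisfies the two-orientation PICO-$d$ property along rows and columns; a four-orientation test would additionally check the two diagonal path families.

```python
import numpy as np

def is_pico_1d(x, d):
    """Definition 1: every sample lies in a constant run of length >= d."""
    x = np.asarray(x)
    run_lengths, run = [], 1
    for k in range(1, len(x)):
        if x[k] == x[k - 1]:
            run += 1
        else:
            run_lengths.append(run)
            run = 1
    run_lengths.append(run)
    return min(run_lengths) >= d

def is_pico_2d_two_orientation(img, d):
    """Definition 2 (two-orientation case): PICO-d along every row and column.
    A four-orientation check would also test the diagonal paths."""
    img = np.asarray(img)
    rows_ok = all(is_pico_1d(row, d) for row in img)
    cols_ok = all(is_pico_1d(col, d) for col in img.T)
    return rows_ok and cols_ok

img = np.array([[5, 5, 9, 9],
                [5, 5, 9, 9],
                [7, 7, 7, 7],
                [7, 7, 7, 7]])
print(is_pico_2d_two_orientation(img, d=2))   # True: every row/column run has length >= 2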

Fig. 1. Illustrative examples of PICO-3 regression using two and four orientations.

Four-orientation PICO limits image streaking, i.e., highly visible and easy-to-misinterpret constant streaks, similar to those that can occur when a 1-D median filter is applied to an image [6]. Qualitatively, PICO image estimates that utilize the four-orientation constraint exhibit smoother region boundaries, whereas the two-orientation constraint may produce slightly jagged boundaries between the constant regions. There are tradeoffs, of course; imposing PICO along a larger number of orientations makes the computation of the energy in (2) more expensive (more paths to check). Also, the four-orientation PICO regression may round corners, as shown in Fig. 1.

In the presence of high-amplitude noise, we have observed that streaking tends
