MULTIGRID METHODS FOR TOMOGRAPHIC RECONSTRUCTION

T.J. Monks

1. INTRODUCTION

In many inversion problems, the random nature of the observed data has a very significant impact on the accuracy of the reconstruction. In these situations, reconstruction techniques that are based on the known statistical properties of the data are particularly useful. In a report [1], we considered methods for the reconstruction of an object from its projection data, and showed that maximum likelihood methods are particularly successful when the data is very noisy. Although maximum likelihood methods were first applied to emission tomography [2], they have also found utility in other imaging modalities, including transmission [3], diffraction [4] and limited angle transmission tomography [5, 6]. The major problem with maximum likelihood methods for image reconstruction is that the normal technique used, the expectation-maximisation (EM) algorithm, is excruciatingly slow to converge to the desired objective. In this paper, we demonstrate the use of multigrid methods to overcome this handicap.

In order to facilitate subsequent discussion, we shall give a brief review of the expectation-maximisation (EM) algorithm, as described in [1]. The EM algorithm is an algebraic reconstruction technique in which we resolve the object to be reconstructed into a grid-like array of unknowns and set up algebraic equations for these unknowns in terms of the measured projection data. The reconstruction region is sub-divided into pixels numbered 1, ..., B. In pixel b, the parameter to be reconstructed is assumed to have a constant value λ_b. If the projections are measured at sites numbered 1, ..., T, then this linear system of equations can be written as:

    n = A \lambda + \varepsilon          (1.1)

where A is a sparse matrix connecting the object vector, λ ∈ ℝ^B, to the measured data vector, n ∈ ℝ^T, and ε ∈ ℝ^T is the vector of errors due to inaccuracies in both the measurement and the discretization processes.

The reconstruction problem is to find a method of estimating the object vector, λ, given only the measurements, n, and some estimate of the connection matrix, A. In transmission tomography, the matrix element A_tb linking pixel b to detector t is calculated as a function of the intersection area of the ray path through pixel b with the relevant detector. In emission tomography, A_tb is usually calculated as some approximation to the probability that detector t measures activity in pixel b. The algorithms to solve (1.1) can require a large amount of computation as a result of the characteristics of the matrix A:

- it is huge, possibly as large as 10^5 x 10^5 or more;
- it is reasonably sparse, perhaps only 10% of the elements are non-zero;
- the pattern or structure of the non-zero elements in the matrix is irregular and therefore difficult to exploit;
- the location and value of the non-zero elements may be efficiently generated either by rows or by columns, depending on the definition of A, but it is generally not possible to do so for both rows and columns.
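To make the structure of (1.1) concrete, the following minimal sketch assembles an approximate sparse connection matrix for a simple parallel-beam geometry covering the square [-1, 1]^2. The geometry parameters and the point-sampling approximation of the ray/pixel intersection are illustrative assumptions of this note, not details taken from the paper.

```python
# A minimal sketch, not from the paper: an approximate sparse connection
# matrix A for a parallel-beam scan of the object square [-1, 1]^2.
import numpy as np
from scipy.sparse import coo_matrix

def build_system_matrix(n_pix=64, n_angles=60, n_bins=95, oversample=4):
    """Approximate A[t, b]: contribution of pixel b to detector sample t."""
    B, T = n_pix * n_pix, n_angles * n_bins
    step = 2.0 / (n_pix * oversample)                   # ray sampling interval
    rows, cols, vals = [], [], []
    for a, theta in enumerate(np.linspace(0.0, np.pi, n_angles, endpoint=False)):
        d = np.array([np.cos(theta), np.sin(theta)])    # ray direction
        p = np.array([-np.sin(theta), np.cos(theta)])   # detector axis
        s = np.arange(-1.5, 1.5, step)                  # sample positions along a ray
        for k, off in enumerate(np.linspace(-1.0, 1.0, n_bins)):
            xy = off * p[None, :] + s[:, None] * d[None, :]
            ij = np.floor((xy + 1.0) / 2.0 * n_pix).astype(int)   # pixel indices
            ok = np.all((ij >= 0) & (ij < n_pix), axis=1)
            b = ij[ok, 1] * n_pix + ij[ok, 0]
            w = np.bincount(b, minlength=B) * step      # approximate path length
            nz = np.nonzero(w)[0]
            t = a * n_bins + k
            rows.extend([t] * len(nz)); cols.extend(nz); vals.extend(w[nz])
    return coo_matrix((vals, (rows, cols)), shape=(T, B)).tocsr()

A = build_system_matrix()
print(A.shape, f"{A.nnz / (A.shape[0] * A.shape[1]):.1%} of the elements are non-zero")
```

The sketch illustrates the sparsity pattern issue described above: the matrix is naturally generated row by row (detector by detector), whereas column-wise access requires either a transpose or a second pass.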

The direct solution of sparse systems of this size is not only numerically unwise, it is also inefficient, as the sparse matrix is generally filled into a full one during the course of the method. Consequently, we usually resort to iterative methods to obtain the reconstructed image.

In situations where the object vector, λ, is well modelled as a set of independent Poisson processes, the EM algorithm can be formulated as a multiplicative, iterative method that finds a solution to (1.1) that maximises the likelihood function:

    l(\lambda) = \ln[P(n|\lambda)] = \sum_{t=1}^{T} \left[ -\sum_{b=1}^{B} A_{tb}\lambda_b + n_t \ln\left( \sum_{b=1}^{B} A_{tb}\lambda_b \right) - \ln(n_t!) \right]          (1.2)

through the iterations:

    \lambda_b^{(i+1)} = \lambda_b^{(i)} \sum_{t=1}^{T} \frac{A_{tb}\, n_t}{\sum_{b'=1}^{B} A_{tb'}\, \lambda_{b'}^{(i)}}          (1.3)

The reconstructions obtained with the EM algorithm have superior resolution, signal-to-noise ratio and contrast compared to those obtained from the more conventional filtered-backprojection technique. Unfortunately, the computational requirements of the EM algorithm are more than an order of magnitude greater than those of backprojection methods, and convergence to the asymptotic solution becomes increasingly slow as the iterations progress. A more complete description of the algorithm and its behaviour is given, for example, in [1, 7].

Large, sparse systems of equations, such as (1.1), are usually solved with iterative methods. In general, we find that the asymptotic approach to a solution becomes extremely slow; in fact, the convergence rate deteriorates exponentially as the solution is approached. This has been called critical slowing down. The inherent sluggishness of local iteration algorithms can be studied from a spatial frequency perspective. A Fourier analysis of the error function from one iteration to the next shows that while the components of the error decrease rapidly in some frequency bands, they decay slowly in other bands. The region of rapid decay depends on the specific relaxation method, but typically consists of the high-frequency components, which have wavelengths of the same order as the mesh spacing. On the other hand, the region of slow decay always includes the low- to mid-frequency components, implying that errors in the global information persist through many iterations [8].

For the EM iteration of (1.3), the update term is given by:

    \Delta\lambda^{(i+1)} = \lambda^{(i+1)} - \lambda^{(i)}          (1.4)

We can examine the spectral content of this term by taking a 2D Fourier transform of Δλ, then converting this 2D transform to a 1D function by summing it into bins with the same radial frequency. A plot of the resulting function at each iteration of a typical reconstruction problem is shown in figure 1.1. This figure shows that most of the energy in the update term of equation (1.4) is concentrated around two frequencies: the zero spatial frequency, which represents overall brightness in the image, and the pixel spacing frequency, which represents differences between a pixel and its closest neighbours. The information-bearing parts of the image will be the shapes and blobs that span many pixels and will consequently have frequency components that lie between these two extremes.
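A minimal sketch of the multiplicative iteration (1.3) follows. It assumes a sparse connection matrix such as the one sketched after (1.1), with columns normalised to sum to one so that the simple form of the update applies; the variable names are illustrative. The routine also records the update terms of (1.4) for the spectral analysis described above.

```python
# A minimal sketch of the EM iteration (1.3), assuming the columns of A are
# normalised to sum to one so that total intensity is conserved.
import numpy as np

def em_iterations(A, n, n_iter=20, lam0=None, eps=1e-12):
    """Run n_iter multiplicative EM updates of the form (1.3)."""
    B = A.shape[1]
    lam = np.full(B, n.sum() / B) if lam0 is None else lam0.copy()
    updates = []
    for _ in range(n_iter):
        proj = A @ lam                       # forward projection: sum_b A_tb * lam_b
        ratio = n / np.maximum(proj, eps)    # measured counts over estimated counts
        new = lam * (A.T @ ratio)            # backproject the ratio and scale lam
        updates.append(new - lam)            # the update term of (1.4)
        lam = new
    return lam, updates
```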

Figure 1.1 : The average spectral content (log amplitude) of the EM update term (equation 1.4), shown per iteration. f1 is the zero spatial frequency, whilst f2 is the inter-pixel spacing frequency.

Unfortunately, figure 1.1 shows that a fixed grid algorithm is inefficient at updating these components; many iterations are required to develop and alter the information-bearing regions in the reconstruction.

Since the step size of a single update must be limited to maintain stability at short distances, the speed of evolution of large-scale features is greatly slowed. Common (L2 or L1) error norms decrease quickly during the first few iterations, so long as there are high-frequency components to be annihilated, but soon degenerate to a slow, asymptotic diminution when only the low-frequency components remain. This suggests that while the relaxations may be inefficient at completely cancelling the error function, they can be very efficient in smoothing it.

The multigrid method [9] has been very successful in overcoming the critical slowing problem. It is a paradigm of the general class of divide-and-conquer algorithms, in which the solution of a set of sub-problems is collectively faster than the whole problem tackled at once. The basic idea underlying multigrid approaches is to solve the problem on multiple grid scales, where processing on a given grid generally depends on coarser grid corrections and finer grid residual transfers. Hence, as the iterations progress, a solution evolves which is consistent over a large range of length scales. Empirical studies on model problems indicate that multigrid methods are asymptotically optimal, that is, the computational workload required to attain a fixed accuracy is proportional to the number of discrete unknowns (convergence in essentially O(N) operations, where N is the number of unknowns). Single-grid methods, on the other hand, show O(N^3) convergence, so that the number of iterations needed for the system to relax to its steady state solution grows much faster than the number of mesh points. Hence in image processing applications, where N can be very large (of order 10^4 to 10^6), multigrid methods offer potentially dramatic increases in efficiency over standard single-grid methods. We refer the reader to [9] for a description of the MV and FMV cycle algorithms that will be used in this paper.
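The radial binning used to produce curves such as those of figure 1.1 can be sketched as follows. It assumes the update term of (1.4) has been reshaped to the image grid (for instance, one of the arrays returned by em_iterations above); the number of bins and the use of magnitude rather than power are choices of this note, not of the paper.

```python
# A minimal sketch of the spectral diagnostic behind figure 1.1: 2D Fourier
# transform of the update term, summed into bins of equal radial frequency.
import numpy as np

def radial_spectrum(delta_lam, n_pix, n_bins=32):
    """Return the update-term energy summed into radial frequency bins."""
    img = delta_lam.reshape(n_pix, n_pix)
    F = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    fy, fx = np.indices(F.shape) - n_pix // 2
    r = np.sqrt(fx**2 + fy**2)                      # radial frequency index
    bins = np.minimum((r / r.max() * n_bins).astype(int), n_bins - 1)
    return np.bincount(bins.ravel(), weights=F.ravel(), minlength=n_bins)
```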

Figure 1.2 : Two representations of a multi-resolution decomposition of an image. The left hand figure is a constant pixel size presentation showing the pyramidal neighbourhood relationships between pixels in adjacent layers. The right hand figure has a constant spatial dimension for all the layers and highlights the resolution changes. The transfer between layers is via the processes of restriction and prolongation, as discussed in the text.

The use of multigrid schemes in image reconstruction should be particularly fruitful, as there is ample psychological evidence that early processing within the human visual system takes place on multiple scales [10, and references therein]. Burt [11, 12] introduced the idea of a pyramid of related images (figure 1.2), which may be formed either by smoothing followed by restriction (known as a low-pass pyramid) or by subtraction of the smoothed image from the original followed by restriction (known as a band-pass pyramid). This scheme can be generalised to encompass the idea of a pyramidal data structure which, at each descending level, provides successively condensed representations of the information in the image. This allows construction of sub-trees in the pyramid whose leaves are pixels (or local features) in the image, and whose roots represent global features of various types. Thus pyramids provide a possible means of bridging the gap between pixel-level and region-level image analysis, since local computations on a coarse mesh can correspond to more global computations on a fine mesh. The decomposition of an image on multiple scales provides an intermediate representation between uni-grid spatial and Fourier domain descriptions. These multi-resolution schemes have been applied to image segmentation, region matching, feature and shape analysis, surface interpolation, optical flow, sub-band decomposition and compression.
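A minimal sketch of the two pyramid constructions just described follows. The separable 1-2-1 binomial smoothing kernel and the factor-of-two restriction by sub-sampling are assumed details; Burt's scheme admits several kernels.

```python
# A minimal sketch of Burt-style low-pass and band-pass pyramids: smooth then
# restrict, or subtract the smoothed image then restrict.
import numpy as np

def smooth(img):
    """Separable [1, 2, 1]/4 smoothing with edge replication."""
    k = np.array([0.25, 0.5, 0.25])
    pad = np.pad(img, 1, mode='edge')
    tmp = k[0] * pad[:-2, 1:-1] + k[1] * pad[1:-1, 1:-1] + k[2] * pad[2:, 1:-1]
    pad = np.pad(tmp, ((0, 0), (1, 1)), mode='edge')
    return k[0] * pad[:, :-2] + k[1] * pad[:, 1:-1] + k[2] * pad[:, 2:]

def build_pyramids(img, levels=4):
    """Return (low-pass, band-pass) pyramids, finest level first."""
    low, band = [img], []
    for _ in range(levels - 1):
        s = smooth(low[-1])
        band.append((low[-1] - s)[::2, ::2])   # subtract then restrict
        low.append(s[::2, ::2])                # smooth then restrict
    return low, band
```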

Surprisingly, multigrid techniques have rarely been applied to image reconstruction problems. Terzopoulos [13, 14] has applied multigrid approaches to four problems in image reconstruction: surface fitting to sparse data, image reflectance from intensity, shape from shading and optical flow. Each of these problems can be formulated as a linear system, and the underlying procedure Terzopoulos used to solve each of them corresponds to the full multigrid V-cycle (FMV-cycle). Terzopoulos initially proposed an accommodative approach, where the method uses an internal check, based on the computation of the residual norm, to determine when to switch between grids. He found that for many problems, the accommodative algorithm behaves in a fairly fixed fashion, performing a similar number of iterations at each level before switching. It is then possible to abandon the computationally expensive calculation of the dynamic residual norm in favour of a pre-assigned, fixed-flow approach. For each problem, Terzopoulos obtained substantial improvements in efficiency over single-grid methods, and recommended that the multigrid approach be tried for all image reconstruction problems.

Herman et al. [15] applied a pseudo-multigrid approach to the problem of transmission tomography. The authors make the point that multigrid can change either one or both of the image discretization (the size of λ, B) and the data sampling (the size of n, T). Full multigrid approaches require changes in both T and B, but the authors claim that "in image reconstruction the overhead associated with changing the picture digitization outweighs any potential savings, (consequently) we discuss in precise fashion only the special case in which data sampling is variable, but the picture digitization does not change". This is really a uni-grid approach using sub-sets of the measured data. The sub-sets were chosen as simple partitions of the measured data set. The conclusion they reached was that such a partitioned uni-grid implementation of an algebraic reconstruction technique (ART) was superior to standard ART in terms of both asymptotic error norm and efficiency, but was still less efficient than filtered backprojection.

Kaufman [16] applied a nested iteration to the EM algorithm reconstruction for emission tomography. In this method, an estimate on a coarse grid was obtained, which was used as the starting solution on a fine grid after interpolation. No use was made of fine-to-coarse residual transfers. Kaufman obtained disappointing results using piecewise constant interpolation on a quad-tree pyramid. She found that interpolation artefacts persisted for many iterations on the finer grid. In hindsight, this form of interpolation introduces an entire range of new frequency components into the starting solution on each new grid that would be avoided with a smoother interpolant. Ranganath et al. [17] also used coarse-to-fine nested iteration on the EM algorithm and reported much better results than did Kaufman. Indeed, they achieved convergence an order of magnitude faster than a single-grid EM algorithm. The method is not described completely in the paper; however, we know that a quad-tree pyramid was used. The interpolation scheme was not specified, other than stating that it satisfied a non-negativity constraint. Again, no use was made of fine-to-coarse residual transfers.

2. MULTIGRID IMPLEMENTATIONS OF THE EM ALGORITHM

In this chapter we shall give details of what we have found to be the most promising improvement to the EM algorithm, namely the use of multigrid methods. We have developed a natural extension of the work reported in the introduction [13-17], and tried both nested-iteration and full multigrid methods on the EM iterations. Three techniques will be described. The first two are nested-iteration algorithms, where we perform EM iterations on a coarse grid until slow convergence appears, and then transfer this solution to a finer grid and repeat the process. The third method is a full multigrid approach.

2.1 Algorithm I - A nested-iteration implementation of the EM algorithm

An effective and well established strategy [18] for finding the solution to a system such as A u = f is to first solve the problem on a coarse grid (this discrete space is denoted by Ω^h) to a desired level of accuracy, and then form an initial guess on a finer grid by a process of prolongation (i.e. interpolation). This can be written as:

    Step 1:  Relax A^h u^h = f^h to obtain an estimate on Ω^h
    Step 2:  Interpolate the solution onto Ω^{h+1}:  u^{h+1} = I_h^{h+1} u^h
    Step 3:  Relax A^{h+1} u^{h+1} = f^{h+1} to obtain an estimate on Ω^{h+1}          (2.1)

where a superscript h+1 denotes our finer grid variables and I_h^{h+1} is the prolongation operator between the two grids¹. If our true solution is smooth, this will be very effective, because such a guess can eliminate many of the early relaxation sweeps generally required for naive guesses. Thus the residual error usually starts off much smaller than it would with the naive guess. This approach can be repeated to give a nested iteration of relaxations and prolongations. However, asymptotic convergence rates are generally independent of the initial guess, so the slow rates will quickly reappear on each level. Secondly, although it is suitable for generating a single accurate solution on the finest grid, it cannot generate solutions having the finest-level accuracy over the hierarchy of coarser grids. Finally, this method would be unsatisfactory if the true solution were highly oscillatory; smooth errors can be well approximated by coarse interpolants, but oscillatory errors cannot.

For our first algorithm, we employ a nested-iteration approach as described above, except that here we only sub-sample (restrict) the image data vector, λ, and the connection matrix, A; we do not sub-sample the measured data vector, n, on the coarser grids. This first algorithm, sketched in code below, can be written as:

    Step 0:  Initialisation:  h = h_0 (the coarsest grid),  λ^{h_0} = c (initial solution is constant valued)
    Step 1:  Perform ν_1 relaxations of the EM algorithm on grid Ω^h:

                 \lambda_b^{h,(i+1)} = \lambda_b^{h,(i)} \sum_{t=1}^{T} \frac{A^h_{tb}\, n_t}{\sum_{b'=1}^{B_h} A^h_{tb'}\, \lambda_{b'}^{h,(i)}}

    Step 2:  If h = finest grid, exit; otherwise interpolate the solution onto Ω^{h+1}:  λ^{h+1} = I_h^{h+1} λ^h,  h ← h+1, and repeat Step 1.          (2.2)

Piecewise bilinear interpolation is used for prolongation of λ between grids, and injection (point sampling) is used for restriction of the connection matrix.

¹ Most multigrid schemes employ meshes in which the grid spacing on a coarse mesh is twice the spacing on the next finest mesh. This is nearly universal practice, since there seems to be no advantage in using grid spacing ratios other than 2, which would only incur the disadvantage of increased complexity in the prolongation and restriction operators.
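A minimal sketch of Algorithm I follows, reusing build_system_matrix(), em_iterations() and smooth() from the earlier sketches. The explicit column normalisation, the constant initial image and the particular grid sizes are illustrative assumptions rather than details taken from the text; prolong() implements a block-centred bilinear interpolation (constant up-sampling followed by 1-2-1 smoothing) with a rescaling to conserve the total emission activity.

```python
# A minimal sketch of Algorithm I: EM on a coarse grid, prolong, repeat.
# The measured data vector n is never restricted.
import numpy as np
from scipy.sparse import diags

def normalise_columns(A):
    """Scale the columns of A to sum to one, as assumed by em_iterations()."""
    s = np.asarray(A.sum(axis=0)).ravel()
    return A @ diags(1.0 / np.maximum(s, 1e-12))

def prolong(img):
    """2x block-centred bilinear prolongation: constant up-sample, then smooth."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    out = smooth(up)                          # smooth() from the pyramid sketch above
    return out * img.sum() / out.sum()        # conserve total emission activity

def nested_iteration_em(n, n_pix_finest=64, levels=3, v1=10):
    lam = None
    for k in range(levels - 1, -1, -1):                    # coarsest grid first
        n_pix = n_pix_finest // 2**k
        A_h = normalise_columns(build_system_matrix(n_pix=n_pix))
        lam0 = None if lam is None else prolong(lam).ravel()
        lam_vec, _ = em_iterations(A_h, n, n_iter=v1, lam0=lam0)
        lam = lam_vec.reshape(n_pix, n_pix)
    return lam
```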

Linear prolongation can be considered as a two-stage process (although it may not be implemented as such). Firstly, we up-sample from Ω^h to Ω^{h+1}, assigning a zero value to pixels not in the set Ω^h. This operation mirrors the frequency spectrum of Ω^h onto the new high-frequency components introduced into Ω^{h+1}, a process sometimes called imaging. The second logical stage of prolongation is low-pass filtering, which attempts to reduce the imaging effect. Similarly, linear restriction can be decomposed into low-pass filtering and down-sampling stages, where the filtering is now aimed at reducing the high-frequency components in Ω^h that would otherwise be aliased to low frequencies in Ω^{h-1}. A large filter support allows for good suppression of the aliasing and imaging effects. Smaller kernels, however, preserve local image features and have a lower computational cost. We seek a filter kernel that is compact in both the spatial and the frequency domains. Wavelet methods are suggestive [10]; the approximate Gaussian [11] and Gabor [19] functions are ideal. However, the interpolation of λ between grids must not introduce negative values, nor must the total sum over all of λ be altered (to conserve emission activity with respect to the recorded projection counts). With these extra constraints, the tradeoff amongst the possible kernels is rather marginal; hence, we have chosen the simplest feasible schemes: bilinear interpolation and injection².

We must also consider the method used for extension at the periphery. We wish to extend the Ω^h image whilst avoiding artefacts in the high frequency components of Ω^{h+1} that would be introduced by artificial discontinuities. We have a choice of replication of the boundary pixels, a zero or constant valued extension, or a wrap-around from the opposite boundary. We have decided, for this problem, that the first extension has the minimal impact on the prolongation operation.

We initially implemented an accommodative algorithm, with switching to a finer mesh based upon the rate of change of the likelihood function. The onset of critical slowing is signalled once this rate starts to diminish. However, we found that the calculation of the likelihood norm was computationally expensive, and the benefits of this adaptive switching were not significant. In the final version of the program, we therefore used a fixed switch method, with ν_1 set between 5 and 20 iterations.

2.2 Algorithm II - Another nested-iteration implementation

This algorithm is almost the same as the first, except that we now sub-sample the measured data vector, n, on the coarser grids, as well as the image data vector, λ, and the connection matrix, A.

² There are two contrasting grid systems that we could use as a basis for a discrete representation. In a block-centred mesh, x(i,j) is identified with the average value of the underlying continuous function in block (i,j), whereas in a point-centred mesh it is considered to be the value at the intersection of the (i,j) grid lines. When we transform between different mesh sizes, we should employ filter kernels with even-sized supports for a block-centred grid and with odd-sized supports for a point-centred grid, to prevent spatial shifts between the mesh and the underlying continuous function. In this work, a block-centred representation is appropriate for the vectors n and λ; consequently we must employ even-sized operators in the interpolation and restriction functions. A point-centred representation is appropriate for the connection matrix, A.

Figure F2.1 : Grid systems for a discrete representation: point-centred and block-centred meshes, with mesh sizes h and 2h.
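The footnote's point about even-sized kernels on a block-centred grid can be made concrete with the 2 x 2 full-weighted restriction that is also used for the projection data in the next section. A minimal sketch, assuming the data are held as an angle-by-detector array with even dimensions:

```python
# A minimal sketch of linear restriction with an even-sized (2 x 2) kernel on
# a block-centred grid: low-pass filtering and down-sampling in one step.
import numpy as np

def restrict_data(arr):
    """Full-weighted 2 x 2 averaging of an (angle x detector) data array."""
    return 0.25 * (arr[0::2, 0::2] + arr[1::2, 0::2] +
                   arr[0::2, 1::2] + arr[1::2, 1::2])
```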

The new algorithm can be written as:

    Step 0:  Initialisation:  h = h_0 (the coarsest grid),  λ^{h_0} = c (initial solution is constant valued).
             Calculate the projection data sets, n^h, for all grid sizes.
    Step 1:  Perform ν_1 relaxations of the EM algorithm on grid Ω^h:

                 \lambda_b^{h,(i+1)} = \lambda_b^{h,(i)} \sum_{t \in N^h} \frac{A^h_{tb}\, n^h_t}{\sum_{b'=1}^{B_h} A^h_{tb'}\, \lambda_{b'}^{h,(i)}}

    Step 2:  If h = finest grid, exit; otherwise interpolate the solution onto Ω^{h+1}:  λ^{h+1} = I_h^{h+1} λ^h,  h ← h+1, and repeat Step 1.          (2.3)

where N^h is the set of measured data values used on grid h.

Remembering that we have a fixed number of measured projection samples, we cannot restrict this set down through a large number of levels, because we would end up with no data at all. Instead, we make the measured data vector, n^h, the same size as the unknown image vector, λ^h, on the coarsest grid. This should be beneficial because the equations on this grid will hopefully be consistent and give a unique solution. On successively finer grids we increase the projection data set size in line with the number of image pixels, until we reach a stage where we run out of data and the equations become under-determined.

We wish to smooth some of the noise that is strongly present in the measured data set, so some form of low-pass filtering is appropriate. Because we consider n to be a second order tensor, indexed by projection angle and detector position, we use full weighted averaging with a 2 x 2 kernel. The filtering necessary to construct the sets of projection data is performed once during the initialisation and saved to file for recall as necessary.
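A minimal sketch of Algorithm II, reusing the helpers from the earlier sketches. Rebuilding the coarse-grid connection matrices at reduced angle and bin counts, and the even detector dimensions (for example 96 bins), are illustrative assumptions standing in for the restriction of A and the stored projection-data pyramid described above.

```python
# A minimal sketch of Algorithm II: both the image and the projection data
# are coarsened; restrict_data(), prolong(), normalise_columns(),
# build_system_matrix() and em_iterations() come from the earlier sketches.
import numpy as np

def nested_iteration_em_data(n_sino, n_pix_finest=64, n_angles=60, n_bins=96,
                             levels=3, v1=10):
    # Step 0: pre-compute the pyramid of projection data sets (finest first)
    data = [np.asarray(n_sino, dtype=float)]
    for _ in range(levels - 1):
        data.append(restrict_data(data[-1]))       # 2 x 2 full-weighted averaging
    lam = None
    for k in range(levels - 1, -1, -1):            # coarsest grid first
        n_pix = n_pix_finest // 2**k
        A_h = normalise_columns(build_system_matrix(
            n_pix=n_pix, n_angles=n_angles // 2**k, n_bins=n_bins // 2**k))
        lam0 = None if lam is None else prolong(lam).ravel()
        lam_vec, _ = em_iterations(A_h, data[k].ravel(), n_iter=v1, lam0=lam0)
        lam = lam_vec.reshape(n_pix, n_pix)
    return lam
```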

2.3 Algorithm III - A full multigrid implementation of the EM algorithm

The third algorithm we have implemented is the full multigrid V-cycle which, with reference to [9], and for the solution of the system A u = f, can be defined as u^h ← FMV^h(u^h, f^h):

    Step 0:  Initialisation:  Find an approximate solution for u^{h_0}.  Set u^h = 0 for h > h_0.
    Step 1:  Repeat until sufficient accuracy is attained:
                 do
                     u^{h+1} = I_h^{h+1} u^h
                     Perform ν_0 cycles:  u^{h+1} ← MV^{h+1}(u^{h+1}, f^{h+1})
                     h ← h + 1
                 until (h = finest grid)          (2.4)

where the MV cycle, u^h ← MV^h(u^h, f^h), is defined recursively as follows:

    Step 1:  Relax ν_1 times on A^h u^h = f^h to obtain an estimate on Ω^h.
    Step 2:  If h ≠ h_0 (the coarsest grid):
                 r^{h-1} ← I_h^{h-1} (f^h - A^h u^h),  u^{h-1} ← 0          (transfer the residual to the coarser grid)
                 u^{h-1} ← MV^{h-1}(u^{h-1}, r^{h-1})          (find a solution on the coarser grid by recursion)
                 u^h ← u^h + I_{h-1}^{h} u^{h-1}          (correct the approximation on Ω^h)
    Step 3:  Perform ν_2 relaxation sweeps on A^h u^h = f^h          (2.5)

where I_h^{h-1} denotes the process of restriction (or decimation, as it is known in a signal processing context), by which residuals on a fine grid are used to correct solutions on a coarser grid.
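The control flow of (2.4) and (2.5) can be sketched for a generic linear system as follows. The relaxation, restriction and prolongation operators are passed in as callables because, as discussed next, different relaxation schemes end up being applied to the direct and residual equations; level 0 is the coarsest grid, and the 50-sweep coarsest-grid solve is an arbitrary illustrative choice.

```python
# A structural sketch of the MV and FMV cycles of (2.4)-(2.5); it is not the
# paper's code, and the operators are supplied as callables.
import numpy as np

def mv_cycle(h, u, f, A, relax, restrict, prolong, v1=2, v2=2):
    """One V-cycle on level h (h = 0 is the coarsest grid)."""
    u = relax(A[h], u, f, v1)                     # Step 1: pre-relaxation
    if h > 0:                                     # Step 2: coarse-grid correction
        r = restrict(f - A[h] @ u)                # residual transferred to level h-1
        e = mv_cycle(h - 1, np.zeros_like(r), r, A, relax, restrict, prolong, v1, v2)
        u = u + prolong(e)                        # correct the fine-grid approximation
    return relax(A[h], u, f, v2)                  # Step 3: post-relaxation

def fmv_cycle(f_levels, A, relax, restrict, prolong, v0=1):
    """Full multigrid: nested iteration driven by MV cycles, coarsest first."""
    u = relax(A[0], np.zeros(A[0].shape[1]), f_levels[0], 50)   # coarsest-grid solve
    for h in range(1, len(A)):
        u = prolong(u)                                          # initial guess on level h
        for _ in range(v0):
            u = mv_cycle(h, u, f_levels[h], A, relax, restrict, prolong)
    return u
```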

An immediate problem that we face in implementing the FMV algorithm is that the EM relaxation given in (1.3) is formulated for the maximum likelihood objective for a set of Poisson-distributed variables. For any particular pixel, we can write the prior distribution for the value of that pixel, λ, as:

    f(\lambda) = \frac{\exp(-\bar{\lambda})\, \bar{\lambda}^{\lambda}}{\lambda!}, \qquad \bar{\lambda} = E(\lambda)          (2.6)

Then, if we consider the case where the set of equations (1.1) is consistent, the maximum likelihood estimate of λ is just its expected value, λ̄, and the prior distribution of the error e = λ̄ - λ is³:

    f(e) = \frac{\exp(-\bar{\lambda})\, \bar{\lambda}^{(\bar{\lambda}-e)}}{\Gamma(\bar{\lambda} - e + 1)}          (2.7)

which is shown in figure 2.1. It is clear that the error does not have a prior Poisson distribution, so that the EM relaxation of (1.3) is invalid for the set of residual equations. Indeed, it is not obvious that the likelihood function of the error will be a concave function. Consequently, the Kuhn-Tucker conditions that were used to derive (1.3) may be neither the sufficient nor the necessary conditions for a maximum.

Figure 2.1 : The distributions f(λ) and f(e) for λ̄ = 10.

Clearly, we cannot use the EM iterations in the form of (1.3) for relaxation of the residual equation. We have been unable to derive a closed-form EM-type algorithm for the distribution in (2.7) as it stands. It may be possible to do so if the error distribution is approximated by a Gaussian; however, a problem then remains with the unknown value of λ̄. This could be overcome if we admit the possibility of using λ̄ ≈ λ, but we find this unpalatable, given the large errors that occur during reconstruction. As we cannot derive a closed-form expression for the EM algorithm, we have been forced to drop the maximum likelihood objective for the residual equations and look for an alternative method.

We have found that simple iterative techniques, such as Gauss-Seidel or successive over-relaxation, which attempt to solve either the residual equations or their associated normal form, have erratic and very slow convergence for this problem. The primary reason for this behaviour is that the residual equations are even more inconsistent than the direct equations. The secondary reason is that the matrix A is not diagonally dominant. Hence, we feel that such a solution of the residual equations is not feasible and that, instead, we must opt for some form of minimum-norm solution.

A workable alternative we have used is the regularized least-squares objective:

    l(e) = \| r - A e \|_2^2 + \gamma^2 \| e \|_2^2          (2.8)

where the parameter γ can be altered to reflect the degree of smoothing we wish to impose on the solution. We know of several methods that can be used to solve this objective: QR factorisation, truncation of the singular value decomposition (SVD), and variants of the conjugate gradient method. The direct methods, based on QR or the more computationally expensive SVD factorizations, are popular for small dense, or large sparse structured, matrices, because we can write the solution to the least squares problem directly in terms of the factored matrices. If A has a regular structure, the columns of A can be pre-ordered to also make R sparse. For unstructured sparse matrices, the dense rows of A will cause R to fill in, and the storage requirements of this direct method will blow out. The same problem affects SVD methods; again, it is not possible to preserve the same sparsity pattern for the decomposed arrays as for the original matrix. Also, if A is not of full rank and the matrix is not diagonally dominant, numerical errors will usually cause the SVD algorithm to break down.

When A is Hermitian positive definite, the conjugate gradient method is perhaps the most effective iterative solver [20]. The popularity of this method is due in part to its optimality; at each step the A-norm of the error is minimised over some sub-space. By minimising in other than the A-norm, different conjugate gradient methods result, some of which are applicable to non-positive definite matrices. A review of these is given in [21]. These methods are characterised by their need for only a few vectors of working storage, and by their theoretical convergence in at most N iterations (where A is square, N x N). The methods work well when A is well conditioned and has many nearly equal singular values.

³ This is derived as follows. The cumulative distribution of the error is given by:

    F(e \le e_1) = \Pr(e \le e_1) = \Pr(\bar{\lambda} - \lambda \le e_1) = \Pr(\lambda \ge \bar{\lambda} - e_1) = \int_{\bar{\lambda}-e_1}^{\infty} \frac{\exp(-\bar{\lambda})\, \bar{\lambda}^{x}}{\Gamma(x+1)}\, dx          (F3.1)

so the error distribution is:

    f(e_1) = \frac{d\, F(e \le e_1)}{d e_1}          (F3.2)

and (2.7) follows using Leibnitz's rule for differentiation of integrals.
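As one concrete illustration (an assumption of this sketch, not a method taken from the paper), the objective (2.8) can be minimised by applying conjugate gradients to its normal equations, (A^T A + γ² I) e = A^T r, which are symmetric positive definite; the LSQR route described next is analytically equivalent but numerically better behaved.

```python
# A minimal sketch: conjugate gradients on the regularized normal equations
# of (2.8).  A is the sparse connection matrix, r the residual data vector.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def solve_residual_normal_eqs(A, r, gamma):
    """Return e minimising ||r - A e||^2 + gamma^2 ||e||^2 via CG."""
    B = A.shape[1]
    normal_op = LinearOperator((B, B),
                               matvec=lambda e: A.T @ (A @ e) + gamma**2 * e)
    e, info = cg(normal_op, A.T @ r)      # info == 0 signals convergence
    return e
```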

The least squares objective in (2.8) can be solved using a method that is analytically equivalent to the original conjugate gradient method, but which has more favourable numerical properties. This so-called LSQR algorithm has been published by Paige and Saunders [22, 23]. It is based upon a Lanczos process applied to a symmetric matrix derived from A. The method is iterative, storage-efficient and only requires the user to supply kernels for multiplication by A and A^T.

The use of this unbalanced scheme, with one relaxation method applied to the direct equation, Aλ = n, and a different method applied to the residual equation, Ae = r, does have some side effects:

- Firstly, the convergence rates and smoothing properties of the two relaxation processes will be different, making analysis of the performance of the algorithm difficult.

- Secondly, the rate of convergence and the final reconstructed image will be strongly influenced by the regularization parameter, γ. As γ → 0, the data component of the regularized objective dominates, and the solution vector, e, found on application of LSQR will be highly oscillatory. Applying such a solution for e in the residual correction will degrade, rather than enhance, the current direct solution λ. As γ is increased, the solution from LSQR is smoothed, yielding a less deleterious effect on λ; however, convergence is slower.

- Thirdly, each iteration of the EM algorithm automatically conserves the image intensity:

      \sum_{b=1}^{B} \lambda_b^{(i+1)} = \sum_{b=1}^{B} \lambda_b^{(i)} = \sum_{t=1}^{T} n_t          (2.9)

  We would like the solution of the residual equation to satisfy:

      \sum_{b=1}^{B} e_b^{(i)} = \sum_{t=1}^{T} r_t^{(i)} = 0          (2.10)

  so that the residual update to λ will not violate (2.9). However, this is not the case with LSQR.

- Finally, with additive algorithms such as the FMV cycle, we are faced with the unpleasant choice between enforcing the non-negativity constraint (which imposes a considerable computational burden on the relaxation algorithm; see, for instance, Bertero and Dovi [24]) or ignoring it, which leads to problematic negative intensities in the reconstruction.
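A minimal sketch of this step using SciPy's implementation of LSQR, in which the damp argument plays the role of γ. The final re-centring of e so that it sums to zero is one simple assumed device for respecting (2.10); it is not the paper's solution.

```python
# A minimal sketch of the regularized residual solve of (2.8) via LSQR.
import numpy as np
from scipy.sparse.linalg import lsqr

def solve_residual_lsqr(A, r, gamma, iter_lim=50, conserve=True):
    """Approximately minimise ||r - A e||^2 + gamma^2 ||e||^2."""
    e = lsqr(A, r, damp=gamma, iter_lim=iter_lim)[0]
    if conserve:
        e = e - e.mean()          # enforce sum(e) = 0 before the correction step
    return e
```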
