Recent Advances In Approximate Message Passing


Recent Advances in Approximate Message Passing
Phil Schniter
Supported in part by NSF grant CCF-1716388.
July 5, 2019

Overview
1. Linear Regression
2. Approximate Message Passing (AMP)
3. Vector AMP (VAMP)
4. Unfolding AMP and VAMP into Deep Neural Networks
5. Extensions: GLMs, Parameter Learning, Bilinear Problems

Part 1: Linear Regression

The Linear Regression Problem

Consider the following linear regression problem: recover x_o from
    y = A x_o + w
with unknown signal x_o in R^n, known linear operator A in R^{m x n}, and white Gaussian noise w in R^m.

Typical methodologies:
1. Optimization (or MAP estimation): x̂ = arg min_x { (1/2)||Ax - y||^2 + R(x) }
2. Approximate MMSE: x̂ = E{x | y} for x ~ p(x), y|x ~ N(Ax, ν_w I)
3. Plug-and-play [Venkatakrishnan, Bouman, Wohlberg '13]: iteratively apply a denoising algorithm like BM3D
4. Train a deep network to recover x_o from y.
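
To make the setup concrete, here is a minimal numpy sketch (my own illustration, not from the slides) of methodology 1 for the simple choice R(x) = (gamma/2)||x||^2, which admits a closed-form MAP estimate; the dimensions, sparsity level, and parameter values are arbitrary.

```python
# Illustrative sketch: generate y = A x_o + w and compute the MAP estimate under a
# Gaussian prior, i.e., R(x) = (gamma/2)||x||^2, which has the closed form
#   xhat = (A^T A + gamma*nu_w*I)^{-1} A^T y.
import numpy as np

rng = np.random.default_rng(0)
m, n, nu_w, gamma = 400, 1000, 0.01, 1.0

x_o = rng.binomial(1, 0.1, n) * rng.standard_normal(n)   # sparse (Bernoulli-Gaussian) signal
A = rng.standard_normal((m, n)) / np.sqrt(m)             # i.i.d. Gaussian, variance 1/m
y = A @ x_o + np.sqrt(nu_w) * rng.standard_normal(m)

xhat = np.linalg.solve(A.T @ A + gamma * nu_w * np.eye(n), A.T @ y)
print("ridge-MAP NMSE [dB]:", 10 * np.log10(np.sum((xhat - x_o)**2) / np.sum(x_o**2)))
```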

Part 2: Approximate Message Passing (AMP)

The AMP Methodology

All of the aforementioned methodologies can be addressed using the Approximate Message Passing (AMP) framework. AMP tackles these problems via iterative denoising. We will write the iteration-t denoiser as η^t(·): R^n -> R^n.

Each method defines the denoiser η^t(·) differently:
- Optimization: η^t(r) = arg min_x { R(x) + (1/(2ν^t))||x - r||^2 } =: prox_{Rν^t}(r)
- MMSE: η^t(r) = E{ x | r = x + N(0, ν^t) }
- Plug-and-play: η^t(r) = BM3D(r, ν^t)
- Deep network: η^t(r) is learned from training data.
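
As a concrete instance of the optimization-style denoiser, the choice R(x) = lam*||x||_1 gives the familiar soft-thresholding proximal operator. The sketch below (an illustration, not from the slides) also computes its divergence, which for soft thresholding is simply the fraction of entries that survive the threshold; this pair is reused by the AMP example that follows.

```python
# Illustrative sketch: the prox denoiser for R(x) = lam*||x||_1 is soft thresholding,
#   eta(r) = sign(r) * max(|r| - lam*nu, 0),
# and its divergence (1/n) sum_j eta'(r_j) is the fraction of entries above threshold.
import numpy as np

def soft_threshold(r, tau):
    """Soft thresholding with threshold tau (= lam * nu for the prox of lam*||x||_1)."""
    return np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)

def soft_threshold_div(r, tau):
    """Divergence (1/n) * sum_j eta'(r_j): the fraction of entries that survive."""
    return np.mean(np.abs(r) > tau)

r = np.array([0.3, -1.2, 2.5, -0.1])
print(soft_threshold(r, 0.5), soft_threshold_div(r, 0.5))
```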

The AMP Algorithm

    initialize x̂^0 = 0, v^{-1} = 0
    for t = 0, 1, 2, ...
        v^t = y - A x̂^t + (n/m) v^{t-1} div η^{t-1}(x̂^{t-1} + A^T v^{t-1})    [corrected residual]
        x̂^{t+1} = η^t(x̂^t + A^T v^t)                                          [denoising]
    where div η^t(r) := (1/n) tr{ ∂η^t(r)/∂r } is the "divergence."

Notes:
- The original version was proposed by Donoho, Maleki, and Montanari in 2009.
- They considered "scalar" denoisers, such that [η^t(r)]_j = η^t(r_j) for all j.
- For scalar denoisers, div η^t(r) = (1/n) Σ_{j=1}^n η^{t'}(r_j).
- Can be recognized as iterative shrinkage/thresholding [Chambolle, DeVore, Lee, Lucier '98] plus "Onsager correction."
- Can be derived using Gaussian & Taylor-series approximations of loopy belief propagation (hence "AMP").
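
Below is a minimal numpy sketch of the AMP recursion above with a soft-thresholding denoiser. The threshold rule tau = alpha*sqrt(nu_r), the value of alpha, the iteration count, and the synthetic problem sizes are illustrative assumptions, not prescriptions from the slides.

```python
# Minimal AMP sketch with a soft-thresholding (l1-prox) denoiser, following the
# recursion on this slide. Problem sizes and the threshold rule are illustrative.
import numpy as np

def soft_threshold(r, tau):
    return np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)

def amp(y, A, alpha=1.1, n_iters=30):
    """AMP: v^t = y - A xhat^t + (n/m)*div*v^{t-1}, xhat^{t+1} = eta(xhat^t + A^T v^t)."""
    m, n = A.shape
    xhat, v, div = np.zeros(n), np.zeros(m), 0.0
    for _ in range(n_iters):
        v = y - A @ xhat + (n / m) * div * v          # Onsager-corrected residual
        nu_r = np.sum(v**2) / m                       # consistent estimate of nu_r^t
        r = xhat + A.T @ v                            # denoiser input: ~ x_o + N(0, nu_r I)
        tau = alpha * np.sqrt(nu_r)                   # threshold proportional to the noise std
        xhat = soft_threshold(r, tau)
        div = np.mean(np.abs(r) > tau)                # divergence of the scalar denoiser
    return xhat

# Example usage on a synthetic sparse-recovery problem
rng = np.random.default_rng(1)
m, n = 400, 1000
x_o = rng.binomial(1, 0.04, n) * rng.standard_normal(n)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_o + 0.01 * rng.standard_normal(m)
xhat = amp(y, A)
print("AMP NMSE [dB]:", 10 * np.log10(np.sum((xhat - x_o)**2) / np.sum(x_o**2)))
```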

AMP's Denoising Property

Original AMP Assumptions:
1. A in R^{m x n} is drawn i.i.d. Gaussian
2. m, n -> ∞ s.t. m/n -> δ in (0, ∞)                ... "large-system limit"
3. [η^t(r)]_j = η^t(r_j) with Lipschitz η^t(·)      ... "scalar denoising"

Under these assumptions, the denoiser's input r^t := x̂^t + A^T v^t obeys [Bayati, Montanari '11]
    r_j^t = x_{o,j} + N(0, ν_r^t).
That is, r^t is a Gaussian-noise corrupted version of the true signal x_o. It should now be clear why we think of η^t(·) as a "denoiser." Furthermore, the effective noise variance can be consistently estimated:
    ν̂_r^t := (1/m)||v^t||^2 -> ν_r^t.

AMP's State Evolution

Assume that the measurements y were generated via
    y = A x_o + N(0, ν_w I),
where x_o empirically converges to some random variable X_o as n -> ∞. Define the iteration-t mean-squared error (MSE)
    E^t := (1/n) ||x̂^t - x_o||^2.
Under the above assumptions, AMP obeys the following state evolution (SE) [Bayati, Montanari '11]:
    for t = 0, 1, 2, ...
        ν_r^t = ν_w + (n/m) E^t
        E^{t+1} = E{ [ η^t(X_o + N(0, ν_r^t)) - X_o ]^2 }
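
The following sketch evaluates this state evolution by Monte Carlo for an assumed Bernoulli-Gaussian X_o and the soft-thresholding denoiser from the earlier example; the parameter values (delta, nu_w, sparsity, alpha) are illustrative.

```python
# Illustrative Monte Carlo evaluation of the AMP state evolution for a
# Bernoulli-Gaussian X_o and a soft-thresholding denoiser.
import numpy as np

rng = np.random.default_rng(2)
delta, nu_w, sparsity, alpha = 0.4, 1e-4, 0.04, 1.1    # assumed problem parameters
N_mc = 200_000                                          # Monte Carlo samples of X_o

X_o = rng.binomial(1, sparsity, N_mc) * rng.standard_normal(N_mc)
E = np.mean(X_o**2)                                     # E^0 for the all-zero initialization

for t in range(20):
    nu_r = nu_w + E / delta                             # nu_r^t = nu_w + (n/m) E^t
    R = X_o + np.sqrt(nu_r) * rng.standard_normal(N_mc)
    tau = alpha * np.sqrt(nu_r)
    eta = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)
    E = np.mean((eta - X_o)**2)                         # E^{t+1}
    print(f"t={t:2d}  nu_r={nu_r:.3e}  E={E:.3e}")
```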

Achievability Analysis via the AMP SE

AMP's SE can be applied to analyze achievability in various problems. E.g., it yields a closed-form expression [Donoho, Maleki, Montanari '09] for the sparsity/sampling region where ℓ1-penalized regression is equivalent to ℓ0-penalized regression:
    ρ(δ) = max_{c>=0} [ 1 - (2/δ){ (1+c^2)Φ(-c) - cφ(c) } ] / [ 1 + c^2 - 2{ (1+c^2)Φ(-c) - cφ(c) } ]
[Figure: phase-transition curves in the (δ, ρ) plane, with sampling rate δ = m/n on the horizontal axis and sparsity rate ρ = k/m on the vertical axis, comparing MMSE reconstruction, empirical AMP, and the weak ℓ1/ℓ0 equivalence boundary.]
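
A small sketch of how the curve ρ(δ) can be evaluated numerically from the closed form above, using a grid search over c with scipy's standard normal CDF and pdf; the grid range and density are arbitrary choices.

```python
# Sketch of the weak l1/l0 phase-transition curve rho(delta) via a grid search
# over c >= 0, using the closed form on this slide (Phi and phi are the standard
# normal CDF and pdf).
import numpy as np
from scipy.stats import norm

def rho_weak(delta, c_grid=np.linspace(1e-6, 10, 20000)):
    g = (1 + c_grid**2) * norm.cdf(-c_grid) - c_grid * norm.pdf(c_grid)
    vals = (1 - (2 / delta) * g) / (1 + c_grid**2 - 2 * g)
    return np.max(vals)

for delta in [0.1, 0.25, 0.5, 0.75]:
    print(f"delta={delta:.2f}  rho={rho_weak(delta):.3f}")
```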

MMSE Optimality of AMP

Now suppose that the AMP Assumptions hold, and that
    y = A x_o + N(0, ν_w I),
where the elements of x_o are i.i.d. draws of some random variable X_o. Suppose also that η^t(·) is the MMSE denoiser, i.e.,
    η^t(R) = E{ X_o | R = X_o + N(0, ν_r^t) }.
Then, if the state evolution has a unique fixed point, the MSE of x̂^t converges [Bayati, Montanari '11] to the replica prediction of the MMSE as t -> ∞. Under the AMP Assumptions, the replica prediction of the MMSE was shown to be correct [Reeves, Pfister '16; Barbier, Dia, Macris, Krzakala '16].

Universality of AMP State Evolution

Until now, it was assumed that A is drawn i.i.d. Gaussian. The state evolution also holds when A is drawn with i.i.d. entries A_ij such that
    E{A_ij} = 0,   E{A_ij^2} = 1/m,   E{A_ij^6} <= C/m^3 for some fixed C > 0,
often abbreviated as "sub-Gaussian A_ij." The proof [Bayati, Lelarge, Montanari '15] assumes polynomial scalar denoising η^t(·) of bounded order.

Deriving AMP via Loopy BP (e.g., the sum-product algorithm)

The message from the y_i node to the x_j node is
    p_{i->j}(x_j) ∝ ∫_{ {x_l}_{l≠j} } N( y_i ; Σ_l a_il x_l , ν_w ) Π_{l≠j} p_{i<-l}(x_l)
                  ≈ ∫_{z_i} N( y_i ; z_i , ν_w ) N( z_i ; ẑ_i(x_j), ν_i^z(x_j) )    via the CLT.
[Figure: bipartite factor graph with factor nodes N(y_i; [Ax]_i, ν_w) for i = 1, ..., m, variable nodes x_1, ..., x_n, and prior factors f(x_j).]

To compute ẑ_i(x_j) and ν_i^z(x_j), the means and variances of {p_{i<-l}}_{l≠j} suffice, implying Gaussian message passing, similar to expectation propagation.

Remaining problem: we have 2mn messages to compute (too many!). Exploiting the similarity among the messages {p_{i->j}}_{i=1}^m, AMP employs a Taylor-series approximation of their difference whose error vanishes as m -> ∞ for dense A (and similarly for {p_{i<-j}}_{j=1}^n as n -> ∞). Finally, we need to compute only O(m + n) messages!

Understanding AMP

The belief-propagation derivation of AMP provides very little insight!
- Loopy BP is suboptimal, even if implemented exactly.
- The i.i.d. property of A is never used in the derivation.
- And the rigorous proofs of AMP's state evolution are very technical!
As a middle ground, we suggest an alternate derivation that gives insight into how and why AMP works. It is based on the idea of "first-order cancellation." We will assume equiprobable Bernoulli a_ij = ±1/√m and polynomial η(·).

AMP as First-Order Cancellation

Recall the AMP recursion:
    v^t = y - A x̂^t + (n/m) v^{t-1} div η(r^{t-1})
    x̂^{t+1} = η( x̂^t + A^T v^t ),   where r^t := x̂^t + A^T v^t.
Notice that
    [A x̂^t]_i = a_i^T η( x̂^{t-1} + Σ_l a_l v_l^{t-1} )      where a_i^T is the ith row of A
              = a_i^T η( r_{\i}^{t-1} + a_i v_i^{t-1} )       where r_{\i}^{t-1} := x̂^{t-1} + Σ_{l≠i} a_l v_l^{t-1} removes the direct contribution of a_i from r^{t-1}
              = a_i^T η(r_{\i}^{t-1}) + a_i^T [∂η/∂r](r_{\i}^{t-1}) a_i v_i^{t-1} + O(1/√m)      using a Taylor expansion
              = a_i^T η(r_{\i}^{t-1}) + v_i^{t-1} Σ_j a_ij^2 η'(r_{\i,j}^{t-1}) + O(1/√m)
              = a_i^T η(r_{\i}^{t-1}) + (n/m) v_i^{t-1} (1/n) Σ_j η'(r_{\i,j}^{t-1}) + O(1/√m)   since a_ij^2 = 1/m
              = a_i^T η(r_{\i}^{t-1}) + (n/m) v_i^{t-1} div η(r_{\i}^{t-1}) + O(1/√m),
which uncovers the Onsager correction.

AMP as First-Order Cancellation (cont.)

Now use [A x̂^t]_i to study the jth component of the denoiser-input error e^t := r^t - x_o:
    e_j^t = Σ_i a_ij [ Σ_{l≠j} a_il ( x_{o,l} - η(r_{\i,l}^{t-1}) ) + w_i ]
            + Σ_i a_ij (n/m) v_i^{t-1} [ div η(r^{t-1}) - div η(r_{\i}^{t-1}) ] + O(1/√m),
where the divergence difference can be absorbed into the O(1/√m) term. Defining ǫ_il^t := x_{o,l} - η(r_{\i,l}^{t-1}),
    e_j^t = Σ_i Σ_{l≠j} a_ij a_il ǫ_il^t + Σ_i a_ij w_i + O(1/√m)
          -> N( 0, (1/m^2) Σ_i Σ_{l≠j} (ǫ_il^t)^2 ) + N( 0, (1/m) Σ_i w_i^2 ) + O(1/√m)
             using the CLT and assuming independence of {a_il}_{l=1}^n and {r_{\i,l}^{t-1}}_{l=1}^n
          -> N( 0, (n/m) E^{(t)} + ν_w ) + O(1/√m)      ... the AMP state evolution,
where E^{(t)} := (1/n) Σ_{j=1}^n ( x_{o,j} - x̂_j^{(t)} )^2 and ν_w := (1/m) Σ_{i=1}^m w_i^2.

AMP with Non-Separable Denoisers

Until now, we have focused on separable denoisers, i.e., [η^t(r)]_j = η^t(r_j) for all j. Can we use sophisticated non-separable η(·) with AMP? Yes! Many examples:
- Markov-chain, Markov-field, and Markov-tree denoisers in 2010-2012 [S '10; Som, S '11; Som, S '12]
- Blockwise & TV denoising considered by Donoho, Johnstone, Montanari in 2011
- BM3D denoising considered by Metzler, Maleki, Baraniuk in 2015
- Rigorous state evolution proven by Berthier, Montanari, Nguyen in 2017, assuming A is drawn i.i.d. Gaussian and η is Lipschitz and "convergent under Gaussian inputs."

AMP at Large but Finite Dimensions

Until now, we have focused on the large-system limit m, n -> ∞ with m/n -> δ in (0, ∞). The non-asymptotic case was analyzed by Rush and Venkataramanan [Rush, Venkataramanan '18]. They showed that the probability of an ǫ-deviation between the finite-dimensional and limiting SE falls exponentially in m, as long as the number of iterations t = o( log n / log log n ).

AMP Summary: The Good, the Bad, and the Ugly

The good: With large i.i.d. sub-Gaussian A, AMP is rigorously characterized by a scalar state evolution whose fixed points, when unique, are MMSE optimal under proper choice of denoiser. Empirically, AMP behaves well with many other "sufficiently random" A (e.g., randomly sub-sampled Fourier A & i.i.d. sparse x).

The bad: With general A, AMP gives no guarantees.

The ugly: With some A (e.g., ill-conditioned or non-zero-mean A), AMP may fail to converge!

Part 3: Vector AMP (VAMP)

Vector AMP (VAMP)

Recall that the goal is linear regression: recover x_o from y = A x_o + N(0, I/γ_w). (From now on it will be easier to work with inverse variances, i.e., precisions.)

VAMP is like AMP in many ways, but supports a larger class of random matrices. VAMP yields a precise analysis for right-orthogonally invariant A, i.e., svd(A) = U S V^T with
- U: deterministic orthogonal
- S: deterministic diagonal
- V: "Haar," uniform on the set of orthogonal matrices,
of which i.i.d. Gaussian A is a special case.

VAMP can be derived as a form of message passing on a vector-valued factor graph with factors p(x_1), δ(x_1 - x_2), and N(y; A x_2, I/γ_w).
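
For experimentation, a right-orthogonally invariant A with a prescribed condition number can be constructed directly from its SVD, as in the experiments shown later in the deck. The sketch below uses scipy's Haar-orthogonal sampler and geometrically spaced singular values; it assumes m <= n and is only illustrative.

```python
# Illustrative construction of a right-orthogonally invariant A = U Diag(s) V^T
# with Haar-distributed V (and U) and geometrically spaced singular values so
# that kappa(A) = s_1/s_m equals a chosen value. Assumes m <= n.
import numpy as np
from scipy.stats import ortho_group

def rri_matrix(m, n, kappa, seed=0):
    U = ortho_group.rvs(m, random_state=seed)        # Haar-distributed orthogonal U
    V = ortho_group.rvs(n, random_state=seed + 1)    # Haar-distributed orthogonal V
    s = kappa ** (-np.arange(m) / (m - 1))           # s_j / s_{j-1} constant, s_1/s_m = kappa
    S = np.zeros((m, n))
    S[np.arange(m), np.arange(m)] = s
    return U @ S @ V.T, s

A, s = rri_matrix(200, 400, kappa=100.0)
print("condition number:", s[0] / s[-1])
```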

VAMP: The Algorithm

With SVD A = U Diag(s) V^T, damping ζ in (0, 1], and Lipschitz η_1^t(·): R^n -> R^n:
    Initialize r_1, γ_1.
    For t = 1, 2, 3, ...
        x̂_1 = η_1^t(r_1)                                    [denoising of r_1 = x_o + N(0, I/γ_1)]
        ξ_1 = γ_1 / div η_1^t(r_1)
        r_2 = (ξ_1 x̂_1 - γ_1 r_1)/(ξ_1 - γ_1)               [Onsager correction]
        γ_2 = ξ_1 - γ_1
        x̂_2 = η_2(r_2; γ_2)                                 [LMMSE estimate of x ~ N(r_2, I/γ_2) from y = Ax + N(0, I/γ_w)]
        ξ_2 = γ_2 / div η_2(r_2; γ_2)
        r_1 <- ζ(ξ_2 x̂_2 - γ_2 r_2)/(ξ_2 - γ_2) + (1 - ζ) r_1    [Onsager correction + damping]
        γ_1 <- ζ(ξ_2 - γ_2) + (1 - ζ) γ_1
where
    η_2(r_2; γ_2) = (γ_w A^T A + γ_2 I)^{-1} (γ_w A^T y + γ_2 r_2)
                  = V (γ_w Diag(s)^2 + γ_2 I)^{-1} ( γ_w Diag(s) U^T y + γ_2 V^T r_2 )
    ξ_2^{-1} = (1/n) Σ_{j=1}^n (γ_w s_j^2 + γ_2)^{-1}
... only two matrix-vector multiplications (with V and V^T) per iteration!
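
Here is a minimal numpy sketch of the VAMP recursion above, using a soft-thresholding η_1 and the SVD-based LMMSE stage η_2. The initialization (r_1 = A^T y with γ_1 set from its sample power), the threshold rule, the clipping of the divergence, and the problem sizes are all assumptions made for this illustration.

```python
# Minimal VAMP sketch: soft-thresholding denoising stage + exact LMMSE stage via the SVD.
import numpy as np

def soft_threshold(r, tau):
    return np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)

def vamp(y, A, gam_w, alpha=1.1, zeta=1.0, n_iters=50):
    m, n = A.shape
    U, s, Vt = np.linalg.svd(A, full_matrices=True)          # A = U Diag(s) V^T
    s2 = np.zeros(n); s2[:s.size] = s**2                     # squared singular values, zero-padded
    sUty = np.zeros(n); sUty[:s.size] = s * (U.T @ y)[:s.size]  # Diag(s)^T U^T y, zero-padded
    r1 = A.T @ y                                             # assumed initialization
    gam1 = 1.0 / np.mean(r1**2)
    xhat1 = np.zeros(n)
    for _ in range(n_iters):
        # denoising stage: r1 ~ x_o + N(0, I/gam1)
        tau = alpha / np.sqrt(gam1)
        xhat1 = soft_threshold(r1, tau)
        a1 = np.clip(np.mean(np.abs(r1) > tau), 1e-3, 1 - 1e-3)   # div eta_1 (clipped for safety)
        xi1 = gam1 / a1
        r2, gam2 = (xi1 * xhat1 - gam1 * r1) / (xi1 - gam1), xi1 - gam1
        # LMMSE stage: x ~ N(r2, I/gam2), y = A x + N(0, I/gam_w)
        d = 1.0 / (gam_w * s2 + gam2)
        xhat2 = Vt.T @ (d * (gam_w * sUty + gam2 * (Vt @ r2)))
        a2 = gam2 * np.mean(d)                                     # div eta_2
        xi2 = gam2 / a2
        r1 = zeta * (xi2 * xhat2 - gam2 * r2) / (xi2 - gam2) + (1 - zeta) * r1
        gam1 = zeta * (xi2 - gam2) + (1 - zeta) * gam1
    return xhat1

# Example usage
rng = np.random.default_rng(4)
m, n = 512, 1024
x_o = rng.binomial(1, 0.1, n) * rng.standard_normal(n)
A = rng.standard_normal((m, n)) / np.sqrt(m)
gam_w = 1e4                                                  # noise precision (high SNR)
y = A @ x_o + rng.standard_normal(m) / np.sqrt(gam_w)
xhat = vamp(y, A, gam_w)
print("VAMP NMSE [dB]:", 10 * np.log10(np.sum((xhat - x_o)**2) / np.sum(x_o**2)))
```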

VAMP's Denoising Property

Original VAMP Assumptions:
1. A in R^{m x n} is right-orthogonally invariant
2. m, n -> ∞ s.t. m/n -> δ in (0, ∞)                      ... "large-system limit"
3. [η_1^t(r)]_j = η_1^t(r_j) with Lipschitz η_1^t(·)      ... "separable denoising"

Under the assumptions above, the elements of the denoiser's input r_1^t obey [Rangan, S, Fletcher '16]
    r_{1,j}^t = x_{o,j} + N(0, ν_1^t).
That is, r_1^t is a Gaussian-noise corrupted version of the true signal x_o. As with AMP, we can interpret η_1(·) as a "denoiser."

VAMP's State Evolution

Assume empirical convergence of {s_j} -> S and {(r_{1,j}^0, x_{o,j})} -> (R_1^0, X_o), and define
    E_i^t := (1/n) ||x̂_i^t - x_o||^2   for i = 1, 2.
Then under the VAMP Assumptions, VAMP obeys the following state evolution:
    for t = 0, 1, 2, ...
        E_1^t = E{ [ η_1^t(X_o + N(0, ν_1^t)) - X_o ]^2 }                        [MSE]
        α_1^t = E{ η_1^{t'}(X_o + N(0, ν_1^t)) }                                 [divergence]
        γ_2^t = γ_1^t (1 - α_1^t)/α_1^t,     ν_2^t = [ E_1^t - (α_1^t)^2 ν_1^t ] / (1 - α_1^t)^2
        E_2^t = E{ (γ_w S^2 + γ_2^t)^{-1} }                                      [MSE]
        α_2^t = γ_2^t E{ (γ_w S^2 + γ_2^t)^{-1} }                                [divergence]
        γ_1^{t+1} = γ_2^t (1 - α_2^t)/α_2^t,     ν_1^{t+1} = [ E_2^t - (α_2^t)^2 ν_2^t ] / (1 - α_2^t)^2
Note: the above equations assume that η_2(·) uses the true noise precision γ_w. If not, there are more complicated expressions for E_2^t and α_2^t.

MMSE Optimality of VAMP

Now suppose that the VAMP Assumptions hold, and that
    y = A x_o + N(0, I/γ_w),
where the elements of x_o are i.i.d. draws of some random variable X_o. Suppose also that η_1^t(·) is the MMSE denoiser, i.e.,
    η_1^t(R_1) = E{ X_o | R_1 = X_o + N(0, ν_1^t) }.
Then, if the state evolution has a unique fixed point, the MSE of x̂_1^t converges to the replica prediction of the MMSE as t -> ∞ [Rangan, S, Fletcher '16; Tulino, Caire, Verdu, Shamai '13].

Experiment with MMSE Denoising

Comparison of several algorithms with MMSE denoising: AMP, S-AMP [Cakmak, Fleury, Winther '14], damped GAMP [Vila, S, Rangan, Krzakala, Zdeborová '15], VAMP, and the replica MMSE.
[Figure: median normalized MSE (dB) versus condition number κ(A) from 10^0 to 10^6. Setup: n = 1024, m/n = 0.5, A = U Diag(s) V^T with Haar U, V and s_j/s_{j-1} = φ for all j (φ determines κ(A)), Bernoulli-Gaussian X_o with Pr{X_o ≠ 0} = 0.1, SNR = 40 dB.]
VAMP achieves the replica MMSE over a wide range of condition numbers.

Experiment with MMSE Denoising (cont.)

Comparison of several algorithms with priors matched to the data: AMP, S-AMP, damped GAMP, VAMP, and the VAMP state evolution.
[Figure: median NMSE (dB) versus iteration, for condition numbers 1 and 1000. Same setup as the previous slide: n = 1024, m/n = 0.5, A = U Diag(s) V^T with Haar U, V and s_j/s_{j-1} = φ, Bernoulli-Gaussian X_o with Pr{X_o ≠ 0} = 0.1, SNR = 40 dB.]
VAMP is relatively fast even when A is ill-conditioned.

VAMP for Optimization

Consider the optimization problem
    x̂ = arg min_x { (1/2)||Ax - y||^2 + R(x) },
where R(·) is strictly convex and A is arbitrary (e.g., not necessarily right-orthogonally invariant). If we choose the denoiser
    η_1^t(r) = arg min_x { R(x) + (γ_1^t/2)||x - r||^2 } = prox_{R/γ_1^t}(r)
and the damping parameter
    ζ <= 2 min{γ_1, γ_2} / (γ_1 + γ_2),
then a double-loop version of VAMP converges [Fletcher, Sahraee, Rangan, S '16] to the x̂ above. Furthermore, if the γ_1 and γ_2 variables are fixed over the iterations, then VAMP reduces to the Peaceman-Rachford variant of ADMM.
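
As a small illustration (not from the slides), the prox denoiser for R(x) = lam*||x||_1 and the damping bound quoted above translate to:

```python
# Illustrative sketch: the prox denoiser used by VAMP-for-optimization when
# R(x) = lam*||x||_1, plus the damping bound zeta <= 2*min(gam1,gam2)/(gam1+gam2).
import numpy as np

def prox_denoiser(r, gam1, lam):
    """eta_1(r) = argmin_x lam*||x||_1 + (gam1/2)*||x - r||^2  (soft threshold at lam/gam1)."""
    return np.sign(r) * np.maximum(np.abs(r) - lam / gam1, 0.0)

def damping_bound(gam1, gam2):
    """Largest damping zeta satisfying the convergence condition quoted on this slide."""
    return 2 * min(gam1, gam2) / (gam1 + gam2)

print(prox_denoiser(np.array([2.0, -0.3]), gam1=4.0, lam=1.0), damping_bound(4.0, 1.0))
```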

Example of AMP & VAMP on the LASSO Problem

Solving LASSO to reconstruct a 40-sparse x in R^1000 from noisy y in R^400:
    x̂ = arg min_x { (1/2)||y - Ax||_2^2 + λ||x||_1 }.
[Figure: NMSE (dB) versus iteration for VAMP, AMP, Chambolle-Pock, and FISTA. Left panel: i.i.d. Gaussian A. Right panel: column-correlated (0.99) A.]

Deriving VAMP from EC

Ideally, we would like to compute the exact posterior density
    p(x|y) = p(x) ℓ(x; y) / Z(y)   for   Z(y) := ∫ p(x) ℓ(x; y) dx,
but the high-dimensional integral in Z(y) is difficult to compute.

We might try to circumvent Z(y) through variational optimization:
    p(x|y) = arg min_b D( b(x) || p(x|y) )    where D(·||·) is KL divergence
           = arg min_b { D( b(x) || p(x) ) + D( b(x) || ℓ(x; y) ) + H( b(x) ) }    ["Gibbs free energy"]
           = arg min_{b_1, b_2, q} { D( b_1(x) || p(x) ) + D( b_2(x) || ℓ(x; y) ) + H( q(x) ) }    [=: J_Gibbs(b_1, b_2, q)]
             s.t. b_1 = b_2 = q,
but the density constraint keeps the problem difficult.

Deriving VAMP from EC (cont.)

In the expectation-consistent approximation (EC) [Opper, Winther '04], the density constraint is relaxed to moment-matching constraints:
    p(x|y) ≈ arg min_{b_1, b_2, q} J_Gibbs(b_1, b_2, q)
    s.t.  E{x | b_1} = E{x | b_2} = E{x | q}
          tr(Cov{x | b_1}) = tr(Cov{x | b_2}) = tr(Cov{x | q}).
The stationary points of EC are the densities
    b_1(x) ∝ p(x) N(x; r_1, I/γ_1)
    b_2(x) ∝ ℓ(x; y) N(x; r_2, I/γ_2)
    q(x) = N(x; x̂, I/ξ)
such that E{x | b_1} = E{x | b_2} = x̂ and (1/n) tr(Cov{x | b_1}) = (1/n) tr(Cov{x | b_2}) = 1/ξ.
VAMP iteratively solves for the quantities r_1, γ_1, r_2, γ_2, x̂, ξ above. This leads to η_1^t(·) being the MMSE denoiser of r_1 = x_o + N(0, I/γ_1^t). In this setting, VAMP is simply an instance of expectation propagation (EP) [Minka '01]. But VAMP is more general than EP, in that it allows non-MMSE denoisers η_1.
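
For example, when p(x) is i.i.d. Bernoulli-Gaussian (the prior used in the earlier experiments), the b_1 stationary point yields the scalar MMSE denoiser E{X_o | r = X_o + N(0, nu)}, which has the closed form sketched below; the values of rho and nu_x are illustrative.

```python
# Sketch of the scalar MMSE denoiser E{X_o | r = X_o + N(0, nu)} for an i.i.d.
# Bernoulli-Gaussian prior X_o = B*Z with B ~ Bern(rho) and Z ~ N(0, nu_x).
import numpy as np

def mmse_denoiser_bg(r, nu, rho=0.1, nu_x=1.0):
    def gauss(x, var):
        return np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    ev_on = rho * gauss(r, nu_x + nu)          # evidence that the entry is "on"
    ev_off = (1 - rho) * gauss(r, nu)          # evidence that the entry is exactly zero
    pi = ev_on / (ev_on + ev_off)              # posterior activity probability
    return pi * r * nu_x / (nu_x + nu)         # posterior mean

r = np.array([0.05, 0.5, 3.0])
print(mmse_denoiser_bg(r, nu=0.01))
```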

Plug-and-Play VAMP

Recall the scalar denoising step of VAMP (or AMP):
    x̂_1 = η_1^t(r_1)   where   r_1 = x_o + N(0, I/γ_1^t).
For many signal classes (e.g., images), very sophisticated non-separable denoisers η_1(·) have been developed (e.g., BM3D, DnCNN). These non-separable denoisers can be "plugged into" VAMP! Their divergence can be approximated via Monte Carlo [Ramani, Blu, Unser '08]:
    div η_1^t(r) ≈ (1/K) Σ_{k=1}^K p_k^T [ η_1^t(r + ǫ p_k) - η_1^t(r) ] / (n ǫ)
with random vectors p_k in {±1}^n and small ǫ > 0. Empirically, K = 1 suffices.
A rigorous state evolution has been established for plug-and-play VAMP [Fletcher, Rangan, Sarkar, S '18].
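
A direct translation of this Monte Carlo divergence estimator into numpy, with a soft-thresholding denoiser standing in for a black-box denoiser such as BM3D; K, epsilon, and the test sizes are placeholder choices.

```python
# Monte Carlo divergence estimate for a black-box denoiser, following the formula on
# this slide; the example denoiser and epsilon value are placeholders.
import numpy as np

def mc_divergence(denoiser, r, K=1, eps=1e-3, rng=np.random.default_rng(5)):
    """Estimate div eta(r) = (1/n) tr(d eta / d r) using K random +/-1 probe vectors."""
    n = r.size
    eta_r = denoiser(r)
    est = 0.0
    for _ in range(K):
        p = rng.choice([-1.0, 1.0], size=n)
        est += p @ (denoiser(r + eps * p) - eta_r) / (n * eps)
    return est / K

# Example: for soft thresholding the estimate should be close to the exact divergence,
# i.e., the fraction of entries above the threshold.
soft = lambda r: np.sign(r) * np.maximum(np.abs(r) - 0.5, 0.0)
r = np.random.default_rng(6).standard_normal(10_000)
print(mc_divergence(soft, r), np.mean(np.abs(r) > 0.5))
```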

Experiment: Compressive Image Recovery with BM3D

Plug-and-play versions of VAMP and AMP behave similarly when A is i.i.d. Gaussian, but VAMP can handle a larger class of random matrices A.
[Figure: recovery performance for i.i.d. Gaussian A as a function of the sampling rate M/N, and for spread-spectrum A as a function of the condition number at a fixed sampling rate M/N.]
Results are averaged over 128x128 versions of the lena, barbara, boat, fingerprint, house, and peppers images and 10 random realizations of A and w.

Part 4: Unfolding AMP and VAMP into Deep Neural Networks

Deep Learning for Sparse Reconstruction

Until now we've focused on designing algorithms to recover x_o ~ p(x) from measurements y = A x_o + w:
    y -> [ algorithm based on the model p(x), A ] -> x̂
What about training deep networks to predict x_o from y?
    y -> [ deep network trained on data {(x_d, y_d)}_{d=1}^D ] -> x̂
Can we increase accuracy and/or decrease computation? Are there connections between these approaches?

Unfolding Algorithms into Networks

Consider, e.g., the classical sparse-reconstruction algorithm, ISTA [Chambolle, DeVore, Lee, Lucier '98]:
    v^t = y - A x̂^t
    x̂^{t+1} = η( x̂^t + A^T v^t ) = η( S x̂^t + B y )   with S := I - A^T A and B := A^T.
Gregor & LeCun [Gregor, LeCun '10] proposed to "unfold" it into a deep net and "learn" improved parameters using training data, yielding "learned ISTA" (LISTA):
    [Figure: LISTA as an unfolded feed-forward network with layers x̂^{t+1} = η(S x̂^t + B y), producing x̂^1, x̂^2, x̂^3, x̂^4.]
The same "unfolding & learning" idea can be used to improve AMP, yielding "learned AMP" (LAMP) [Borgerding, S '16]. A sketch of the LISTA forward pass is given below.
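
A minimal numpy sketch of the LISTA forward pass, with the parameters (B, S, and per-layer thresholds) simply initialized at ISTA-like values; in practice they would be learned by backpropagation over training pairs {(x_d, y_d)} in a framework such as PyTorch. The 1/L step-size scaling is my own stability assumption, since the slide's S = I - A^T A presumes a suitably normalized A.

```python
# Sketch of the unfolded LISTA forward pass; parameters here are only ISTA-like
# initializations of the quantities that would be learned from training data.
import numpy as np

def soft_threshold(r, lam):
    return np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)

def lista_forward(y, B, S, lambdas):
    """Unfolded LISTA: xhat^{t+1} = eta(S xhat^t + B y; lambda_t), with xhat^0 = 0."""
    xhat = soft_threshold(B @ y, lambdas[0])
    for lam in lambdas[1:]:
        xhat = soft_threshold(S @ xhat + B @ y, lam)
    return xhat

rng = np.random.default_rng(7)
m, n, T = 250, 500, 16
A = rng.standard_normal((m, n)) / np.sqrt(m)
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the LS gradient
B, S = A.T / L, np.eye(n) - A.T @ A / L         # ISTA-like initialization (1/L for stability)
lambdas = [0.05] * T

x_o = rng.binomial(1, 0.1, n) * rng.standard_normal(n)
y = A @ x_o + 0.01 * rng.standard_normal(m)
xhat = lista_forward(y, B, S, lambdas)
print("NMSE [dB]:", 10 * np.log10(np.sum((xhat - x_o) ** 2) / np.sum(x_o ** 2)))
```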

Onsager-Corrected Deep Networks

[Figure: the t-th LISTA layer, redrawn with inputs (x̂^t, v^t, y) and outputs (x̂^{t+1}, v^{t+1}, y): r^t = x̂^t + B^t v^t, x̂^{t+1} = η(r^t; λ^t), v^{t+1} = y - A^t x̂^{t+1}, which exploits the low-rank product B^t A^t in the linear stage S^t = I - B^t A^t; the t-th LAMP layer adds an Onsager correction to the residual path, as in the sketch below.]
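
Since only the layer diagram is sketched here, the following is a hedged numpy reconstruction of one soft-thresholding LAMP layer in the spirit of Borgerding & S '16: the LISTA-style update plus an AMP-style Onsager term (||x̂^{t+1}||_0 / m) * v^t on the residual path. The per-layer matrix B_t and threshold scale alpha_t are the quantities that would be learned; their exact form here is an assumption.

```python
# Hedged sketch of one soft-thresholding LAMP layer: LISTA update plus an
# AMP-style Onsager correction (||xhat||_0 / m) * v on the residual.
import numpy as np

def soft_threshold(r, tau):
    return np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)

def lamp_layer(xhat, v, y, A, B_t, alpha_t):
    m = y.size
    r = xhat + B_t @ v                               # denoiser input
    tau = alpha_t * np.sqrt(np.sum(v**2) / m)        # threshold tied to the residual energy
    xhat_new = soft_threshold(r, tau)
    onsager = (np.count_nonzero(xhat_new) / m) * v   # (n/m)*div(eta)*v for soft thresholding
    v_new = y - A @ xhat_new + onsager
    return xhat_new, v_new
```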
