
Compressive Sensing

Collection Editors: Mark A. Davenport, Richard Baraniuk, Ronald DeVore

Authors: Wai Lam Chan, Mark A. Davenport, Ronald DeVore, Marco F. Duarte, Jason Laska, Michael A. Lexa, Chris Rozell, Shriram Sarvotham, Michael Wakin

Online: http://cnx.org/content/col10458/1.1/

CONNEXIONS, Rice University, Houston, Texas

This selection and arrangement of content as a collection is copyrighted by Mark A. Davenport, Richard Baraniuk, Ronald DeVore. It is licensed under the Creative Commons Attribution 2.0 license. Collection structure revised: September 21, 2007. PDF generated: October 26, 2012. For copyright and attribution information for the modules contained in this collection, see the Attributions section.

Table of Contents

Introduction

1 Encoding of Analog Signals
  1.1 The Shannon-Whittaker Sampling Theorem
  1.2 Optimal Encoding
  1.3 Kolmogorov Entropy
  1.4 Optimal Encoding of Bandlimited Signals
  1.5 Stable Signal Representations

2 Sparse Signals and Sparse Approximation
  2.1 Preliminaries
  2.2 New Signal Models
  2.3 Sparse Approximation and $\ell_p$ Spaces
  2.4 Thresholding and Greedy Bases
  2.5 Greedy Algorithms

3 Compressive Sensing
  3.1 Compressive Sensing
  3.2 Gelfand n-widths
  3.3 Instance Optimality
  3.4 The Restricted Isometry Property
  3.5 The Nullspace Property
  3.6 Optimality and the MRIP
  3.7 Summary

Attributions


Introduction

(This content is available online at http://cnx.org/content/m15136/1.1/.)

Throughout this course, we shall be interested in the analog-to-digital conversion of signals $f(t)$, $t \in \mathbb{R}$. We shall always assume $f \in L_2$ and usually assume additional properties of $f$ in order to get meaningful results. In particular, we want to study two mappings: the encoding of $f$ into bit streams and the decoding of the bit streams into approximations or estimates of $f$,

$$E : f \mapsto \text{bitstream} \ \text{(Encoder)}, \qquad D : \text{bitstream} \mapsto \bar f \ \text{(Decoder)}, \qquad (1)$$

where $\bar f$ is the approximation of $f$ defined by $\bar f := D(E(f))$. In general, $f \ne \bar f$, so we shall need some way of quantifying how well $\bar f$ approximates $f$. Normally, the distortion between $f$ and $\bar f$ is measured by some norm $\| f - \bar f \|$. Typical choices include:

- the $L_2$ norm: $\| f \|_{L_2} := \left( \int_{\mathbb{R}} |f(t)|^2 \, dt \right)^{1/2}$
- the $L_\infty$ norm: $\| f \|_{L_\infty} := \sup_{t \in \mathbb{R}} |f(t)|$
- the $L_p$ norm: $\| f \|_{L_p} := \left( \int_{\mathbb{R}} |f(t)|^p \, dt \right)^{1/p}$ $\qquad (2)$
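To make these distortion measures concrete, here is a minimal numerical sketch. It is not part of the original text: the time grid, the test signal, and the crude quantizer standing in for $D(E(f))$ are our illustrative assumptions. It simply discretizes the three norms above and evaluates the distortion $\| f - \bar f \|$.

```python
# Minimal sketch: measuring the distortion ||f - f_bar|| in three norms.
# The grid, test signal, and stand-in "decoder output" are illustrative choices.
import numpy as np

t = np.linspace(-10.0, 10.0, 20001)    # time grid on [-10, 10]
dt = t[1] - t[0]

f = np.sinc(t)                          # test signal (numpy sinc(u) = sin(pi u)/(pi u))
f_bar = np.round(f * 16) / 16           # crude stand-in for D(E(f)): 4 fractional bits

err = f - f_bar
L2 = np.sqrt(np.sum(np.abs(err) ** 2) * dt)        # ||.||_{L2} via a Riemann sum
Linf = np.max(np.abs(err))                          # ||.||_{L_infinity}
p = 4
Lp = (np.sum(np.abs(err) ** p) * dt) ** (1.0 / p)  # ||.||_{Lp}

print(f"L2 = {L2:.3e}, Linf = {Linf:.3e}, L{p} = {Lp:.3e}")
```

Which norm one uses matters: the quantizer above makes a pointwise error of at most $1/32$, so the $L_\infty$ distortion sits at that ceiling, while the integral norms average the error over the window.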


Chapter 1: Encoding of Analog Signals

1.1 The Shannon-Whittaker Sampling Theorem

(This content is available online at http://cnx.org/content/m15146/1.2/.)

The classical theory behind encoding analog signals into bit streams, and decoding bit streams back into signals, rests on a famous sampling theorem that is typically referred to as the Shannon-Whittaker Sampling Theorem. In this course, this sampling theory will serve as a benchmark against which we shall compare the new theory of compressed sensing.

To introduce the Shannon-Whittaker theory, we first define the class of bandlimited signals. A bandlimited signal is a signal whose Fourier transform has finite support. We shall denote this class by $B_A$ and define it as

$$B_A := \{ f \in L_2(\mathbb{R}) : \hat f(\omega) = 0, \ |\omega| > A\pi \}. \qquad (1.1)$$

Here, the Fourier transform of $f$ is defined by

$$\hat f(\omega) := \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} f(t)\, e^{-i\omega t} \, dt. \qquad (1.2)$$

This formula holds for any $f \in L_1$ and extends easily to $f \in L_2$ via limits. The inversion of the Fourier transform is given by

$$f(t) := \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \hat f(\omega)\, e^{i\omega t} \, d\omega. \qquad (1.3)$$

Theorem 1.1 (Shannon-Whittaker Sampling Theorem):
If $f \in B_A$, then $f$ can be uniquely determined by the uniformly spaced samples $f\left(\frac{n}{A}\right)$, $n \in \mathbb{Z}$, and in fact

$$f(t) = \sum_{n \in \mathbb{Z}} f\left(\frac{n}{A}\right) \mathrm{sinc}\left(\pi(At - n)\right), \qquad (1.4)$$

where $\mathrm{sinc}(t) = \frac{\sin t}{t}$.

Proof:
It is enough to consider $A = 1$, since all other cases can be reduced to this through a simple change of variables. Because $f \in B_1$, the Fourier inversion formula takes the form

$$f(t) = \frac{1}{\sqrt{2\pi}} \int_{-\pi}^{\pi} \hat f(\omega)\, e^{i\omega t} \, d\omega. \qquad (1.5)$$

Define $F(\omega)$ as the $2\pi$-periodization of $\hat f$,

$$F(\omega) := \sum_{n \in \mathbb{Z}} \hat f(\omega + 2n\pi). \qquad (1.6)$$

Because $F(\omega)$ is periodic, it admits a Fourier series representation

$$F(\omega) = \sum_{n \in \mathbb{Z}} c_n e^{-in\omega}, \qquad (1.7)$$

where the Fourier coefficients $c_n$ are given by

$$c_n = \frac{1}{2\pi} \int_{-\pi}^{\pi} F(\omega)\, e^{in\omega} \, d\omega = \frac{1}{2\pi} \int_{-\pi}^{\pi} \hat f(\omega)\, e^{in\omega} \, d\omega. \qquad (1.8)$$

By comparing (1.8) with (1.5), we conclude that

$$c_n = \frac{1}{\sqrt{2\pi}} f(n). \qquad (1.9)$$

Therefore, by plugging (1.9) back into (1.7), we have

$$F(\omega) = \frac{1}{\sqrt{2\pi}} \sum_{n \in \mathbb{Z}} f(n)\, e^{-in\omega}. \qquad (1.10)$$

Now, because

$$\hat f(\omega) = F(\omega)\, \chi_{[-\pi,\pi]}(\omega) = \frac{1}{\sqrt{2\pi}} \sum_{n \in \mathbb{Z}} f(n)\, e^{-in\omega} \chi_{[-\pi,\pi]}(\omega), \qquad (1.11)$$

and because of the facts that

$$\mathcal{F}^{-1}\left(\chi_{[-\pi,\pi]}\right)(t) = \sqrt{2\pi}\, \mathrm{sinc}(\pi t) \quad \text{and} \quad \mathcal{F}^{-1}\left(e^{-in\omega}\, \hat g(\omega)\right)(t) = g(t - n), \qquad (1.12)$$

we conclude

$$f(t) = \sum_{n \in \mathbb{Z}} f(n)\, \mathrm{sinc}\left(\pi(t - n)\right). \qquad (1.13)$$

Comments:
1. (Good news) The set $\{\mathrm{sinc}(\pi(t - n))\}_{n \in \mathbb{Z}}$ is an orthogonal system and therefore has the property that the $L_2$ norm of the function, its samples, and its Fourier coefficients are related by

$$\| f \|_{L_2}^2 = \sum_{n \in \mathbb{Z}} |f(n)|^2 = 2\pi \sum_{n \in \mathbb{Z}} |c_n|^2. \qquad (1.14)$$

2. (Bad news) The representation of $f$ in terms of sinc functions is not a stable representation, i.e.,

$$\sum_{n \in \mathbb{Z}} \left| \mathrm{sinc}\left(\pi(t - n)\right) \right| \approx \sum_{n \in \mathbb{Z}} \frac{1}{|t - n| + 1} = \infty \quad \text{(diverges)}. \qquad (1.15)$$
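Both comments are easy to check numerically. The following sketch is our construction, not part of the module: the test signal and the truncation range are arbitrary choices. It reconstructs a signal in $B_1$ from its integer samples via a truncated version of (1.4), then shows that $\sum_n |\mathrm{sinc}(\pi(t - n))|$ grows like the harmonic series, which is the instability in (1.15).

```python
# Hedged numerical check of Theorem 1.1 (with A = 1) and of the "bad news" comment.
import numpy as np

def sinc(x):
    """The text's convention sinc(t) = sin(t)/t; numpy's sinc(u) is sin(pi u)/(pi u)."""
    return np.sinc(x / np.pi)

# A test signal in B_1: its Fourier transform is supported in [-pi/2, pi/2].
f = lambda t: sinc(np.pi * (t - 0.3) / 2.0)

t = np.linspace(-5.0, 5.0, 1001)
ns = np.arange(-200, 201)              # truncate the series (1.4) at |n| <= 200

rec = sum(f(n) * sinc(np.pi * (t - n)) for n in ns)
print("max reconstruction error:", np.max(np.abs(rec - f(t))))  # small; truncation only

# Instability (1.15): the absolute sums diverge like the harmonic series.
t0 = 0.5
for N in (10, 100, 1000, 10000):
    m = np.arange(-N, N + 1)
    print(f"sum over |n| <= {N}: {np.sum(np.abs(sinc(np.pi * (t0 - m)))):.2f}")
```

The growing sums in the second loop are exactly why quantizing every sample to accuracy $2^{-k}$ does not immediately give an $L_\infty$ reconstruction guarantee; the smooth windows of Section 1.5 repair this.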

1.2 Optimal Encoding

(This content is available online at http://cnx.org/content/m15139/1.1/.)

We shall now consider the encoding of signals on $[-T, T]$, where $T > 0$ is fixed. Ultimately we shall be interested in encoding classes of bandlimited signals like the class $B_A$. However, we begin the story by considering the more general setting of encoding the elements of any given compact subset $K$ of a normed linear space $X$. One can determine the best encoding of $K$ by what is known as the Kolmogorov entropy of $K$ in $X$.

To begin, let us consider an encoder-decoder pair $(E, D)$. $E$ maps $K$ to a finite stream of bits. $D$ maps a stream of bits to a signal in $X$. This is illustrated in Figure 1.1. Note that many functions can be mapped onto the same bitstream.

Figure 1.1: Illustration of encoding and decoding.

Define the distortion $d$ for this encoder-decoder pair by

$$d(K, E, D, X) := \sup_{f \in K} \| f - D(Ef) \|_X. \qquad (1.16)$$

Let $n(K, E) := \sup_{f \in K} \# Ef$, where $\# Ef$ is the number of bits in the bitstream $Ef$. Thus $n$ is the maximum length of the bitstreams for the various $f \in K$. There are two ways we can define optimal encoding:

1. Prescribe $\varepsilon$, the maximum distortion that we are willing to tolerate. For this $\varepsilon$, find the smallest $n_\varepsilon(K, X) := \inf_{(E, D)} \{ n(K, E) : d(K, E, D, X) \le \varepsilon \}$. This is the smallest bit budget under which we could encode all elements of $K$ to distortion $\varepsilon$.
2. Prescribe $N$: find the smallest distortion $d(K, E, D, X)$ over all $E, D$ with $n(K, E) \le N$. This is the best encoding performance possible with a prescribed bit budget.

There is a simple mathematical solution to these two encoding problems based on the notion of Kolmogorov entropy.

1.3 Kolmogorov Entropy

(This content is available online at http://cnx.org/content/m15137/1.1/.)

Figure 1.2: Coverings of $K$ by balls of radius $\varepsilon$.

Given $\varepsilon > 0$ and the compact set $K$, consider all coverings of $K$ by balls of radius $\varepsilon$, as shown in Figure 1.2. In other words,

$$K \subset \bigcup_{i=1}^{N} b(f_i, \varepsilon). \qquad (1.17)$$

Let $N_\varepsilon := \inf \{ N : \text{over all such covers} \}$. $N_\varepsilon$ is called the covering number of $K$. Since it depends on $K$ and $X$, we write it as $N_\varepsilon = N_\varepsilon(K, X)$.

Definition (Kolmogorov entropy):
The Kolmogorov entropy, denoted by $H_\varepsilon(K, X)$, of the compact set $K$ in $X$ is defined as the logarithm of the covering number:

$$H_\varepsilon(K, X) := \log_2 N_\varepsilon(K, X). \qquad (1.18)$$

The Kolmogorov entropy solves our problem of optimal encoding in the sense of the following theorem.

Theorem 1.2:
For any compact set $K \subset X$, we have $n_\varepsilon(K, X) = \lceil H_\varepsilon(K, X) \rceil$, where $\lceil \cdot \rceil$ is the ceiling function.

Proof:
Sketch: We can define an encoder-decoder pair as follows. To encode $f \in K$, just specify which ball of the cover it lies in. Because the number of balls is $N_\varepsilon(K, X)$, we need at most $\lceil \log_2 N_\varepsilon(K, X) \rceil$ bits to specify any such ball. To decode, just take the center of the ball specified by the bitstream. It is now easy to see that this encoder-decoder pair is optimal in either of the senses given above.

The above encoder is not practical. However, the Kolmogorov entropy tells us the best performance we can expect from any encoder-decoder pair. Kolmogorov entropy is defined in the deterministic setting. It is the analogue of the Shannon entropy, which is defined in a stochastic setting.
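The encoder in the proof of Theorem 1.2 is easy to instantiate for a toy set. In the sketch below, which is our example rather than the module's, we take $K = [0, 1]$ inside $X = \mathbb{R}$ with the absolute value as the norm. Balls of radius $\varepsilon$ are then intervals of length $2\varepsilon$, so $N_\varepsilon = \lceil 1/(2\varepsilon) \rceil$, and about $\log_2(1/\varepsilon)$ bits suffice.

```python
# Toy instance of the entropy encoder/decoder from Theorem 1.2 on K = [0, 1].
import math

def entropy_encode(x, eps):
    """Encode x in [0, 1]: emit the index of the covering interval containing x."""
    n_balls = math.ceil(1.0 / (2.0 * eps))           # N_eps balls of radius eps cover [0, 1]
    index = min(int(x // (2.0 * eps)), n_balls - 1)  # ball i covers [2*i*eps, 2*(i+1)*eps]
    bits = math.ceil(math.log2(n_balls))             # = ceil(H_eps) bits, as in Theorem 1.2
    return index, bits

def entropy_decode(index, eps):
    """Decode: return the ball center, so that |x - x_hat| <= eps."""
    return (2 * index + 1) * eps

eps = 0.01
x = 0.7317
i, bits = entropy_encode(x, eps)
x_hat = entropy_decode(i, eps)
print(f"sent {bits} bits, |x - x_hat| = {abs(x - x_hat):.4f} <= {eps}")
```

As the theorem promises, the decoder never reproduces $x$ exactly; it only guarantees distortion at most $\varepsilon$, at a bit cost that grows like $\log_2(1/\varepsilon)$.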

1.4 Optimal Encoding of Bandlimited Signals

(This content is available online at http://cnx.org/content/m15140/1.2/.)

We now turn back to the encoding of signals. We are interested in encoding the set

$$B_A(M) := \{ f \in B_A : |f(t)| \le M, \ t \in \mathbb{R} \}, \qquad (1.19)$$

where $M$ is arbitrary but fixed. We shall restrict our discussion to the case where distortion is measured in $L_\infty[-T, T]$, where $T > 0$ is arbitrary but fixed. Then $B_A(M)$ is a compact subset of $L_\infty[-T, T]$.

Figure 1.3: Sample points $\frac{n}{\lambda A}$ are chosen in the interval $[-T(1+\delta), T(1+\delta)]$.

We shall sketch how one can construct an asymptotically optimal encoder/decoder for $B_A$; the details for this construction can be found in the references. We know that $\hat f(\omega) = 0$ for $|\omega| > A\pi$ and that $\| f \|_{L_\infty} \le M$. How can we encode $f$ in practice? We begin by choosing $\lambda = \lambda(T) > 1$ (see Figure 1.3), which will represent a slight oversampling factor we shall utilize. Given a target distortion $\varepsilon > 0$, we choose $k$ so that $2^{-k} \le \varepsilon < 2^{-k+1}$. Given $f$, we shall encode $f$ by first taking samples $f\left(\frac{n}{\lambda A}\right)$ for $\frac{n}{\lambda A} \in [-T(1+\delta), T(1+\delta)]$, where $\delta = \delta(T) > 0$. In other words, we sample $f$ on a slightly larger interval than $[-T, T]$. For each sample $f\left(\frac{n}{\lambda A}\right)$, we shall use the first $k + k_0(T)$ bits of its binary expansion. In other words, our encoder takes $f$, computes the samples $f\left(\frac{n}{\lambda A}\right)$, and then assigns to each of them the first $k + k_0(T)$ bits of this number.

To decode, the receiver takes the bits and constructs an approximation $\bar f\left(\frac{n}{\lambda A}\right)$ to $f\left(\frac{n}{\lambda A}\right)$ from the bits provided. Notice that we have the accuracy

$$\left| f\left(\tfrac{n}{\lambda A}\right) - \bar f\left(\tfrac{n}{\lambda A}\right) \right| \le 2^{-(k + k_0)} \cdot M. \qquad (1.20)$$

We utilize the function $g_\lambda$ satisfying (1.35) to define

$$\bar f(t) := \sum_{n \in N_T} \bar f\left(\tfrac{n}{\lambda A}\right) g_\lambda(\lambda A t - n), \qquad (1.21)$$

where

$$N_T := \left\{ n : \left| \tfrac{n}{\lambda A} \right| \le T(1+\delta) \right\}. \qquad (1.22)$$

We then have

$$|f(t) - \bar f(t)| \le \sum_{n \in N_T} \left| f\left(\tfrac{n}{\lambda A}\right) - \bar f\left(\tfrac{n}{\lambda A}\right) \right| \cdot \left| g_\lambda(\lambda A t - n) \right| + \sum_{|n/(\lambda A)| > T(1+\delta)} \left| f\left(\tfrac{n}{\lambda A}\right) \right| \cdot \left| g_\lambda(\lambda A t - n) \right|. \qquad (1.23)$$

The term $\left| f\left(\frac{n}{\lambda A}\right) - \bar f\left(\frac{n}{\lambda A}\right) \right|$ that appears in the first summation in (1.23) is bounded by $M \cdot 2^{-(k+k_0)}$. The term $\left| f\left(\frac{n}{\lambda A}\right) \right|$ that appears in the second summation in the same equation is bounded by $M$. Therefore,

$$|f(t) - \bar f(t)| \le \sum_{n \in N_T} M \cdot 2^{-(k+k_0)} \cdot \left| g_\lambda(\lambda A t - n) \right| + \sum_{|n/(\lambda A)| > T(1+\delta)} M \cdot \left| g_\lambda(\lambda A t - n) \right| =: S_1 + S_2. \qquad (1.24)$$

We can estimate $S_1$ by

$$S_1 \le M \cdot 2^{-(k+k_0)} \cdot \sum_{n} \left| g_\lambda(\lambda A t - n) \right| \le M \cdot C_0(\lambda) \cdot 2^{-(k+k_0)}, \qquad (1.25)$$

because $g_\lambda(\cdot)$ decays fast. Therefore, if we choose $k_0$ sufficiently large, then $S_1 \le M \cdot C_0(\lambda) \cdot 2^{-(k+k_0)} \le \frac{\varepsilon}{2}$. The second summation $S_2$ can also be bounded by $\varepsilon/2$ by using the fast decay of the function $g_\lambda$ (see (1.35)).

To make the encoder/decoder specific we need to precisely define $\delta$ and $\lambda$. It turns out that the best choices (in terms of bit-rate performance on the class $B_A$) depend on $T$, but $\delta(T) \to 0$ and $\lambda(T) \to 1$ as $T \to \infty$. Recall that Shannon sampling requires $2TA$ samples for the interval $[-T, T]$. Since our encoder/decoder uses $k + k_0$ bits per sample, the total number of bits is $(k + k_0) \cdot 2\lambda A T (1+\delta)$, and so coding will require roughly $k$ bits per Shannon sample.

This encoder/decoder can be proven to be optimal in the sense of averaged performance, as we shall now describe. The average performance of optimal encoding is defined by

$$\lim_{T \to \infty} \frac{n_\varepsilon\left(B_A(M), L_\infty[-T, T]\right)}{2T}. \qquad (1.26)$$

If we replace the optimal bit rate $n_\varepsilon$ in (1.26) by the number of bits required by our encoder/decoder, then the resulting limit will be the same as that in (1.26).

In summary, to encode bandlimited signals on an interval $[-T, T]$, an optimal strategy is to sample at a slightly higher rate than Nyquist and on a slightly larger interval than $[-T, T]$. Each sample should then be quantized by using the binary expansion of the sample. In this way, for an investment of $k$ bits per Nyquist-rate sample, we get a distortion of $2^{-k}$.

To get a feel for the number of bits required by such an encoder, let us say $A = 10^6$ (signals bandlimited to 1 MHz), $T = 24$ hours $\approx 10^5$ seconds, and $k = 10$ bits. Then $A \cdot k \cdot 2T \approx 10^6 \cdot 10 \cdot 2 \cdot 10^5 = 2 \times 10^{12}$ bits. This is too big!

The above encoding is known as Pulse Code Modulation (PCM). In practice, people frequently use another encoder called Sigma-Delta modulation. Instead of oversampling just slightly, Sigma-Delta oversamples a lot and then assigns only one (or a few) bits per sample. Why is Sigma-Delta preferred to PCM in practice? There are two reasons commonly given:

1. Getting accurate samples, quantization, etc., is not practical because of noise. For better accuracy, we need more expensive hardware.
2. Noise shaping: in Sigma-Delta, the distortion is higher, but it is spread over frequencies outside of the desired range.

In PCM, the distortion decays exponentially in the number of bits per sample (like $2^{-k}$), whereas in Sigma-Delta it decays only polynomially in the oversampling factor $m$ (like $m^{-1}$ for a first-order scheme). Although the distortion decays faster in PCM, the distortion in Sigma-Delta is spread outside the desired frequency range.
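The PCM encoder above is short to simulate. In the following sketch the test signal, the oversampling factor, the enlargement $\delta$, and $k_0$ are our illustrative parameter choices, not values from the text; the point is only the per-sample accuracy (1.20).

```python
# Hedged sketch of the PCM encoder: sample slightly faster than Nyquist on a
# slightly enlarged interval, keep k + k0 bits of each sample's binary expansion.
import numpy as np

def pcm_encode(samples, M, k, k0=4):
    """Keep the first k + k0 binary-expansion bits of samples bounded by M."""
    b = k + k0
    return np.floor((samples / M) * 2**b) / 2**b * M   # per-sample error < M * 2**-b

A, lam, delta, T = 1.0, 1.1, 0.1, 10.0        # band limit, oversampling, enlargement, window
n_max = int(np.ceil(lam * A * T * (1 + delta)))
n = np.arange(-n_max, n_max + 1)
tn = n / (lam * A)                             # sample points in [-T(1+delta), T(1+delta)]

f = lambda t: np.sinc(0.8 * t)                 # test signal in B_1 with |f| <= M = 1
q = pcm_encode(f(tn), M=1.0, k=10)
print("max sample error:", np.max(np.abs(q - f(tn))), "vs 2^-(k+k0) =", 2.0 ** -14)
```

For contrast, here is a minimal first-order Sigma-Delta loop. This is the standard textbook scheme, implemented by us rather than specified in this module: one bit per sample, heavy oversampling, and a bounded integrator state, so that lowpass filtering the bitstream recovers the signal with error decaying only like the reciprocal of the averaging length.

```python
# Hedged sketch of first-order Sigma-Delta modulation (standard scheme, our code).
import numpy as np

def sigma_delta(x):
    """First-order Sigma-Delta: inputs in [-1, 1] -> one bit per sample in {-1, +1}."""
    u, bits = 0.0, np.empty_like(x)
    for i, xi in enumerate(x):
        b = 1.0 if u + xi >= 0 else -1.0   # 1-bit quantizer
        u = u + xi - b                      # integrator; |u| stays <= 1 when |x| <= 1
        bits[i] = b
    return bits

m = 256                                     # samples per unit time (heavy oversampling)
t = np.arange(0.0, 8.0, 1.0 / m)
x = 0.5 * np.sin(2.0 * np.pi * t / 8.0)     # slowly varying test input
b = sigma_delta(x)

L = 64                                      # decoder: moving average over L bits
x_hat = np.convolve(b, np.ones(L) / L, mode="same")
err = np.abs(x_hat - x)[L:-L]               # ignore the filter's edge effects
print("max in-band error:", err.max())      # roughly 2/L: polynomial decay, not 2^-k
```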

1.5 Stable Signal Representations

(This content is available online at http://cnx.org/content/m15144/1.1/.)

To fix the instability of the Shannon representation, we assume that the signal is slightly more bandlimited than before,

$$\hat f(\omega) = 0 \ \text{ for } \ |\omega| \ge \pi - \delta, \quad \delta > 0, \qquad (1.27)$$

and instead of using $\chi_{[-\pi,\pi]}$, we multiply by another function $\hat g(\omega)$ which is very similar in form to the characteristic function but decays at its boundaries in a smoother fashion (i.e., it has more derivatives). A candidate function $\hat g$ is sketched in Figure 1.4.

Figure 1.4: Sketch of $\hat g$.

Now, it is a property of the Fourier transform that increased smoothness in one domain translates into faster decay in the other. Thus, we can fix our instability problem by choosing $\hat g$ so that $\hat g$ is smooth, $\hat g(\omega) = 1$ for $|\omega| \le \pi - \delta$, and $\hat g(\omega) = 0$ for $|\omega| \ge \pi$. By choosing the smoothness of $\hat g$ suitably large, we can, for any given $m \ge 1$, choose $g$ to satisfy

$$|g(t)| \le \frac{C}{(|t| + 1)^m} \qquad (1.28)$$

for some constant $C > 0$. Using such a $g$, we can rewrite (1.11) as

$$\hat f(\omega) = F(\omega)\, \hat g(\omega) = \frac{1}{\sqrt{2\pi}} \sum_{n \in \mathbb{Z}} f(n)\, e^{-in\omega}\, \hat g(\omega). \qquad (1.29)$$

Thus, we have the new representation

$$f(t) = \sum_{n \in \mathbb{Z}} f(n)\, g(t - n), \qquad (1.30)$$

where we gain stability from our additional assumption that the signal is bandlimited to $[-(\pi - \delta), \pi - \delta]$.
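The smoothness-to-decay principle behind (1.28) can be checked numerically. The sketch below is our construction: the particular $C^\infty$ ramp and the value of $\delta$ are assumptions, and the normalization follows (1.3). It builds a window $\hat g$ equal to 1 on $[-(\pi - \delta), \pi - \delta]$ and 0 outside $[-\pi, \pi]$, evaluates $g(t)$ by numerical Fourier inversion, and compares the decay with the sinc kernel's $1/(\pi |t|)$ envelope.

```python
# Hedged sketch: a C-infinity window g_hat and the fast decay of its inverse transform.
import numpy as np

def smooth_step(x):
    """C-infinity ramp: 0 for x <= 0, 1 for x >= 1."""
    h = lambda u: np.where(u > 0, np.exp(-1.0 / np.maximum(u, 1e-12)), 0.0)
    return h(x) / (h(x) + h(1.0 - x))

delta = 0.5
w = np.linspace(-np.pi, np.pi, 1 << 16)
dw = w[1] - w[0]
g_hat = smooth_step((np.pi - np.abs(w)) / delta)   # 1 on |w| <= pi - delta, 0 at |w| = pi

# g(t) = (1/sqrt(2 pi)) * integral of g_hat(w) e^{i w t} dw, by a Riemann sum.
for t in (1.0, 5.0, 10.0, 20.0, 40.0):
    g_t = (np.sum(g_hat * np.exp(1j * w * t)) * dw).real / np.sqrt(2.0 * np.pi)
    print(f"t = {t:5.1f}: |g(t)| = {abs(g_t):.3e}   sinc envelope 1/(pi t) = {1/(np.pi*t):.3e}")
```

The printed values of $|g(t)|$ fall off faster than any fixed power of $1/|t|$, while the sinc envelope only decays like $1/|t|$; this is exactly the stability gain of (1.30) over (1.4).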

Does this assumption really hurt? No, not really, because if our signal is really bandlimited to $[-\pi, \pi]$ and not $[-(\pi - \delta), \pi - \delta]$, we can always take a slightly larger bandwidth, say $[-\lambda\pi, \lambda\pi]$ where $\lambda$ is a little larger than one, and carry out the same analysis as above. Doing so would only mean slightly oversampling the signal (a small cost).

Recall that in the end we want to convert analog signals into bit streams. Thus far, we have the two representations

$$f(t) = \sum_{n \in \mathbb{Z}} f(n)\, \mathrm{sinc}(\pi(t - n)), \qquad f(t) = \sum_{n \in \mathbb{Z}} f\left(\tfrac{n}{\lambda}\right) g(\lambda t - n). \qquad (1.31)$$

Shannon's theorem tells us that if $f \in B_A$, we should sample $f$ at the Nyquist rate $A$ (which is twice the support of $\hat f$) and then take the binary representation of the samples. Our more stable representation says to slightly oversample $f$ and then convert to a binary representation. Both representations offer perfect reconstruction, although in the more stable representation one is saddled with the additional task of choosing an appropriate $\lambda$.

In practical situations, we shall be interested in approximating $f$ on an interval $[-T, T]$ for some $T > 0$, and not for all time. Questions we still want to answer include:

1. How many bits do we need to represent $f \in B_A(1)$ on some interval $[-T, T]$ in the norm $L_\infty[-T, T]$?
2. Using this methodology, what is the optimal way of encoding?
3. How is the optimal encoding implemented?

Towards this end, we define

$$B_A := \{ f \in L_2(\mathbb{R}) : \hat f(\omega) = 0, \ |\omega| > A\pi \}. \qquad (1.32)$$

Then for any $f \in B_A$, we can write

$$f(t) = \sum_{n} f\left(\tfrac{n}{A}\right) \mathrm{sinc}\left(\pi(At - n)\right). \qquad (1.33)$$

In other words, samples at $0, \pm\frac{1}{A}, \pm\frac{2}{A}, \dots$ are sufficient to reconstruct $f$. Recall also that $\mathrm{sinc}(x) = \frac{\sin x}{x}$ decays poorly (leading to numerical instability). We can overcome this problem by slight oversampling. Say we oversample by a factor $\lambda > 1$. Then we can write

$$f(t) = \sum_{n} f\left(\tfrac{n}{\lambda A}\right) g_\lambda(\lambda A t - n). \qquad (1.34)$$

Figure 1.5: Fourier transform of $g_\lambda(\cdot)$.

Hence we need samples at $0, \pm\frac{1}{\lambda A}, \pm\frac{2}{\lambda A}$, etc. What is the advantage? Sampling more often than necessary buys us stability, because we now have a choice for $g_\lambda(\cdot)$. If we choose $g_\lambda(\cdot)$ infinitely differentiable with a Fourier transform as shown in Figure 1.5, we can obtain

$$|g_\lambda(t)| \le \frac{c_{\lambda,k}}{(1 + |t|)^k}, \qquad k = 1, 2, \dots, \qquad (1.35)$$

and therefore $g_\lambda(\cdot)$ decays very fast. In other words, a sample's influence is felt only locally. Note, however, that oversampling generates basis functions that are redundant (linearly dependent), unlike the integer translates of the $\mathrm{sinc}(\cdot)$ function.

Figure 1.6: To reconstruct signals in $[-T, T]$, the sampling interval is $[-cT, cT]$.

If we restrict our reconstruction to $t$ in the interval $[-T, T]$, we will need samples only from $[-cT, cT]$ for some $c > 1$ (see Figure 1.6), because the distant samples will have little effect on the reconstruction in $[-T, T]$.
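To close the chapter, here is a numerical sketch of the oversampled reconstruction (1.34). The text leaves $g_\lambda$ abstract; as a concrete stand-in, which is our choice and not the module's construction, we use the classical raised-cosine Nyquist kernel. Its Fourier transform is flat on the signal band and rolls off smoothly to zero, as in Figure 1.5, and it decays like $1/|t|^3$ instead of the sinc kernel's $1/|t|$, so each sample's influence is essentially local.

```python
# Hedged sketch of (1.34) with a raised-cosine kernel standing in for g_lambda.
import numpy as np

def raised_cosine(u, beta):
    """Raised-cosine Nyquist kernel; u is time in units of the sampling period."""
    den = 1.0 - (2.0 * beta * u) ** 2
    safe = np.abs(den) > 1e-10
    limit = (np.pi / 4.0) * np.sinc(1.0 / (2.0 * beta))     # removable singularity value
    return np.where(safe, np.sinc(u) * np.cos(np.pi * beta * u) / np.where(safe, den, 1.0), limit)

B, lam = 1.0, 1.25                        # signal band [-B, B] in Hz, oversampling factor
fs = 2.0 * B * lam                         # sampling rate, slightly above the Nyquist rate 2B
beta = 1.0 - 1.0 / lam                     # roll-off: the flat passband still covers [-B, B]

f = lambda t: np.sinc(2.0 * B * t - 0.5)   # test signal bandlimited to [-B, B]
t = np.linspace(-5.0, 5.0, 1001)
ns = np.arange(-200, 201)                  # truncated sample index set

rec = sum(f(n / fs) * raised_cosine(fs * t - n, beta) for n in ns)
print("max reconstruction error:", np.max(np.abs(rec - f(t))))
```

With $\lambda = 1.25$, the kernel's flat band $(1 - \beta) f_s / 2 = B$ exactly covers the signal band, the numerical analogue of requiring $\hat g_\lambda = 1$ on the support of $\hat f$. Because of the cubic decay, restricting the sample set to a modestly enlarged interval, as in Figure 1.6, changes the reconstruction on $[-T, T]$ very little.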


