Compressive Sensing: Theory, Applications - sbu.ac.ir


Compressive Sensing: Theory, Applications, Challenges and Future
Dr. Mohammad Eslami and Seyed Hamid Safavi
Shahid Beheshti University, Digital Signal Processing LaboratorY (DiSPLaY)
http://display.sbu.ac.ir
m_eslami@sbu.ac.ir, h_safavi@sbu.ac.ir
21 May 2017

Outline
1. Introduction
2. Compressive Sensing
   - Recovery Constraints: Spark, NSP, RIP, Mutual Coherence
   - Recovery Algorithms: MP, OMP, BP, SL0
3. Challenges
4. Applications
5. Solvers: SparseLab, CVX, l1-Magic, SPGL1, SL0, ISTA/FISTA, ADMM
6. Summary and Future Directions

Introduction

Atom Composition
Signal $x \in \mathbb{R}^n$ and dictionary $D \in \mathbb{R}^{n \times m}$: $x = D\alpha$ with $\alpha \in \mathbb{R}^m$ and $\|\alpha\|_0 = k_0$.
With noise: $x = D\alpha + e$, $\|e\|_2 \le \varepsilon$, $k_0 \ll m$ ($\alpha$ is $k_0$-sparse), which defines the model $\mathcal{M}(D, k_0, \varepsilon)$.

Atomic Decomposition
$P_0(D, x, \delta): \ \min_\alpha \|\alpha\|_0 \ \text{subject to} \ \|x - D\alpha\|_2 \le \delta$
$P_1(D, x, \delta): \ \min_\alpha \|\alpha\|_1 \ \text{subject to} \ \|x - D\alpha\|_2 \le \delta$

Inverse Problem
$y = Hx + \upsilon$ with $\|\upsilon\|_2 \le \delta$: solve $P_0(HD, y, \delta + \varepsilon)$ and set $\hat{x} = D\alpha_0^{\delta+\varepsilon}$.

Inverse Problem
Synthesis based: $\alpha_0^{\delta+\varepsilon} = \arg\min_\alpha \|\alpha\|_0 \ \text{subject to} \ \|y - HD\alpha\|_2 \le \delta + \varepsilon$, then $\hat{x} = D\alpha_0^{\delta+\varepsilon}$.
Analysis based: $\hat{x} = \arg\min_x \|D^* x\|_0 \ \text{subject to} \ \|y - Hx\|_2 \le \delta + \varepsilon$.

Compressive Sensing (example)
$x$: $N = 512$, 20-sparse; measurements $y = \Phi x$ with $L = 50$. (figure)

Compressive Sensing (example)
Reconstruction attempt $\tilde{x} = \Phi^{-1} y$. (figure)

Compressive Sensing (example)
Recovered $\hat{x}$. (figure)

Problem Definition: Linear Inverse Problems
System of linear equations:
$y_1 = a_{11} x_1 + a_{12} x_2 + \dots + a_{1N} x_N$
$y_2 = a_{21} x_1 + a_{22} x_2 + \dots + a_{2N} x_N$
$\vdots$
$y_M = a_{M1} x_1 + a_{M2} x_2 + \dots + a_{MN} x_N$
or, in matrix form, $y = Ax$.
Applications of linear inverse problems: optics, radar, acoustics, communication theory, signal processing, medical imaging, computer vision, geophysics, oceanography, astronomy, remote sensing, natural language processing, machine learning, nondestructive testing, and many other fields.

Problem Definition: Linear Inverse Problems
y: measurements; $A \in \mathbb{R}^{M \times N}$: linear measurement matrix.
M = N: A is a square matrix and the system has a unique solution if det(A) ≠ 0.
M < N: the problem is underdetermined (fewer equations than unknowns). An underdetermined linear system has either no solution or infinitely many solutions.
M > N: the problem is overdetermined (more equations than unknowns). An overdetermined system is almost always inconsistent (it has no solution) when constructed with random coefficients.
The homogeneous case has the trivial all-zero solution.
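To make the three regimes concrete, here is a small MATLAB sketch (ours, not from the slides; the variable names are illustrative) that sets up each case and solves it with standard tools:

% Illustrative sketch: the three regimes of y = A*x.
rng(0);
N = 8;

% M = N: square; invertible with probability 1 for a random Gaussian matrix
A = randn(N, N);  x = randn(N, 1);  y = A * x;
x_sq = A \ y;                        % unique solution since det(A) ~= 0

% M < N: underdetermined, infinitely many solutions; pinv returns the
% minimum l2-norm one, which is generally NOT the sparse x we are after
M = 4;
A_u = randn(M, N);  y_u = A_u * x;
x_min_norm = pinv(A_u) * y_u;

% M > N: overdetermined; with noisy data there is usually no exact solution,
% and backslash returns the least-squares fit
M = 16;
A_o = randn(M, N);  y_o = A_o * x + 0.01 * randn(M, 1);
x_ls = A_o \ y_o;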

Problem Definition: ill-posedness (figure)

Problem Definition: ill-posedness
Q. What is ill-posedness?
A. Instability with respect to measurement errors.
Q. How to deal with ill-posed inverse problems?
A. It depends! There is no universal method for solving ill-posed problems. In every specific case, the main "trouble" (instability) has to be tackled in its own way.
"All happy families resemble one another, each unhappy family is unhappy in its own way." Leo Tolstoy
Solution: add regularization, such as Tikhonov regularization, $\ell_1$ regularization, etc.

Introduction
Finding the fake coin among the coins? (figure)

Introduction (figure)

Introduction
Finding the fake coin among the coins? (figure)

Introduction (figure)

Compressive Sensing (figure)

Why CS?
Imaging at wavelengths where silicon is blind is considerably more complicated, bulky, and expensive. Thus, for comparable resolution, a US$500 digital camera for the visible becomes a US$50,000 camera for the infrared.

Dimensionality Reduction
Data with high-frequency content is often not intrinsically high-dimensional. Signals often obey low-dimensional models such as:
- Sparsity
- Low-rank matrices
- Manifolds
The intrinsic dimension can be much less than the ambient dimension.
Courtesy of Dr. Mark Davenport

Why CS?
Success has many fathers...
Sampling Theorem: sampling at twice the highest frequency (Whittaker, Nyquist, Kotelnikov, Shannon).
Compressive Sensing: sampling at a sub-Nyquist rate! (Donoho, Candes, Romberg, Tao)
"Can we not just directly measure the part that will not end up being thrown away?" [Donoho, 2006]
"Why spend so much effort acquiring all the data when we know that most of it will be discarded?" [Candes, 2006]

Problem Statement
"Measure what should be measured."
Compressive Sensing combines:
1. Non-adaptive linear sampling
2. Transform sparsity
3. Reconstruction, which enables solving underdetermined linear inverse problems
Recovery conditions: Spark, Null Space Property, RIP, Mutual Coherence.
Recovery algorithms: greedy algorithms (MP, OMP, StOMP, ...), Basis Pursuit (BP), Basis Pursuit Denoising (BPDN), LASSO, Smoothed L0 (SL0), ...

Single Pixel Camera (figure)
Courtesy of Dr. Marco F. Duarte and Dr. Richard G. Baraniuk

Problem Statement
Definition: We say that a signal x is K-sparse when it has at most K nonzeros, i.e., $\|x\|_0 \le K$. We let $\Sigma_K = \{x : \|x\|_0 \le K\}$ denote the set of all K-sparse signals.
Figure: (a) Measurement process with a random Gaussian matrix Φ and discrete cosine transform (DCT) matrix Ψ. (b) The measurement vector y is a linear combination of 4 columns in the dictionary. [Baraniuk, 2007]
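As a concrete illustration (our own sketch, not part of the slides), the measurement setup in the figure can be simulated in MATLAB roughly as follows; dctmtx assumes the Signal Processing Toolbox is available:

% Illustrative sketch of the measurement model y = Phi*Psi*s (not from the slides).
rng(0);
N = 512;            % ambient dimension
M = 50;             % number of measurements
K = 20;             % sparsity level

Psi = dctmtx(N)';   % DCT synthesis basis (columns are DCT atoms)
s = zeros(N, 1);
support = randperm(N, K);
s(support) = randn(K, 1);          % K-sparse coefficient vector

x = Psi * s;                       % signal, sparse in the DCT domain
Phi = randn(M, N) / sqrt(M);       % random Gaussian measurement matrix
y = Phi * x;                       % M compressive measurements (M << N)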

Recovery Constraints

Recovery Constraints
$P_0: \ \min_s \|s\|_0 \ \text{subject to} \ y = \Phi\Psi s$
$P_1: \ \min_s \|s\|_1 \ \text{subject to} \ y = \Phi\Psi s$
Figure: (a) The set of all k-sparse (k = 2) vectors in $\mathbb{R}^3$. (b) Visualization of $\ell_2$ minimization. (c) Visualization of $\ell_1$ minimization. [Baraniuk, 2007]

Recovery Constraints [Davenport, 2011]
- Spark (for sparse signals): $\sigma(\Phi) > 2k$.
- Null Space Property (for approximately sparse signals): $\|h_\Lambda\|_2 \le C_{NSP} \, \frac{\|h_{\Lambda^c}\|_1}{\sqrt{k}}$.
- Restricted Isometry Property (for approximately sparse signals with noise): $(1 - \delta_k)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta_k)\|x\|_2^2$.
- Mutual Coherence: $\mu(\Phi) = \max_{i \ne j} \frac{|\phi_i^T \phi_j|}{\|\phi_i\|_2 \|\phi_j\|_2}$, with $\mathrm{Spark}(\Phi) \ge 1 + \frac{1}{\mu(\Phi)}$.

Spark [Davenport, 2011]
Definition: The spark of a given matrix Φ is the smallest number of columns of Φ that are linearly dependent.
$\mathrm{Spark}(\Phi_{M \times N}) \triangleq \min \Big\{ k : \ \exists\, 1 \le i_1 < \dots < i_k \le N,\ \alpha_1, \dots, \alpha_k \in \mathbb{R},\ \|\alpha\|_2 \ne 0,\ \sum_{j=1}^{k} \alpha_j \phi_{i_j} = 0 \Big\}$

Spark [Davenport, 2011]
Theorem: For any vector $y \in \mathbb{R}^M$, there exists at most one signal $x \in \Sigma_k$ such that $y = \Phi x$ if and only if $\sigma(\Phi) > 2k$.
Proof: If we wish to be able to recover all sparse signals x from the measurements Φx, then it is immediately clear that for any pair of distinct vectors $x_1, x_2 \in \Sigma_k$ we must have $\Phi x_1 \ne \Phi x_2$, since otherwise it would be impossible to distinguish $x_1$ from $x_2$ based solely on the measurements. Mathematically: if $x_1 \ne x_2$ and $x_1, x_2$ are k-sparse, then $\Phi x_1 \ne \Phi x_2 \iff \Phi(x_1 - x_2) \ne 0$.
Note: $\mathrm{Spark}(\Phi_{M \times N}) \in [2, M+1]$.
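Since the spark is defined combinatorially, it can be computed by brute force for toy matrices; the following sketch is ours, is exponential in N, and is only meant to illustrate the definition:

% Brute-force spark of a small matrix (illustrative only; cost grows combinatorially).
function s = spark_bruteforce(Phi)
    [~, N] = size(Phi);
    for k = 1:N
        subsets = nchoosek(1:N, k);        % all column subsets of size k
        for idx = 1:size(subsets, 1)
            cols = Phi(:, subsets(idx, :));
            if rank(cols) < k              % columns are linearly dependent
                s = k;                     % smallest such k is the spark
                return;
            end
        end
    end
    s = N + 1;                             % all columns independent
end

For example, spark_bruteforce([1 0 1; 0 1 1]) returns 3, because any two of the three columns are independent while all three together are dependent.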

Null Space Property [Davenport, 2011]
Definition: A matrix Φ satisfies the null space property (NSP) of order K if there exists a constant $C_{NSP} > 0$ such that
$\|h_\Lambda\|_2 \le C_{NSP} \, \frac{\|h_{\Lambda^c}\|_1}{\sqrt{K}}$
holds for all $h \in \mathcal{N}(\Phi)$ and all $\Lambda \subset \{1, \dots, N\}$ with $|\Lambda| \le K$, where $\Lambda^c = \{1, \dots, N\} \setminus \Lambda$.
Note: we must also ensure that $\mathcal{N}(\Phi)$ does not contain any vectors that are too compressible, in addition to vectors that are sparse.
Theorem: Any K-sparse vector x is the unique solution to $P_1$ if and only if the matrix Φ satisfies the NSP of order K with $C_{NSP} < 1$.

Restricted Isometry Property [Davenport, 2011]
Definition: A matrix Φ satisfies the restricted isometry property (RIP) of order K if there exists a $\delta_K \in (0, 1)$ such that
$(1 - \delta_K)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta_K)\|x\|_2^2$
holds for all $x \in \Sigma_K$.
Theorem (noiseless recovery): Assume that $\delta_{2k} < \sqrt{2} - 1$. Then the solution $\hat{x}$ of $\ell_1$ minimization obeys
$\|\hat{x} - x\|_2 \le C_0 \, \frac{\|x - x_{(k)}\|_1}{\sqrt{k}}$
where $x_{(k)}$ is the best k-sparse approximation of x and $C_0$ is a constant.

Restricted Isometry Property [Davenport, 2011]
Theorem (recovery in the presence of bounded noise): Assume that $\delta_{2k} < \sqrt{2} - 1$ and let $y = \Phi x + e$, where $\|e\|_2 \le \epsilon$. Then the solution $\hat{x}$ of $\ell_1$ minimization obeys
$\|\hat{x} - x\|_2 \le C_0 \, \frac{\|x - x_{(k)}\|_1}{\sqrt{k}} + C_1 \epsilon$
where $x_{(k)}$ is the best k-sparse approximation of x and the constants are
$C_0 = 2\,\frac{1 - (1 - \sqrt{2})\,\delta_{2k}}{1 - (1 + \sqrt{2})\,\delta_{2k}}, \qquad C_1 = \frac{4\sqrt{1 + \delta_{2k}}}{1 - (1 + \sqrt{2})\,\delta_{2k}}$
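As a quick numerical check (our own snippet, using the constants as quoted above), one can see how C0 and C1 grow as delta_2k approaches sqrt(2) - 1:

% Evaluate the recovery constants C0 and C1 over a range of delta_2k values
% (illustrative; uses the expressions quoted above).
delta = 0.05:0.05:0.40;                            % must stay below sqrt(2)-1 ~= 0.414
C0 = 2 * (1 - (1 - sqrt(2)) .* delta) ./ (1 - (1 + sqrt(2)) .* delta);
C1 = 4 * sqrt(1 + delta)              ./ (1 - (1 + sqrt(2)) .* delta);
disp([delta(:) C0(:) C1(:)])                       % constants blow up as delta -> sqrt(2)-1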

RIP and NSP [Davenport, 2011]
Theorem (relationship between the RIP and the NSP): Suppose that Φ satisfies the RIP of order 2k with $\delta_{2k} < \sqrt{2} - 1$. Then Φ satisfies the NSP of order 2k with constant
$C = \frac{\sqrt{2}\,\delta_{2k}}{1 - (1 + \sqrt{2})\,\delta_{2k}}$

Recovery Constraints [Davenport, 2011]: Mutual Coherence
Definition: The coherence of a matrix Φ, $\mu(\Phi)$, is the largest absolute normalized inner product between any two columns $\phi_i, \phi_j$ of Φ:
$\mu(\Phi) = \max_{i \ne j} \frac{|\phi_i^T \phi_j|}{\|\phi_i\|_2 \|\phi_j\|_2}$
Lemma: For any matrix Φ, $\mathrm{Spark}(\Phi) \ge 1 + \frac{1}{\mu(\Phi)}$.
Theorem: If $k < \frac{1}{2}\left(1 + \frac{1}{\mu(\Phi)}\right)$, then for each measurement vector $y \in \mathbb{R}^M$ there exists at most one signal $x \in \Sigma_k$ such that $y = \Phi x$.
Welch bound: it is possible to show that the lower bound for the coherence of an $M \times N$ matrix is $\mu \ge \sqrt{\frac{N - M}{M(N - 1)}}$.
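A short MATLAB sketch (ours, not from the slides) computes the mutual coherence of a matrix and compares it against the Welch bound:

% Mutual coherence of a matrix vs. the Welch lower bound (illustrative sketch).
rng(0);
M = 50;  N = 512;
Phi = randn(M, N);

Phin = Phi ./ sqrt(sum(Phi.^2, 1));   % normalize each column to unit l2 norm
G = abs(Phin' * Phin);                % absolute Gram matrix
G(1:N+1:end) = 0;                     % zero the diagonal (enforce i ~= j)
mu = max(G(:));                       % mutual coherence

welch = sqrt((N - M) / (M * (N - 1)));
fprintf('coherence = %.3f, Welch bound = %.3f\n', mu, welch);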

Recovery Constraints (figure)
Courtesy of the concept by Dr. Arash Amini

Measurement Matrices
How should we design the measurement matrix so that the required number of measurements becomes as small as possible?
- Gaussian random matrices
- Bernoulli random matrices
- Hadamard matrices
- Fourier matrices
- Deterministic matrices
Measurement bounds: Let Φ be an M × N matrix that satisfies the RIP of order 2k with constant $\delta \in (0, \frac{1}{2})$. Then
$M \ge C\, k \log(N/k)$, where $C = \frac{1}{2\log(\sqrt{24} + 1)} \approx 0.28$.
How can we recover the desired signal from the measurements?
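The two most common random choices and the measurement bound above can be written down directly; this sketch is ours and only illustrative:

% Random measurement matrices and the lower bound on M (illustrative sketch).
N = 512;  k = 20;
C = 1 / (2 * log(sqrt(24) + 1));                    % ~0.28, constant from the bound
M_min = ceil(C * k * log(N / k));                   % minimum number of measurements
fprintf('Need at least M = %d measurements for N = %d, k = %d\n', M_min, N, k);

M = 4 * M_min;                                      % leave some margin in practice
Phi_gauss = randn(M, N) / sqrt(M);                  % i.i.d. Gaussian entries
Phi_bern  = (2 * (rand(M, N) > 0.5) - 1) / sqrt(M); % i.i.d. +/-1 Bernoulli entries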

Recovery Algorithms

Recovery Algorithms
Design criteria:
- Minimal number of measurements
- Robustness to measurement noise and model mismatch
- Speed
- Performance guarantees
Greedy algorithms: Matching Pursuit (MP), Orthogonal MP (OMP), Stage-wise OMP, Subspace Pursuit (SP), CoSaMP, Iterative Hard Thresholding (IHT).
Convex optimization based methods: Basis Pursuit (BP), Basis Pursuit Denoising (BPDN), LASSO, Smoothed $\ell_0$.

Matching Pursuit (MP)
Model $y = Dx$; after k greedy iterations the approximation is
$y = \sum_{i=1}^{k} x_{a(i)}\, d_{a(i)} + r^{(k)}$
where $a(i)$ indexes the atom selected at iteration i and $r^{(k)} = y - y^{(k)}$ is the residual.
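A minimal MATLAB sketch of matching pursuit (ours; it assumes the dictionary columns are unit-norm):

% Minimal matching pursuit sketch (assumes the columns of D are unit-norm).
function [x, r] = mp_sketch(D, y, nIter)
    [~, N] = size(D);
    x = zeros(N, 1);
    r = y;                               % residual r^(0) = y
    for k = 1:nIter
        c = D' * r;                      % correlations with all atoms
        [~, j] = max(abs(c));            % pick the best-matching atom
        x(j) = x(j) + c(j);              % update its coefficient
        r = r - c(j) * D(:, j);          % subtract its contribution from the residual
    end
end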

Orthogonal Matching Pursuit (OMP) (algorithm figure)
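The OMP pseudocode appears only as a figure on this slide; a minimal MATLAB version (ours, assuming unit-norm columns) that re-fits the coefficients on the selected support at every iteration looks like this:

% Minimal orthogonal matching pursuit sketch (assumes unit-norm columns of D).
function [x, support] = omp_sketch(D, y, K)
    [~, N] = size(D);
    r = y;
    support = [];
    for k = 1:K
        c = D' * r;                          % correlate residual with all atoms
        [~, j] = max(abs(c));                % index of the new atom
        support = [support, j];              % grow the support set
        xs = D(:, support) \ y;              % least-squares fit on the current support
        r = y - D(:, support) * xs;          % orthogonal residual
    end
    x = zeros(N, 1);
    x(support) = xs;
end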

BP, BPDN and LASSO: $\ell_1$-minimization
Basis Pursuit (BP):
$P_1: \ \min_{s \in \mathbb{R}^N} \|s\|_1 \ \text{s.t.} \ y = \Phi\Psi s$
Basis Pursuit Denoising (BPDN):
$\min_{s \in \mathbb{R}^N} \|s\|_1 \ \text{s.t.} \ \|y - \Phi\Psi s\|_2^2 \le \delta$, or equivalently $\min_{s \in \mathbb{R}^N} \frac{1}{2}\|y - \Phi\Psi s\|_2^2 + \lambda \|s\|_1$
Least Absolute Shrinkage and Selection Operator (LASSO):
$\min_{s \in \mathbb{R}^N} \|y - \Phi\Psi s\|_2^2 \ \text{s.t.} \ \|s\|_1 \le \tau$
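ISTA, listed among the solvers in the outline, is one standard way to attack the Lagrangian BPDN form above; a minimal MATLAB sketch (ours) with A = Phi*Psi:

% Minimal ISTA sketch for min_s 0.5*||y - A*s||_2^2 + lambda*||s||_1.
function s = ista_sketch(A, y, lambda, nIter)
    L = norm(A)^2;                               % Lipschitz constant of the gradient
    s = zeros(size(A, 2), 1);
    for k = 1:nIter
        g = A' * (A * s - y);                    % gradient of the smooth term
        z = s - g / L;                           % gradient step
        s = sign(z) .* max(abs(z) - lambda / L, 0);  % soft thresholding
    end
end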

SL0: Smoothed $\ell_0$ Algorithm
Approximate the $\ell_0$ "norm" with a smooth Gaussian function:
$f_\sigma(x_i) = \exp\!\left(-\frac{x_i^2}{2\sigma^2}\right), \qquad \lim_{\sigma \to 0} f_\sigma(x_i) = \begin{cases} 1 & \text{if } x_i = 0 \\ 0 & \text{if } x_i \ne 0 \end{cases}$
$F_\sigma(x) = \sum_{i=1}^{N} f_\sigma(x_i), \qquad \|x\|_0 \approx N - F_\sigma(x)$
- SL0 is an algorithm for finding the sparsest solutions of an underdetermined system of linear equations.
- SL0 is a very fast algorithm. For example, it is about 2 to 3 orders of magnitude faster than l1-magic.
- SL0 tries to directly minimize the L0 norm. This is contrary to most other existing algorithms (e.g. Basis Pursuit), which replace the L0 norm by other cost functions (like L1). Note also that the equivalence of minimizing L1 and L0 is only asymptotic and does not always hold.
Software link: http://ee.sharif.edu/~SLzero
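A compact sketch of the SL0 idea (ours, following the description above: a few steepest-ascent steps on F_sigma, projection back onto y = A*s, then shrink sigma); the official SL0.m at the link above should be preferred in practice:

% Compact SL0-style sketch (see the official SL0.m for the reference implementation).
function s = sl0_sketch(A, y, sigma_min, sdf, mu0, L)
    A_pinv = pinv(A);
    s = A_pinv * y;                        % minimum l2-norm starting point
    sigma = 2 * max(abs(s));
    while sigma > sigma_min
        for j = 1:L                        % a few steepest-ascent steps on F_sigma
            delta = s .* exp(-s.^2 / (2 * sigma^2));
            s = s - mu0 * delta;
            s = s - A_pinv * (A * s - y);  % project back onto the feasible set y = A*s
        end
        sigma = sigma * sdf;               % gradually sharpen the l0 approximation
    end
end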

Challenges

Challenges [Str12, Ela12]
Galileo Galilei: "Measure what can be measured and make measurable what cannot be measured." (This quote is attributed to Galileo Galilei.)
Thomas Strohmer: "Measure what should be measured."
Challenges:
- Structured sensing matrices
- Structured sparsity and prior information
- Non-linear compressive sensing
- Better bounds for compressive sensing
- Hardware implementation

Structured Sensing Matrices
Question: Can we use a novel structure for sensing matrices that results in good performance?
- We usually do not have the luxury to choose the measurement matrix as we please.
- The measurement matrix is often dictated by the physical properties of the measurement process, e.g. the laws of wave propagation.
- A specific structure in the measurement matrix can give rise to fast algorithms which significantly speed up recovery.
- Structure: block-based compressive sensing, separable matrices such as Kronecker product matrices.
Remark: The typical measurement matrix is not Gaussian or Bernoulli, but one with a very specific structure.

Structured Sparsity and Prior Information
Question: How can we best take advantage of the knowledge that all sparsity patterns may not be equally likely in a signal?
Structured sparsity examples: block sparsity; tree structure of wavelet coefficients. (figures)

Non-Linear Compressive Sensing
Question: For which types of nonlinear measurements can we build an interesting and relevant compressive sensing theory?
- In many applications we can only take nonlinear measurements.
- An important example is the case where we observe signal intensities, so the phase information is missing.
- Phase retrieval: X-ray crystallography, diffraction imaging, astronomy, and quantum tomography.
- Quantized samples, and in the extreme case, 1-bit measurements.

Weighted CS
Figure: Enhancing sparsity: at the origin, the canonical $\ell_0$ sparsity count $f_0(t)$ is better approximated by the log-sum penalty function $f_{\log,\epsilon}(t)$ than by the traditional convex $\ell_1$ relaxation $f_1(t)$ [Candes, 2008].
$P_{\ell_1}: \ \min_{s \in \mathbb{R}^N} \|s\|_1 \ \text{s.t.} \ y = \Theta s$
$P_{RW\ell_1}: \ \min_{s \in \mathbb{R}^N} \sum_{i=1}^{N} \log(|s_i| + \epsilon) \ \text{s.t.} \ y = \Theta s$
Solve using Majorization-Minimization:
$\min_{s \in \mathbb{R}^N} \|Ws\|_1 \ \text{s.t.} \ y = \Theta s$, or equivalently $\min_{z \in \mathbb{R}^N} \|z\|_1 \ \text{s.t.} \ y = \Theta W^{-1} z$, with weights updated as $w_i^{(l+1)} = \frac{1}{|s_i^{(l)}| + \epsilon}$.
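The majorization-minimization loop can be sketched as follows (our sketch; it assumes SparseLab's SolveBP, introduced in the Solvers section, is on the MATLAB path):

% Reweighted l1 sketch via majorization-minimization (assumes SparseLab's SolveBP).
function s = reweighted_l1_sketch(Theta, y, nOuter, epsilon)
    N = size(Theta, 2);
    w = ones(N, 1);                            % start from plain l1 (all weights equal)
    for l = 1:nOuter
        Winv = diag(1 ./ w);
        z = SolveBP(Theta * Winv, y, N);       % min ||z||_1 s.t. y = Theta*W^{-1}*z
        s = Winv * z;                          % map back to the original variable
        w = 1 ./ (abs(s) + epsilon);           % update the weights
    end
end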

Perceptual Compressive Sensing
Redundancies:
1. Statistical: avoided by using transform coding and variable-length coding.
2. Perceptual: avoided by assigning weights to the transform coding coefficients.
Question: How can we avoid acquisition of perceptual redundancies in compressive sensing?
Answer: Perceptual Weighted Compressive Sensing.
# S.H. Safavi, F. Torkamani-Azar, "Perceptual Compressive Sensing based on Contrast Sensitivity Function: Can we avoid non-visible redundancies acquisition?", ICEE, 2017.

Perceptual Compressive Sensing
Luminance Contrast Sensitivity Function (CSF):
$H(f_{i,j}) = 2.6\,(0.0192 + 0.114 f_{i,j})\, e^{-(0.114 f_{i,j})^{1.1}}$
where $f_{i,j}$ is the spatial frequency (in cycles per degree, cpd) associated with the transform coefficient at position $(i, j)$, determined by the viewing angles $\theta_x$ and $\theta_y$.
Figure: Contrast Sensitivity Function (CSF); illustration of spatial frequency.
# S.H. Safavi, F. Torkamani-Azar, "Perceptual Compressive Sensing based on Contrast Sensitivity Function: Can we avoid non-visible redundancies acquisition?", ICEE, 2017.

Diffusive Compressive Sensing
Applications:
- Thermal monitoring of CPU hot spots, to avoid leakage and reduce CPU failures over time
- Climate change study
- Environmental monitoring
Figure: Thermal field example: localized source.
# A. Hashemi, S.H. Safavi, M. Rostami, N.M. Cheung, "Efficient Sampling Scheme for Reconstruction of Diffusion Fields", Preprint, 2017.

Diffusive Compressive Sensing
Thermal Field Reconstruction
Side information: 2D partial differential equation (PDE)
$\frac{\partial f(x, y, t)}{\partial t} = \gamma \left( \frac{\partial^2 f(x, y, t)}{\partial x^2} + \frac{\partial^2 f(x, y, t)}{\partial y^2} \right), \quad \forall t, \ \gamma > 0.$
Frame processing: at a fixed time $t_0$,
$f_t^{t_0}(x, y) = \gamma \left( \frac{\partial^2 f^{t_0}(x, y)}{\partial x^2} + \frac{\partial^2 f^{t_0}(x, y)}{\partial y^2} \right), \quad \text{where } f_t^{t_0}(x, y) = \left.\frac{\partial f(x, y, t)}{\partial t}\right|_{t = t_0}.$
The above equation can be discretized as
$b = \gamma\, (D_x^T D_x + D_y^T D_y)\, f.$
# A. Hashemi, S.H. Safavi, M. Rostami, N.M. Cheung, "Efficient Sampling Scheme for Reconstruction of Diffusion Fields", Preprint, 2017.
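As a rough illustration only (not the authors' implementation; the exact discretization and boundary handling in the preprint may differ), 1-D difference operators can be assembled with Kronecker products and applied to the vectorized field:

% Rough illustration of one possible discretization (not the authors' code):
% build 1-D first-difference operators and apply gamma*(Dx'*Dx + Dy'*Dy) to
% the vectorized field f (column-major, size nx*ny).
nx = 128;  ny = 128;  gamma = 0.1;
d1 = @(n) spdiags([-ones(n,1) ones(n,1)], [0 1], n-1, n);   % first-difference matrix
Dx = kron(speye(ny), d1(nx));          % differences along x for every grid column
Dy = kron(d1(ny), speye(nx));          % differences along y for every grid row
Lap = Dx' * Dx + Dy' * Dy;             % discrete Laplacian-type operator (up to sign/boundary)
f = randn(nx * ny, 1);                 % a vectorized field frame, for illustration only
b = gamma * (Lap * f);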

Diffusive Compressive Sensing
Figure: Geometrical depiction of the proposed method. This figure intuitively explains why adding the additional constraint can help to decrease the overcompleteness of the CS problem.
# A. Hashemi, S.H. Safavi, M. Rostami, N.M. Cheung, "Efficient Sampling Scheme for Reconstruction of Diffusion Fields", Preprint, 2017.

Diffusive Compressive Sensing
Figure: Visualization of the 10th frame of the reconstructed fields when $x_d = y_d = 4$ for setting A (first row) and setting B (second row): (a,e) original field; reconstructed fields for (b,f) CS, (c,g) DCS-I, (d,h) DCS-II.

Applications

Solvers

SparseLab
Functions:
- SolveBP(A, y, N, maxIters, lambda, OptTol)
- SolveLasso(A, y, N, algType, maxIters, lambdaStop, resStop, solFreq, verbose, OptTol)
- SolveMP(A, y, N, maxIters, lambdaStop, solFreq, verbose, OptTol)
- SolveOMP(A, y, N, maxIters, lambdaStop, solFreq, verbose, OptTol)
- SolveStOMP(A, y, N, thresh, param, maxIters, verbose, OptTol)
Software link: https://sparselab.stanford.edu

SparseLab: SolveBP
- A: either an explicit M × N matrix, with rank(A) = min(M, N) by assumption, or a string containing the name of a function implementing an implicit matrix.
- y: measurement vector of length M.
- N: length of the solution vector s.
- maxIters: maximum number of PDCO iterations to perform, default 20.
- lambda: if 0 or omitted, Basis Pursuit is applied to the data; otherwise, Basis Pursuit Denoising is applied with parameter lambda (default 0).
- OptTol: error tolerance, default 1e-3.

SparseLab: SolveLasso
- A: either an explicit M × N matrix, with rank(A) = min(M, N) by assumption, or a string containing the name of a function implementing an implicit matrix.
- y: measurement vector of length M.
- N: length of the solution vector s.
- algType: 'lars' for the LARS algorithm, 'lasso' for LARS with the lasso modification (default). Add the prefix 'nn' (i.e. 'nnlars' or 'nnlasso') to add a non-negativity constraint (omitted by default).
- maxIters: maximum number of LARS iterations to perform. If not specified, runs to the stopping condition (default).
- lambdaStop: if specified (and > 0), the algorithm terminates when the Lagrange multiplier <= lambdaStop.

SparseLab: SolveLasso (cont.)
- resStop: if specified (and > 0), the algorithm terminates when the L2 norm of the residual <= resStop.
- solFreq: if 0, returns only the final solution; if > 0, returns an array of solutions, one every solFreq iterations (default 0).
- verbose: 1 to print out detailed progress at each iteration, 0 for no output (default).
- OptTol: error tolerance, default 1e-5.

SparseLab: SolveMP and SolveOMP
- A: either an explicit M × N matrix, with rank(A) = min(M, N) by assumption, or a string containing the name of a function implementing an implicit matrix.
- y: measurement vector of length M.
- N: length of the solution vector s.
- maxIters: maximum number of iterations to perform. If not specified, runs to the stopping condition (default).
- lambdaStop: if specified, the algorithm stops when the last coefficient entered has residual correlation <= lambdaStop.
- solFreq: if 0, returns only the final solution; if > 0, returns an array of solutions, one every solFreq iterations (default 0).
- verbose: 1 to print out detailed progress at each iteration, 0 for no output (default).
- OptTol: error tolerance, default 1e-5.
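A usage sketch (ours) that ties these SparseLab calls to the measurement model from the earlier slides; it assumes SparseLab is installed and on the MATLAB path:

% Usage sketch for the SparseLab solvers documented above (SparseLab must be on the path).
N = 512;  M = 50;  K = 20;
Psi = dctmtx(N)';                    % sparsifying basis
Phi = randn(M, N) / sqrt(M);         % Gaussian measurement matrix
A = Phi * Psi;                       % effective sensing matrix for the coefficients s

s = zeros(N, 1);  s(randperm(N, K)) = randn(K, 1);
y = A * s;                           % noiseless compressive measurements

s_omp = SolveOMP(A, y, N, 2 * K);    % greedy recovery, at most 2K iterations
s_bp  = SolveBP(A, y, N);            % l1 recovery (Basis Pursuit)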

CVX: Matlab Software for Disciplined Convex Programming
CVX is a Matlab-based modeling system for convex optimization. CVX turns Matlab into a modeling language, allowing constraints and objectives to be specified using standard Matlab expression syntax.
BP:
  % N is a constant
  cvx_begin
    variable s(N)
    minimize( norm(s, 1) )
    subject to
      y == Phi * Psi * s
  cvx_end
BPDN:
  % N is a constant
  cvx_begin
    variable s(N)
    minimize( 0.5 * norm(y - Phi * Psi * s, 2) + lambda * norm(s, 1) )
  cvx_end
Software link: http://cvxr.com/cvx

l1-Magic
L1-MAGIC is a collection of MATLAB routines for solving the convex optimization programs central to compressive sampling. The algorithms are based on standard interior-point methods and are suitable for large-scale problems.
- l1decode_pd: decoding via linear programming. Solve $\min_x \|b - Ax\|_1$. Recast as a linear program and solve using a primal-dual interior point method.
- l1eq_pd: solve $\min_x \|x\|_1$ s.t. $Ax = b$. Recast as a linear program and use a primal-dual interior point method.
- l1qc_logbarrier: solve the quadratically constrained l1 minimization $\min_x \|x\|_1$ s.t. $\|Ax - b\|_2 \le \epsilon$. Reformulate as a Second-Order Cone Program (SOCP) and use a log-barrier algorithm.
Software link: https://statweb.stanford.edu/~candes/l1magic

SPGL1: A solver for large-scale sparse reconstruction
SPGL1 is a Matlab solver for large-scale one-norm regularized least squares. It is designed to solve any of the following three problems:
1. BP: minimize $\|s\|_1$ subject to $As = b$
2. BPDN: minimize $\|s\|_1$ subject to $\|As - b\|_2 \le \sigma$
3. LASSO: minimize $\|As - b\|_2$ subject to $\|s\|_1 \le \tau$
SPGL1 relies only on the matrix-vector operations $As$ and $A^T y$ and accepts both explicit matrices and functions that evaluate these products. SPGL1 is suitable for problems that are in the complex domain.
Software link: http://www.cs.ubc.ca/~mpf/spgl1/downloads/spgl1-1.9.zip

SPGL1: A solver for large-scale sparse reconstruction
- spg_bp: solve the basis pursuit (BP) problem. [s, r, g, info] = spg_bp(A, y, options)
- spg_bpdn: solve the basis pursuit denoising (BPDN) problem. [s, r, g, info] = spg_bpdn(A, y, sigma, options)
- spg_lasso: solve the LASSO problem. [s, r, g, info] = spg_lasso(A, y, tau, options)
- spg_group: solve jointly-sparse BPDN. [s, r, g, info] = spg_group(A, y, groups, sigma, options)
- spg_mmv: solve multi-measurement BPDN. [s, r, g, info] = spg_mmv(A, y, sigma, options)
- spgl1: solve BP, BPDN, and LASSO. [s, r, g, info] = spgl1(A, y, tau, sigma, s, options)
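A brief usage sketch (ours; assumes SPGL1 is on the MATLAB path) for the BPDN interface with noisy measurements:

% Brief SPGL1 usage sketch (SPGL1 must be on the MATLAB path).
N = 512;  M = 50;  K = 20;
A = randn(M, N) / sqrt(M);
s_true = zeros(N, 1);  s_true(randperm(N, K)) = randn(K, 1);
e = 0.01 * randn(M, 1);
y = A * s_true + e;

sigma = norm(e);                          % noise level for the BPDN constraint
[s, r, g, info] = spg_bpdn(A, y, sigma);  % options argument omitted (defaults used)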

SL0: Smoothed L0 (SL0) Algorithm
Function: SL0(A, y, sigma_min, sigma_decrease_factor, mu_0, L, A_pinv, true_s)
- A: the dictionary.
- y: measurement vector of length M.
- sigma_min: the first element of the sequence of sigma is calculated automatically; the last element is given by 'sigma_min', and the change factor for decreasing sigma is given by 'sigma_decrease_factor'.
- sigma_decrease_factor: the default value is 0.5. A larger value gives better results for less sparse sources, but it uses more steps on sigma to reach sigma_min, and hence it requires a higher computational cost.
- mu_0: the value of mu_0 scales the sequence of mu. For each value of sigma, the value of mu is chosen via $\mu = \mu_0 \sigma^2$. Note that this value affects convergence. The default value is mu_0 = 2 (see the paper).

SL0: Smoothed L0 (SL0) Algorithm (cont.)
- L: number of iterations of the internal (steepest ascent) loop. The default value is L = 3.
- A_pinv: the pseudo-inverse of matrix A, defined by A_pinv = A' * inv(A * A'). If it is not provided, it will be calculated within the function.
- true_s: is the true value of t
