SPECTRAL ANALYSIS OF SIGNALS - Uppsala University


SPECTRAL ANALYSIS OF SIGNALS

Petre Stoica and Randolph Moses

PRENTICE HALL, Upper Saddle River, New Jersey 07458

Library of Congress Cataloging-in-Publication Data

Spectral Analysis of Signals / Petre Stoica and Randolph Moses
p. cm.
Includes bibliographical references and index.
ISBN 0-13-113956-8
1. Spectral theory (Mathematics) I. Moses, Randolph II. Title
QA814.G27 2005    512'–dc21    00-055035 CIP

Acquisitions Editor: Tom Robbins

(c) 2005 by Prentice Hall, Inc.
Upper Saddle River, New Jersey 07458

All rights reserved. No part of this book may be reproduced, in any form or by any means, without permission in writing from the publisher.

Printed in the United States of America
10 9 8 7 6 5 4 3 2 1

ISBN 0-13-113956-8

Pearson Education Ltd., London
Pearson Education Australia Pty, Limited, Sydney
Pearson Education Singapore, Pte. Ltd.
Pearson Education North Asia Ltd., Hong Kong
Pearson Education Canada, Ltd., Toronto
Pearson Educación de México, S.A. de C.V.
Pearson Education Japan, Tokyo
Pearson Education Malaysia, Pte. Ltd.

Contents

1 Basic Concepts
  1.1 Introduction
  1.2 Energy Spectral Density of Deterministic Signals
  1.3 Power Spectral Density of Random Signals
    1.3.1 First Definition of Power Spectral Density
    1.3.2 Second Definition of Power Spectral Density
  1.4 Properties of Power Spectral Densities
  1.5 The Spectral Estimation Problem
  1.6 Complements
    1.6.1 Coherency Spectrum
  1.7 Exercises

2 Nonparametric Methods
  2.1 Introduction
  2.2 Periodogram and Correlogram Methods
    2.2.1 Periodogram
    2.2.2 Correlogram
  2.3 Periodogram Computation via FFT
    2.3.1 Radix–2 FFT
    2.3.2 Zero Padding
  2.4 Properties of the Periodogram Method
    2.4.1 Bias Analysis of the Periodogram
    2.4.2 Variance Analysis of the Periodogram
  2.5 The Blackman–Tukey Method
    2.5.1 The Blackman–Tukey Spectral Estimate
    2.5.2 Nonnegativeness of the Blackman–Tukey Spectral Estimate
  2.6 Window Design Considerations
    2.6.1 Time–Bandwidth Product and Resolution–Variance Tradeoffs in Window Design
    2.6.2 Some Common Lag Windows
    2.6.3 Window Design Example
    2.6.4 Temporal Windows and Lag Windows
  2.7 Other Refined Periodogram Methods
    2.7.1 Bartlett Method
    2.7.2 Welch Method
    2.7.3 Daniell Method
  2.8 Complements
    2.8.1 Sample Covariance Computation via FFT
    2.8.2 FFT–Based Computation of Windowed Blackman–Tukey Periodograms
    2.8.3 Data and Frequency Dependent Temporal Windows: The Apodization Approach
    2.8.4 Estimation of Cross–Spectra and Coherency Spectra
    2.8.5 More Time–Bandwidth Product Results
  2.9 Exercises

3 Parametric Methods for Rational Spectra
  3.1 Introduction
  3.2 Signals with Rational Spectra
  3.3 Covariance Structure of ARMA Processes
  3.4 AR Signals
    3.4.1 Yule–Walker Method
    3.4.2 Least Squares Method
  3.5 Order–Recursive Solutions to the Yule–Walker Equations
    3.5.1 Levinson–Durbin Algorithm
    3.5.2 Delsarte–Genin Algorithm
  3.6 MA Signals
  3.7 ARMA Signals
    3.7.1 Modified Yule–Walker Method
    3.7.2 Two–Stage Least Squares Method
  3.8 Multivariate ARMA Signals
    3.8.1 ARMA State–Space Equations
    3.8.2 Subspace Parameter Estimation: Theoretical Aspects
    3.8.3 Subspace Parameter Estimation: Implementation Aspects
  3.9 Complements
    3.9.1 The Partial Autocorrelation Sequence
    3.9.2 Some Properties of Covariance Extensions
    3.9.3 The Burg Method for AR Parameter Estimation
    3.9.4 The Gohberg–Semencul Formula
    3.9.5 MA Parameter Estimation in Polynomial Time
  3.10 Exercises

4 Parametric Methods for Line Spectra
  4.1 Introduction
  4.2 Models of Sinusoidal Signals in Noise
    4.2.1 Nonlinear Regression Model
    4.2.2 ARMA Model
    4.2.3 Covariance Matrix Model
  4.3 Nonlinear Least Squares Method
  4.4 High–Order Yule–Walker Method
  4.5 Pisarenko and MUSIC Methods
  4.6 Min–Norm Method
  4.7 ESPRIT Method
  4.8 Forward–Backward Approach
  4.9 Complements
    4.9.1 Mean Square Convergence of Sample Covariances for Line Spectral Processes
    4.9.2 The Carathéodory Parameterization of a Covariance Matrix
    4.9.3 Using the Unwindowed Periodogram for Sine Wave Detection in White Noise
    4.9.4 NLS Frequency Estimation for a Sinusoidal Signal with Time-Varying Amplitude
    4.9.5 Monotonically Descending Techniques for Function Minimization
    4.9.6 Frequency-Selective ESPRIT-Based Method
    4.9.7 A Useful Result for Two-Dimensional (2D) Sinusoidal Signals
  4.10 Exercises

5 Filter Bank Methods
  5.1 Introduction
  5.2 Filter Bank Interpretation of the Periodogram
  5.3 Refined Filter Bank Method
    5.3.1 Slepian Baseband Filters
    5.3.2 RFB Method for High–Resolution Spectral Analysis
    5.3.3 RFB Method for Statistically Stable Spectral Analysis
  5.4 Capon Method
    5.4.1 Derivation of the Capon Method
    5.4.2 Relationship between Capon and AR Methods
  5.5 Filter Bank Reinterpretation of the Periodogram
  5.6 Complements
    5.6.1 Another Relationship between the Capon and AR Methods
    5.6.2 Multiwindow Interpretation of Daniell and Blackman–Tukey Periodograms
    5.6.3 Capon Method for Exponentially Damped Sinusoidal Signals
    5.6.4 Amplitude and Phase Estimation Method (APES)
    5.6.5 Amplitude and Phase Estimation Method for Gapped Data (GAPES)
    5.6.6 Extensions of Filter Bank Approaches to Two–Dimensional Signals
  5.7 Exercises

6 Spatial Methods
  6.1 Introduction
  6.2 Array Model
    6.2.1 The Modulation–Transmission–Demodulation Process
    6.2.2 Derivation of the Model Equation
  6.3 Nonparametric Methods
    6.3.1 Beamforming
    6.3.2 Capon Method
  6.4 Parametric Methods
    6.4.1 Nonlinear Least Squares Method
    6.4.2 Yule–Walker Method
    6.4.3 Pisarenko and MUSIC Methods
    6.4.4 Min–Norm Method
    6.4.5 ESPRIT Method
  6.5 Complements
    6.5.1 On the Minimum Norm Constraint
    6.5.2 NLS Direction-of-Arrival Estimation for a Constant-Modulus Signal
    6.5.3 Capon Method: Further Insights and Derivations
    6.5.4 Capon Method for Uncertain Direction Vectors
    6.5.5 Capon Method with Noise Gain Constraint
    6.5.6 Spatial Amplitude and Phase Estimation (APES)
    6.5.7 The CLEAN Algorithm
    6.5.8 Unstructured and Persymmetric ML Estimates of the Covariance Matrix
  6.6 Exercises

APPENDICES

A Linear Algebra and Matrix Analysis Tools
  A.1 Introduction
  A.2 Range Space, Null Space, and Matrix Rank
  A.3 Eigenvalue Decomposition
    A.3.1 General Matrices
    A.3.2 Hermitian Matrices
  A.4 Singular Value Decomposition and Projection Operators
  A.5 Positive (Semi)Definite Matrices
  A.6 Matrices with Special Structure
  A.7 Matrix Inversion Lemmas
  A.8 Systems of Linear Equations
    A.8.1 Consistent Systems
    A.8.2 Inconsistent Systems
  A.9 Quadratic Minimization

B Cramér–Rao Bound Tools
  B.1 Introduction
  B.2 The CRB for General Distributions
  B.3 The CRB for Gaussian Distributions
  B.4 The CRB for Line Spectra
  B.5 The CRB for Rational Spectra
  B.6 The CRB for Spatial Spectra

C Model Order Selection Tools
  C.1 Introduction
  C.2 Maximum Likelihood Parameter Estimation
  C.3 Useful Mathematical Preliminaries and Outlook
    C.3.1 Maximum A Posteriori (MAP) Selection Rule
    C.3.2 Kullback-Leibler Information
    C.3.3 Outlook: Theoretical and Practical Perspectives
  C.4 Direct Kullback-Leibler (KL) Approach: No-Name Rule
  C.5 Cross-Validatory KL Approach: The AIC Rule
  C.6 Generalized Cross-Validatory KL Approach: The GIC Rule
  C.7 Bayesian Approach: The BIC Rule
  C.8 Summary and the Multimodel Approach
    C.8.1 Summary
    C.8.2 The Multimodel Approach

D Answers to Selected Exercises

References Grouped by Subject

Index


List of Exercises

CHAPTER 1
1.1   Scaling of the Frequency Axis
1.2   Time–Frequency Distributions
1.3   Two Useful Z–Transform Properties
1.4   A Simple ACS Example
1.5   Alternative Proof that |r(k)| ≤ r(0)
1.6   A Double Summation Formula
1.7   Is a Truncated Autocovariance Sequence (ACS) a Valid ACS?
1.8   When Is a Sequence an Autocovariance Sequence?
1.9   Spectral Density of the Sum of Two Correlated Signals
1.10  Least Squares Spectral Approximation
1.11  Linear Filtering and the Cross–Spectrum
C1.12 Computer Generation of Autocovariance Sequences
C1.13 DTFT Computations using Two–Sided Sequences
C1.14 Relationship between the PSD and the Eigenvalues of the ACS Matrix

CHAPTER 2
2.1   Covariance Estimation for Signals with Unknown Means
2.2   Covariance Estimation for Signals with Unknown Means (cont'd)
2.3   Unbiased ACS Estimates may lead to Negative Spectral Estimates
2.4   Variance of Estimated ACS
2.5   Another Proof of the Equality φ̂p(ω) = φ̂c(ω)
2.6   A Compact Expression for the Sample ACS
2.7   Yet Another Proof of the Equality φ̂p(ω) = φ̂c(ω)
2.8   Linear Transformation Interpretation of the DFT
2.9   For White Noise the Periodogram is an Unbiased PSD Estimator
2.10  Shrinking the Periodogram
2.11  Asymptotic Maximum Likelihood Estimation of φ(ω) from φ̂p(ω)
2.12  Plotting the Spectral Estimates in dB
2.13  Finite–Sample Variance/Covariance Analysis of the Periodogram
2.14  Data–Weighted ACS Estimate Interpretation of Bartlett and Welch Methods
2.15  Approximate Formula for Bandwidth Calculation
2.16  A Further Look at the Time–Bandwidth Product
2.17  Bias Considerations in Blackman–Tukey Window Design
2.18  A Property of the Bartlett Window
C2.19 Zero Padding Effects on Periodogram Estimators
C2.20 Resolution and Leakage Properties of the Periodogram
C2.21 Bias and Variance Properties of the Periodogram Spectral Estimate
C2.22 Refined Methods: Variance–Resolution Tradeoff
C2.23 Periodogram–Based Estimators applied to Measured Data

CHAPTER 3
3.1   The Minimum Phase Property
3.2   Generating the ACS from ARMA Parameters
3.3   Relationship between AR Modeling and Forward Linear Prediction
3.4   Relationship between AR Modeling and Backward Linear Prediction
3.5   Prediction Filters and Smoothing Filters
3.6   Relationship between Minimum Prediction Error and Spectral Flatness
3.7   Diagonalization of the Covariance Matrix
3.8   Stability of Yule–Walker AR Models
3.9   Three Equivalent Representations for AR Processes
3.10  An Alternative Proof of the Stability Property of Reflection Coefficients
3.11  Recurrence Properties of Reflection Coefficient Sequence for an MA Model
3.12  Asymptotic Variance of the ARMA Spectral Estimator
3.13  Filtering Interpretation of Numerator Estimators in ARMA Estimation
3.14  An Alternative Expression for ARMA Power Spectral Density
3.15  Padé Approximation
3.16  (Non)Uniqueness of Fully Parameterized ARMA Equations
C3.17 Comparison of AR, ARMA and Periodogram Methods for ARMA Signals
C3.18 AR and ARMA Estimators for Line Spectral Estimation
C3.19 Model Order Selection for AR and ARMA Processes
C3.20 AR and ARMA Estimators applied to Measured Data

CHAPTER 4
4.1   Speed Measurement by a Doppler Radar as a Frequency Determination Problem
4.2   ACS of Sinusoids with Random Amplitudes or Nonuniform Phases
4.3   A Nonergodic Sinusoidal Signal
4.4   AR Model–Based Frequency Estimation
4.5   An ARMA Model–Based Derivation of the Pisarenko Method
4.6   Frequency Estimation when Some Frequencies are Known
4.7   A Combined HOYW-ESPRIT Method for the MA Noise Case
4.8   Chebyshev Inequality and the Convergence of Sample Covariances
4.9   More about the Forward–Backward Approach
4.10  ESPRIT and Min–Norm Under the Same Umbrella
4.11  Yet Another Relationship between ESPRIT and Min–Norm
C4.12 Resolution Properties of Subspace Methods for Estimation of Line Spectra
C4.13 Model Order Selection for Sinusoidal Signals
C4.14 Line Spectral Methods applied to Measured Data

CHAPTER 5
5.1   Multiwindow Interpretation of Bartlett and Welch Methods
5.2   An Alternative Statistically Stable RFB Estimate
5.3   Another Derivation of the Capon FIR Filter
5.4   The Capon Filter is a Matched Filter
5.5   Computation of the Capon Spectrum
5.6   A Relationship between the Capon Method and MUSIC (Pseudo)Spectra
5.7   A Capon–like Implementation of MUSIC
5.8   Capon Estimate of the Parameters of a Single Sine Wave
5.9   An Alternative Derivation of the Relationship between the Capon and AR Methods
C5.10 Slepian Window Sequences
C5.11 Resolution of Refined Filter Bank Methods
C5.12 The Statistically Stable RFB Power Spectral Estimator
C5.13 The Capon Method

CHAPTER 6
6.1   Source Localization using a Sensor in Motion
6.2   Beamforming Resolution for Uniform Linear Arrays
6.3   Beamforming Resolution for Arbitrary Arrays
6.4   Beamforming Resolution for L–Shaped Arrays
6.5   Relationship between Beamwidth and Array Element Locations
6.6   Isotropic Arrays
6.7   Grating Lobes
6.8   Beamspace Processing
6.9   Beamspace Processing (cont'd)
6.10  Beamforming and MUSIC under the Same Umbrella
6.11  Subspace Fitting Interpretation of MUSIC
6.12  Subspace Fitting Interpretation of MUSIC (cont'd)
6.13  Subspace Fitting Interpretation of MUSIC (cont'd)
6.14  Modified MUSIC for Coherent Signals
C6.15 Comparison of Spatial Spectral Estimators
C6.16 Performance of Spatial Spectral Estimators for Coherent Source Signals
C6.17 Spatial Spectral Estimators applied to Measured Data


Preface

Spectral analysis considers the problem of determining the spectral content (i.e., the distribution of power over frequency) of a time series from a finite set of measurements, by means of either nonparametric or parametric techniques. The history of spectral analysis as an established discipline started more than a century ago with the work by Schuster on detecting cyclic behavior in time series. An interesting historical perspective on the developments in this field can be found in [Marple 1987]. This reference notes that the word "spectrum" was apparently introduced by Newton in relation to his studies of the decomposition of white light into a band of light colors when passed through a glass prism (as illustrated on the front cover). This word appears to be a variant of the Latin word "specter," which means "ghostly apparition"; the contemporary English word that has the same meaning as the original Latin word is "spectre." Despite these roots of the word "spectrum," we hope the student will be a "vivid presence" in the course that has just started!

This text, which is a revised and expanded version of Introduction to Spectral Analysis (Prentice Hall, 1997), is designed to be used with a first course in spectral analysis, as would typically be offered to senior undergraduate or first-year graduate students. The book should also be useful for self-study, as it is largely self-contained. The text is concise by design, so that it gets to the main points quickly; it should hence appeal to those who would like a fast appraisal of the classical and modern approaches to spectral analysis.

To keep the book as concise as possible without sacrificing rigor of presentation or skipping over essential aspects, we do not cover some advanced topics of spectral estimation in the main part of the text. However, several advanced topics are considered in the complements that appear at the end of each chapter, and also in the appendices. For an introductory course, the reader can skip the complements and refer to results in the appendices without having to understand their derivation in detail.

For the more advanced reader, we have included three appendices and a number of complement sections in each chapter. The appendices provide a summary of the main techniques and results in linear algebra, statistical accuracy bounds, and model order selection, respectively. The complements present a broad range of advanced topics in spectral analysis; many of these are current or recent research topics in the spectral analysis literature.

At the end of each chapter we have included both analytical exercises and computer problems. The analytical exercises are more or less ordered from least to most difficult; this ordering also approximately follows the chronological presentation of material in the chapters. The more difficult exercises explore advanced topics in spectral analysis and provide results that are not available in the main text. Answers to selected exercises are found in Appendix D. The computer problems are designed to illustrate the main points of the text and to provide the reader with first-hand information on the behavior and performance of the various spectral analysis techniques considered. The computer exercises also illustrate the relative performance of the methods and explore other topics, such as statistical accuracy and resolution properties, that are not analytically developed in the book. We have used Matlab(1) to minimize the programming chore and to encourage the reader to "play" with other examples. We provide a set of Matlab functions for data generation and spectral estimation that form a basis for a comprehensive set of spectral estimation tools; these functions are available at the text web site www.prenhall.com/stoica.

Supplementary material may also be obtained from the text web site. We have prepared a set of overhead transparencies which can be used as a teaching aid for a spectral analysis course. We believe that these transparencies are useful not only to course instructors but also to other readers, because they summarize the principal methods and results in the text. For readers who study the topic on their own, it should be a useful exercise to review the main points addressed in the transparencies after completing the reading of each chapter.

As we mentioned earlier, this text is a revised and expanded version of Introduction to Spectral Analysis (Prentice Hall, 1997). We have maintained the conciseness and accessibility of the main text; the revision has primarily focused on expanding the complements, appendices, and bibliography. Specifically, we have expanded Appendix B to include a detailed discussion of Cramér-Rao bounds for direction-of-arrival estimation. We have added Appendix C, which covers model order selection, and have added new computer exercises on order selection. We have more than doubled the number of complements from the previous book to 32, most of which present recent results in spectral analysis. We have also expanded the bibliography to include new topics along with recent results on more established topics.

The text is organized as follows. Chapter 1 introduces the spectral analysis problem, motivates the definition of power spectral density functions, and reviews some important properties of autocorrelation sequences and spectral density functions. Chapters 2 and 5 consider nonparametric spectral estimation. Chapter 2 presents classical techniques, including the periodogram, the correlogram, and their modified versions for reducing variance. We include an analysis of the bias and variance of these techniques, and relate them to one another. Chapter 5 considers the more recent filter bank version of nonparametric techniques, including both data-independent and data-dependent filter design techniques. Chapters 3 and 4 consider parametric techniques; Chapter 3 focuses on continuous spectral models (Autoregressive Moving Average (ARMA) models and their AR and MA special cases), while Chapter 4 focuses on discrete spectral models (sinusoids in noise). We have placed the filter bank methods in Chapter 5, after Chapters 3 and 4, mainly because the Capon estimator has interpretations both as an averaged AR spectral estimator and as a matched filter for line spectral models, and we need the background of Chapters 3 and 4 to develop these interpretations. The data-independent filter bank techniques in Sections 5.1–5.4 can equally well be covered directly following Chapter 2, if desired.

Chapter 6 considers the closely related problem of spatial spectral estimation in the context of array signal processing. Both nonparametric (beamforming) and parametric methods are considered, and tied into the temporal spectral estimation techniques considered in Chapters 2, 4, and 5.

The Bibliography contains both modern and classical references (ordered both alphabetically and by subject). We include many historical references as well, for those interested in tracing the early developments of spectral analysis. However, spectral analysis is a topic with contributions from many diverse fields, including electrical and mechanical engineering, astronomy, biomedical spectroscopy, geophysics, mathematical statistics, and econometrics, to name a few. As such, any attempt to accurately document the historical development of spectral analysis is doomed to failure. The bibliography reflects our own perspectives, biases, and limitations; while there is no doubt that the list is incomplete, we hope that it gives the reader an appreciation of the breadth and diversity of the spectral analysis field.

The background needed for this text includes a basic knowledge of linear algebra, discrete-time linear systems, and introductory discrete-time stochastic processes (or time series). A basic understanding of estimation theory is helpful, though not required. Appendix A develops most of the needed background results on matrices and linear algebra, Appendix B gives a tutorial introduction to the Cramér-Rao bound, and Appendix C develops the theory of model order selection. We have included concise definitions and descriptions of the required concepts and results where needed. Thus, we have tried to make the text as self-contained as possible.

We are indebted to Jian Li and Lee Potter for adopting a former version of the text in their spectral estimation classes, for their valuable feedback, and for contributing to this book in several other ways. We would like to thank Torsten Söderström for providing the initial stimulus for the preparation of the lecture notes that led to the book, and Hung-Chih Chiang, Peter Händel, Ari Kangas, Erlendur Karlsson, and Lee Swindlehurst for careful proofreading and comments, and for many ideas on and early drafts of the computer problems. We are grateful to Mats Bengtsson, Tryphon Georgiou, K.V.S. Hari, Andreas Jakobsson, Erchin Serpedin, and Andreas Spanias for comments and suggestions that helped us eliminate some oversights and typographical errors from the previous edition of the book. We also wish to thank Wallace Anderson, Alfred Hero, Ralph Hippenstiel, Louis Scharf, and Douglas Williams, who reviewed a former version of the book and provided us with numerous useful comments and suggestions. It was a pleasure to work with the excellent staff at Prentice Hall, and we are particularly appreciative of Tom Robbins for his professional expertise.

Many of the topics described in this book are outgrowths of our research programs in statistical signal and array processing, and we wish to thank the sponsors of this research: the Swedish Foundation for Strategic Research, the Swedish Research Council, the Swedish Institute, the U.S. Army Research Laboratory, the U.S. Air Force Research Laboratory, and the U.S. Defense Advanced Research Projects Agency.

Finally, we are indebted to Anca and Liz for their continuing support and understanding throughout this project.

Petre Stoica, Uppsala University
Randy Moses, The Ohio State University

(1) Matlab is a registered trademark of The MathWorks, Inc.


Notational Conventions

R               the set of real numbers
C               the set of complex numbers
N(A)            the null space of the matrix A
R(A)            the range space of the matrix A
Dn              the nth definition in Appendix A or B
Rn              the nth result in Appendix A
||x||           the Euclidean norm of a vector x
∗               convolution operator
(·)T            transpose of a vector or matrix
(·)c            conjugate of a vector or matrix
(·)∗            conjugate transpose of a vector or matrix; also used for scalars in lieu of (·)c
Aij             the (i, j)th element of the matrix A
ai              the ith element of the vector a
x̂               an estimate of the quantity x
A > 0 (A ≥ 0)   A is positive definite (positive semidefinite)
arg max f(x)    the value of x that maximizes f(x)
arg min f(x)    the value of x that minimizes f(x)

