A Gradient Descent Implementation Of Adaptive Pulse Compression

Patrick M. McCormick¹, Shannon D. Blunt¹, and Thomas Higgins²
¹Radar Systems Lab, University of Kansas, Lawrence, KS
²Radar Division, Naval Research Laboratory, Washington, DC

Abstract—Gradient descent is an iterative method of determining the minima or maxima of a function. The algorithm can be used to solve a linear system of equations when the computational cost of a matrix inverse is too expensive for an application. Here, gradient descent is applied to Adaptive Pulse Compression (APC), yielding the GraD-APC algorithm. Specifically, a unit-gain constrained version of GraD-APC with optimal step size is derived for use with frequency modulated (FM) waveforms, particularly for cases in which the waveform time-bandwidth product is large enough to prohibit practical use of the original matrix-inverse-based APC. The range-profile estimation of GraD-APC is compared to that of fully adaptive APC using both simulated and experimentally measured data.

Keywords—adaptive filtering, gradient descent

This work was supported by the Radar Division of the Naval Research Laboratory.

I. INTRODUCTION

The Adaptive Pulse Compression (APC) algorithm [1,2] was developed to mitigate the sidelobe interference that arises during pulse compression by generating an adaptive filter for each range cell. Since the inception of APC, it has branched into many variants, from joint-domain adaptive filtering [3,4] to applicability for FM waveforms [5,6]. APC employs reiterative minimum mean-square error (RMMSE) estimation to obtain a range-adaptive filter for each range cell and requires at least one matrix inversion (of a structured covariance matrix) for each filter. The need for a matrix inverse is problematic when considering waveforms with high time-bandwidth product and/or the need for real-time implementation.

An early attempt to avoid full matrix inversion relied on the matrix inversion lemma [2], which performed reliably in low dynamic range scenarios.
However, modifications to the algorithm were required to address high dynamic range scenarios, because numerical imprecision in the matrix inversion lemma leads to error propagation in the APC filter update as a function of range. Dimensionality reduction techniques were also investigated to trade off adaptive degrees of freedom for lower computational cost [7]. These reduction techniques split the full-dimension covariance matrix into multiple lower-dimension covariance matrices, thus reducing the overall computational cost. Interestingly, in [5] it was shown that this dimensionality reduction is also useful to compensate for the "oversampling" (relative to the waveform 3 dB bandwidth) that is needed to represent FM waveforms with sufficient fidelity for application of APC. Here, the FM-amenable version of APC from [5] is formulated using a gradient descent implementation (yielding what is denoted as GraD-APC) as a means to avoid the matrix inverse altogether with minimal degradation in performance.

Descent methods are in most cases non-optimal (with less than infinite iterations), yet they have the desirable qualities of reduced computational complexity, which is usually O(M) per iteration for filter length M, and better numerical stability [8]. Because of the reduction in computational cost, GraD-APC allows for the application of APC and its variants to problems with high dimensionality (e.g. high time-bandwidth product waveforms) and a lower computational threshold to achieve real-time operation.

An analog to APC and GraD-APC is the application of gradient descent to the deterministic minimum mean-square error (MMSE) beamformer or, if the problem is constrained, the linearly-constrained minimum variance (LCMV) beamformer [8].
The difference between these two sets of algorithms is that for APC/GraD-APC the desired signal and the covariance matrix are unknown (albeit structured), and therefore estimates of the covariance matrix and desired signal are determined using an alternating bootstrapping approach based on the initial matched filter (or mismatched filter) response.

With the incorporation of gradient descent, GraD-APC has two distinct iterative elements: an "inner loop" to estimate the filter (via gradient descent) and the original "outer loop" to estimate the range cell complex amplitude (the APC structure). This formulation could be directly extended to also incorporate joint adaptivity in the spatial [3], fast-time Doppler [9], slow-time Doppler [4], and/or polarization [6] domains.

Section II summarizes the signal model and FM-based APC filter derivation from [5]. Section III then introduces the gradient descent formulation and how it is applied to APC to form GraD-APC. In Section IV, the performance of GraD-APC is demonstrated in both a simulated environment and using free-space measurements made with an LFM waveform.

II. ADAPTIVE PULSE COMPRESSION

The received signal captured at discretized delay ℓ can be modeled as

    y(ℓ) = x^T(ℓ) s + u(ℓ),    (1)

where s is a length-M discretized version of the transmitted waveform s(t), u(ℓ) is a sample of additive noise present at the receiver, x(ℓ) = [x(ℓ) x(ℓ−1) ⋯ x(ℓ−M+1)]^T collects M contiguous complex samples of the range profile ground truth (not known), and (·)^T is the transpose operation. The range profile vector x(ℓ) contains the complex scaling that includes transmitted power, antenna gain, spherical spreading losses, and radar cross-section of the illuminated scatterers.

The signal-to-noise ratio (SNR) maximizing matched filter response is denoted as

    x̂_MF(ℓ) = w_MF^H y(ℓ),    (2)

where y(ℓ) = [y(ℓ) y(ℓ+1) ⋯ y(ℓ+M−1)]^T is the collection of M contiguous discrete received samples that all contain the scatterer response x(ℓ) and (·)^H is the Hermitian operation. Here it is assumed that the matched filter, which is delay independent, is normalized as

    w_MF = s / (s^H s).    (3)

Now define the delay-dependent filtering as

    x̂(ℓ) = w^H(ℓ) y(ℓ),    (4)

where the delay-dependent filter is found by minimizing the generalized APC cost function [5]

    J(ℓ; w(ℓ)) = \sum_{k=0}^{K−1} E{ |x(ℓ) − w_k^H(ℓ) y_k(ℓ)|² } + λ [ Re{ \sum_{k=0}^{K−1} w_k^H(ℓ) s_k } − 1 ]    (5)

for use with FM waveforms, where K is the degree of "oversampling" on receive with respect to the 3-dB bandwidth of the transmitted waveform, which is needed to represent the FM waveform with sufficient fidelity. Thus w_k(ℓ), y_k(ℓ), and s_k are the kth length-M/K polyphase-decomposed versions of the APC filter, receive vector, and transmitted waveform, respectively. The decomposition is necessary when applying the APC formulation for FM waveforms to avoid a noise enhancement effect that otherwise occurs in the inversion of the full M × M covariance matrix. Here it is assumed the data is resampled such that K is an integer. The bottom term in (5) is a unit-gain constraint on the overall (non-decomposed) APC filter, where λ is the Lagrange multiplier. Minimizing (5) with respect to w_k(ℓ) yields the APC filter [5]

    w_k(ℓ) = [C_k(ℓ) + R_k]^{−1} s_k / ( \sum_{i=0}^{K−1} s_i^H [C_i(ℓ) + R_i]^{−1} s_i ),    (6)

where R_k is the kth decomposed (M/K) × (M/K) noise covariance matrix and C_k(ℓ) is the kth decomposed (M/K) × (M/K) structured covariance matrix.

Fig. 1 shows a flowchart of the APC process. The blue box represents the range profile initialization using the matched filter, the green boxes represent polyphase decomposition or recombining, and the red boxes represent steps in the APC procedure. The APC loop is enclosed in the dashed box and flows according to the bold arrows. The box labeled w_k(ℓ) is found using (6). The flowchart in Fig. 1 is for a single range cell, and it is assumed that estimation updates for all the range cells of interest are performed before the next iteration of filter updates is performed. It typically takes 2-5 iterations for this instantiation of APC to converge, depending on the density of significant scatterers in the illuminated environment.

Fig. 1. Flowchart of APC operation

III. GRADIENT DESCENT - ADAPTIVE PULSE COMPRESSION

Given the gradient of cost function J as ∇_{w*} J, the gradient descent (GD) structure is [10]

    w_n = w_{n−1} − μ_n (∇_{w*} J(w)),    (7)

where n is the GD iteration index and μ_n is a non-negative step size that may be iteration dependent.

The APC cost function for use with FM waveforms in (5) employed a polyphase decomposition to avoid noise enhancement effects that arose during matrix inversion (due to the need to "over-sample" the FM waveform for sufficient fidelity). Because the GD structure in (7) avoids the need for matrix inversion, this polyphase decomposition is no longer required. As such, the gain-constrained (and non-decomposed) APC cost function can be expressed as

    J(ℓ; w(ℓ)) = E{ |x(ℓ) − w^H(ℓ) y(ℓ)|² } + λ [ Re{ w^H(ℓ) s } − 1 ].    (8)

Taking the gradient of (8) and assuming each range cell is uncorrelated with neighboring range cells and the noise yields

    ∇_{w*} J(ℓ; w(ℓ)) = [C(ℓ) + R] w(ℓ) − x*(ℓ) s + λ s,    (9)

where R is the M × M full-dimension noise covariance matrix (for uncorrelated noise R = σ_u² I_M, where σ_u² is the noise power and I_M is an M × M identity matrix).
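Before specializing to the constrained update, the basic premise — that the descent structure of (7) can stand in for an explicit matrix inverse — can be sanity-checked on a generic positive-definite system. The sketch below is a toy example, not code from the paper: all dimensions and matrix values are hypothetical, the system is real-valued for simplicity, and an exact line-search step size is used.

```python
import numpy as np

# Toy demonstration: solve (C + R) w = s with the GD structure of (7)
# instead of an explicit inverse. C_plus_R is a hypothetical symmetric
# positive-definite stand-in for the covariance sum.
rng = np.random.default_rng(0)
M = 32
A = rng.standard_normal((M, M))
C_plus_R = A @ A.T + M * np.eye(M)   # symmetric positive definite
s = rng.standard_normal(M)

w = np.zeros(M)                      # initial filter guess
for _ in range(200):
    grad = C_plus_R @ w - s          # gradient of 0.5 w^T (C+R) w - w^T s
    mu = (grad @ grad) / (grad @ C_plus_R @ grad)  # exact line-search step size
    w = w - mu * grad                # GD update, cf. (7)

w_direct = np.linalg.solve(C_plus_R, s)
print(np.allclose(w, w_direct, atol=1e-8))   # True: GD matches the direct solve
```

Each iteration costs one matrix-vector product, O(M²), rather than the O(M³) of a direct solve, which is the trade the paper exploits.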

The M × M full-dimension structured covariance matrix C(ℓ) is defined as

    C(ℓ) = \sum_{τ=−M+1}^{M−1} ρ(ℓ + τ) s_τ s_τ^H,    (10)

where ρ(ℓ) = |x̂(ℓ)|² is the power of the current estimate of range cell x(ℓ) and

    s_τ = [s_τ s_{τ+1} ⋯ s_{M−1} 0_{1×τ}]^T for τ ≥ 0,
    s_τ = [0_{1×(−τ)} s_0 s_1 ⋯ s_{M−1+τ}]^T for τ < 0,    (11)

are delay-shifted versions of the discretized waveform.

As in [5], the K − 1 range cells on either side of the current range index are zeroed in the construction of (10) as

    ρ(ℓ ± k) = 0    (12)

for k = 0, 1, …, K−1. This zeroing has the effect of widening the beam in range to that of the matched filter resolution, which serves to minimize straddling loss [5] and focuses the adaptive degrees of freedom on sidelobe suppression instead of narrowing the mainbeam for super-resolution, which yields significant SNR loss and greatly increases convergence time.

Inserting (9) into the GD structure of (7) produces

    w_n(ℓ) = w_{n−1}(ℓ) − μ_n(ℓ) { [C(ℓ) + R] w_{n−1}(ℓ) − x̂*(ℓ) s + λ s }.    (13)

The Lagrange multiplier can be determined by solving for λ in

    w_n^H(ℓ) s = 1 = { w_{n−1}(ℓ) − μ_n(ℓ) ( [C(ℓ) + R] w_{n−1}(ℓ) − x̂*(ℓ) s + λ s ) }^H s,    (14)

resulting in

    λ = [μ_n(ℓ) s^H s]^{−1} [ s^H w_{n−1}(ℓ) − 1 ] − (s^H s)^{−1} s^H { [C(ℓ) + R] w_{n−1}(ℓ) − x̂*(ℓ) s }.    (15)

Subsequently inserting (15) into (13) and simplifying yields

    w_n(ℓ) = P_s [ I_M − μ_n(ℓ) (C(ℓ) + R) ] w_{n−1}(ℓ) + w_q,    (16)

where P_s is the orthogonal projection matrix

    P_s = I_M − s (s^H s)^{−1} s^H    (17)

and w_q is the quiescent filter

    w_q = s / (s^H s)    (18)

that is identical to the normalized matched filter of (3).

The optimal step size μ_{n,opt}(ℓ) at the nth iteration can be determined by solving

    μ_{n,opt}(ℓ) = argmin_{μ_n(ℓ)} J(ℓ; w_n(ℓ), μ_n(ℓ)),    (19)

where w_n(ℓ) is defined in (16). Plugging (16) into (8) and minimizing with respect to μ_n(ℓ) yields the optimal step size

    μ_{n,opt}(ℓ) = Re{ g_n^H(ℓ) [C(ℓ) + R] ( P_s w_{n−1}(ℓ) + w_q ) } / ( g_n^H(ℓ) [C(ℓ) + R] g_n(ℓ) ),    (20)

where

    g_n(ℓ) = P_s [C(ℓ) + R] w_{n−1}(ℓ).    (21)

While calculation of the optimal step size at each iteration may be computationally prohibitive, the formulation in (20) and (21) may provide a good guide to establish practical rules for selecting the step size in practice (such as with LMS [10]).

We have now defined the two iterative processes for GraD-APC: the APC-based "outer loop" to converge to an estimate of x(ℓ), and the GD-based "inner loop" to converge to an estimate of the APC filter for a given range cell. Table I provides the details of the GraD-APC algorithm. Denote N as the total number of inner loop (GD) iterations. The number of outer loop (APC) iterations depends on the desired sidelobe suppression. A convergence criterion for the inner loop could also be implemented in lieu of a fixed number of iterations.

TABLE I: IMPLEMENTATION OF GRAD-APC ALGORITHM
1. Collect M range samples corresponding to range index ℓ into the data vector y(ℓ).
2. Obtain the initial range profile estimate x̂(ℓ) = x̂_MF(ℓ) via (2) and (3) and initialize the APC filter to the quiescent filter via (18): w_0(ℓ) = w_q.
3. Compute the power estimates ρ(ℓ) = |x̂(ℓ)|² and use them to calculate the structured covariance matrix C(ℓ) from (10) while implementing the zero-filling constraint for ρ(ℓ) from (12).
4. Initialize the inner loop iteration index to n = 1.
5. Calculate the optimal step size μ_{n,opt}(ℓ) via (20) and (21).
6. Calculate w_n(ℓ) via (16)-(18).
7. If n < N, go to step 5 and increment the inner loop index n ← n + 1. Otherwise, go to step 8.
8. Apply each GraD-APC filter to the associated data vector to obtain the updated range profile estimate x̂(ℓ) = w_N^H(ℓ) y(ℓ).
9. Initialize the filters for the next outer loop iteration as w_0(ℓ) = w_N(ℓ) using the current final filter estimates.
10. Go to step 3 unless convergence or the desired suppression is achieved.

The outer loop comprises steps 3 through 8 in Table I, while the inner loop is steps 5 through 7. Note that the initialization of the inner loop filter is dependent on the outer loop iteration. For the first iteration of the outer loop, the initial filter of the inner loop w_0(ℓ) is set to the quiescent or matched filter. For the remaining outer loop iterations, w_0(ℓ) is set to the final filter of the previous inner loop (step 9).
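The inner loop of Table I (steps 4-7) can be sketched for a single range cell as follows. This is an illustrative sketch, not the paper's code: the covariance C(ℓ) + R is replaced by a random Hermitian positive-definite stand-in rather than being built from range-profile power estimates via (10)-(12), and all sizes and values are hypothetical.

```python
import numpy as np

# Sketch of the GraD-APC inner (GD) loop for one range cell, using the
# constrained update (16) with projection (17), quiescent filter (18), and
# optimal step size (20)-(21). C_plus_R is a hypothetical stand-in; a full
# implementation would form C(l) from the current power estimates.
rng = np.random.default_rng(1)
M, N = 24, 10
s = np.exp(1j * np.pi * np.arange(M) ** 2 / M)            # LFM-like length-M waveform
B = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
C_plus_R = B @ B.conj().T + M * np.eye(M)                 # stand-in for C(l) + R

P_s = np.eye(M) - np.outer(s, s.conj()) / (s.conj() @ s)  # (17): projection orthogonal to s
w_q = s / (s.conj() @ s)                                  # (18): quiescent (matched) filter

w = w_q.copy()                                            # step 2: w_0 = w_q
for _ in range(N):                                        # steps 4-7
    g = P_s @ (C_plus_R @ w)                              # (21): projected gradient direction
    mu = np.real(g.conj() @ C_plus_R @ (P_s @ w + w_q)) / np.real(
        g.conj() @ C_plus_R @ g)                          # (20): optimal step size
    w = P_s @ (w - mu * (C_plus_R @ w)) + w_q             # (16): constrained GD update

print(np.isclose(w.conj() @ s, 1.0))                      # unit-gain constraint w^H s = 1 holds
```

Note that the form of (16) enforces w_n^H(ℓ) s = 1 at every iteration by construction, since P_s annihilates s and w_q^H s = 1; no explicit constraint step is needed inside the loop.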
Per outer loop iteration per range cell, APC from (6) requires O(M³/K³) complex operations, while GraD-APC requires O(M²N) and is readily amenable to further reduction via parallel processing.

The nested loop structure of GraD-APC presents an interesting trade-off between filter convergence and

convergence of the range-profile estimate (or covariance matrix). It is advantageous in terms of time not to fully converge to a filter solution before updating the range profile estimate, but instead to find a "good enough" solution. This compromise permits a more rapid update of the gradient descent direction. These choices give flexibility in the desired amount of adaptivity/suppression. A comprehensive convergence analysis of the nested loop structure of GraD-APC is needed to fully understand the "optimal" choice of parameters.

IV. SIMULATION AND EXPERIMENTAL RESULTS

To judge the effectiveness of GraD-APC in estimating the range profile, it is compared to the range profiles generated by the matched filter and by APC from (6). First, a high dynamic range profile containing two target responses is simulated using a linear frequency modulated waveform with a time-bandwidth product of 100. Second, measured data captured in an open-air environment using an LFM with a time-bandwidth product of 64 is examined.

A. Simulation Results

Consider a range profile containing two point scatterers located at range indices 150 and 160. The magnitudes of these targets are 70 dB and 20 dB, respectively, relative to the noise power, which is normalized to 0 dB. The phases of the two scatterers are randomly chosen from a uniform distribution on U[0, 2π]. The illuminating waveform is an LFM with a time-bandwidth product of 100. The sampling frequency is chosen to be K = 3 times the 3-dB bandwidth of the waveform. Therefore K − 1 = 2 range cells are zeroed on either side of the "current" range index for both GraD-APC and APC.

To start, the number of outer loop iterations is chosen to be 5 for GraD-APC and 2 for APC, as it takes more iterations for GraD-APC to converge. GraD-APC is tested with two different numbers of inner loop iterations: N = 5 and N = 10.

Figure 2 shows the simulation results for the two-target scenario. The yellow trace denoting GraD-APC1 is the scenario with N = 5 inner loop iterations and the purple trace denoting GraD-APC2 has N = 10 inner loop iterations. All APC variants exposed the 20 dB target after processing. The best overall sidelobe suppression is provided by the APC implementation of (6). The additional 5 inner loop iterations in GraD-APC2 reduce the range sidelobes by an additional 5 dB compared to GraD-APC1. Compared to APC, both GraD-APC variants have higher close-in sidelobes near the mainlobe of the larger target. These sidelobes are due to the GraD-APC solutions not fully converging within the given outer and inner loop iterations. Although the sidelobes for GraD-APC are not suppressed as much as with APC, the GraD-APC suppression is still close to 30 dB better than the matched filter response.

Fig. 2. Two-target range profile estimation using matched filter (blue), APC (red), GraD-APC (N = 5) (yellow), and GraD-APC (N = 10) (purple)

B. Open-Air Experimental Results

The dataset used here is the same as that used in [5], which was captured in part to test the modifications that make APC amenable to FM waveforms. Figure 3 shows the field of view for the experiment, obtained from Google Maps. The main scatterers in the scene are indicated by the orange triangles and the radar location is marked by the orange circle. Figure 4 shows the quasi-monostatic setup of two quad-ridge antennas for simultaneous transmit and receive. An LFM with an approximate time-bandwidth product of 64 occupies 80 MHz of bandwidth. The center frequency was 2.3 GHz and the transmit power was approximately 24 dBm. The baseband data were resampled to a sampling frequency K = 3 times the 3-dB bandwidth of the LFM waveform.

Fig. 3. Field of view for measured results

Figure 5 shows the range profile estimates using a matched filter (blue), APC (red), GraD-APC with N = 5 inner loop iterations (yellow), and GraD-APC with N = 10 inner loop iterations (purple). The number of outer loop iterations was the same as for the simulated results: 2 for APC and 5 for GraD-APC.
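The two-target simulation setup of Sec. IV.A above can be roughly re-created in a few lines. The sketch below forms only the matched-filter baseline of (2)-(3), against which the adaptive methods are compared; the range-profile length, chirp discretization, and random seed are assumptions not stated in the text.

```python
import numpy as np

# Hypothetical re-creation of the Sec. IV.A scenario: an LFM with
# time-bandwidth product 100 sampled at K = 3 times the 3-dB bandwidth,
# and scatterers at range indices 150 and 160 that are 70 dB and 20 dB
# above the unit noise power. Only the matched-filter profile is formed.
rng = np.random.default_rng(2)
TB, K = 100, 3
M = TB * K                                   # waveform samples at K-times oversampling
n = np.arange(M)
s = np.exp(1j * np.pi * n**2 / (TB * K**2))  # discretized LFM chirp (0 to B)

L = 400                                      # assumed range-profile length
x = np.zeros(L, dtype=complex)
x[150] = 10 ** (70 / 20) * np.exp(1j * rng.uniform(0, 2 * np.pi))
x[160] = 10 ** (20 / 20) * np.exp(1j * rng.uniform(0, 2 * np.pi))

noise = (rng.standard_normal(L + M - 1)
         + 1j * rng.standard_normal(L + M - 1)) / np.sqrt(2)   # unit-power noise
y = np.convolve(x, s) + noise                # received samples per model (1)

w_mf = s / (s.conj() @ s)                    # (3): normalized matched filter
x_mf = np.correlate(y, w_mf, mode="valid")   # (2): x_hat_MF(l) = w_MF^H y(l)
print(int(np.argmax(np.abs(x_mf))))          # the 70 dB target peaks at index 150
```

In this matched-filter profile the 20 dB target at index 160 sits under the range sidelobes of the 70 dB target, which is precisely the situation APC and GraD-APC are designed to resolve.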

Fig. 4. Test setup for experimental measurements

Fig. 5. Experimental range profile estimation using matched filter (blue), APC (red), GraD-APC (N = 5) (yellow), and GraD-APC (N = 10) (purple)

The difference between the two GraD-APC range profiles is negligible, with the maximum difference between them being 1 dB. APC again outperforms the two GraD-APC implementations, which is to be expected considering the simulation results. However, all three of these APC-based schemes outperform the matched filter, with sidelobe suppression improvement ranging from 15 to 20 dB for APC and 10 to 15 dB for GraD-APC.

V. CONCLUSIONS

A new method for implementing adaptive pulse compression (APC) using gradient descent has been developed (denoted GraD-APC). The resulting absence of a matrix inverse opens the door for application of GraD-APC to higher dimensionality problems and may enable real-time operation. Further investigation is needed as to the "optimal" number of iterations for both the inner and outer loops. Simulation and experimental results show that GraD-APC is a viable practical alternative to APC for suppression of range sidelobes. Ongoing work is exploring methods to reduce the eigenspread of the structured covariance matrix.

REFERENCES
[1] S.D. Blunt and K. Gerlach, "A novel pulse compression scheme based on minimum mean-square error reiteration," IEEE Intl. Radar Conf., Adelaide, Australia, Sept. 2003.
[2] S.D. Blunt and K. Gerlach, "Adaptive pulse compression via MMSE estimation," IEEE Trans. Aerospace & Electronic Systems, vol. 42, no. 2, pp. 572-584, Apr. 2006.
[3] P. McCormick, T. Higgins, S.D. Blunt, and M. Rangaswamy, "Adaptive receive processing of spatially modulated physical radar emissions," IEEE Journal of Selected Topics in Signal Processing, vol. 9, no. 8, pp. 1415-1426, Dec. 2015.
[4] T. Higgins, S. Blunt, and A. Shackelford, "Time-range adaptive processing for pulse agile radar," Intl. Waveform Diversity and Design Conf., Niagara Falls, Canada, Aug. 2010.
[5] D. Henke, P. McCormick, S.D. Blunt, and T. Higgins, "Practical aspects of optimal mismatch filtering and adaptive pulse compression for FM waveforms," IEEE Intl. Radar Conf., Washington, DC, May 2015.
[6] P. McCormick, J. Jakabosky, S.D. Blunt, C. Allen, and B. Himed, "Joint polarization/waveform design and adaptive receive processing," IEEE Intl. Radar Conf., Washington, DC, May 2015.
[7] S.D. Blunt, T. Higgins, and K. Gerlach, "Dimensionality reduction techniques for efficient adaptive pulse compression," IEEE Trans. Aerospace and Electronic Systems, vol. 46, no. 1, pp. 349-362, Jan. 2010.
[8] H.L. Van Trees, Optimum Array Processing, John Wiley & Sons, 2002, Chap. 2.
[9] S.D. Blunt, A. Shackelford, K. Gerlach, and K.J. Smith, "Doppler compensation & single pulse imaging via adaptive pulse compression," IEEE Trans. Aerospace & Electronic Systems, vol. 45, no. 2, pp. 647-659, Apr. 2009.
[10] S. Haykin, Adaptive Filter Theory, 5th ed., Prentice Hall, 2013.

