Journal of Global Positioning Systems (2003), Vol. 2, No. 1: 35-41

A Diagonalisation Algorithm and Its Application in Ambiguity Search

Guochang Xu
GeoForschungsZentrum Potsdam (GFZ), Dept. 1, Telegrafenberg, 14473 Potsdam, Germany

Received: 5 March 2002 / Accepted: 22 July 2002

Abstract. A diagonalisation algorithm for the least squares normal equation is proposed in this paper. The equivalent observation equations related to the diagonalised normal equations are also derived in detail. For the equivalent observation equations and their normal equations, the related equivalent ambiguity search criteria are outlined. The theoretical application of the proposed algorithm to ambiguity search is briefly summarised. Using this algorithm, the ambiguity search turns out to be a search in a diagonal space, so the search can be carried out very quickly. Numerical examples illustrating the diagonalisation of the normal equation and of the observation equation are also given.

Key words: GPS, Ambiguity Search, Diagonalisation Algorithm, Adjustment, Equivalent Equations

1 Introduction

It is well known that ambiguity resolution is a key problem that has to be solved in GPS static and kinematic precise positioning. Several well-derived ambiguity fixing and searching algorithms have been published during the last decades. A search process is usually needed in most methods. Generally speaking, the search is a computationally intensive and time consuming process. Therefore optimised algorithms have been developed to reduce the search area and to accelerate the search process (e.g. Euler and Landau, 1992; Han and Rizos, 1997; Teunissen, 1995). Starting from the well-known least squares ambiguity search criterion for searching in the ambiguity domain, most modifications work directly on that criterion, e.g. via decomposition (fast method, e.g. Euler and Landau, 1992) or de-correlation (LAMBDA method, e.g. Teunissen, 1995). Alternatively, we first transform the problem into equivalently diagonalised normal equations (and observation equations), and then use the related equivalent ambiguity search criteria for the equivalent problems. In this way the search turns out to be a search in a diagonal space, so the time spent on the search is negligible. This method was originally derived for the so-called general criterion (Xu, 2003) used in KSGSoft (Kinematic/Static GPS Software) (Xu et al., 1998); however, it may also be used directly with the least squares ambiguity search criterion.

2 A Diagonalisation Algorithm of the Least Squares Normal Equation

In least squares adjustment the unknowns can be divided into two groups and then solved in a block-wise manner. In practice, sometimes only one group of unknowns is of interest; the other group, called nuisance parameters, is better eliminated, for example because of its large size. The nuisance parameters can be eliminated block-wise to obtain an equivalently eliminated normal equation system of the unknowns of interest. Applying the elimination process twice, once for each of the two groups of unknowns, the normal equation can be diagonalised.
The algorithm can be outlined as follows. The linearised observation equation system can be represented by (Cui et al., 1982; Gotthardt, 1978; Zhou et al., 1997)

V = (A_1 \;\; A_2) \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} - L, \qquad P,    (1)

where L is the observation vector of dimension m, A_1 and A_2 are coefficient matrices of dimension m x (n-r) and m x r, X_1 and X_2 are unknown vectors of dimension n-r and r, V is the residual vector of dimension m, n and m are the numbers of total unknowns and observations respectively, and P is a symmetric and positive definite weight matrix of dimension m x m. For an uncorrelated observation vector L, P is a diagonal matrix. The least squares normal equation of (1) can then be formed as

(A_1 \;\; A_2)^T P (A_1 \;\; A_2) \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} = (A_1 \;\; A_2)^T P L.    (2)

Denoting

M_{11} = A_1^T P A_1, \quad M_{12} = A_1^T P A_2, \quad M_{21} = A_2^T P A_1, \quad M_{22} = A_2^T P A_2, \quad W_1 = A_1^T P L, \quad W_2 = A_2^T P L,    (3)

the normal equation system (2) is written as

\begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix} \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} = \begin{pmatrix} W_1 \\ W_2 \end{pmatrix}.    (4)

From the first equation of (4) one has

X_1 = Inv(M_{11}) (W_1 - M_{12} X_2).

Setting X_1 into the second equation of (4), one gets an equivalently eliminated normal equation of X_2,

M_2 X_2 = B_2,    (5)

where

M_2 = M_{22} - M_{21} Inv(M_{11}) M_{12}, \quad B_2 = W_2 - M_{21} Inv(M_{11}) W_1.    (6)

Similarly, from the second equation of (4) one has

X_2 = Inv(M_{22}) (W_2 - M_{21} X_1).

Setting X_2 into the first equation of (4), one gets an equivalently eliminated normal equation of X_1,

M_1 X_1 = B_1,    (7)

where

M_1 = M_{11} - M_{12} Inv(M_{22}) M_{21}, \quad B_1 = W_1 - M_{12} Inv(M_{22}) W_2.    (8)

Combining (5) and (7), one has

\begin{pmatrix} M_1 & 0 \\ 0 & M_2 \end{pmatrix} \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} = \begin{pmatrix} B_1 \\ B_2 \end{pmatrix},    (9)

where (e.g. Cui et al., 1982; Gotthardt, 1978)

Q = Inv(M) = \begin{pmatrix} Q_{11} & Q_{12} \\ Q_{21} & Q_{22} \end{pmatrix}, \quad Q_{11} = Inv(M_1), \quad Q_{22} = Inv(M_2),    (10)

and M denotes the normal matrix of (4).

It is obvious that (4) and (9) are two equivalent normal equations; the solutions of the two equations are identical. Furthermore, Eq. (9) is a diagonalised normal equation with respect to X_1 and X_2. We therefore call the process of computing (6) and (8) to form (5) and (7) (i.e. (9)) a diagonalisation process of the normal equation (4).

The diagonalisation process can be repeated r-1 times on the normal equation (5) (or on the second equation of (9)), so that Equation (5) can be fully diagonalised and represented by

M_2' X_2 = B_2',    (11)

where M_2' is a diagonal matrix, r is the dimension of X_2 and B_2' is a vector. Then (9) takes the form

\begin{pmatrix} M_1 & 0 \\ 0 & M_2' \end{pmatrix} \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} = \begin{pmatrix} B_1 \\ B_2' \end{pmatrix}.    (12)

Of course one may also diagonalise the X_1 related normal equation (7) if necessary. It should be emphasised that the normal equation system (12) is equivalent to the normal equation systems (9) and (4). Through the diagonalisation process, (12) is a diagonal equation system of X_1 and X_2, and the sub-equation of X_2 is a fully diagonal sub-equation. A full diagonalisation of the normal equation is indeed nearly the same as applying a Gauss-Jordan algorithm to eliminate the off-diagonal elements of the normal matrix and to solve the equation. The property of (12) could be very useful for many adjustment applications.
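In matrix form, the elimination steps (5)-(9) amount to only a handful of operations. The following Python/NumPy sketch (the function name and the random test data are illustrative, not taken from the paper or from KSGSoft) forms the blocks of (3), performs the block elimination and checks that the solutions of the two diagonal sub-systems coincide with the solution of the full normal equation (4):

import numpy as np

def diagonalise_normal_equation(M11, M12, M21, M22, W1, W2):
    # Block elimination of Eqs. (5)-(9): returns M1, B1, M2, B2 with
    # M1 X1 = B1 and M2 X2 = B2.  Inverses are written explicitly for
    # clarity, not for numerical efficiency.
    M11_inv = np.linalg.inv(M11)
    M22_inv = np.linalg.inv(M22)
    M2 = M22 - M21 @ M11_inv @ M12          # Eq. (6)
    B2 = W2 - M21 @ M11_inv @ W1
    M1 = M11 - M12 @ M22_inv @ M21          # Eq. (8)
    B1 = W1 - M12 @ M22_inv @ W2
    return M1, B1, M2, B2

# consistency check on a random full-rank example (P = E for simplicity)
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 5))
L = rng.standard_normal(10)
M, W = A.T @ A, A.T @ L                     # Eqs. (2)-(4)
M1, B1, M2, B2 = diagonalise_normal_equation(
    M[:3, :3], M[:3, 3:], M[3:, :3], M[3:, 3:], W[:3], W[3:])
X = np.linalg.solve(M, W)
assert np.allclose(np.linalg.solve(M1, B1), X[:3])   # X1 from (7)
assert np.allclose(np.linalg.solve(M2, B2), X[3:])   # X2 from (5)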

3 The Forming of Equivalent Observation Equations

Using the notation of (3), (6) can be rewritten as

M_2 = A_2^T P A_2 - A_2^T P A_1 Inv(M_{11}) A_1^T P A_2 = A_2^T P (E - A_1 Inv(M_{11}) A_1^T P) A_2,
B_2 = A_2^T P L - A_2^T P A_1 Inv(M_{11}) A_1^T P L = A_2^T P (E - A_1 Inv(M_{11}) A_1^T P) L,    (13)

where E is an identity matrix. Denoting (cf. Wang et al., 1988; Xu and Qian, 1986; Zhou, 1985)

J = A_1 Inv(M_{11}) A_1^T P,

one has the properties

J^2 = (A_1 Inv(M_{11}) A_1^T P)(A_1 Inv(M_{11}) A_1^T P) = A_1 Inv(M_{11}) A_1^T P = J,
(E - J)(E - J) = E - 2J + J^2 = E - 2J + J = E - J,    (14)
[P(E - J)]^T = (E - J^T) P = P - P A_1 Inv(M_{11}) A_1^T P = P(E - J),

i.e. the matrices J and (E - J) are idempotent and (E - J)^T P is symmetric, or

J^2 = J, \quad (E - J)^2 = E - J, \quad (E - J)^T P = P(E - J).    (15)

Using the properties derived above, (13) can be rewritten as

M_2 = A_2^T P (E - J) A_2 = A_2^T P (E - J)(E - J) A_2 = A_2^T (E - J)^T P (E - J) A_2,
B_2 = A_2^T P (E - J) L = A_2^T (E - J)^T P L,    (16)

and the normal equation (5) can be rewritten as

A_2^T (E - J)^T P (E - J) A_2 X_2 = A_2^T (E - J)^T P L.    (17)

Eq. (17) is the least squares normal equation of the following linear observation equation:

U_2 = (E - J) A_2 X_2 - L, \qquad P,    (18)

where L and P are the original observation vector and weight matrix of (1), and U_2 is a residual vector with the same properties as V in (1). Thus Eq. (5) is the least squares normal equation of the equivalently eliminated observation equation (18).

Similarly, let

I = A_2 Inv(M_{22}) A_2^T P;    (19)

then Eq. (7) is the least squares normal equation of the following linear observation equation:

U_1 = (E - I) A_1 X_1 - L, \qquad P.    (20)

Again, U_1 is a residual vector with the same properties as V in (1). Denoting

D_1 = (E - I) A_1, \quad D_2 = (E - J) A_2,    (21)

Eqs. (18) and (20) can be written together as

\begin{pmatrix} U_1 \\ U_2 \end{pmatrix} = \begin{pmatrix} D_1 & 0 \\ 0 & D_2 \end{pmatrix} \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} - \begin{pmatrix} L \\ L \end{pmatrix}, \qquad \begin{pmatrix} P & 0 \\ 0 & P \end{pmatrix}.    (22)

Eq. (22) is derived from the normal equation (9); the converse is also true, i.e. (9) is the least squares normal equation of the observation equation (22). Because the normal equations of (1) and (22) are (4) and (9), and (4) and (9) are equivalent, (22) is an equivalent observation equation of (1). It should be emphasised that the equivalent observation equation (22) is a kind of diagonal equation of X_1 and X_2. This property is a direct consequence of the diagonal property of the normal equation (9).

Similarly, we may repeat the above process r-1 times on the observation equation of X_2 (i.e. (18)) step by step, and the related equivalent observation equation can be formed as

U_2' = D_2' X_2 - L', \qquad P',    (23)

where D_2' is a diagonal matrix, P' is a diagonal weight matrix formed from P, L' is a vector formed from L, and U_2' is a residual vector with the same properties as V in (1). Then (22) takes the form

\begin{pmatrix} U_1 \\ U_2' \end{pmatrix} = \begin{pmatrix} D_1 & 0 \\ 0 & D_2' \end{pmatrix} \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} - \begin{pmatrix} L \\ L' \end{pmatrix}, \qquad \begin{pmatrix} P & 0 \\ 0 & P' \end{pmatrix}.    (24)

(24) is equivalent to (22), and (12) is the normal equation of the observation equation (24).

Numerical examples illustrating the diagonalisation of the normal equation and of the observation equation are given in the appendix.
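The operators J and I behave as projectors with respect to the weight matrix P. The following Python/NumPy sketch (variable names and random test data are illustrative only) checks the idempotency and symmetry properties (15) and verifies that the normal equation formed from the equivalent observation equation (22) reproduces the eliminated matrices of (6) and (8):

import numpy as np

rng = np.random.default_rng(2)
m, n1, n2 = 12, 3, 2
A1 = rng.standard_normal((m, n1))
A2 = rng.standard_normal((m, n2))
L = rng.standard_normal(m)
P = np.diag(rng.uniform(0.5, 2.0, m))         # diagonal weight matrix

M11 = A1.T @ P @ A1
M22 = A2.T @ P @ A2
J = A1 @ np.linalg.inv(M11) @ A1.T @ P        # operator defined before (14)
I = A2 @ np.linalg.inv(M22) @ A2.T @ P        # Eq. (19)
E = np.eye(m)

# idempotency and symmetry, Eq. (15)
assert np.allclose(J @ J, J)
assert np.allclose((E - J).T @ P, P @ (E - J))

D1 = (E - I) @ A1                             # Eq. (21)
D2 = (E - J) @ A2

# normal equation of the equivalent observation equation (22)
M1 = D1.T @ P @ D1
M2 = D2.T @ P @ D2
B1 = D1.T @ P @ L
B2 = D2.T @ P @ L

# compare with the block-eliminated normal equation, Eqs. (6) and (8)
M12 = A1.T @ P @ A2
W1, W2 = A1.T @ P @ L, A2.T @ P @ L
assert np.allclose(M2, M22 - M12.T @ np.linalg.inv(M11) @ M12)
assert np.allclose(B2, W2 - M12.T @ np.linalg.inv(M11) @ W1)
assert np.allclose(M1, M11 - M12 @ np.linalg.inv(M22) @ M12.T)
assert np.allclose(B1, W1 - M12 @ np.linalg.inv(M22) @ W2)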

4 Equivalent Ambiguity Search Criteria

Suppose the GPS observation equation is (1) and its least squares normal equation is (4), where X_2 = N (N is the ambiguity sub-vector) and X_1 = Y (Y is the sub-vector of the remaining unknowns). The least squares ambiguity search (LSAS) criterion (e.g. Teunissen, 1995; Leick, 1995; Hofmann-Wellenhof et al., 1997; Euler and Landau, 1992; Han and Rizos, 1997) is

\delta(dN) = (N_0 - N)^T Inv(Q_{22}) (N_0 - N),    (25)

where N_0 is the float solution of the ambiguity sub-vector and dN = N_0 - N. The ambiguity search is a process of finding the vector N within the search area for which \delta(dN) reaches its minimum. The general criterion is (Xu, 2003)

\delta(dX) = (X_0 - X)^T Inv(Q) (X_0 - X),    (26)

where X = (Y \; N)^T, X_0 = (Y_0 \; N_0)^T, dX = X_0 - X, and the index 0 denotes the float solution. The search in the ambiguity domain is a process of finding the vector X (with N in the search area and Y computed) for which \delta(dX) reaches its minimum. The optimality of this criterion is obvious (Xu, 2003).

For the equivalent observation equation (22), the related least squares normal equation is (9). The related least squares ambiguity search (LSAS) criterion remains the same as (25). The related general criterion is then (putting Q of (9) into (26) and taking (3) and (10) into account)

\delta_1(dX) = (X_0 - X)^T Inv(Q) (X_0 - X) = (Y_0 - Y)^T Inv(Q_{11}) (Y_0 - Y) + (N_0 - N)^T Inv(Q_{22}) (N_0 - N) = \delta(dY) + \delta(dN),    (27)

where the index 1 is used to distinguish the criterion (27) from (26).

For the equivalent observation equation (24), the related least squares normal equation is (12). The related least squares ambiguity search (LSAS) criterion is then (putting Q_{22} of (12) into (25) and taking (3) into account)

\delta_2(dN) = (N_0 - N)^T M_2' (N_0 - N).    (28)

The related general criterion is then (putting Q of (12) into (26) and taking (3) and (10) into account)

\delta_2(dX) = (X_0 - X)^T Inv(Q) (X_0 - X) = (Y_0 - Y)^T Inv(Q_{11}) (Y_0 - Y) + (N_0 - N)^T M_2' (N_0 - N) = \delta(dY) + \delta_2(dN),    (29)

where the index 2 is used to distinguish the criteria (28) and (29) from (25) and (27).

It should be emphasised that the observation equations (1), (22) and (24) are equivalent, and that the related normal equations (4), (9) and (12) are also equivalent. Therefore, the LSAS criteria (25) and (28) are equivalent, and the general criteria (26), (27) and (29) are equivalent, too.

It is obvious that the ambiguity search should be based on the equivalent observation equations and the related normal equations because of their diagonal properties. Owing to the diagonal structure of the matrix M_2', the ambiguity search using criterion (28) or (29) turns out to be a search in a diagonal space and can be done very quickly.
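Because M_2' is diagonal, the criterion (28) splits into a sum of independent one-dimensional terms, so each ambiguity can be examined on its own rather than by enumerating a full integer grid. The following Python sketch (the function name, the candidate window and the test numbers are illustrative assumptions, not taken from the paper or from KSGSoft) shows this decoupled search:

import numpy as np

def search_diagonal(N0, m2_diag, window=2):
    # Minimise delta_2(dN) = sum_i m_i (N0_i - N_i)^2 of Eq. (28) when
    # M_2' is diagonal: every integer ambiguity is chosen independently.
    # 'window' limits the candidates to N0_i +/- window (an assumption
    # of this sketch, not a prescription of the paper).
    N = np.empty_like(N0)
    total = 0.0
    for i, (n0, w) in enumerate(zip(N0, m2_diag)):
        candidates = np.arange(np.floor(n0) - window, np.ceil(n0) + window + 1)
        costs = w * (n0 - candidates) ** 2
        best = np.argmin(costs)
        N[i] = candidates[best]
        total += costs[best]
    return N, total

# example: three float ambiguities with the diagonal elements of M_2'
N_fixed, delta2 = search_diagonal(np.array([3.2, -1.7, 0.4]),
                                  np.array([0.8, 1.5, 0.3]))
print(N_fixed, delta2)    # -> [ 3. -2.  0.] together with the criterion value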
5 Summary

The diagonalisation algorithm of the least squares normal equation proposed here may find interesting applications in adjustment and data processing. The way of forming the equivalent observation equations is also significant. Using the equivalent criteria based on the equivalent observation equations and the related equivalent normal equations, the ambiguity search can alternatively be carried out in a diagonal space. The diagonalisation of the normal equation is nearly the same as applying the Gauss-Jordan algorithm to eliminate the off-diagonal elements of the normal matrix and to solve the normal equation; in other words, the diagonalisation process is almost the same as the process of solving the normal equation. Therefore the time consumed by the search process is minimal and negligible. Numerical examples of the diagonalisation algorithm are given in the appendix.

Acknowledgement: This study was carried out under grant No. 50EP9578 of the German Federal Ministry of Education and Research (BMBF).

References

Brown R.G. and Hwang P.Y.C. (1992) Introduction to Random Signals and Applied Kalman Filtering, Second Edition, John Wiley & Sons, Inc., New York.

Bauer M. (1994) Vermessung und Ortung mit Satelliten, Wichmann Verlag, Karlsruhe.

Cannon M.E., Lachapelle G., Szarmes M., Herbert J., Keith J. and Jokerst S. (1997) DGPS Kinematic Carrier Phase Signal Simulation Analysis for Precise Velocity and Position Determination, Proceedings of ION NTM 97, Santa Monica, CA.

Cui X., Yu Z., Tao B. and Liu D. (1982) Adjustment in Surveying, Surveying Publishing House, Peking.

Euler H.J. and Landau H. (1992) Fast GPS ambiguity resolution on-the-fly for real-time applications, Proceedings of the 6th Int. Geod. Symp. on Satellite Positioning, Columbus, Ohio.

Gotthardt E. (1978) Einfuehrung in die Ausgleichungsrechnung, Herbert Wichmann Verlag, Karlsruhe.

Han S. and Rizos C. (1995) On-The-Fly Ambiguity Resolution for Long Range GPS Kinematic Positioning, IAG Symposia 115, edited by Beutler, Hein, Melbourne and Seeber.

Han S. and Rizos C. (1997) Comparing GPS Ambiguity Resolution Techniques, GPS World, Oct. 1997, pp. 54-61.

Hofmann-Wellenhof B., Lichtenegger H. and Collins J. (1997) GPS Theory and Practice, Springer-Verlag, Wien.

King R.W., Masters E.G., Rizos C., Stolz A. and Collins J. (1987) Surveying with Global Positioning System GPS, Ferd. Duemmler Verlag, Bonn.

Leick A. (1995) GPS Satellite Surveying, John Wiley & Sons, New York.

Parkinson B.W. and Spilker J.J. (Eds) (1996) Global Positioning System: Theory and Applications, Volumes I and II, Progress in Astronautics and Aeronautics, Volume 163.

Remondi B. (1984) Using the Global Positioning System (GPS) phase observable for relative geodesy: modelling, processing, and results, University of Texas at Austin, Center for Space Research.

Seeber G. (1993) Satellitengeodaesie, Walter de Gruyter, Berlin.

Strang G. and Borre K. (1997) Linear Algebra, Geodesy, and GPS, Wellesley-Cambridge Press.

Teunissen P.J.G. (1995) The least-squares ambiguity decorrelation adjustment: a method for fast GPS integer ambiguity estimation, Journal of Geodesy, 70, 65-82.

Wang G., Chen Z., Chen W. and Xu G. (1988) The Principle of the GPS Precise Positioning System, Surveying Publishing House, Peking.

Xu G., Schwintzer P. and Reigber Ch. (1998) KSGSoft (Kinematic/Static GPS Software), GeoForschungsZentrum Potsdam.

Xu G. and Qian Z. (1986) The Application of the Block Elimination Adjustment Method for Processing of the VLBI Data, Crustal Deformation and Earthquake, Vol. 6, No. 4, in Chinese.

Xu G. (2003) GPS - Theory, Algorithms and Applications, Springer Verlag, Heidelberg.

Zhou J. (1985) On the Jie factor, Acta Geodaetica et Geophysica, No. 5, in Chinese.

Zhou J., Huang J., Yang Y. and Ou J. (1997) Robust least squares method, Publishing House of Huazhong University of Science and Technology, Wuhan.

Appendix: Numerical Examples of the Diagonalisation Algorithm

As discussed above, a normal equation can be diagonalised and the related observation equation can be formed. Numerical examples illustrating the diagonalisation of the normal equation and of the observation equation are given below.

1). The Case of Two Variables

For the observation equation (where sigma is set to 1, which does not affect any of the results)

\begin{pmatrix} V_1 \\ V_2 \\ V_3 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} - \begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix}, \qquad P = \sigma^2 \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},    (a.1)

the least squares normal equation is

\begin{pmatrix} 3 & 4 \\ 4 & 6 \end{pmatrix} \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} = \begin{pmatrix} 2 \\ 4 \end{pmatrix}.    (a.2)

Because

M_1 = 3 - 4(1/6)4 = 1/3, \quad B_1 = 2 - 4(1/6)4 = -2/3,
M_2 = 6 - 4(1/3)4 = 2/3, \quad B_2 = 4 - 4(1/3)2 = 4/3,

(a.2) is diagonalised as

\begin{pmatrix} 1/3 & 0 \\ 0 & 2/3 \end{pmatrix} \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} = \begin{pmatrix} -2/3 \\ 4/3 \end{pmatrix}.    (a.3)

The solution (X_1 = -2, X_2 = 2) of (a.3) is the same as that of (a.2).
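The two-variable example can be reproduced with a few lines of Python/NumPy (an illustrative check, not part of the original paper), using the numbers of (a.1):

import numpy as np

A1 = np.array([1.0, 1.0, 1.0])        # first column of the design matrix in (a.1)
A2 = np.array([1.0, 2.0, 1.0])        # second column
L = np.array([1.0, 2.0, -1.0])        # observation vector of (a.1), P = E

M11, M22, M12 = A1 @ A1, A2 @ A2, A1 @ A2     # 3, 6, 4
W1, W2 = A1 @ L, A2 @ L                       # 2, 4

M1 = M11 - M12 / M22 * M12                    # 1/3
B1 = W1 - M12 / M22 * W2                      # -2/3
M2 = M22 - M12 / M11 * M12                    # 2/3
B2 = W2 - M12 / M11 * W1                      # 4/3

print(B1 / M1, B2 / M2)    # approximately -2 and 2, the solution of (a.2) and (a.3)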

Furthermore, to form the equivalent observation equation, one has

M_{11} = A_1^T A_1 = 3, \quad M_{22} = A_2^T A_2 = 6,

I = A_2 Inv(M_{22}) A_2^T = \frac{1}{6} \begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{pmatrix}, \quad J = A_1 Inv(M_{11}) A_1^T = \frac{1}{3} \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix},

D_1 = (E - I) A_1 = \frac{1}{6} \begin{pmatrix} 5 & -2 & -1 \\ -2 & 2 & -2 \\ -1 & -2 & 5 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \frac{1}{3} \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix},

D_2 = (E - J) A_2 = \frac{1}{3} \begin{pmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix} = \frac{1}{3} \begin{pmatrix} -1 \\ 2 \\ -1 \end{pmatrix};

thus the observation equation related to (a.3) is

\begin{pmatrix} U_1 \\ U_2 \end{pmatrix} = \begin{pmatrix} D_1 & 0 \\ 0 & D_2 \end{pmatrix} \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} - \begin{pmatrix} L \\ L \end{pmatrix}, \qquad \begin{pmatrix} P & 0 \\ 0 & P \end{pmatrix}.    (a.4)

The normal equation of the observation equation (a.4) is exactly (a.3). This numerical example shows that the normal equation and the related observation equation can be diagonalised.

2). The Case of Three Variables

For the observation equation (where sigma is set to 1, which does not affect any of the results)

\begin{pmatrix} V_1 \\ V_2 \\ V_3 \\ V_4 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 2 \\ 2 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix} \begin{pmatrix} X_1 \\ X_2 \\ X_3 \end{pmatrix} - \begin{pmatrix} 0 \\ 1 \\ 1 \\ -1 \end{pmatrix}, \qquad P = \sigma^2 E_{4x4},    (a.5)

the least squares normal equation is

\begin{pmatrix} 7 & 5 & 6 \\ 5 & 4 & 5 \\ 6 & 5 & 7 \end{pmatrix} \begin{pmatrix} X_1 \\ X_2 \\ X_3 \end{pmatrix} = \begin{pmatrix} 2 \\ 1 \\ 1 \end{pmatrix}.    (a.6)

Because of

M_{11} = 7, \quad M_{22} = \begin{pmatrix} 4 & 5 \\ 5 & 7 \end{pmatrix}, \quad Inv(M_{22}) = \frac{1}{3} \begin{pmatrix} 7 & -5 \\ -5 & 4 \end{pmatrix},

M_1 = 7 - (5 \;\; 6) \frac{1}{3} \begin{pmatrix} 7 & -5 \\ -5 & 4 \end{pmatrix} \begin{pmatrix} 5 \\ 6 \end{pmatrix} = \frac{2}{3}, \quad B_1 = 2 - (5 \;\; 6) \frac{1}{3} \begin{pmatrix} 7 & -5 \\ -5 & 4 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \frac{2}{3},

M_2 = \begin{pmatrix} 4 & 5 \\ 5 & 7 \end{pmatrix} - \frac{1}{7} \begin{pmatrix} 5 \\ 6 \end{pmatrix} (5 \;\; 6) = \begin{pmatrix} 3/7 & 5/7 \\ 5/7 & 13/7 \end{pmatrix}, \quad B_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix} - \frac{1}{7} \begin{pmatrix} 5 \\ 6 \end{pmatrix} 2 = \begin{pmatrix} -3/7 \\ -5/7 \end{pmatrix},    (a.7)

(a.6) is diagonalised as

\begin{pmatrix} 2/3 & 0 & 0 \\ 0 & 3/7 & 5/7 \\ 0 & 5/7 & 13/7 \end{pmatrix} \begin{pmatrix} X_1 \\ X_2 \\ X_3 \end{pmatrix} = \begin{pmatrix} 2/3 \\ -3/7 \\ -5/7 \end{pmatrix}.    (a.8)

The X_2 and X_3 related normal equation can be further diagonalised. Because of

M_1' = 3/7 - (5/7)(7/13)(5/7) = 2/13, \quad B_1' = -3/7 - (5/7)(7/13)(-5/7) = -2/13,
M_2' = 13/7 - (5/7)(7/3)(5/7) = 2/3, \quad B_2' = -5/7 - (5/7)(7/3)(-3/7) = 0,

(a.8) is further diagonalised as

\begin{pmatrix} 2/3 & 0 & 0 \\ 0 & 2/13 & 0 \\ 0 & 0 & 2/3 \end{pmatrix} \begin{pmatrix} X_1 \\ X_2 \\ X_3 \end{pmatrix} = \begin{pmatrix} 2/3 \\ -2/13 \\ 0 \end{pmatrix}.    (a.9)

The solution (X_1 = 1, X_2 = -1, X_3 = 0) of (a.9) is the same as that of (a.6) and (a.8). Furthermore, to form the equivalent observation equation of (a.8), one has

I = A_2 Inv(M_{22}) A_2^T = \frac{1}{3} \begin{pmatrix} 1 & 2 \\ 1 & 1 \\ 1 & 1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 7 & -5 \\ -5 & 4 \end{pmatrix} \begin{pmatrix} 1 & 1 & 1 & 1 \\ 2 & 1 & 1 & 1 \end{pmatrix} = \frac{1}{3} \begin{pmatrix} 3 & 0 & 0 & 0 \\ 0 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \end{pmatrix},

J = A_1 Inv(M_{11}) A_1^T = \frac{1}{7} \begin{pmatrix} 1 \\ 2 \\ 1 \\ 1 \end{pmatrix} (1 \;\; 2 \;\; 1 \;\; 1) = \frac{1}{7} \begin{pmatrix} 1 & 2 & 1 & 1 \\ 2 & 4 & 2 & 2 \\ 1 & 2 & 1 & 1 \\ 1 & 2 & 1 & 1 \end{pmatrix},

D_1 = (E - I) A_1 = \frac{1}{3} \begin{pmatrix} 0 \\ 2 \\ -1 \\ -1 \end{pmatrix}, \quad D_2 = (E - J) A_2 = \frac{1}{7} \begin{pmatrix} 2 & 8 \\ -3 & -5 \\ 2 & 1 \\ 2 & 1 \end{pmatrix}.

Xu : A Diagonalisation Algorithm and Its Applications in Ambiguity Search X1 0 X 2 D2 X3 2 P 0 1 , where L 0 P0 2 U 1 L D1 U 2 L 0(a.10)The X2 and X3 related observation can be furtherdiagonalised as follows. Because 1 1 5 7 1I ′ (1 5 8 1)7 8 13 7 1 81 5 1 1

