A New Step-size Searching Algorithm Based on Fuzzy Logic and Neural Networks for LMS Adaptive Beamforming Systems


Turkish Journal of Electrical Engineering & Computer Sciences (2016) 24: 4322–4338
© TÜBİTAK, doi:10.3906/elk-1410-76

Research Article

A new step-size searching algorithm based on fuzzy logic and neural networks for LMS adaptive beamforming systems

Walter OROZCO-TUPACYUPANQUI, Mariko NAKANO-MIYATAKE, Hector PEREZ-MEANA
Postgraduate Section, Mechanical Electrical Engineering, Instituto Politecnico Nacional, Mexico
Correspondence: hmperezm@ipn.mx

Received: 13.10.2014 / Accepted/Published Online: 04.08.2015 / Final Version: 20.06.2016

Abstract: In this paper, a novel algorithm based on fuzzy logic and neural networks is proposed to find an approximation of the optimal step size µ for least-mean-squares (LMS) adaptive beamforming systems. A new error ensemble learning (EEL) curve is generated based on the final prediction value of the ensemble-average learning curve of the LMS adaptive algorithm. This information is classified and fed into a back-propagation neural network, which automatically generates membership functions for a fuzzy inference system. An estimate of the optimal step size is obtained using a group of linguistic rules and the corresponding defuzzification method. Computer simulations show that a useful approximation of the optimal step size is obtained under different signal-to-noise-plus-interference ratios. The results are also compared with data obtained from a statistical analysis performed on the EEL curve. As a result of this application, a better mean-square error is observed during the training process of the adaptive array beamforming system, and a higher directivity is achieved in the radiation beam patterns.

Key words: Adaptive filtering, adaptive beamforming, neural-fuzzy systems, least-mean-square algorithm, membership functions

1. Introduction

Currently, communication systems based on adaptive algorithms are widely used to minimize the effects of noise and interference. One of the most commonly used systems is the smart antenna array [1–3], whose filtering capability depends on the structure and the adaptive algorithm used in its design. For arrays that incorporate the standard least-mean-squares (LMS) algorithm [4–6], the weight updates, speed of convergence, and steady-state stability generally depend on the selected fixed step size µ. The main problem therefore lies in determining the step size µ that yields the best qualities of the adaptive algorithm, which is the core of the smart antenna array. For this reason, the step size µ must be carefully selected to maintain a balance between a low steady-state error and a rapid convergence rate in response to the changing environment. Moreover, a good step size can minimize the mean-square error (MSE) during the learning process of the adaptive algorithm.

This paper proposes a novel neural-fuzzy algorithm to seek the optimal step size for LMS adaptive arrays. The proposed search algorithm consists of three processing units: the learning block, the neural-fuzzy unit, and the searching stage. In the first part of the algorithm, the error ensemble learning (EEL) curve is generated. In addition, curve-fitting methods and divergence analysis are included to process the EEL curve and obtain suitable information, removing irregularities caused by contamination with additive white Gaussian noise (AWGN).

In the next stage, the previous information is clustered using the fuzzy c-means and max-membership methods [7]. The data points are assigned membership values according to the cluster to which they belong. A back-propagation artificial neural network uses the data points and the corresponding membership values to generate the input and output membership functions of the fuzzy inference system (FIS). Finally, in the last part of the algorithm, an estimate of the optimal step size is obtained using a group of IF-THEN rules and the centroid defuzzification method. To evaluate the proposed search algorithm, a single LMS adaptive array and a least-recursive-mean-square (LRMS) hybrid cascade array beamforming system [8] are used.

The organization of this paper is as follows. In section 2, the convergence analysis of the LMS algorithm is reviewed alongside the problem of finding its optimal step size. In section 3, the methods used in each part of the proposed algorithm are detailed. The results of the search experiments performed to evaluate the estimate of the optimal step size are reported and discussed in section 4. Finally, the conclusions are presented in section 5.

2. Convergence of the LMS algorithm

To understand the dilemma of finding the step size, let us examine the basic concepts and equations regarding the convergence of an LMS adaptive filter, as depicted in Figure 1.

[Figure 1. Adaptive LMS filter.]

For a transversal linear combiner of N weights, the overall adaptive filter response may be found by considering the sum of the samples from the signal source [9]; therefore,

$y_k = X_k^T W_k$,  (1)

where $X_k$ and $W_k$ are the sample input and adaptive-filter coefficient vectors, respectively. For this case, both vectors are columns. The subscript k is the time index or sample number, and the operator T represents transposition. The adaptation of the weights is performed by the following LMS equations [10,11]:

$e_k = d_k - X_k^T W_k$,  (2)

$W_{k+1} = W_k + \mu e_k X_k$.  (3)

Eq. (2) calculates the instantaneous output error between the desired signal $d_k$ and the filter output $y_k$, and Eq. (3) updates the weights in each iteration of the algorithm. The parameter µ, known as the step size, regulates the speed and stability of adaptation. If we assume this procedure to be a stationary process in

which successive input vectors are independent over time, and $X_k$ and $W_k$ are mutually independent, then the expected value of the weight vector converges to the Wiener solution $W_{op} = R^{-1}P$ [11] after an adequate number of iterations. By substituting Eq. (2) into Eq. (3), after some straightforward manipulations, the expected value of Eq. (3) can be expressed as

$E[W_{k+1}] = E[W_k] + \mu\left(E[d_k X_k] - E[X_k X_k^T]\,E[W_k]\right)$.  (4)

In Eq. (4), $E[d_k X_k]$ represents the cross-correlation vector P between the desired and input signals, while $E[X_k X_k^T]$ is the input autocorrelation matrix R [12,13]. Thus, after some substitutions and manipulations, Eq. (4) can be written as [12,14]

$E[W_{k+1}] = E[W_k] + \mu\left(P - R\,E[W_k]\right)$.  (5)

Subtracting $W_{op}$ from both sides of Eq. (5) and defining $M_k = W_k - W_{op}$, it follows that

$E[M_{k+1}] = (I - \mu R)\,E[M_k]$.  (6)

Next, using the fact that the autocorrelation matrix R can be expressed as $R = Q^H \Lambda Q$ [13], where the operator H is the Hermitian transposition for matrices and Q is an orthonormal transformation whose columns are the eigenvectors of R, Eq. (6) becomes

$E[M_{k+1}] = Q^H (I - \mu\Lambda)\, Q\, E[M_k]$.

Next, upon multiplying this equation on the left by Q and defining $V_k = Q\,E[M_k]$, it follows that

$V_{k+1} = (I - \mu\Lambda)\,V_k$.  (7)

Finally, iterating Eq. (7) from an arbitrary initial solution $V_0$ gives

$V_k = \operatorname{diag}\!\left((1-\mu\lambda_0)^k,\ (1-\mu\lambda_1)^k,\ \ldots,\ (1-\mu\lambda_{N-1})^k\right) V_0$.  (8)

Thus, the expected value of the weight vector will converge to the Wiener solution if

$\lim_{k\to\infty}(I-\mu\Lambda)^k = 0 \iff |1-\mu\lambda_n| < 1 \quad \text{for } n = 0, 1, \ldots, N-1$.  (9)

That is, the convergence of the LMS adaptive algorithm is guaranteed if and only if [12]

$0 < \mu < \dfrac{1}{\lambda_{max}}$.  (10)

In general, the trace of the input autocorrelation matrix R, the sum of the elements on its main diagonal, is greater than its largest eigenvalue; therefore, an alternative expression of Eq. (10) can be used to formulate the convergence condition, as follows [12]:

$0 < \mu < \dfrac{1}{\operatorname{tr}[R]} = \dfrac{1}{X^T X}$.  (11)

Eq. (11) provides the range of µ in which the adaptive algorithm converges; however, the optimal step size is still unidentified.
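To make the update concrete, the following Python sketch implements Eqs. (1)–(3) and picks a step size from the conservative bound of Eq. (11). It is a minimal illustration, not the authors' implementation: the signal model, the helper name lms_train, and the trace estimate tr[R] ≈ N·E[x²] are assumptions made for the example.

```python
import numpy as np

def lms_train(x, d, n_taps, mu):
    """Standard LMS adaptation of Eqs. (1)-(3); returns final weights and error history."""
    w = np.zeros(n_taps)                      # weight vector W_k
    e = np.zeros(len(x))
    for k in range(n_taps, len(x)):
        xk = x[k - n_taps:k][::-1]            # tap-delay input vector X_k
        y = xk @ w                            # combiner output y_k, Eq. (1)
        e[k] = d[k] - y                       # instantaneous error e_k, Eq. (2)
        w = w + mu * e[k] * xk                # weight update, Eq. (3)
    return w, e

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
d = np.convolve(x, [0.6, 0.3, 0.1], mode="same")  # hypothetical desired signal
n_taps = 8
mu_max = 1.0 / (n_taps * np.mean(x ** 2))         # bound of Eq. (11), tr[R] ~ N*E[x^2]
w, e = lms_train(x, d, n_taps, mu=0.1 * mu_max)
```

Any µ below mu_max satisfies Eq. (11), but, as argued above, the bound alone says nothing about which µ within that range is optimal; this is the gap the proposed algorithm addresses.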

3. The proposed search algorithm

The flow diagram of the proposed search algorithm is shown in Figure 2. To describe the structure of the new algorithm, three subblocks have been defined. First is the learning block, in which the behavior of the least-mean-square (LMS) adaptive algorithm is characterized by generating a new error ensemble learning (EEL) curve. Second is the fuzzy-neural block, in which the membership functions are automatically generated for the fuzzy inference system (FIS) using fuzzy clustering and neural network techniques. Finally, in the last block, the searching block, an approximation of the optimal step size µ is obtained for the adaptive algorithm as a result of the combination of different fuzzy rules.

[Figure 2. Flow diagram of the proposed search algorithm.]

3.1. The learning block

The learning block estimates the error behavior of the LMS algorithm. First, M independent experiments are performed to train the adaptive filter under the same input statistical conditions but for different step sizes. For each training process k, an instantaneous learning curve of $N_T$ samples is calculated by [10]

$\left(e_i^{(k)}\right)^2 = \left(d_i - x_i^T w_i\right)^2, \quad 0 \le i \le N_T.$  (12)

Because it is essential to evaluate the performance of the adaptive algorithm, an ensemble-average learning (EAL) curve is computed after all k experiments have concluded. A common approach is to average the independent instantaneous learning curves as follows [10]:

$\xi_i = \frac{1}{M}\sum_{k=1}^{M}\left[e_i^{(k)}\right]^2, \quad 0 \le i \le N.$  (13)

By examining the EAL curve of the LMS algorithm, it is possible to extract important information about the adaptation process, such as the error variance that remains in steady state and whether the adaptive algorithm reaches such a state. For example, by averaging 200 such instantaneous learning curves, a typical EAL curve is obtained, as shown in Figure 3. Once the previous calculations have been performed, the simple exponential smoothing (SES) method [11] is used to estimate the final magnitude of this curve. This method is appropriate for predicting data without a clear tendency towards a specific value. Although the data in Figure 3 exhibit a trend towards a specific MSE, a better final estimate is required to build the EEL curve because of a slight sinusoidal behavior in the curve. After the ensemble-average learning curve is calculated, whether or not it reaches the steady state of convergence, its horizontal axis is divided into N intervals of M samples containing the amplitude of the curve. Figure 3 shows this procedure for three segments.

An average initial value $\Psi_0$ is calculated by Eq. (14) for each segment $[S_i, S_e]$ using the corresponding sequence of data $\Psi_i$. The estimate for each sequence of information is given by Eq. (16):

$\Psi_0^{(n)} = \frac{1}{M}\sum_{m=1}^{M}\xi(m), \quad 1 \le n \le N,$  (14)

$\Psi_i(m) = \xi(m) \in [S_i, S_e],$  (15)

$\Psi^{(n)}(k) = (1-\alpha)\Psi_0 + \alpha\Psi_i(k-1), \quad 1 \le k \le M.$  (16)

Finally, the estimated value of the complete ensemble-average learning (EAL) curve is calculated by averaging the individual estimates as follows:

$\Psi_0 = \frac{1}{N}\sum_{n=1}^{N}\Psi^{(n)} \quad \text{for } \mu \in [\mu_s, \mu_e].$  (17)

Because this procedure is performed for each step size belonging to the interval under analysis $[\mu_s, \mu_e]$, the statistical behavior of the variability of the final estimated error for each ensemble-learning curve is obtained.
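As an illustration of this block, the sketch below averages M independent learning curves into an EAL curve (Eq. (13)) and estimates its final value by segment-wise exponential smoothing. The function names are hypothetical, and the smoothing loop uses the standard SES recursion, which is stated in slightly different notation in Eq. (16).

```python
import numpy as np

def eal_curve(run_lms, M):
    """Ensemble-average learning curve of Eq. (13): average the squared
    instantaneous errors (Eq. (12)) of M independent training runs."""
    curves = np.array([run_lms() ** 2 for _ in range(M)])  # each run returns e_i
    return curves.mean(axis=0)                             # xi_i

def ses_final_value(xi, n_segments, alpha):
    """Final-value estimate via simple exponential smoothing, Eqs. (14)-(17):
    smooth each of the N segments from its mean and average the N estimates."""
    estimates = []
    for seg in np.array_split(xi, n_segments):
        psi = seg.mean()                          # segment initial value, Eq. (14)
        for sample in seg:                        # smoothing recursion, cf. Eq. (16)
            psi = (1.0 - alpha) * psi + alpha * sample
        estimates.append(psi)
    return float(np.mean(estimates))              # averaged estimate, Eq. (17)
```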

As a result of this computation, a new curve defined as the error ensemble learning (EEL) curve is generated. This curve is shown in Figure 4 for a specific µ-interval $[\mu_s, \mu_f]$.

[Figure 3. EAL curve divided into three sections to estimate the final value.]

[Figure 4. TEEL curve and fitted TEEL curve.]

According to the graph, increasing the parameter µ can also increase the error magnitude. As a consequence, there is the possibility that, for some range of step sizes, the LMS algorithm diverges. Therefore, to avoid collecting unnecessary data due to divergence, a simple divergence analysis is incorporated into the algorithm. When the error magnitude given by Eq. (17) exceeds a certain threshold β for a given step size, that value is eliminated from the curve, and only the significant part of the data is used for analysis. Figure 4 shows the truncated error ensemble learning (TEEL) curve after the divergence analysis was performed for µ ∈ [0.01, 1]. A representation of this procedure is given by the following rule:

IF $\Psi_{EEL}[\mu_s, \mu_e] > \beta$ THEN $\Psi_{TEEL}[\mu_s, \mu_f]$, where $[\mu_s, \mu_f] \subset [\mu_s, \mu_e]$.  (18)

Although the TEEL curve is a worthy source of information for generating membership functions, it is not smooth due to contamination from different noise sources. Therefore, a polynomial curve-fitting process is used to construct a curve that best fits the data points. One of the most common fitting techniques is polynomial regression [15], in which the least-squares procedure can be readily extended to fit the data to a higher-order polynomial given by

$f(x) = \sum_{i=0}^{N} P_i\, x^i,$  (19)

where N is the order of the polynomial and $P_i$ denotes its coefficients. In Figure 4, both the original TEEL and the fitted TEEL curves are shown. It is important to note that similar curves can be obtained under the same SINR, which occurs due to the random characteristics of the noise interference.
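A minimal sketch of the divergence truncation (Eq. (18)) and the polynomial smoothing (Eq. (19)), assuming the EEL curve has been sampled on a grid of step sizes; the threshold beta and the polynomial order are free choices, not values prescribed by the paper:

```python
import numpy as np

def teel_curve(mu_grid, eel, beta):
    """Divergence analysis of Eq. (18): discard every step size whose
    estimated final error exceeds the threshold beta."""
    keep = eel < beta
    return mu_grid[keep], eel[keep]

def fit_teel(mu_t, eel_t, order):
    """Least-squares polynomial regression of Eq. (19) on the TEEL curve."""
    p = np.polyfit(mu_t, eel_t, order)        # coefficients P_i
    return np.polyval(p, mu_t)                # smoothed curve f(mu)
```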

3.2. The fuzzy-neural block

After the data have been fitted according to the desired polynomial, this information is loaded into the fuzzy-neural block, in which the membership functions are generated. First, the information is classified into two sets of data, one for the input (amplitude of the EEL curve) and another for the output (truncated µ-interval) of the fuzzy inference system (FIS). For each set of information, M different classification groups are generated using fuzzy c-means clustering and the max-membership method. The M groups are defined according to the number of membership functions required for the input and output of the FIS, respectively. To establish the location of each data sample within a specific group in the space, each cluster requires a center with m coordinates describing its location. Therefore, the cluster center is given by

$v_{ij} = \frac{\sum_{k=1}^{n} \alpha_{ik}^{\delta}\, x_{kj}}{\sum_{k=1}^{n} \alpha_{ik}^{\delta}}, \quad j = 1, 2, \ldots, J,$  (20)

where $\alpha_{ik}$ is the membership of the k-th data point in the i-th cluster, and the parameter $\delta \in [1, \infty)$ is defined as the weighting parameter. The subscript j runs over the coordinates of each cluster center. The assignment of each data sample to its corresponding cluster is given by minimizing an objective function $J_m$ defined as

$J_m = \sum_{k=1}^{n}\sum_{i=1}^{c} (\alpha_{ik})^{\delta} \sum_{j=1}^{m} (x_{kj} - v_{ij})^2,$  (21)

$J_m = \sum_{k=1}^{n}\sum_{i=1}^{c} (\alpha_{ik})^{\delta} (d_{ik})^2,$  (22)

where $d_{ik}$ in Eq. (22) is the Euclidean distance between the i-th group center and the k-th data point. Finally, a soft partition matrix $U_s$ is defined to assign membership values to each point of information. The elements of the classification matrix are updated using a recursive process that involves a set of rules described in [7], along with the following equations:

$J_m^{(opt)}\left(U_s^{(opt)}, v^{(opt)}\right) = \min_{U_s} J(U_s, v),$  (23)

$U_s^{(r+1)} = \begin{bmatrix} \alpha_{11}^{(r+1)} & \alpha_{12}^{(r+1)} & \cdots & \alpha_{1N}^{(r+1)} \\ \alpha_{21}^{(r+1)} & \alpha_{22}^{(r+1)} & \cdots & \alpha_{2N}^{(r+1)} \\ \vdots & \vdots & & \vdots \\ \alpha_{c1}^{(r+1)} & \alpha_{c2}^{(r+1)} & \cdots & \alpha_{cN}^{(r+1)} \end{bmatrix},$  (24)

$\alpha_{ik}^{(r+1)} = \left[\sum_{j=1}^{c}\left(\frac{d_{ik}^{(r)}}{d_{jk}^{(r)}}\right)^{\frac{2}{m-1}}\right]^{-1}, \quad r \ge 0,$  (25)

where r is the number of iterations, $U_s^{(opt)}$ is the optimal soft partition matrix, and $v^{(opt)}$ is the corresponding optimal cluster center vector. Once the partitions are obtained for both the amplitude of the TEEL curve and the truncated µ-interval, each point of information is assigned membership values between 0 and 1 for the classes into which it was placed. Hence, a single point can have partial membership in more than one group.
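The clustering step can be sketched as follows: the center update of Eq. (20) and the membership update of Eq. (25) are alternated until the partition stabilizes, and the resulting soft partition is hardened by the max-membership rule described below (Eq. (26)). This is a generic one-dimensional fuzzy c-means, not the authors' code, and the function names are illustrative:

```python
import numpy as np

def fuzzy_c_means(x, c, delta=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means on 1-D data: alternate the center update of
    Eq. (20) with the membership update of Eq. (25)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                               # columns of U_s sum to one
    for _ in range(n_iter):
        um = u ** delta
        v = (um @ x) / um.sum(axis=1)                # cluster centers, Eq. (20)
        d = np.abs(x[None, :] - v[:, None]) + 1e-12  # distances d_ik, Eq. (22)
        inv = d ** (-2.0 / (delta - 1.0))
        u = inv / inv.sum(axis=0)                    # membership update, Eq. (25)
    return u, v

def max_membership(u):
    """Hardening rule of Eq. (26): set each column's maximum to 1, the rest to 0."""
    hard = np.zeros_like(u)
    hard[np.argmax(u, axis=0), np.arange(u.shape[1])] = 1.0
    return hard
```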

For example, the following soft partition matrix $U_s$ might be the outcome for three clusters and the first five samples of the TEEL amplitude curve:

$U_s = \begin{bmatrix} 0.0198 & 0.0002 & 0.0076 & 0.9945 & 0 \\ 0.8101 & 0.9984 & 0.0212 & 0.0014 & 0.9998 \\ 0.1701 & 0.0014 & 0.9712 & 0.0041 & 0.0002 \end{bmatrix}.$

It is important to note that the sum of each column in $U_s$ is one. Next, the final fuzzy partition for each point of information is achieved by hardening the fuzzy classification matrix $U_s$. One of the most popular approaches is the max-membership method [7], in which the largest element of each column of the partition matrix is assigned a value of one and the remaining elements in the column are assigned a value of zero. The hardening rule used for this purpose is given by

$\alpha_{ik} = \max_{j \le c}\{\alpha_{jk}\} \;\Rightarrow\; \alpha_{ik} = 1,\ \alpha_{jk} = 0 \text{ for } j \ne i, \quad i = 1, 2, \ldots, c \text{ and } k = 1, 2, \ldots, n.$  (26)

For the soft partition matrix $U_s$ given as an example, the final hard matrix $U_H$ is

$U_H = \begin{bmatrix} 0 & 0 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 \end{bmatrix}.$

The final hard matrix $U_H$ is used to train a back-propagation neural network to generate the membership functions. A schematic representation of the clustering idea with the fuzzy c-means and max-membership methods is shown in Figure 5.

[Figure 5. Fuzzy hard partition.]

Because there is a direct relationship between the data points and the membership values, this characteristic is learned by the neural network during the training process. In this manner, the neural network classifies data points into one of the previously defined clusters. Once the neural network is ready, a larger sample

of the amplitude of the TEEL curve and the truncated µ-interval are entered into it to generate their respective membership functions. An example of the membership functions is shown in Figure 6.

To obtain a better shape for the membership functions, a Gaussian fitting curve is used to obtain the final result. The equation for the Gaussian model is given by [16]

$g(x) = \sum_{i=1}^{N} \alpha_i\, e^{-\left(\frac{x-\beta_i}{\sigma_i}\right)^2},$  (27)

where $\alpha_i$ is the amplitude, usually one for membership functions; $\beta_i$ is the position of the center of the peak; $\sigma_i$ is related to the peak width; and N is the total number of functions used to fit the data points. Figure 7 shows the fitted membership curves. For the Gaussian fitting method, there are small differences between the original curve and its fitted version, but the main characteristics, such as the center peak and the width, are preserved.

[Figure 6. Typical membership functions obtained by the neural networks.]

[Figure 7. Fitted Gaussian membership functions.]
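To illustrate the smoothing of Eq. (27), the sketch below fits a single Gaussian term to one raw membership function with SciPy's curve_fit. The sampled arrays x_mf and grades are hypothetical stand-ins for the neural network's output over the input universe of discourse:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_mf(x, alpha, beta, sigma):
    """One Gaussian term of Eq. (27)."""
    return alpha * np.exp(-((x - beta) / sigma) ** 2)

# Hypothetical raw membership function: a noisy Gaussian-like grade profile.
x_mf = np.linspace(0.0, 0.3, 200)
grades = (gauss_mf(x_mf, 1.0, 0.15, 0.04)
          + 0.02 * np.random.default_rng(1).standard_normal(x_mf.size))

params, _ = curve_fit(gauss_mf, x_mf, grades, p0=[1.0, 0.15, 0.05])
fitted = np.clip(gauss_mf(x_mf, *params), 0.0, 1.0)  # keep grades in [0, 1]
```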

3.3. The search block

For the search block, a fuzzy inference system (FIS) is designed according to the Mamdani rules [17], whose equations are given by a collection of M linguistic IF-THEN single-input, single-output propositions:

IF x is $A_i$, THEN y is $B_i$, for $i = 1, 2, \ldots, M$,  (28)

where $A_i$ and $B_i$ are fuzzy numbers. This method of representing human knowledge is known as the deductive form, in which a conclusion (consequent) is derived from a given known fact (antecedent). For the proposed algorithm, an initial random step size $\mu_0$ begins the search process for the optimal µ. The errors calculated from the initial condition and the subsequent step sizes are the inputs of the fuzzy inference system; therefore,

$\mu_D^{(i+1)} = FIS\!\left(\mu_D^{(i)}, \varepsilon^{(i)}\right).$  (29)

The fuzzy variable $\mu_D$ is defuzzified by the center-of-gravity method described by the following equation [7]:

$\mu_D = \frac{\sum_{k=1}^{N} \mu_k\, \varphi(\mu_k)}{\sum_{k=1}^{N} \varphi(\mu_k)},$  (30)

where $\mu_k$ is the selected value on the abscissa of the output membership function and $\varphi(\mu_k)$ is the value of the membership function at that point.
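The center-of-gravity computation of Eq. (30) reduces to a weighted average over the sampled output axis. A minimal sketch, assuming a hypothetical aggregated output membership phi over the truncated µ-interval:

```python
import numpy as np

def centroid_defuzz(mu_axis, phi):
    """Center-of-gravity defuzzification of Eq. (30): membership-weighted
    average of the output-axis samples mu_k."""
    return float(np.sum(mu_axis * phi) / np.sum(phi))

# Assumed aggregated fuzzy output of the Mamdani rules, for illustration only.
mu_axis = np.linspace(0.01, 0.35, 200)
phi = np.exp(-((mu_axis - 0.08) / 0.03) ** 2)
mu_next = centroid_defuzz(mu_axis, phi)   # crisp step-size estimate, Eq. (29)
```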
