Neural-Based Control of a Robotic Hand


Proceedings of the 2004 IEEE International Conference on Robotics & Automation, New Orleans, LA, April 2004

Neural-Based Control of a Robotic Hand: Evidence for Distinct Muscle Strategies

Pedram Afshar*§, Yoky Matsuoka†§
* BioMedical Engineering, † The Robotics Institute & Mechanical Engineering, § Center for the Neural Basis of Cognition
Carnegie Mellon University, Pittsburgh, Pennsylvania, 15213
Email: pedram@cmu.edu, yoky@andrew.cmu.edu
0-7803-8232-3/04/$17.00 ©2004 IEEE

Abstract— The neural-based control of a robotic hand has many clinical and engineering applications. Current approaches to this problem have been limited due to a lack of understanding of the relationship between neural signals and dynamic finger movements. Here, we present a technique to predict index finger joint angles from neural signals recorded from the associated muscles. The neural signals are converted to a torque estimate (EBTE) and then input to artificial neural networks. The networks predict the finger position more closely when the inputs to the networks are torque estimates rather than neural signals. Furthermore, the networks trained with the EBTE signals could predict the joint angles for different phases of finger movements (i.e., a dynamic reaching and positioning task) while networks trained with the neural signals could not. Our results indicate that (1) similar finger movements are executed with different synergistic strategies and (2) different phases of finger movements employ different neural strategies. Through these results, we have demonstrated the first concrete technique to control a hand prosthetic device or dexterous tele-manipulator using natural neural control signals.

Keywords— motor control, electromyography, joint angles

I. INTRODUCTION

Discovering the relationship between neural signals and limb position would enable the development of natural brain-machine interfaces. Tele-operated devices controlled by neural commands could be used to provide precise human manipulation in remote or hazardous environments, and neurally controlled prosthetics could return function to a paralyzed patient by routing neural commands to actuators [1]. These applications rest on the assumption that neural activity can be translated into intended movements, and that these movements can then be executed by a device [1-3].

Decoding the neural signal into movement parameters (i.e., position and velocity) continues to be the major hurdle in solving this problem. Since Georgopoulos et al. [4] predicted gross arm movement direction from neural activity in monkey motor cortex, many others have used a variety of top-down approaches (central nervous system to movement) to predict dynamic movement direction [5, 6]. Georgopoulos et al. [4] found that neurons fired selectively for a particular direction of hand movement, following a roughly cosine tuning curve with respect to movement direction. Schwartz et al. [5] further showed that averaging these movement vectors over a group of motor neurons correlated with arm movement direction. To establish a more concrete relationship between neurons and limb position, Fetz et al. [7] [2] have shown that neural activity downstream of the brain (in the spinal cord) can be used to predict muscle activity. Work by Holdefer et al. [8] extended these results by showing that the activity of single neurons in the monkey brain caused activation in specific sets of limb muscles.

This type of neural signal processing has enabled the development of brain-machine interfaces (BMI), which control robots using measured neural activity in rats and monkeys [1, 15, 16]. These devices work by correlating sampled neuronal activity in the brain with limb movement direction through various classification techniques. That is, these algorithms use the neural signal to select one arm direction from a set of possible directions rather than computing the continuous limb movement. While these top-down approaches hold promise for the construction of a neuroprosthetic device, they can only provide a coarse estimation of the limb position. This is because it is not clear what the neural signal is encoding. In producing a movement, the neural signal is modified in the brain, the spinal cord, and by the muscle itself. The method of predicting dynamic and fine movements directly from cortical signals treats the brain, spinal cord, and muscle as a black box.

To predict fine and dynamic movements, it is necessary to take a bottom-up approach. The first step is to understand the relationship between the neural signal to muscle and the movement itself. This mapping, however, is not trivial because the neural signals to the muscles are known to have a nonlinear and many-to-one relationship with the forces that muscles produce. Furthermore, the complex biomechanical mapping between the muscle forces and the joint angles makes the already many-to-one redundant relationship a difficult mapping to model.

In this paper, we present the first successful method for determining the precise finger position of a human in a dynamic movement task using the neural input to muscles. While subjects traced a line with their index finger, we collected neural signals from all seven muscles that control the index finger. We describe a technique to convert the neural input into a quantity that resembles joint torques.

This torque-like quantity resolves the redundant relationship. We then describe an artificial neural network that takes this quantity as its input and predicts the joint angles accurately. Using this network, we test the hypothesis that a network trained with the torque-like quantity can predict joint angles for all conditions, while a network trained with the unprocessed neural signals can only predict specific phases of a movement.

II. METHODS

A. Experimental Setup

The neural input to the muscles and the joint angles were measured from two subjects (2 male, ages 21 and 23) while they made a point-to-point movement with their right index finger (Fig. 1). We used a PHANToM(TM) Premium 1.5 robot (SensAble Technologies, Inc.) to record the index finger movement. This robot has 3 degrees of freedom, and a custom-made finger cuff provided an additional two rotational degrees of freedom. The PHANToM recorded the Cartesian position of the robot endpoint, and we attached a potentiometer to the pitch axis of the cuff to record the pitch of the fingertip. The subjects' index finger was fastened into the finger cuff so that their distal interphalangeal joint (DIP) was aligned to the edge of the cuff. The Cartesian position of the DIP was determined by calculating its geometric relationship with the PHANToM endpoint. The subjects' palm and other digits were strapped to an armrest so that all the joints on the index finger could move freely while the hand was fixed a known distance away from the robot's origin. We recorded the joint angles of the robot and the finger cuff at 1 kHz.

Figure 1. Experimental setup. The subject's index fingertip endpoint and the neural inputs to the muscles controlling the index finger were measured as the subject traced a line with his index fingertip.

Along with the index finger joint angle information, the neural input to all the muscles controlling the index finger was measured: Flexor Digitorum Superficialis, Flexor Digitorum Profundus, Extensor Indicis, Extensor Digitorum, First Lumbrical, First Dorsal Interosseous, and First Palmar Interosseous. We used a fine-wire electromyographic (EMG) technique modified from Burgar et al. [9, 10]. A 30 cm segment of 50 µm bifilar wire (California Fine Wire) was threaded through a 27-gauge hypodermic needle. Muscles were identified using anatomical landmarks, and signals were tested for cross-talk by viewing the signal on an oscilloscope prior to recording. At the beginning and end of the session, the maximum and minimum muscle contraction signals were recorded. To find the minimum, the subjects placed their hand on the table and were asked to relax and let the table support their arm and hand. The maximum contraction signal was determined for each muscle individually by asking the subject to contract as hard as possible against the table in the direction of the muscle's action.

During the experiment, the subjects were instructed to trace a 3.5 cm line segment with a tracing error of less than 1 mm^2 and in a time window between 450 and 550 milliseconds. The tip of the index finger traveled from the upper left to the lower right corner of the index finger workspace. This motion was selected because it spanned a wide range of each of the finger joint angles. At the end of each trial, the subject was given performance feedback on a computer monitor. The subject's trace was superimposed on a drawing of the line segment, and colored boxes indicated the outcome in achieving the desired time window and tracing error performance. After observing the error, the subject reset his finger to the starting point and waited for an audible go stimulus to begin the next trace.

The session lasted for 120 movements. In order to assure that all movements used for the analysis were similar, we excluded the first 60 movements (to exclude movements that may include any adaptation changes) from the analysis. We also stopped recording at 120 movements to prevent fatigue, which may change the nature of the EMG signal [11].

Given that the hand is fixed and the robot is rigidly coupled to the fingertip, inverse kinematics equations can uniquely identify all four joint angles, provided the correct range of motion for the joints. Assume that (Pwx, Pwy, Pwz) is the location of the distal interphalangeal joint (DIP) center, Φ is the angle measured by the finger cuff, a2 is the distance from the metacarpophalangeal joint (MCP) to the proximal interphalangeal joint (PIP), and a3 is the distance from the PIP to the DIP. With c3 = cos(θ_PIP) and s3 = sin(θ_PIP), where θ_PIP is given by (3), the MCP flexion is determined by

  θ_MCP = atan2( (a2 + a3 c3) Pwz − a3 s3 √(Pwx² + Pwy²),  (a2 + a3 c3) √(Pwx² + Pwy²) + a3 s3 Pwz ),  (1)

the MCP abduction is determined by

  θ_ABD = atan2( Pwy, Pwx ),  (2)

the PIP flexion is determined by

  θ_PIP = atan2( √(1 − D²), D ),  where D = (Pwx² + Pwy² + Pwz² − a2² − a3²) / (2 a2 a3),  (3)

and the DIP flexion is determined by

  θ_DIP = Φ − (θ_PIP + θ_MCP).  (4)
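The reconstructed equations (1)-(4) amount to standard two-link inverse kinematics in the plane containing the fingertip, plus an abduction angle from the in-plane direction. The sketch below is an illustrative reimplementation, not the authors' code; the segment lengths in the consistency check are placeholders:

```python
import math

def finger_ik(pw, phi, a2, a3):
    """Recover (MCP flexion, MCP abduction, PIP flexion, DIP flexion)
    from the DIP-center position pw = (Pwx, Pwy, Pwz), the cuff pitch
    angle phi, and segment lengths a2 (MCP-PIP) and a3 (PIP-DIP),
    following equations (1)-(4)."""
    x, y, z = pw
    u = math.hypot(x, y)                      # sqrt(Pwx^2 + Pwy^2)
    # Eq. (3): PIP flexion from the law of cosines.
    d = (u * u + z * z - a2 * a2 - a3 * a3) / (2.0 * a2 * a3)
    pip = math.atan2(math.sqrt(max(0.0, 1.0 - d * d)), d)
    # Eq. (1): MCP flexion, with c3 = cos(PIP) and s3 = sin(PIP).
    k1 = a2 + a3 * math.cos(pip)
    k2 = a3 * math.sin(pip)
    mcp = math.atan2(k1 * z - k2 * u, k1 * u + k2 * z)
    # Eq. (2): MCP abduction.
    abd = math.atan2(y, x)
    # Eq. (4): DIP flexion from the measured fingertip pitch.
    dip = phi - (pip + mcp)
    return mcp, abd, pip, dip
```

A quick consistency check is to run the forward kinematics for known angles and confirm that the routine recovers them.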

Since there are more muscles than degrees of freedom, there are infinitely many muscle contraction patterns that impart the same torque on a joint in the index finger. These patterns represent differences in cocontraction (simultaneous activity of opposing muscles around a joint), which changes joint stiffness. Computing the EBTE (defined below) effectively removes the redundant muscle-activation-to-joint-angle mappings. While the EBTE resembles the joint torques, the literature suggests that EMG signals are not equivalent to muscle force [10, 14]. Thus the EBTE, which uses F̂_m(t), cannot be considered the joint torque signal, and therefore cannot be used in the traditional dynamical equations of motion.

Figure 2. Model of cocontraction in a joint of the index finger: three muscles with moment arms R1, R2, and R3 act around the joint. Joint movements are determined by the net torque around the joint.

Instead, a two-layer artificial neural network was used to predict joint angles from the normalized EMG and the EBTE. The network, implemented in MATLAB, consisted of an input layer, one hidden layer, and an output layer. The input layer had 4 (EBTE network) or 7 (EMG network) nodes, each connected to a 40-millisecond tapped delay line. The hidden layer consisted of 15 nodes, and the output layer consisted of 4 nodes. The network was trained with scaled conjugate gradient descent backpropagation on 10 input sets until achieving a mean-squared error of 0.05.

B. Data analysis

For each movement, there were 7 EMG recordings and 4 joint angle measurements over a period of 250-750 milliseconds, depending on the time it took for the subject to reach the endpoint. The goal of the analysis was to design a mapping from EMG to joint angles as a function of time. We created two different sets of inputs for the artificial neural network to predict the joint angles.

All movements to the target were stereotypical reaching movements in that they were performed in two distinctive phases, shown in Fig. 3. The first phase, called the "reaching" phase, contained a high-velocity profile (greater than 80 mm/sec) towards the endpoint. The second phase, called the "corrective" phase, contained a slower velocity profile (between 20 and 70 mm/sec) to correct the error of the reaching phase and position the fingertip at the endpoint. Because we hypothesized that the reaching and corrective phases may use different mappings between the muscles and the joint angles, we treated these two phases separately while training the neural network. Rather than dividing the whole movement into two segments, we took the reaching phase to be the segment where the first bell-shaped profile exceeded 75 mm/sec, and the corrective phase to be the segment where the second bell-shaped profile exceeded 20 mm/sec. This segmentation technique allowed multiple reaching movements to align properly during network training. We also tested a "whole" movement condition: the whole movement was taken from the beginning of the reaching phase until the end of the corrective phase.

1) Normalized EMG

EMG signals are a composite of the neural signals from many motor neurons. As such, the signal is irregular, chaotic, and oscillatory. However, the amount of motor-neuron activity is correlated with the number of zero crossings, that is, the number of times the signal crosses 0 volts [10]. Therefore, the amount of muscle force, which is a result of motor-neuron firing, can be hypothesized to be proportional to the frequency of zero crossings. Furthermore, an estimate of the force of contraction can be obtained by normalizing the zero-crossing frequency using the maximum and minimum contraction signals. The magnitude of the EMG signal was determined by taking the number of zero crossings in a 15-millisecond time window. The signal was normalized and low-pass filtered at 70 Hz with an equiripple FIR filter with a Kaiser window.
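The zero-crossing envelope described above can be sketched as follows. This is a hedged reimplementation: the 70 Hz equiripple FIR stage is omitted, and only the window length and sampling rate follow the text.

```python
import numpy as np

def zero_crossing_envelope(emg, fs=1000, win_ms=15):
    """Count zero crossings of the raw EMG signal (samples crossing
    0 volts) in a sliding 15-millisecond window."""
    signs = np.signbit(emg)
    # 1 wherever consecutive samples differ in sign, 0 elsewhere.
    crossings = np.concatenate(([0.0], (signs[1:] != signs[:-1]).astype(float)))
    win = max(1, int(round(fs * win_ms / 1000.0)))
    return np.convolve(crossings, np.ones(win), mode="same")

def normalize_envelope(env, env_rest, env_max):
    """Scale the envelope between the resting (minimum) and maximum
    voluntary contraction recordings."""
    return np.clip((env - env_rest) / (env_max - env_rest), 0.0, 1.0)
```

A higher-frequency burst produces more crossings per window, which is the property the force estimate relies on.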
This filtered and normalized EMG was one of the inputs used to estimate the joint angles.

2) EMG Based Torque Estimate

To produce the second set of inputs, we used the assumption that this filtered and normalized EMG signal is proportional to muscle force [12] to calculate a quantity that is related to the joint torque. Our EMG Based Torque Estimate (EBTE) is determined by combining the EMG signals of the muscles around a joint (Fig. 2). For each joint, an EBTE was computed by

  τ̂_j(t) = Σ_m F̂_m(t) · r_{m,j},  (5)

where F̂_m(t) is the force estimate from the EMG and r_{m,j} is the radius of the moment arm for muscle m around joint j [13] (Fig. 2). Using the seven EMG signals, we calculated four EBTEs corresponding to the four degrees of freedom.

For each training session, 13 movements were selected at random from the 60 total movements. The network was trained with 10 movements selected at random from those 13 movements. After reaching the convergence criterion (MSE of 0.05), the network was tested on the 3 input sets it had not been given. Joint angles calculated from the neural network were compared with joint angles determined from the inverse kinematics equations (equations (1)-(4)).

Using the data collected, the EMG network and the EBTE network were compared in the following ways:

• Predicting joint angles for the whole movement. If the redundancy is well represented in the EBTE model, the network trained with the EBTE should predict the joint angles more accurately than the one trained with the normalized EMG.

• Predicting joint angles for the reaching phase based on networks trained on only the reaching phase. Both networks should be able to predict the reaching phase, since the cocontraction strategy is constant within it (see discussion).

• Predicting joint angles for the corrective phase based on the networks trained on the reaching phase. Since only the EBTE network should have a unique mapping to the joint movement, it should predict joint angles for reaching or corrective phases even if the network was trained on the reaching phase only. If the cocontraction strategies are different between these two phases, then the EMG network should predict joint angles poorly in this condition.
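Equation (5) is a linear map from the seven muscle-force estimates to the four joint-torque estimates, i.e., a single matrix product per time step. In the sketch below the moment-arm matrix is a hypothetical placeholder; the paper takes its values from the anatomical literature [13]:

```python
import numpy as np

N_MUSCLES, N_JOINTS = 7, 4

def ebte(f_hat, r):
    """EMG Based Torque Estimate, eq. (5):
    tau_hat[j, t] = sum over m of f_hat[m, t] * r[m, j].
    f_hat: (N_MUSCLES, T) force estimates from normalized EMG.
    r:     (N_MUSCLES, N_JOINTS) signed moment arms (flexors and
           extensors act with opposite sign around a joint)."""
    return r.T @ f_hat

# Hypothetical moment arms in meters -- placeholders, not measured values.
r = np.random.default_rng(0).normal(0.0, 0.008, (N_MUSCLES, N_JOINTS))
```

The product collapses the 7-dimensional muscle space onto the 4 joint degrees of freedom, which is how it removes the cocontraction redundancy: a cocontraction pattern whose weighted sum cancels contributes nothing to the EBTE.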

Figure 3. Line tracing task. A: The speed profile of the trace as a function of time shows two bell-shaped curves: the reaching movement (150 ms-310 ms) and the corrective movement (475 ms-550 ms). B: The position of the index fingertip during the trace. The dotted line indicates the prompt; the solid line indicates the subject's trace.

Figure 4. Joint angle predictions of a neural network based on EBTE. MCP = metacarpophalangeal joint flexion, ABD = metacarpophalangeal joint abduction, PIP = proximal interphalangeal joint flexion, DIP = distal interphalangeal joint flexion.

III. RESULTS

A. Predicting Whole Movements with a Network Trained on Whole Movements

When the input to the neural network was the EBTE of the whole movement, the network converged after 90 epochs. The comparison between the actual and predicted movements for a test movement (not used during training) is shown in Fig. 4. For the 3 testing movements, the average R² was 0.76 ± 0.14. When the normalized EMG was used as the input, the network converged after 85 epochs and predicted the testing movements with an average R² of 0.70 ± 0.16. Though we predicted better performance from the EBTE network, the improvement was not statistically significant (p = 0.61).

B. Predicting Reaching Movements with a Network Trained on Reaching Movements

When the input to the neural network was the EBTE of the reaching movement only, the network converged after 70 epochs. For the 3 testing movements, the average R² was 0.83 ± 0.09. When normalized EMG was used to predict reaching movements, the network converged in 55 epochs with an average R² of 0.81 ± 0.05 for the testing movements (Fig. 5). As expected, both networks were able to predict joint angles well.

C. Predicting Corrective Movements with a Network Trained on Reaching Movements

Finally, the network trained on the reaching movements alone was used to predict corrective movements. The EBTE network predicted corrective-phase joint angles with an average R² of 0.84 ± 0.16. When normalized EMG was used as input, the network predicted corrective-movement joint angles with an average R² of 0.58 ± 0.14 (Fig. 5).
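The prediction pipeline behind these results pairs a 40 ms tapped delay line on each input channel with an R² score against the inverse-kinematics joint angles. Below is a minimal sketch of those two pieces; the network itself is omitted, and any regressor can sit in between:

```python
import numpy as np

def tapped_delay(signals, n_taps=40):
    """Stack the n_taps most recent samples of every channel into one
    feature row per time step (40 taps at 1 kHz = a 40 ms delay line)."""
    n_ch, T = signals.shape
    rows = [signals[:, t - n_taps + 1 : t + 1].ravel()
            for t in range(n_taps - 1, T)]
    return np.array(rows)          # shape: (T - n_taps + 1, n_ch * n_taps)

def r_squared(y_true, y_pred):
    """Coefficient of determination used to score joint-angle predictions."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot
```

The delay line gives the regressor a short history of each EMG or EBTE channel, which is what lets a static network capture the dynamics of the movement.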

We hypothesized that the EMG-trained network would be able to predict finger position only in the phase in which it was trained. Fig. 5 shows a statistically significant (p < 0.05) decrease in performance when the EMG network was tested on the corrective-phase movement. No statistical difference was found between the EBTE network's prediction of reaching movements and corrective movements. This result shows that 1) the EBTE we calculated is a good model to eliminate the cocontraction effect, and 2) using the EBTE, the joint angles could be reliably predicted regardless of the neural strategies used.

Given our hypothesis that the EBTE is a better predictor of limb position than EMG, it is surprising that the EMG-trained network performed statistically the same as the EBTE-trained network on the whole-trial data. One explanation is that the reaching movement was overly represented in the whole-trial data. In fact, the reaching-movement phase composed approximately ¾ of the whole trial. Thus, it is possible that the EMG-trained network learned only the cocontraction pattern found in the reaching phase and could therefore accurately predict the joint angles in a large portion of the whole trial. The total performance of the normalized EMG network would therefore be inflated. This hypothesis is supported by the fact that the normalized EMG network is qualitatively better at predicting the joint angles in the first 210 milliseconds of the trial (Fig. 4b). This potentially spurious result could be resolved by selecting movement phases of equal duration.

Figure 5. Performance of the EBTE network versus the EMG network under three testing conditions (reaching → reaching, reaching → corrective, whole → whole). The performance of the EBTE network was not statistically different across conditions.

