Evolution of Cooperation through Genetic Collective Learning and Imitation in Multiagent Societies

Honglin Bao (1,2), Qiqige Wuyun (1), and Wolfgang Banzhaf (1,2)
(1) Department of Computer Science and Engineering
(2) NSF BEACON Center for the Study of Evolution in Action
Michigan State University, East Lansing, MI

Abstract

How to facilitate the evolution of cooperation is a key question in multi-agent systems and game-theoretical situations. Individual reinforcement learners often fail to learn coordinated behavior. Using an evolutionary approach for selection can produce optimal behavior but may require significant computational effort. Social imitation of behavior produces only weak coordination in a society. Our goal in this paper is to improve the behavior of agents with reduced computational effort by combining evolutionary techniques, collective learning, and social imitation. We design a genetic-algorithm-based cooperation framework equipped with these techniques in order to solve particular coordination games in complex multi-agent networks. In this framework, offspring agents inherit the more successful behaviors selected from game-playing parent agents, and all agents in the network improve their performance through collective reinforcement learning and social imitation. Experiments are carried out to test the proposed framework and to compare its performance with previous work. The experimental results show that the framework is more effective for the evolution of cooperation in complex multi-agent social systems than an evolutionary, reinforcement learning, or imitation system on its own.

1. Introduction

A multiagent system (MAS), which consists of multiple interacting intelligent agents and their environment, is a computerized system for solving problems that are difficult or impossible for an individual agent to solve. Cooperation, which has a long history in the application of game theory (Axelrod and Hamilton, 1981), assumes great importance in the field of multiagent systems. In multiagent societies, cooperation represents an interaction among agents that can be evolutionarily advantageous, improving the performance of individual agents or the overall behavior of the society they belong to. Therefore, one of the main goals in multiagent societies is to achieve efficient cooperation among agents to jointly solve tasks or to maximize a utility function.

In order to realize such cooperation, techniques developed in the field of machine learning have been introduced into various multiagent systems (Kapetanakis and Kudenko, 2002). Machine learning has proven to be a popular approach to multiagent system problems because of the inherent complexity of these problems. Among machine learning techniques, reinforcement learning has gained much attention in the field of multiagent systems, since it learns by trial-and-error interaction with its dynamic environment and is easy to apply. However, several new challenges arise for reinforcement learning in multiagent systems. Foremost among these is that the performance of reinforcement learning is unsatisfactory in many real-world applications: the learning algorithm may not converge to an optimal action combination. Some researchers have shown that an adaptive strategy called evolutionary reinforcement learning, which combines reinforcement learning with a genetic algorithm, can reach better performance than either strategy alone (Ackley and Littman, 1991).

Some newer forms of learning, e.g., observational, imitational, and communication-based learning (Taylor et al., 2006; Savarimuthu et al., 2011), also significantly promote information proliferation (Dittrich and Banzhaf, 2002) in more complex environments and can be used to solve complex distributed multiagent problems better than pure reinforcement learning approaches. Furthermore, ensemble methods combine the advantages of multiple learning algorithms to obtain better performance than could be obtained from any of them alone (Polikar, 2006). More recently, Yu et al. (2017) studied the roles of reinforcement learning, collective decision making, social structure, and information diffusion in the evolution of cooperation in networked societies.

Although previous work provides a strong basis for studying the mechanisms behind the evolution of cooperation, existing work in this area has drawbacks. Individual reinforcement learners often fail to develop globally coordinated behavior and can become trapped in local sub-optimal dilemmas. Using an evolutionary approach for strategy selection can produce optimal behavior but may require significant computational effort. Behavior imitation often yields only weak local coordination in a society, limited to local interactions between agents.

This study differs significantly from other frameworks for the evolution of cooperation in previous studies because of the hybrid decision-making policy of its agents. Here we design a genetic-algorithm-based cooperation framework, which takes into account evolutionary selection, collective learning, and imitation, in order to solve some particular non-cooperative games in complex multiagent networks, overcome the shortcomings above, and produce an acceptable tradeoff between convergence rate and computational effort.

The final decision of an agent is influenced by three kinds of processes:

1) Evolutionary selection (with inheritance and mutation): A population of agents plays a game with their neighbors (i.e., the agents that are directly connected with the focal agent) on the network for several iterations. The offspring generation is reproduced from the parent generation according to the cumulative-payoff distribution, and the most successful agents pass on their actions to their offspring. Mutation occurs with a small probability during the inheritance process to create novelty.

2) Collective learning: Agents on the network improve on their parents' actions and their original actions through a collective reinforcement learning algorithm with exploration and exploitation.

3) Imitation: Agents update their cumulative payoffs, compare them with those of their neighbors, and adopt the actions of more successful agents as their own with a particular probability.

These three processes interact with each other and can significantly influence the evolution of cooperation in the entire society.

The remainder of the paper is organized as follows. Section 2 introduces multiagent societies and the evolution of cooperation. Section 3 describes the proposed framework. Section 4 presents experimental studies. Finally, Section 5 concludes the paper with some directions for future research.

2. Multiagent Societies and the Evolution of Cooperation

This section gives a description of multiagent societies and the evolution of cooperation.

Definition 1. A Multiagent Society can be represented as an undirected graph G = (E, R), where E = {e_1, ..., e_n} is a set of entities (agents) in the society, and R ⊆ E × E is a set of relationships, each of which connects two agents.

Definition 2. Given a multiagent society (E, R), the Neighbors of agent i, denoted N(i), are the set of agents N(i) = {e_j | <e_i, e_j> ∈ R}, where <e_i, e_j> symbolizes a connection.

This paper adopts two typical topologies to represent a multiagent society: small-world networks and scale-free networks (Yu et al., 2017). We use SWN_{k,ρ} to denote a small-world network, where k is the average size of the neighborhood of a node, ρ is the re-wiring probability indicating the evolvability of the small-world network, and N is the number of nodes. We use SFN_{k,γ} to denote a scale-free network, in which the probability that a node has k neighbors is roughly proportional to k^{-γ}; N is again the number of nodes.

In this paper, we adopt the "Rules of the Road Game", a typical coordination game, as an example to study the evolution of cooperation (Young, 1996). Consider two carriages meeting on a narrow road from opposite directions, with no context for deciding on which side of the road to pass each other. If they choose differently, they cause a head-on collision and receive a negative payoff. Only if they choose the same side do they avoid a collision and receive a positive payoff. To abstract from this realistic situation to virtual multiagent societies, agents strive to establish a convention (or law) of coordinated action by choosing from an action space without any central controller. The payoff matrix is shown in Table 1.

Table 1: Payoff matrix of an n-action 2-player coordination game.

              Action 1    Action 2    ...    Action n
  Action 1    1, 1        -1, -1      ...    -1, -1
  Action 2    -1, -1      1, 1        ...    -1, -1
  ...         ...         ...         ...    ...
  Action n    -1, -1      -1, -1      ...    1, 1

There are multiple Nash equilibria in this game, namely the diagonal outcomes in which both players choose the same action, i.e., the coordinated actions. However, even purely rational players cannot single out one coordinated action without negotiation, because they have no information to differentiate between the strictly identical equilibria. In reality, people can resolve such social dilemmas because there are laws or social norms to refer to. Our goal in this paper is to train the agents of a virtual society to choose the cooperative action without upper-level steering and regulation.

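To make the game concrete, the pairwise payoff of Table 1 can be written as a short function. The following Python sketch is purely illustrative; the function name and the encoding of actions as integers 0, ..., n-1 are our own choices and do not appear in the paper.

    def coordination_payoff(action_i: int, action_j: int) -> tuple[int, int]:
        """Pairwise payoff of the n-action coordination game in Table 1.

        Both players receive +1 if they choose the same action, -1 otherwise.
        """
        reward = 1 if action_i == action_j else -1
        return reward, reward

    # Example: matching actions coordinate, mismatched actions collide.
    assert coordination_payoff(2, 2) == (1, 1)
    assert coordination_payoff(0, 3) == (-1, -1)
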
3. The Proposed Framework

The overall proposed cooperation framework is shown in Algorithm 1. It constitutes a genetic algorithm (GA) based cooperation framework for MAS with collective decision making, learning, and imitation, designed to facilitate the evolution of cooperation in particular coordination games. The framework is set in a network structure such as a small-world network or a scale-free network. A population of agents plays the coordination game with their neighbors repeatedly and simultaneously in the network for several generations. The offspring generation i_o is reproduced from the parent generation i_p according to the distribution of cumulative payoffs E_i. The most successful agents pass on their behavior to their offspring i_o, and mutation changes this behavior with a small probability η during inheritance, as described in Subsection 3.1. The society information regarding nodes and edges is updated regularly. Agents then improve their actions (both inherited and original) through a collective reinforcement learning algorithm with exploration and exploitation, described in Subsection 3.2. This will often cause later generations to converge to optimal behavior in the coordination game (McGlohon and Sen, 2005). After collective reinforcement learning, there is an imitation phase: agents update and compare their cumulative payoffs with those of their neighbors, and imitate their neighbors' actions with a probability W, as detailed in Subsection 3.3.

Algorithm 1: The proposed cooperation framework
 1  Initialize the multiagent network and parameters;
 2  for each step t (t = 1, ..., T) do
 3      for each agent i (i = 1, ..., n_0) do
 4          for each neighbor j ∈ N(i) of agent i do
 5              Agent i plays the game with neighbor j and receives the corresponding payoff r_ij;
 6          end
 7          Agent i calculates the cumulative payoff E_i;
 8          Offspring generation i_o will be reproduced from parent generation i_p according to E_i;
 9      end
10      for each parent agent i_p (i_p = 1, ..., n_0) do
11          Parent i_p passes on its behavior to the offspring i_o;
12          Mutation changes it with a small probability η during inheritance;
13      end
14      The society information regarding nodes and edges is updated;
15      for each agent i in the new network do
16          for each neighbor j ∈ N(i) of agent i do
17              Agent i improves its behavior with a collective learning method with exploration and exploitation regarding neighbor j;
18              Agent i and neighbor j update the cumulative payoffs E'_i and E'_j;
19              Agent i imitates the action of neighbor j with a probability W;
20          end
21      end
22  end

3.1. Selection, Inheritance and Mutation

This subsection describes the process of payoff-distribution-based reproduction (i.e., selection), inheritance, and mutation.

Definition 3. Given a multiagent society (E, R), the Action Space of this society, denoted N_a, is the set of actions available to all agents, N_a = {a_0, a_1, ..., a_τ}, where τ is the number of available actions.

In Algorithm 1, we first initialize the multiagent network and parameters. Each agent takes an action chosen randomly from the action space N_a. Agent i plays the game with neighbor agent j repeatedly and receives a corresponding payoff r_ij according to Table 1. Agent i calculates its cumulative payoff E_i. When agents are chosen to reproduce, their fitness is based on the relative cumulative-payoff distribution P_i shown in Equation 1 (McGlohon and Sen, 2005).

P_i = \frac{E(i)}{\sum_{j=1}^{n_0} E(j)}    (1)

The probability θ_i of agent i being chosen to reproduce (i.e., the fitness function) is shown in Equation 2.

\theta_i = \begin{cases}
P_i,    & \text{if } E(i) > 0 \text{ and } \sum_{j=1}^{n_0} E(j) > 0, \\
1/n_0,  & \text{if } E(i) = 0 \text{ and } \sum_{j=1}^{n_0} E(j) > 0, \\
0,      & \text{if } E(i) < 0 \text{ and } \sum_{j=1}^{n_0} E(j) > 0.
\end{cases}    (2)

The situation in which the total payoff \sum_{j=1}^{n_0} E(j) is negative is more complex. We set \hat{E}_i as the absolute value of E_i and compute \hat{P}_i as in Equation 1 using the \hat{E} values. For \sum_{j=1}^{n_0} E(j) < 0, the probability θ_i of agent i being chosen to reproduce is given in Equation 3.

\theta_i = \begin{cases}
\hat{P}_i, & \text{if } E(i) > 0 \text{ and } \sum_{j=1}^{n_0} E(j) < 0, \\
0,         & \text{if } E(i) \le 0 \text{ and } \sum_{j=1}^{n_0} E(j) < 0.
\end{cases}    (3)

Equations 2 and 3 are inspired by win-stay, lose-shift, a simple but insightful social strategy (Nowak and Sigmund, 1993). Here winning means a positive payoff and losing means a negative payoff. Winning individuals in a globally losing environment should be given more chances to reproduce, ordinary individuals reproduce the ordinary number of offspring, and losing individuals should be punished in a positive society. We use fitness-proportionate selection. Note that there is no crossover or recombination in our model. Offspring i_o are reproduced from parents i_p according to the fitness function.

Notice: 1) If the cumulative payoff of the entire population is 0, i.e., \sum_{j=1}^{n_0} E(j) = 0, we reinitialize the experiment; 2) If θ_i > 1, we set θ_i = 1.

After reproducing offspring based on fitness, parents simply pass on their behaviors to their offspring. In this process, mutation changes the behavior of an offspring with a small probability η; in that case a random behavior is chosen rather than the inherited one. We set η = 1% (McGlohon and Sen, 2005).

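The reproduction probabilities of Equations 1 to 3 and the mutation step can be sketched in Python as follows. This is a minimal illustration under our reading of the reconstructed equations (the function names are ours, and the treatment of the globally negative case follows Equation 3 as reconstructed above); it is not the authors' reference implementation.

    import random

    def reproduction_probability(E: list[float], i: int) -> float:
        """Probability theta_i of agent i being chosen to reproduce (Eqs. 1-3)."""
        n0 = len(E)
        total = sum(E)
        if total > 0:                          # Equation 2: a globally winning society
            if E[i] > 0:
                theta = E[i] / total           # Equation 1: relative payoff share
            elif E[i] == 0:
                theta = 1.0 / n0               # ordinary individuals reproduce ordinarily
            else:
                theta = 0.0                    # losing individuals are punished
        elif total < 0:                        # Equation 3: a globally losing society
            if E[i] > 0:                       # winners in a losing environment get more chances
                theta = abs(E[i]) / sum(abs(e) for e in E)
            else:
                theta = 0.0
        else:                                  # Notice 1): zero total payoff, reinitialize
            raise ValueError("total payoff is zero; reinitialize the experiment")
        return min(theta, 1.0)                 # Notice 2): cap theta_i at 1

    def inherit_with_mutation(parent_action: int, actions: list[int], eta: float = 0.01) -> int:
        """Offspring inherit the parent's action; with probability eta a random action replaces it."""
        if random.random() < eta:
            return random.choice(actions)
        return parent_action
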
3.2. Collective Learning

As shown in Algorithm 2, collective learning is proposed to improve behavior (both inherited and original) in an extending network. All agents in the society interact repeatedly and simultaneously with their neighbors. In each time step, an agent uses a reinforcement learning algorithm to choose a best-response action for each neighbor. The best-response actions for all neighbors are then aggregated into an overall action using collective voting methods, which are described in detail in Subsection 3.2.1. Local and global exploration and exploitation are discussed in Subsection 3.2.2. The agent then plays the overall action with all of its neighbors and receives a corresponding payoff according to Table 1. The learning information for each neighbor is updated with the overall action and the corresponding payoff. The entire process of this algorithmic framework is shown in Figure 1. Here we focus on the neighbors of agent i.

Algorithm 2: The collective learning framework
 1  for each step t (t = 1, ..., T) do
 2      for each agent i (i = 1, ..., n) do
 3          for each neighbor j ∈ N(i) of agent i do
 4              Agent i has a Q-function for each of its neighbors j;
 5              Agent i chooses a best-response action a_{i→j} regarding neighbor j using a Q-learning algorithm;
 6              // Local exploration;
 7          end
 8          Agent i aggregates all the actions a_{i→j} into an overall action a_i using ensemble learning methods;
 9          // Global exploration;
10      end
11      for each agent i (i = 1, ..., n) do
12          Agent i plays action a_i with its neighbors and receives the corresponding payoff r'_ij for each interaction;
13          Agent i updates the learning information towards each neighbor using the action-payoff pair (a_i, r'_ij);
14      end
15  end

Figure 1: The entire process of the proposed framework. Agent i first plays the game and receives payoffs r_i1 and r_i2 from its two neighbors. After reproduction, agent i interacts with its new neighbors and chooses the best-response action-value pairs {a_{i→1}, Q_1(s, a)} and {a_{i→2}, Q_2(s, a)}. Agent i then aggregates a_{i→1} and a_{i→2} into an overall action a_i, plays a_i with its neighbors, and receives payoffs r'_i1 and r'_i2. The cumulative payoffs E'_i, E'_1, and E'_2 of agent i and its neighbors are updated, and agent i imitates its neighbors according to the new cumulative payoffs.

3.2.1. Collective Decision Making

After reproduction, in this new, extended society, all agents first interact with their neighbors. We adopt a widely used reinforcement learning algorithm, Q-learning, to model this interaction. Its one-step update rule is given by Equation 4, where α ∈ (0, 1] is the learning rate and λ ∈ [0, 1) is the discount factor.

Q(s, a) \leftarrow Q(s, a) + \alpha \left[ R(s, a) + \lambda \max_{a'} Q(s', a') - Q(s, a) \right]    (4)

As shown in Equation 4, an agent has a set of states and a set of actions. The agent performs an action a, transitions from state s to a new state s', and receives the immediate reward R(s, a). Q(s, a) is the expected reward of choosing action a in state s at time step t. During the interaction, the agent maximizes the expected discounted reward Q(s', a') to make decisions in the new state s' at time step t + 1. The Q-function is learned during an agent's lifetime and is used to choose a best-response action based on the Q-values.

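The one-step update in Equation 4 is the standard tabular Q-learning rule. A minimal Python sketch is given below; the class and variable names are ours, and in the proposed framework each agent would keep one such learner per neighbor.

    from collections import defaultdict

    class QLearner:
        """Tabular Q-learning for a single neighbor, implementing Equation 4."""

        def __init__(self, actions, alpha=0.1, lam=0.9):
            self.actions = list(actions)
            self.alpha = alpha               # learning rate alpha in (0, 1]
            self.lam = lam                   # discount factor lambda in [0, 1)
            self.q = defaultdict(float)      # Q(s, a), initialized to 0

        def best_action(self, state):
            """Greedy best-response action in the given state."""
            return max(self.actions, key=lambda a: self.q[(state, a)])

        def update(self, state, action, reward, next_state):
            """Q(s,a) <- Q(s,a) + alpha [R(s,a) + lambda max_a' Q(s',a') - Q(s,a)]."""
            best_next = max(self.q[(next_state, a)] for a in self.actions)
            td_error = reward + self.lam * best_next - self.q[(state, action)]
            self.q[(state, action)] += self.alpha * td_error
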
Each agent needs to aggregate all the best-response actions regarding its neighbors into an overall action. This is inspired by the opinion-aggregation process in which people usually seek suggestions from many others before making a final decision. Opinion aggregation can be realized by an ensemble learning method, which combines multiple single learning algorithms to obtain better performance than could be obtained from any of them alone (Polikar, 2006).

The foremost method of collective voting is inspired by a simple political principle, majority rule. In a simple society (e.g., the undirected simple graph that represents the multiagent network adopted in this paper), individuals tend to decide as the majority of their neighbors do. So in this paper, when agents make final decisions, they consider the action that quantitatively dominates in the best-response action pool. More complex and realistic methods for making a final decision consider the weight of each neighbor, such as performance-based weighted voting and structure-based weighted voting. For structure-based weighted voting, the weight of each neighbor is related to the neighbor's degree: the focal agent gives a higher weight to a neighbor with more connections. For performance-based weighted voting, the focal agent considers previous interaction experience and gives a higher weight to neighbors it trusts. A more detailed description of these collective voting methods can be found in (Yu et al., 2017). In this study, we adopt majority voting as the opinion-aggregation method.

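Majority voting over the per-neighbor best responses can be written in a few lines of Python. The tie-breaking rule (a uniformly random choice among the most frequent actions) is our assumption; the paper does not specify how ties are resolved.

    import random
    from collections import Counter

    def majority_vote(best_responses: list[int]) -> int:
        """Aggregate per-neighbor best-response actions into one overall action."""
        counts = Counter(best_responses)
        top = max(counts.values())
        winners = [action for action, c in counts.items() if c == top]
        return random.choice(winners)    # assumed tie-break: uniform among winners

    # Example: three neighbors suggest action 1, one suggests action 0.
    assert majority_vote([1, 1, 0, 1]) == 1
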

3.2.2. Exploration and Exploitation

With pure greedy learning, agents can easily become trapped in local sub-optima and thus fail to learn the optimal behavior. During learning, an agent needs to strike a balance between exploitation of learnt knowledge and exploration of the unexplored environment in order to try more actions, escape from local sub-optima, and learn optimal behavior. In this paper, we propose simulated-annealing exploration for dealing with exploitation and exploration during learning.

Simulated annealing (SA) is a non-linear technique for approximating the global optimum of a given function. We adopt SA and combine it with traditional exploration. One step of SA exploration is given by Equation 5.

\mu_t = \mu_0 / \lg(1 + t)    (5)

In Equation 5, μ_t is the exploration rate in the t-th round of the simulation, and μ_0 is the initial exploration rate. At the beginning (when t is small), exploration is given a higher weight so that the unknown environment is explored. As the algorithm continues (t increases), the probability of exploitation (i.e., 1 - μ_t) increases, so the agent focuses more on exploiting learnt knowledge.

In Algorithm 2, during the interaction with neighbors, agents need to find a best-response action for each neighbor with the Q-learning method. At each time step t, regarding each neighbor j, agent i chooses the best-response action with the highest Q-value with probability 1 - μ_t (i.e., exploitation), or chooses an action randomly with probability μ_t (i.e., exploration). This occurs in the process of local interaction with neighbors, and we call it Local SA Exploration. When agents use a specific ensemble method to aggregate all the best-response actions into an overall action, they choose the overall action given by the ensemble method with probability 1 - μ_t (i.e., exploitation), or choose an action randomly with probability μ_t (i.e., exploration). This occurs in the process of overall aggregation, and we call it Global SA Exploration. A small average exploration rate (such as 10%) is kept throughout to conserve a small probability of exploring.

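The annealed exploration rate of Equation 5, used epsilon-greedily both per neighbor (local SA exploration) and at the aggregation step (global SA exploration), can be sketched as follows. Treating lg as the base-10 logarithm and requiring t >= 1 are our assumptions about the paper's notation.

    import math
    import random

    def sa_exploration_rate(t: int, mu0: float = 0.144) -> float:
        """Equation 5: mu_t = mu_0 / lg(1 + t), for rounds t >= 1 (lg assumed base 10)."""
        return mu0 / math.log10(1 + t)

    def sa_choose(greedy_action: int, actions: list[int], t: int, mu0: float = 0.144) -> int:
        """Keep the greedy action with probability 1 - mu_t, otherwise explore randomly.

        The same rule serves Local SA Exploration (per-neighbor best response)
        and Global SA Exploration (the aggregated overall action).
        """
        mu_t = min(1.0, sa_exploration_rate(t, mu0))
        if random.random() < mu_t:
            return random.choice(actions)    # exploration
        return greedy_action                 # exploitation
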
3.3. Social Learning and Imitation

Social learning theory is concerned with social behavior and learning, and proposes that new behavior can be obtained by observing and imitating the behavior of others (Bandura and Walters, 1977). In real life, people not only learn through their individual trial-and-error experiences (i.e., individual Q-learning to determine best-response actions), but also seek suggestions or advice from others in a society (as mentioned for opinion aggregation in Subsection 3.2.1). Furthermore, they can learn from information directly provided by others through communication, observation, and imitation (Polikar, 2006).

We are inspired by social learning theory to add an imitation process after learning to promote the evolution of cooperation. After reproduction and learning, there is a new population with better performance in the multiagent society. In every time step, when agent i updates its cumulative payoff E'_i, agent i in this new population adopts neighbor agent j's behavior, replacing its inherited behavior, with a probability W. Following Szabó and Tőke (Szabó and Tőke, 1998), we set

W = \frac{1}{1 + e^{-(E'_j - E'_i)/K}}    (6)

Here, E'_i and E'_j are the cumulative payoffs of agent i and neighbor j after updating, and K represents noise, introduced to account for irrational choices. For K → 0, agent i adopts neighbor j's strategy only if E'_j > E'_i. Here we set K = 0.1.

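The imitation probability of Equation 6 is the Fermi rule of Szabó and Tőke. A minimal Python sketch with K = 0.1 as in the paper (the function names and the overflow guard are ours):

    import math
    import random

    def imitation_probability(E_i: float, E_j: float, K: float = 0.1) -> float:
        """Equation 6: probability W that agent i adopts neighbor j's action."""
        x = -(E_j - E_i) / K
        if x > 500:                      # guard against math range errors for huge payoff gaps
            return 0.0
        return 1.0 / (1.0 + math.exp(x))

    def maybe_imitate(action_i: int, action_j: int, E_i: float, E_j: float, K: float = 0.1) -> int:
        """Agent i copies neighbor j's action with probability W, otherwise keeps its own."""
        if random.random() < imitation_probability(E_i, E_j, K):
            return action_j
        return action_i

    # As K approaches 0 the rule becomes nearly deterministic: imitate only better-off neighbors.
    assert imitation_probability(E_i=-2.0, E_j=4.0, K=0.1) > 0.99
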

4. Experimental Studies

The purpose of the experiments is to study the evolution of cooperation in the proposed framework. The performance measures are the asymptotic percentage of cooperative actions (i.e., how many agents in the society reach a final consensus, e.g., choose a specific action from the action space as the coordinated action) and the convergence time (i.e., the time needed to reach such a consensus). We want to produce an acceptable tradeoff between the two.

4.1. Experimental Settings

We use the Watts-Strogatz model (Watts and Strogatz, 1998), with re-wiring probability ρ, to generate a small-world network, and the Barabasi-Albert model (Albert and Barabasi, 2002) to generate a scale-free network. For the Barabasi-Albert model, we start with 2 agents and add a new agent with 1 edge to the network at every time step; this approach generates a scale-free network following a power-law distribution with an exponent γ = 3. We set the maximum number of edges to 1,000,000 for network evolution. The mutation rate η in inheritance is 0.01, and the individual Q-learning rate α is 0.1. The average exploration rate in SA exploration is 0.1, and the initial SA exploration rate μ_0 is 0.144. The noise K in imitation is set to 0.1. In this study, unless stated otherwise, we use the small-world network as the default network topology, because it can evolve into many kinds of networks, and local SA exploration as the default exploration mode. All results are averaged over 100 independent runs.

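The two topologies used here can be generated with standard graph libraries. The sketch below uses networkx (our choice; the paper does not prescribe a library), with N = 100 nodes and the SWN parameters k = 12, ρ = 0.8 that appear later in Section 4.2.

    import networkx as nx

    # Small-world network SWN_{k,rho}: N nodes, each initially linked to its k
    # nearest neighbours, re-wired with probability rho (Watts-Strogatz model).
    small_world = nx.watts_strogatz_graph(n=100, k=12, p=0.8)

    # Scale-free network: each new node attaches with 1 edge by preferential
    # attachment (Barabasi-Albert model), giving a power-law degree distribution.
    scale_free = nx.barabasi_albert_graph(n=100, m=1)

    # The neighbors N(i) of an agent are simply the adjacent nodes in the graph.
    neighbors_of_agent_0 = list(small_world.neighbors(0))
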
4.2. Results and Analysis

Influence of action spaces. Here, we vary the number of available actions over the set {2, 10, 20} in the network SWN_{12,0.8} with N = 100 to study its influence on the evolution of cooperation. According to Table 1, two agents receive a payoff of 1 only when they choose the same action; otherwise they receive a payoff of -1. The results in Figure 2 show that a larger number of available actions delays the convergence of the coordinated action. This is a consequence of learning and imitation with respect to neighbors: with a larger number of actions, agents need more local interactions to learn an optimal behavior towards their neighbors and to choose the best behavior from this larger action pool when imitating. It may also produce more varied, locally distributed sub-coordination, which emerges from varied local interactions among agents and their neighbors and leads to diversity across the society. It thus takes longer for agents to overcome this diversity and achieve final coordination, and the evolution of cooperation is prolonged in the entire society.

Figure 2: Influence of the number of actions.

The influence of single mechanisms. Given that four very different mechanisms, i.e., the genetic algorithm (GA), reinforcement learning (RL), collective decision making (CDM), and imitation, are being used, we want to give a direct comparison of what each mechanism contributes to the dynamics and convergence properties, in order to understand the role each mechanism plays in this system and how they interact.

We fix the number of actions to N_a = 10. The influence of different mechanism combinations on the evolution of cooperation is shown in Figure 3. The "without GA" situation means that only collective learning and imitation occur in a fixed, static agent society; the "without CDM" situation means that after choosing the best-response actions for its neighbors, the focal agent simply selects one of them randomly as the overall action; the "without imitation" situation means that only evolutionary selection and collective learning occur in an extending agent society.

Figure 3: Evolutionary dynamics under different combinations of mechanisms.

From Figure 3, we can draw the following conclusions:

1) Collective decision making (opinion aggregation) and imitation significantly facilitate the evolution of cooperation, especially collective decision making.

2) Evolutionary selection influences both the convergence speed and the convergence rate, but not as dramatically as collective decision making or imitation.

Note that Figure 3 does not show the evolutionary dynamics of the system without reinforcement learning (RL), i.e., with the focal agent simply aggregating the original or inherited actions of its neighbors into an overall action without any RL-based improvement, since we could not obtain any convergence curves within 100 generations in our experiments. We can therefore say:

3) In this system, reinforcement learning to make better decisions is the most important step for promoting the evolution of cooperation. It is the dominant one of the four mechanisms and contributes most to the rate at which cooperation emerges.

Comparison of mutation and the two types of exploration. As described in Subsections 3.1 and 3.2.2, we also test the individual influence of mutation, local SA exploration, and global SA exploration and compare them. We test the situation with a 4-action space, i.e., action 0, ..., action 3.

Figure 4 shows the asymptotic percentage of the cooperative action (action 0) adopted by the agents as cooperation evolves in the entire society. Initially, each agent randomly chooses an action from the action space, so about 25% of the agents choose each action. As the framework runs, the number of agents who choose action 0 as the cooperative action finally reaches more than 90% in the settings with SA exploration (both local and global), which means that more and more agents have reached a consensus that action 0 should be the cooperative action. From Figure 4, we can also see that the fraction of cooperators in the society using collective learning with the local SA exploration mode is almost 100%, which means that almost all agents have reached a consensus on which action should be the cooperative action. The framework works in the entire society.

Figure 4: Fraction of cooperators under different exploration and mutation methods.

Studying Figure 4 further, we can draw the following conclusions:

1) Local exploration is better than global exploration. The fraction of cooperators using collective learning with the global exploration mode is much lower than with the local exploration mode. This is because agents explore the environment with a probability of 0.1. When agents use local exploration, they explore locally (i.e., choose irrational actions during local interactions) and then aggregate collectively into an overall action, so the randomness caused by exploration can be removed. In global exploration, agents explore globally when they aggregate the best-response actions into the overall action, so the randomness introduced at this final step cannot be removed by collective voting.

2) The compared settings reach a completely converged state but cannot show their roles in promoting cooperation clearly. We conjecture that this is because, in this system, collective decision making has the dominant influence on the convergence speed, as shown in Figure 3, so the weaker influence of mutation and exploration on the convergence speed is not very visible.

Comparison with previous work. We mainly compare the performance of our model with (Yu et al., 2017). As shown in Figure 5, following the previous parameter settings (N_a = 10), our framework performs better than the previous study. We additionally tested other situations with different action spaces, and the results show the same trends. This indicates that our model works for the evolution of cooperation in the entire society and is indeed effective for robust evolution through the combination of evolutionary selection, individual learning, collective voting, and social imitation.
