
Chapter 1
Past, Present and Future Trends in the Use of Computers in Fisheries Research

Bernard A. Megrey and Erlend Moksness

B.A. Megrey (*)
U.S. Department of Commerce, National Oceanic and Atmospheric Administration, National Marine Fisheries Service; Alaska Fisheries Science Center, 7600 Sand Point Way NE, BIN C15700, Seattle, WA 98115, USA

I think it's fair to say that personal computers have become the most empowering tool we've ever created. They're tools of communication, they're tools of creativity, and they can be shaped by their user.
Bill Gates, Co-founder, Microsoft Corporation

Long before Apple, one of our engineers came to me with the suggestion that Intel ought to build a computer for the home. And I asked him, 'What the heck would anyone want a computer for in his home?' It seemed ridiculous!
Gordon Moore, Past President and CEO, Intel Corporation

1.1 Introduction

Twelve years ago, in 1996, when we prepared the first edition of Computers in Fisheries Research, we began with the claim "The nature of scientific computing has changed dramatically over the past couple of decades". We believe this statement remains just as valid today as it was in 1996. As Heraclitus said in the 4th century B.C., "Nothing is permanent, but change!" The appearance of the personal computer in the early 1980s changed forever the landscape of computing. Today's scientific computing environment is still changing, often at breathtaking speed.

In our earlier edition, we stated that fisheries science as a discipline was slow to adopt personal computers on a wide scale, with use being well behind that in the business world. Pre-1996, computers were scarce and it was common for more than one user to share a machine, which was usually placed in a public area.

Today, in many modern fisheries laboratories, it is common for scientists to use multiple computers in their personal offices; a desktop personal computer and a portable laptop is often the minimum configuration. Similarly, in many lab offices there are several computers, each dedicated to a specific computational task such as large-scale simulations. We feel that, because of improvements in computational performance and advances in portability and miniaturization, the use of computers and computer applications to support fisheries and resource management activities is still rapidly expanding, as is the diversity of research areas in which they are applied. The important role computers play in contemporary fisheries research is unequivocal. The trends we describe, which continue to take place throughout the world-wide fisheries research community, produce significant gains in work productivity, increase our basic understanding of natural systems, help fisheries professionals detect patterns and develop working hypotheses, provide critical tools to rationally manage scarce natural resources, increase our ability to organize, retrieve, and document data and data sources, and in general encourage clearer thinking and more thoughtful analysis of fisheries problems. One can only wonder what advances and discoveries well-known theorists and fisheries luminaries such as Ludwig von Bertalanffy, William Ricker, or Ray Beverton and Sidney Holt would have made if they had had access to a laptop computer.

The objective of this book is to provide a vehicle for fisheries professionals to keep abreast of recent and potential future developments in the application of computers in their specific area of research and to familiarize them with advances in new technology and new application areas. We hope to accomplish this by comparing where we find ourselves today with where we were when the first edition was published in 1996. Hopefully, this comparison will help explain why computational tools and hardware are so important for managing our natural resources. As in the previous edition, we hope to achieve the objective by having experts from around the world present overview papers on topic areas that represent current and future trends in the application of computer technology to fisheries research. Our aim is to provide critical reviews of the latest, most significant developments in selected topic areas that are at the cutting edge of the application of computers in fisheries and their application to the conservation and management of aquatic resources. In many cases, these are the same authors who contributed to the first edition, so the decade of perspective they provide is unique and insightful.

Many of the topics in this book cover areas that were predicted in 1989 to be important in the future (Walters 1989) and continue to be at the forefront of applications that drive our science forward: image processing, stock assessment, simulation and games, and networking. The chapters that follow update these areas as well as introduce several new chapter topic areas. While we recognize the challenge of attempting to present up-to-date information given the rapid pace of change in computers and the long timelines for publishing books, we hope that the chapters in this book, taken together, can be valuable where they suggest emerging trends and future directions that impact the role computers are likely to serve in fisheries research.

1.2 Hardware Advances

It is difficult not to marvel at how quickly computer technology advances. The current typical desktop or laptop computer, compared to the original monochrome 8 KB random access memory (RAM), 4 MHz 8088 microcomputer or the original Apple II, has improved several orders of magnitude in many areas. The most notable of these hardware advances are processing capability, color graphics resolution and display technology, hard disk storage, and the amount of RAM. The most remarkable thing is that since 1982, the cost of a high-end microcomputer system has remained in the neighborhood of US$3,000. This statement was true in 1982, at the printing of the last edition of this book in 1996, and it holds true today.

1.2.1 CPUs and RAM

While we can recognize that computer technology changes quickly, this statement does not seem to adequately describe what sometimes seems to be the breakneck pace of improvements in the heart of any electronic computing engine, the central processing unit (CPU). The transistor, invented at Bell Labs in 1947, is the fundamental electronic component of the CPU chip. Higher performance CPUs require more logic circuitry, and this is reflected in steadily rising transistor densities. Simply put, the number of transistors in a CPU is a rough measure of its computational power, which is usually measured in floating-point mathematical operations per second (FLOPS). The more transistors there are in the CPU, or silicon engine, the more work it can do.

Trends in transistor density over time reveal that density typically doubles approximately every year and a half, according to a well-known axiom, Moore's Law. This proposition, suggested by Intel co-founder Gordon Moore (Moore 1965), was part observation and part marketing prophecy. In 1965 Moore, then director of R&D at Fairchild Semiconductor, the first large-scale producer of commercial integrated circuits, wrote an internal paper in which he drew a line through five points representing the number of components per integrated circuit for minimum cost for the components developed between 1959 and 1964 (Source: ine/1965-Moore.html, accessed 12 January 2008). The prediction arising from this observation became a self-fulfilling prophecy that emerged as one of the driving principles of the semiconductor industry. As it relates to computer CPUs (one type of integrated circuit), Moore's Law states that the number of transistors packed into a CPU doubles every 18-24 months.

Figure 1.1 supports this claim. In 1979, the 8088 CPU had 29,000 transistors. In 1997, the Pentium II had 7.5 million transistors, in 2000 the Pentium 4 had 420 million, and the trend continues so that in 2007, the Dual-Core Itanium 2 processor has 1.7 billion transistors.

Fig. 1.1 Trends in the number of transistors placed on various CPU chips. Note the y-axis is on the log scale (Source: essorHistory.pdf, accessed 12 January 2008)
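To make the doubling arithmetic concrete, the short Python sketch below projects the 29,000 transistors of the 8088 (1979) forward to 2007 under 18- and 24-month doubling periods and compares the result with the 1.7 billion transistors quoted above for the Dual-Core Itanium 2. It is an illustrative calculation only, not part of the original analysis.

    # Illustrative only: project transistor counts under Moore's Law using the
    # figures quoted in the text (29,000 transistors in 1979; 1.7 billion in 2007).

    def projected_transistors(base_count, base_year, target_year, doubling_months):
        """Transistor count expected after repeated doubling at a fixed period."""
        doublings = (target_year - base_year) * 12 / doubling_months
        return base_count * 2 ** doublings

    if __name__ == "__main__":
        for months in (18, 24):
            n = projected_transistors(29_000, 1979, 2007, months)
            print(f"Doubling every {months} months gives ~{n:,.0f} transistors by 2007")
        print("Observed value quoted in the text: 1,700,000,000 (Dual-Core Itanium 2)")

The observed 2007 count falls between the two projections, which is consistent with the 18-24 month doubling range stated above.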

In addition to transistor density, data handling capabilities (i.e. progressing from manipulating 8, to 16, to 32, to 64 bits of information per instruction), ever-increasing clock speeds (Fig. 1.2), and the number of instructions executed per second continue to improve.

The remarkable thing is that while the number of transistors per CPU has increased more than 1,000 times over the past 26 years, and another 1,000 times since 1996, performance (measured in millions of instructions per second, MIPS) has increased more than 10,000 times since the introduction of the 8088 (Source: http://www.jcmit.com/cpu-performance.htm, accessed 12 January 2008). Scientific analysts who use large databases, scientific visualization applications, statistics, and simulation modeling need as many MIPS as they can get. The more powerful computing platforms described above will enable us to perform analyses that we could not perform earlier (see Chapters 8, 11 and 12).

In the original edition we predicted that "Three years from now CPUs will be four times faster than they are today and multi-processor designs should be commonplace." This prediction has generally proven to be true. CPU performance has continued to increase according to Moore's Law for the last 40 years, but this trend may not hold up in the near future. Achieving higher transistor densities requires the manufacturing technology (photolithography) to build transistors in smaller and smaller physical spaces.

Fig. 1.2 Trends in maximum Intel CPU clock speed (GHz), 1993-2006 (Source: http://wi-fizzle.com/compsci/cpu speed Page 3.png, accessed 12 January 2008)

The process architecture of CPUs in the early 1970s used a 10 micrometer (μm, 10⁻⁶ m) photolithography mask. The newest chips use a 45 nanometer (nm, 10⁻⁹ m) mask. As a consequence of these advances, the cost per unit of performance as measured in gigaflops has dramatically declined (Fig. 1.3).

Fig. 1.3 Trends in the cost (USD) per gigaflop (10⁹ floating-point instructions s⁻¹) of CPU performance. Note the y-axis is on the log scale (Source: http://en.wikipedia.org/wiki/Teraflop, accessed 12 January 2008)
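Because the decline in Fig. 1.3 is roughly exponential, it can be summarized by a halving time. The Python sketch below computes the halving time implied by two endpoint costs; the endpoint values used here are hypothetical placeholders chosen only to show the arithmetic, not numbers read from the figure.

    import math

    def cost_halving_time(cost_start, cost_end, years_elapsed):
        """Halving time (years) implied by an exponential decline in cost."""
        halvings = math.log2(cost_start / cost_end)
        return years_elapsed / halvings

    # Hypothetical endpoints for illustration: cost per gigaflop falling from
    # $30,000 to $1 over the 12 years spanned by Fig. 1.3 (1996-2008).
    print(f"Implied halving time: {cost_halving_time(30_000, 1.0, 12):.2f} years")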

Manufacturing technology appears to be reaching its limits in terms of how densely silicon chips can be manufactured; in other words, how many transistors can fit onto CPU chips and how fast their internal clocks can be run. As stated recently in the BBC News, "The industry now believes that we are approaching the limits of what classical technology – classical being as refined over the last 40 years – can do." (Source: stm, accessed 12 January 2008). There is a problem with making microprocessor circuitry smaller. Power leaks, the unwanted leakage of electricity or electrons between circuits packed ever closer together, take place. Overheating becomes a problem as processor architecture gets ever smaller and clock speeds increase.

Traditional processors have one processing engine on a chip. One method used to increase performance through higher transistor densities, without increasing clock speed, is to put more than one CPU on a chip and to allow them to operate independently on different tasks (called threads). These advanced chips are called multiple-core processors. A dual-core processor squeezes two CPU engines onto a single chip; quad-core processors have four engines. Multiple-core chips are all 64-bit, meaning that they can work through 64 bits of data per instruction. That is twice the rate of the current standard 32-bit processor. A dual-core processor theoretically doubles your computing power, since it can handle two threads of data simultaneously, and a quad-core chip can handle four threads. The result is less waiting for tasks to complete.

Progress marches on. Intel announced in February 2007 that it had a prototype CPU that contains 80 processor cores and is capable of 1 teraflop (10¹² floating-point operations per second) of processing capacity. A fingernail-sized 80-core desktop chip with supercomputer-like performance will open up unimaginable opportunities (Source: 070204comp.htm, accessed 12 January 2008).

As if multiple-core CPUs were not powerful enough, new products being developed will feature "dynamically scalable" architecture, meaning that virtually every part of the processor (including cores, cache, threads, interfaces, and power) can be dynamically allocated based on performance, power, and thermal requirements (Source: orts/article.php/3668756, accessed 12 January 2008). Supercomputers may soon be the same size as a laptop if IBM brings silicon nanophotonics to market. In this new technology, wires on a chip are replaced with pulses of light on tiny optical fibers for quicker and more power-efficient data transfers between processor cores on a chip. This new technology is about 100 times faster, consumes one-tenth as much power, and generates less heat (Source: IBM-researchers-build-supercomputeron-a-chip 1.html, accessed 12 January 2008).

Multi-core processors pack a lot of power. There is just one problem: most software programs are lagging behind hardware improvements. To get the most out of a 64-bit processor, you need an operating system and application programs that support it.
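As a deliberately generic illustration of what core-aware software looks like, the Python sketch below uses the standard-library process pool to spread independent work units across however many cores the machine reports; the run_scenario function is a hypothetical stand-in for real work such as an independent simulation run, not code from any fisheries application.

    # A minimal, generic sketch of spreading independent tasks across CPU cores.
    # The work function is a hypothetical stand-in for a real computation.
    import os
    from concurrent.futures import ProcessPoolExecutor

    def run_scenario(seed):
        """Stand-in for one independent work unit (e.g. a single simulation run)."""
        total = 0
        for i in range(200_000):
            total += (seed * i) % 7
        return seed, total

    if __name__ == "__main__":
        cores = os.cpu_count()
        print(f"Detected {cores} logical cores")
        # One worker process per core, so the scenarios execute in parallel.
        with ProcessPoolExecutor(max_workers=cores) as pool:
            for seed, result in pool.map(run_scenario, range(8)):
                print(f"scenario {seed}: {result}")

The same pattern applies whether the independent work units are simulation runs, image-processing jobs, or bootstrap replicates; the operating system schedules the workers across the available cores.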

Unfortunately, as of the time of this writing, most software applications and operating systems are not written to take advantage of the power made available with multiple cores. Slowly this will change. Currently there are 64-bit versions of Linux, Solaris, Windows XP, and Vista. However, 64-bit versions of most device drivers are not available, so for today's uses a 64-bit operating system can become frustrating due to a lack of available drivers.

Another developing trend is building high-performance computing environments using computer clusters, which are groups of loosely coupled computers, typically connected through fast local area networks. The machines in a cluster work together so that multiple processors can be used as though they were a single computer. Clusters are usually deployed to improve performance over that provided by a single computer, while typically being much less expensive than single computers of comparable speed or availability.

Beowulf is a design for high-performance parallel computing clusters using inexpensive personal computer hardware. It was originally developed by NASA's Thomas Sterling and Donald Becker. The name comes from the main character in the Old English epic poem Beowulf.

A Beowulf cluster of workstations is a group of usually identical PC computers, configured into a multi-computer architecture and running an Open Source Unix-like operating system, such as BSD (http://www.freebsd.org/, accessed 12 January 2008), Linux (http://www.linux.org/, accessed 12 January 2008) or Solaris (http://www.sun.com/software/solaris/index.jsp?cid=921933, accessed 12 January 2008). They are joined into a small network and have libraries and programs installed that allow processing to be shared among them. The server node controls the whole cluster and serves files to the client nodes. It is also the cluster's console and gateway to the outside world. Large Beowulf machines might have more than one server node, and possibly other nodes dedicated to particular tasks, for example consoles or monitoring stations. Nodes are configured and controlled by the server node, and do only what they are told to do in a diskless client configuration.

There is no particular piece of software that defines a cluster as a Beowulf. Commonly used parallel processing libraries include the Message Passing Interface (MPI, http://www-unix.mcs.anl.gov/mpi/, accessed 12 January 2008) and the Parallel Virtual Machine (PVM, http://www.csm.ornl.gov/pvm/, accessed 12 January 2008). Both of these permit the programmer to divide a task among a group of networked computers and to recollect the results of processing. Software must be revised to take advantage of the cluster; specifically, it must be capable of performing multiple independent parallel operations that can be distributed among the available processors. Microsoft also distributes Windows Compute Cluster Server 2003 (Source: ault.aspx, accessed 12 January 2008) to facilitate building a high-performance computing resource based on Microsoft's Windows platforms.
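A minimal sketch of the divide-and-recollect pattern described above for MPI is shown below, written against the mpi4py Python binding; mpi4py and an installed MPI runtime are assumptions here, and the example is illustrative rather than code from any fisheries cluster. Each process computes its own slice of a task and the root (server) node gathers the combined result.

    # Minimal illustrative sketch of dividing work across cluster nodes with MPI,
    # via the mpi4py binding (assumed installed along with an MPI runtime).
    # Run with, for example: mpiexec -n 4 python sum_of_squares.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's id within the job
    size = comm.Get_size()   # total number of participating processes

    # Each process handles every size-th element, starting at its own rank...
    local_sum = sum(i * i for i in range(rank, 1_000_000, size))

    # ...and the partial results are recollected and combined on the root node.
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"Sum of squares of 0..999,999 computed on {size} processes: {total}")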

One of the main differences between Beowulf and a cluster of workstations is that Beowulf behaves more like a single machine than like many workstations. In most cases client nodes do not have keyboards or monitors, and are accessed only via remote login or through remote terminals. Beowulf nodes can be thought of as a CPU-and-memory package which can be plugged into the cluster, just like a CPU or memory module can be plugged into a motherboard (Source: http://en.wikipedia.org/wiki/Beowulf_(computing), accessed 12 January 2008). Beowulf systems are now deployed worldwide, chiefly in support of scientific computing, and their use in fisheries applications is increasing. Typical configurations consist of multiple machines built on AMD's Opteron 64-bit and/or Athlon X2 64-bit processors.

Memory is the most readily accessible large-volume storage available to the CPU. We expect that standard RAM configurations will continue to increase as operating systems and application software become more full-featured and demanding of RAM. For example, the "recommended" configuration for the Windows Vista Home Premium Edition and Apple's new Leopard operating systems is 2 GB of RAM: 1 GB to hold the operating system, leaving 1 GB for data and application code. In the previous edition, we predicted that in 3-5 years (1999-2001) 64-256 megabytes (MB) of dynamic RAM would be available and machines with 64 MB of RAM would be typical. This prediction was incredibly inaccurate. Over the years, advances in semiconductor fabrication technology have made gigabyte memory configurations not only a reality, but commonplace.

Not all RAM performs equally. Newer types, called double data rate (DDR) RAM, decrease the time it takes for the CPU to communicate with memory, thus speeding up computer execution. DDR comes in several flavors. DDR has been around since 2000 and is sometimes called DDR1. DDR2 was introduced in 2003; it took a while for DDR2 to reach widespread use, but you can find it in most new computers today. DDR3 began appearing in mid-2007. RAM simply holds data for the processor. However, there is a cache between the processor and the RAM: the L2 cache. The processor sends data to this cache. When the cache overflows, data are sent to the RAM. The RAM sends data back to the L2 cache when the processor needs it. DDR RAM t

