On Wide Area Network Optimization. © 2012 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.

Citation: Y. Zhang, N. Ansari, M. Wu, and H. Yu, "On Wide Area Network Optimization," IEEE Communications Surveys and Tutorials, Vol. 14, No. 4, pp. 1090-1113, Fourth Quarter 2012.

IEEE COMMUNICATIONS SURVEYS & TUTORIALS, ACCEPTED FOR PUBLICATION

On Wide Area Network Optimization

Yan Zhang, Nirwan Ansari, Mingquan Wu, and Heather Yu

Abstract—Applications deployed over a wide area network (WAN), which may connect across metropolitan, regional, or national boundaries, suffer performance degradation owing to unavoidable natural characteristics of WANs such as high latency and high packet loss rate. WAN optimization, also known as WAN acceleration, aims to accelerate a broad range of applications and protocols over a WAN. In this paper, we provide a survey on the state of the art of WAN optimization, or WAN acceleration, techniques, and illustrate how these acceleration techniques can improve application performance, mitigate the impact of latency and loss, and minimize bandwidth consumption. We begin by reviewing the obstacles to efficiently delivering applications over a WAN. Furthermore, we provide a comprehensive survey of the most recent content delivery acceleration techniques in WANs from the networking and optimization point of view. Finally, we discuss major WAN optimization techniques which have been incorporated in widely deployed WAN acceleration products; multiple optimization techniques are in general leveraged by a single WAN accelerator to improve application performance.

Index Terms—Wide area network (WAN), WAN acceleration, WAN optimization, compression, data deduplication, caching, prefetching, protocol optimization.

I. INTRODUCTION

TODAY'S IT organizations tend to deploy their infrastructures geographically over a wide area network (WAN) to increase productivity, support global collaboration, and minimize costs, thus constituting today's WAN-centered environments. As compared to a local area network (LAN), a WAN is a telecommunication network that covers a broad area; a WAN may connect across metropolitan, regional, and/or national boundaries.
Traditional LAN-oriented infrastructures are insufficient to support global collaboration with high application performance and low costs. Deploying applications over WANs inevitably incurs performance degradation owing to the intrinsic nature of WANs, such as high latency and high packet loss rate. As reported in [1], WAN throughput degrades greatly with the increase of transmission distance and packet loss rate. Given a commonly used maximum window size of 64 KB in the original TCP protocol and 45 Mbps of bandwidth, the effective TCP throughput of one flow over a source-to-destination distance of 1000 miles is only around 30% of the total bandwidth. With a source-to-destination distance of 100 miles, the effective TCP throughput degrades from 97% to 32% and 18% of the whole 45 Mbps bandwidth when the packet loss rate increases from 0.1% to 3% and 5%, respectively.

Manuscript received 05 May 2011; revised 16 August and 03 September 2011. Y. Zhang and N. Ansari are with the Advanced Networking Lab., Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ, 07102 USA (e-mail: {yz45, nirwan.ansari}@njit.edu). M. Wu and H. Yu are with Huawei Technologies, USA (e-mail: {Mingquan.Wu, heatheryu}@huawei.com). Digital Object Identifier 10.1109/SURV.2011.092311.00071

Many factors, not normally encountered in LANs, can quickly lead to performance degradation of applications which are run across a WAN. All of these barriers can be categorized into four classes [2]: network and transport barriers, application and protocol barriers, operating system barriers, and hardware barriers. As compared to LANs, the available bandwidth in WANs is rather limited, which directly affects the application throughput over a WAN. Another obvious barrier in WANs is the high latency introduced by long transmission distance, protocol translation, and network congestion. The high latency in a WAN is a major factor causing long application response time.
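The window-limited throughput figures quoted earlier in this section follow from a simple bound: with at most one window of W bytes in flight per round-trip time, a TCP flow cannot exceed W/RTT regardless of link capacity. A minimal sketch of that bound follows; the RTT values are illustrative assumptions, not the measurement conditions of [1]:

```python
# Window-limited TCP throughput bound: at most one window of data
# can be in flight per round-trip time, so throughput <= W / RTT.

def tcp_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    """Upper bound on single-flow TCP throughput, ignoring loss."""
    return window_bytes * 8 / rtt_s

WINDOW = 64 * 1024          # classic 64 KB maximum TCP window
LINK_BPS = 45e6             # 45 Mbps WAN link

# Illustrative RTTs (assumed values; real WAN RTTs are further
# inflated by routing, queuing, and protocol translation delays).
for rtt_ms in (5, 20, 100):
    bound = tcp_throughput_bps(WINDOW, rtt_ms / 1000)
    share = min(bound, LINK_BPS) / LINK_BPS
    print(f"RTT {rtt_ms:>3} ms: bound {bound / 1e6:6.2f} Mbps "
          f"({share:.0%} of the 45 Mbps link)")
```

As the RTT grows with distance, the 64 KB window alone caps the achievable share of the 45 Mbps link, before any loss is even considered.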
Congestion causes packet loss and retransmissions, and leads to erratic behavior of the transport layer protocol, such as the transmission control protocol (TCP). Most of the existing protocols were not designed for WAN environments; therefore, several protocols do not perform well under WAN conditions. Furthermore, the hosts also impact application performance, including the operating systems, which host applications, and the hardware platforms, which host operating systems.

The need for speedup over WANs spurs application performance improvement over WANs. The 8-second rule [3] related to a web server's response time specifies that users are not likely to wait for a web page if the load time of the page exceeds eight seconds. According to an e-commerce web site performance study by Akamai in 2006 [4], this 8-second rule for e-commerce web sites was halved to four seconds, and in its follow-up report in 2009 [5], a new 2-second rule was indicated. These reports showed that poor site performance ranked second among factors for dissatisfaction and site abandonment. Therefore, there is a dire need to enhance application performance over WANs.

WAN optimization, also commonly referred to as WAN acceleration, describes the idea of enhancing application performance over WANs. WAN acceleration aims to provide high-performance access to remote data such as files and videos. A variety of WAN acceleration techniques have been proposed. Some focus on maximizing bandwidth utilization, others address latency, and still others address protocol inefficiencies which hinder the effective delivery of packets across the WAN. The most common techniques employed by WAN optimization to maximize application performance across the WAN include compression [6-10], data deduplication [11-24], caching [25-39], prefetching [40-62], and protocol optimization [63-81].
Compression is very important to reduce the amount of bandwidth consumed on a link during transfer across the WAN, and it can also reduce the transit time for given data to traverse the WAN by reducing the amount of transmitted data. Data deduplication is another data reduction technique and a derivative of data compression. It identifies duplicate data elements, such as an entire file or a data block, and eliminates both intra-file and inter-file data redundancy, hence reducing the data to be transferred or stored. Caching is considered to be an effective approach to reduce network traffic and application response time by storing copies of frequently requested content in a local cache, a proxy server cache close to the end user, or even within the Internet. Prefetching (or proactive caching) aims to overcome the limitations of passive caching by proactively and speculatively retrieving a resource into a cache in anticipation of subsequent demand requests. Several protocols, such as the common Internet file system (CIFS) [82] (also known as Server Message Block (SMB) [83]) and the Messaging Application Programming Interface (MAPI) [84], are chatty in nature, requiring hundreds of control messages for a relatively simple data transfer, because they are not designed for WAN environments. Protocol optimization capitalizes on in-depth protocol knowledge to improve inefficient protocols by making them more tolerant to the high latency of the WAN environment. Some other acceleration techniques, such as load balancing, routing optimization, and application proxies, can also improve application performance.

With the dramatic increase of applications deployed over WANs, many companies, such as Cisco, Blue Coat, Riverbed Technology, and Silver Peak Systems, have been marketing WAN acceleration products for various applications. In general, typical WAN acceleration products leverage multiple optimization techniques to improve application throughput, mitigate the impact of latency and loss, and minimize bandwidth consumption.

1553-877X/11/$25.00 © 2011 IEEE
For example, the Cisco Wide Area Application Services (WAAS) appliance employs data compression, deduplication, TCP optimization, secure sockets layer (SSL) optimization, CIFS acceleration, HyperText Transfer Protocol (HTTP) acceleration, MAPI acceleration, and NFS acceleration techniques to improve application performance. The WAN optimization appliance market was estimated to be $1 billion in 2008 [85]. Gartner, a technology research firm, estimated that the compound annual growth rate of the application acceleration market will be 13.1% between 2010 and 2015 [86], and forecasted that the application acceleration market will grow to $5.5 billion in 2015 [86].

Although WAN acceleration techniques have been deployed for several years and there are many WAN acceleration products on the market, many new challenges to content delivery over WANs are emerging as the scale of information data and network sizes is growing rapidly, and many companies have been working on WAN acceleration techniques, such as Google's web acceleration SPDY project [87]. Several WAN acceleration techniques have been implemented in SPDY, such as HTTP header compression, request prioritization, stream multiplexing, and HTTP server push and server hint. A SPDY-capable web server can respond to both HTTP and SPDY requests efficiently, and at the client side, a modified Google Chrome client can use HTTP or SPDY for web access. The SPDY protocol specification, source code, SPDY proxy examples, and their lab tests are detailed on the SPDY web page [87].
As reported in the SPDY tests, up to a 64% reduction in page download time can be observed.

WAN acceleration, or WAN optimization, has been studied for several years, but to the best of our knowledge there does not exist a comprehensive survey/tutorial like this one. There is coverage of bits and pieces of certain aspects of WAN optimization, such as data compression, which has been widely studied and reported in several books and survey papers, but few works [2, 39] have discussed WAN optimization or WAN acceleration as a whole. Reference [2] emphasizes application-specific acceleration and content delivery networks, while Reference [39] focuses on dynamic web content generation and delivery acceleration techniques. In this paper, we survey state-of-the-art WAN optimization techniques and illustrate how these acceleration techniques can improve application performance over a WAN, mitigate the impact of latency and loss, and minimize bandwidth consumption. The remainder of the paper is organized as follows. We present the obstacles to content delivery over a WAN in Section II. In order to overcome these challenges in the WAN, many WAN acceleration techniques have been proposed and developed, such as compression, data deduplication, caching, prefetching, and protocol optimization; we detail the most commonly used WAN optimization techniques in Section III. Since tremendous efforts have been made in protocol optimization to improve application performance over WANs, we dedicate Section IV to protocol optimization techniques over WANs, including HTTP optimization, TCP optimization, CIFS optimization, MAPI optimization, session layer optimization, and SSL acceleration. Furthermore, we present some typical WAN acceleration products along with the major WAN optimization techniques incorporated in these acceleration products in Section V; multiple optimization techniques are normally employed by a single WAN accelerator to improve application performance.
Finally, Section VI concludes the paper.

II. OBSTACLES TO CONTENT DELIVERY OVER A WAN

Performance degradation occurs when applications are deployed over a WAN owing to its unavoidable intrinsic characteristics. The obstacles to content delivery over a WAN can be categorized into four classes [2]: network and transport barriers, application and protocol barriers, operating system barriers, and hardware barriers.

A. Network and Transport Barriers

Network characteristics, such as available bandwidth, latency, packet loss rate, and congestion, impact application performance. Figure 1 summarizes the network and transport barriers to application performance in a WAN.

1) Limited Bandwidth: The available bandwidth is generally much higher in a LAN environment than in a WAN environment, thus creating a bandwidth disparity between these two dramatically different networks. The limited bandwidth impacts the capability of an application to provide high throughput. Furthermore, oversubscription, or aggregation, is generally higher in a WAN than in a LAN. Therefore, even though the clients and servers may connect to the edge routers with high-speed links, the overall application performance

over a WAN is throttled by network oversubscription and bandwidth disparity, because only a small number of requests can be received by the server, and the server can only transmit a small amount of data at a time in responding to the clients' requests. Protocol overhead, such as packet headers and acknowledgement packets, consumes a noticeable amount of network capacity, hence further compromising application performance.

Fig. 1. Network and transport barriers to application performance over a WAN.

2) High Latency: The latency introduced by transmission distance, protocol translation, and congestion is high in the WAN environment, and high latency is the major cause of long application response time over a WAN.

3) Congestion and High Packet Loss Rate: Congestion causes packet loss and retransmission, and leads to erratic behaviors of transport layer protocols that may seriously deteriorate application performance.

B. Application and Protocol Barriers

Application performance is constantly impacted by the limitations and barriers of the protocols, which in general are not designed for WAN environments. Many protocols do not perform well under WAN conditions such as long transmission paths, high network latency, network congestion, and limited available bandwidth. Several protocols, such as CIFS and MAPI, are chatty in nature, requiring hundreds of control messages for a relatively simple data transfer. Some other popular protocols, e.g., the Hypertext Transfer Protocol (HTTP) and TCP, also experience low efficiency over a WAN. A detailed discussion of protocol barriers and optimizations in a WAN will be presented in Section IV.

C. Operating System and Hardware Barriers

The hosts, including their operating systems, which host the applications, and hardware platforms, which host the operating systems, also impact application performance. Proper selection of the application hosts' hardware and operating system components, including central processing unit, cache capacity, disk storage, and file system, can improve the overall application performance. A poorly tuned application server will have a negative effect on the application's performance and functionality across the WAN. In this survey, we focus on the networking and protocol impacts on application performance over a WAN. A detailed discussion of the performance barriers caused by operating systems and hardware platforms, and guidance as to what aspects of the system should be examined for a better level of application performance, can be found in [2].

III. WAN OPTIMIZATION TECHNIQUES

WAN acceleration technologies aim to accelerate a broad range of applications and protocols over a WAN, mitigate the impact of latency and loss, and minimize bandwidth consumption. The most common techniques employed by WAN optimization to maximize application performance across the WAN include compression, data deduplication, caching, prefetching, and protocol optimization. We discuss the most commonly used WAN optimization techniques in the following.

A. Compression

Compression is very important to minimize the amount of bandwidth consumed on a link during transfer across the WAN, in which bandwidth is quite limited. It can improve bandwidth utilization efficiency, thereby reducing bandwidth congestion; it can also reduce the transit time for given data to traverse the WAN by reducing the transmitted data. Therefore, compression substantially optimizes data transmission over the network. A comparative study of various text file compression techniques is reported in [6]. A survey on XML compression is presented in [7]. Another survey, on lossless image compression methods, is presented in [8]. A survey on image and video compression is covered in [9].

HTTP [88, 89] is the most popular application-layer protocol in the Internet.
HTTP compression is very important to enhance the performance of HTTP applications. HTTP compression techniques can be categorized into two schemes: HTTP Protocol Aware Compression (HPAC) and HTTP Bi-Stream Compression (HBSC). By exploiting the characteristics of the HTTP protocol, HPAC jointly uses three different encoding schemes, namely, Stationary Binary Encoding (SBE), Dynamic Binary Encoding (DBE), and Header Delta Encoding (HDE), to perform compression. SBE can compress a significant amount of the ASCII text present in the message, including all header segments except the request-URI (Uniform Resource Identifier) and header field values, into a few bytes. The compressed information is static, and does not need to be exchanged between the compressor and decompressor. All those segments of the HTTP header that cannot be compressed by SBE will be compressed by DBE. HDE was developed based on the observation that HTTP headers do not change much from one HTTP transaction to another, and a response message does not change much from a server to a client. Hence, a tremendous amount of information can be compressed by sending only the changes of a new header from a reference. HBSC is an algorithm-agnostic framework that supports any compression

algorithm. HBSC maintains two independent contexts for the HTTP header and HTTP body of a TCP connection in each direction to avoid the problem of context thrashing. These two independent contexts are created when the first message appears on a new TCP connection, and are not deleted until the TCP connection finishes; thus, inter-message redundancy can be detected and removed, since the same context is used to compress the HTTP messages of one TCP connection. HBSC also pre-populates the compression context for HTTP headers with text strings in the first message over a TCP connection, and detects the compressibility of an HTTP body based on the information in the HTTP header to further improve compression performance. HTTP header compression has been implemented in Google's SPDY project [87]; according to their results, HTTP header compression resulted in about an 88% reduction in the size of HTTP request headers and about an 85% reduction in the size of HTTP response headers. A detailed description of a set of methods developed for HTTP compression and their test results can be found in [10].

The compression performance for web applications depends on the mix of traffic in the WAN, such as text files, video, and images. According to Reference [90], compression can save 75 percent of the text file content and 37 percent of the overall file content including graphics. The performance of compression was investigated based on four different web site categories: high technology companies, newspaper web sites, web directories, and sports. For each category, 5 web sites were examined. Table I lists the percentage of byte savings after compression for the investigated web site categories.

TABLE I
COMPRESSION SAVINGS FOR DIFFERENT WEB SITE CATEGORIES [90]

Web Site Type         Text File Only    Overall (Graphics)
High-Tech Company     79%               35%
Newspaper             79%               40%
B. Data Deduplication

Data deduplication, also called redundancy elimination [11, 12], is another data reduction technique and a derivative of data compression. Data compression reduces the file size by eliminating redundant data contained within a document, while data deduplication identifies duplicate data elements, such as an entire file [13, 14] or a data block [15-23], and eliminates both intra-file and inter-file data redundancy, hence reducing the data to be transferred or stored. When multiple instances of the same data element are detected, only one single copy of the data element is transferred or stored; the redundant data elements are replaced with a reference or pointer to the unique data copy. Based on the algorithm granularity, data deduplication algorithms can be classified into three categories: whole file hashing [13, 14], sub-file hashing [15-23], and delta encoding [24]. Traditional data deduplication operates at the application layer, such as object caching, to eliminate redundant data transfers. With the rapid growth of network traffic in the Internet, data redundancy elimination techniques operating on individual packets have recently been deployed [15-20] based on different chunking and sampling methods. The main idea of packet-level redundancy elimination is to identify and eliminate redundant chunks across packets. A large-scale trace-driven study on the efficiency of packet-level redundancy elimination is reported in [91]. This study showed that packet-level redundancy elimination techniques can obtain average bandwidth savings of 15-60% when deployed at access links of service providers or between routers. Experimental evaluations of various data redundancy elimination technologies are presented in [11, 18, 91].

C. Caching

Caching is considered to be an effective approach to reduce network traffic and application response time. Based on the location of caches, they can be deployed at the client side, proxy side, and server side.
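Before going into cache placement, the sub-file hashing approach of Section III-B can be sketched: index blocks by a collision-resistant hash, and send a short reference whenever a block's hash has been seen before. The fixed block size and the token format here are illustrative assumptions, not a scheme from the cited works:

```python
# Sketch of block-level (sub-file) deduplication: fixed-size blocks
# are indexed by SHA-256; a repeated block is sent as a short
# reference to the already-transferred copy instead of the bytes.

import hashlib

BLOCK = 4096   # illustrative fixed block size

def dedup_encode(data: bytes, store: dict) -> list:
    """Return a list of ('ref', digest) or ('raw', bytes) tokens."""
    out = []
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        digest = hashlib.sha256(block).digest()
        if digest in store:
            out.append(("ref", digest))          # duplicate: pointer only
        else:
            store[digest] = block                # first sighting: keep copy
            out.append(("raw", block))
    return out

def dedup_decode(tokens: list, store: dict) -> bytes:
    # For the sketch, sender and receiver share one block store.
    return b"".join(t[1] if t[0] == "raw" else store[t[1]] for t in tokens)

store = {}
payload = b"A" * BLOCK + b"B" * BLOCK + b"A" * BLOCK   # 3 blocks, 1 repeat
tokens = dedup_encode(payload, store)
assert [t[0] for t in tokens] == ["raw", "raw", "ref"]
assert dedup_decode(tokens, store) == payload
```

The third block is transmitted as a 32-byte digest instead of 4 KB of data; content-defined chunking, as used in the packet-level schemes above, replaces the fixed block boundaries with boundaries derived from the data itself so that insertions do not shift every subsequent block.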
Owing to the limited capacity of a single cache, caches can also work cooperatively to serve a large number of clients. Cooperative caching can be set up hierarchically, distributively, or in a hybrid mode. From the type of the cached objects, caches can be classified into function caching and content caching. A hierarchical classification of caching solutions is shown in Figure 2.

1) Location of Cache: Client-side caches are placed very close to, or even at, the clients. All the popular web browsers, including Microsoft Internet Explorer and Mozilla Firefox, use part of the storage space on client computers to keep records of recently accessed web content for later reference, to reduce the bandwidth used for web traffic and the user-perceived latency.

Several solutions [25-27] have been proposed to employ client cache cooperation to improve client-side caching efficiency. Squirrel [25], a decentralized, peer-to-peer web cache, was proposed to enable web browsers on client computers to share their local caches to form an efficient and scalable web cache. In Squirrel, each participating node runs an instance of Squirrel, and web browsers issue their requests to the Squirrel proxy running on the same machine. If the requested object is un-cacheable, the request is forwarded to the origin server directly. Otherwise, the Squirrel proxy checks the local cache. If the local cache does not have the requested object, Squirrel forwards the request to some other node in the network. Squirrel uses a self-organizing peer-to-peer routing algorithm, called Pastry, to map the requested object's URL, as a key, to a node in the network to which the request will be forwarded. One drawback of this approach is that it neglects the diverse availabilities and capabilities of client machines. The whole system's performance might be affected by some low-capacity intermediate nodes, since it takes several hops before an object request is served.
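Pastry's multi-hop routing is beyond a short example, but its core idea as used by Squirrel — hash the object's URL to a numeric key and hand the request to the live node whose identifier is closest to that key — can be sketched. The node naming and the use of SHA-1 as the key space are illustrative assumptions:

```python
# Sketch of DHT-style object-to-node mapping as used by Squirrel:
# hash the object URL to a key, then pick the node whose ID is
# numerically closest to the key. SHA-1 stands in for Pastry's keys.

import hashlib

def key_of(text: str, bits: int = 32) -> int:
    """Map a string to a numeric key in a fixed identifier space."""
    digest = hashlib.sha1(text.encode()).digest()
    return int.from_bytes(digest, "big") >> (160 - bits)

def home_node(url: str, node_ids: list) -> int:
    """Pick the node ID numerically closest to the URL's key."""
    k = key_of(url)
    return min(node_ids, key=lambda n: abs(n - k))

nodes = sorted(key_of(f"node-{i}") for i in range(8))   # 8 peer IDs
url = "http://example.com/index.html"
owner = home_node(url, nodes)

# Every client computes the same owner for the same URL, so requests
# for one object converge on one cache node without any directory.
assert owner in nodes
assert owner == home_node(url, nodes)
```

The deterministic mapping is what makes the scheme decentralized, but it is also the source of the drawback noted above: the node an object hashes to may be a slow, low-capacity machine.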
Figure 3 illustrates an example of the Squirrel request and response procedure. Client s issues a request to client j, routed with 2 hops through client i. If the requested object is present in the browser cache of client j, the requested object will be forwarded back

to client s directly through path A. Otherwise, client j will forward the request to the origin server, and the origin server will respond to the request through path B.

Fig. 2. Caching classification hierarchy.

Xiao et al. [26] proposed a peer-to-peer web document sharing technique, called the browsers-aware proxy server, which connects to a group of networked clients and maintains a browser index file of the objects contained in all client browser caches. A simple illustration of the organization of a browsers-aware proxy server is shown in Figure 4. If a cache miss occurs in a client's local browser cache, a request is generated to the proxy server, and the browsers-aware proxy server checks its proxy cache first. If the object is not present in the proxy cache, the proxy server looks up the browser index file, attempting to find it in another client's browser cache. If such a hit is found in a client, this client forwards the requested object directly to the requesting client; otherwise, the proxy server sends the request to an upper-level proxy or the origin server. The browsers-aware proxy server suffers from a scalability issue, since all the clients are connected to the centralized proxy server.

Xu et al. [27] proposed a cooperative hierarchical client cache technique in which a large virtual cache is formed from contributions from the local cache of each client. Based on client capability, the clients are divided into super-clients and ordinary clients, and the system workload is distributed among the relatively high-capacity super-clients. Unlike the browsers-aware proxy server scheme, the super-clients are only responsible for maintaining the location information of the cached files; this provides high scalability, because caching and data lookup operations are distributed across all clients and super-clients.
Hence, such a hierarchical web caching structure reduces the workload on the dedicated server of the browsers-aware proxy server scheme [26] and also relieves the weak-client problem of Squirrel [25].

Contrary to client-side caching, server-end caches are placed very close to the origin servers. Server-end caching can reduce server load and improve response time, especially when the client stress is high. For edge/proxy-side caching [28], caches are placed between the client and the server. According to the results reported in [29], local proxy caching could reduce user-perceived latency by at best 26%.

2) Cooperative Caching: Owing to the limited capacity of single caches, multiple caches can share and coordinate their cache states to build a cache network serving a large number of users. Cooperative caching architectures can be classified into three major categories [30]: hierarchical cooperative caching [31], distributed cooperative caching [32-37], and hybrid cooperative caching [38].

In the hierarchical caching architecture, caches can be placed at different network levels, including the client, institutional, regional, and national levels, from bottom to top in the hierarchy; therefore, it is consistent with the present Internet architecture. If a request cannot be satisfied by lower-level caches, it is redirected to upper-level caches. If it cannot be served at any cache level, the national cache contacts the origin server directly. When the content is found, it travels down the hierarchy, leaving a copy at each of the intermediate caches. One obvious drawback of the hierarchical caching system is that multiple copies of the same document are stored at different cache levels. Each cache level introduces additional delays, thus yielding poor response times. Higher-level caches may also experience congestion and long queuing delays when serving a large number of requests.

In distributed caching systems, there are only institutional caches at the edge of the network.
The distributed caching system requires some mechanism for these institutional caches to cooperate in serving each other's cache misses. Several mechanisms have been proposed so far, including query-based, content-list-based, and hash-function-based approaches, and each of them has its own drawbacks. A query-based approach such as the Internet Cache Protocol (ICP) [32] can be used to retrieve a document which is not present in the local cache from other institutional caches. However, this method may increase bandwidth consumption and user-perceived latency, because a cache has to poll all cooperating caches and wait for all of them to answer. A content list of each institutional cache, such as a cache digest [33] or summary cache [36], can help avoid the need for queries/polls. In order to distribute content lists more efficiently and scalably, a hierarchical infrastructure of intermediate nodes is generally set up, but this infrastructure does not store any document copies. A hash function [35] can be used to map a client request to a certain cache, so that there is only one single copy of a document

