Windows Azure Storage: A Highly Available Cloud Storage Service with Strong Consistency


Windows Azure Storage: A Highly Available Cloud Storage Service with Strong Consistency

Brad Calder, Ju Wang, Aaron Ogus, Niranjan Nilakantan, Arild Skjolsvold, Sam McKelvie, Yikang Xu, Shashwat Srivastav, Jiesheng Wu, Huseyin Simitci, Jaidev Haridas, Chakravarthy Uddaraju, Hemal Khatri, Andrew Edwards, Vaman Bedekar, Shane Mainali, Rafay Abbasi, Arpit Agarwal, Mian Fahim ul Haq, Muhammad Ikram ul Haq, Deepali Bhardwaj, Sowmya Dayanand, Anitha Adusumilli, Marvin McNett, Sriram Sankaran, Kavitha Manivannan, Leonidas Rigas

Microsoft

Abstract

Windows Azure Storage (WAS) is a cloud storage system that provides customers the ability to store seemingly limitless amounts of data for any duration of time. WAS customers have access to their data from anywhere at any time and only pay for what they use and store. In WAS, data is stored durably using both local and geographic replication to facilitate disaster recovery. Currently, WAS storage comes in the form of Blobs (files), Tables (structured storage), and Queues (message delivery). In this paper, we describe the WAS architecture, global namespace, and data model, as well as its resource provisioning, load balancing, and replication systems.

Categories and Subject Descriptors

D.4.2 [Operating Systems]: Storage Management—Secondary storage; D.4.3 [Operating Systems]: File Systems Management—Distributed file systems; D.4.5 [Operating Systems]: Reliability—Fault tolerance; D.4.7 [Operating Systems]: Organization and Design—Distributed systems; D.4.8 [Operating Systems]: Performance—Measurements

General Terms

Algorithms, Design, Management, Measurement, Performance, Reliability.

Keywords

Cloud storage, distributed storage systems, Windows Azure.

1. Introduction

Windows Azure Storage (WAS) is a scalable cloud storage system that has been in production since November 2008. It is used inside Microsoft for applications such as social networking search, serving video, music and game content, managing medical records, and more. In addition, there are thousands of customers outside Microsoft using WAS, and anyone can sign up over the Internet to use the system.

WAS provides cloud storage in the form of Blobs (user files), Tables (structured storage), and Queues (message delivery). These three data abstractions provide the overall storage and workflow for many applications. A common usage pattern we see is incoming and outgoing data being shipped via Blobs, Queues providing the overall workflow for processing the Blobs, and intermediate service state and final results being kept in Tables or Blobs.

An example of this pattern is an ingestion engine service built on Windows Azure to provide near real-time Facebook and Twitter search. This service is one part of a larger data processing pipeline that provides publicly searchable content (via our search engine, Bing) within 15 seconds of a Facebook or Twitter user's posting or status update. Facebook and Twitter send the raw public content to WAS (e.g., user postings, user status updates, etc.) to be made publicly searchable. This content is stored in WAS Blobs. The ingestion engine annotates this data with user auth, spam, and adult scores; content classification; and classification for language and named entities. In addition, the engine crawls and expands the links in the data. While processing, the ingestion engine accesses WAS Tables at high rates and stores the results back into Blobs. These Blobs are then folded into the Bing search engine to make the content publicly searchable. The ingestion engine uses Queues to manage the flow of work, the indexing jobs, and the timing of folding the results into the search engine. As of this writing, the ingestion engine for Facebook and Twitter keeps around 350TB of data in WAS (before replication). In terms of transactions, the ingestion engine has a peak traffic load of around 40,000 transactions per second and does between two to three billion transactions per day (see Section 7 for discussion of additional workload profiles).

In the process of building WAS, feedback from potential internal and external customers drove many design decisions. Some key design features resulting from this feedback include:

Strong Consistency – Many customers want strong consistency: especially enterprise customers moving their line of business applications to the cloud. They also want the ability to perform conditional reads, writes, and deletes for optimistic concurrency control [12] on the strongly consistent data. For this, WAS provides three properties that the CAP theorem [2] claims are difficult to achieve at the same time: strong consistency, high availability, and partition tolerance (see Section 8).

Global and Scalable Namespace/Storage – For ease of use, WAS implements a global namespace that allows data to be stored and accessed in a consistent manner from any location in the world. Since a major goal of WAS is to enable storage of massive amounts of data, this global namespace must be able to address exabytes of data and beyond. We discuss our global namespace design in detail in Section 2.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
SOSP '11, October 23-26, 2011, Cascais, Portugal.
Copyright 2011 ACM 978-1-4503-0977-6/11/10 ... $10.00.
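To make the Strong Consistency design feature concrete, the conditional reads, writes, and deletes used for optimistic concurrency control can be sketched with a toy in-memory table. This is an illustrative sketch, not WAS code: the `Table` class, its method names, and the random ETags are assumptions for the example (the actual service surfaces the same idea through HTTP ETag/If-Match semantics).

```python
import uuid


class PreconditionFailed(Exception):
    """Raised when the caller's ETag no longer matches the stored entity."""


class Table:
    """Toy key-value table modeling conditional (If-Match style) updates."""

    def __init__(self):
        self._rows = {}  # key -> (etag, value)

    def read(self, key):
        # Returns (etag, value); the caller holds the etag for later updates.
        return self._rows[key]

    def insert(self, key, value):
        etag = uuid.uuid4().hex
        self._rows[key] = (etag, value)
        return etag

    def update(self, key, value, if_match):
        current_etag, _ = self._rows[key]
        if if_match != current_etag:  # another writer committed in between
            raise PreconditionFailed(key)
        new_etag = uuid.uuid4().hex
        self._rows[key] = (new_etag, value)
        return new_etag
```

A client that reads a row, computes a new value, and writes it back with the ETag it read will fail cleanly if a concurrent writer got there first, which is exactly the compare-and-swap behavior strong consistency makes possible.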

Disaster Recovery – WAS stores customer data across multiple data centers hundreds of miles apart from each other. This redundancy provides essential data recovery protection against disasters such as earthquakes, wild fires, tornados, nuclear reactor meltdown, etc.

Multi-tenancy and Cost of Storage – To reduce storage cost, many customers are served from the same shared storage infrastructure. WAS combines the workloads of many different customers with varying resource needs together so that significantly less storage needs to be provisioned at any one point in time than if those services were run on their own dedicated hardware.

We describe these design features in more detail in the following sections. The remainder of this paper is organized as follows. Section 2 describes the global namespace used to access the WAS Blob, Table, and Queue data abstractions. Section 3 provides a high level overview of the WAS architecture and its three layers: Stream, Partition, and Front-End layers. Section 4 describes the stream layer, and Section 5 describes the partition layer. Section 6 shows the throughput experienced by Windows Azure applications accessing Blobs and Tables. Section 7 describes some internal Microsoft workloads using WAS. Section 8 discusses design choices and lessons learned. Section 9 presents related work, and Section 10 summarizes the paper.

2. Global Partitioned Namespace

A key goal of our storage system is to provide a single global namespace that allows clients to address all of their storage in the cloud and scale to arbitrary amounts of storage needed over time. To provide this capability we leverage DNS as part of the storage namespace and break the storage namespace into three parts: an account name, a partition name, and an object name. As a result, all data is accessible via a URI of the form:

http(s)://AccountName.<service>.core.windows.net/PartitionName/ObjectName

The <service> specifies the service type, which can be blob, table, or queue. (APIs for Windows Azure Blobs, Tables, and Queues can be found here: …355.aspx)

The AccountName is the customer selected account name for accessing storage and is part of the DNS host name. The AccountName DNS translation is used to locate the primary storage cluster and data center where the data is stored. This primary location is where all requests go to reach the data for that account. An application may use multiple AccountNames to store its data across different locations.

In conjunction with the AccountName, the PartitionName locates the data once a request reaches the storage cluster. The PartitionName is used to scale out access to the data across storage nodes based on traffic needs.

When a PartitionName holds many objects, the ObjectName identifies individual objects within that partition. The system supports atomic transactions across objects with the same PartitionName value. The ObjectName is optional since, for some types of data, the PartitionName uniquely identifies the object within the account.

This naming approach enables WAS to flexibly support its three data abstractions. For Blobs, the full blob name is the PartitionName. For Tables, each entity (row) in the table has a primary key that consists of two properties: the PartitionName and the ObjectName. This distinction allows applications using Tables to group rows into the same partition to perform atomic transactions across them. For Queues, the queue name is the PartitionName and each message has an ObjectName to uniquely identify it within the queue.

3. High Level Architecture

Here we present a high level discussion of the WAS architecture and how it fits into the Windows Azure Cloud Platform.

3.1 Windows Azure Cloud Platform

The Windows Azure Cloud platform runs many cloud services across different data centers and different geographic regions. The Windows Azure Fabric Controller is a resource provisioning and management layer that provides resource allocation, deployment/upgrade, and management for cloud services on the Windows Azure platform. WAS is one such service running on top of the Fabric Controller.

The Fabric Controller provides node management, network configuration, health monitoring, starting/stopping of service instances, and service deployment for the WAS system. In addition, WAS retrieves network topology information, physical layout of the clusters, and hardware configuration of the storage nodes from the Fabric Controller. WAS is responsible for managing the replication and data placement across the disks and load balancing the data and application traffic within the storage cluster.

3.2 WAS Architectural Components

An important feature of WAS is the ability to store and provide access to an immense amount of storage (exabytes and beyond). We currently have 70 petabytes of raw storage in production and are in the process of provisioning a few hundred more petabytes of raw storage based on customer demand for 2012.

The WAS production system consists of Storage Stamps and the Location Service (shown in Figure 1).

[Figure 1: High-level architecture — the Location Service (with account management) resolves DNS lookups to a storage stamp's VIP; each storage stamp contains Front-End, Partition, and Stream layers, with intra-stamp replication inside each stamp and inter-stamp replication between stamps.]

Storage Stamps – A storage stamp is a cluster of N racks of storage nodes, where each rack is built out as a separate fault domain with redundant networking and power. Clusters typically range from 10 to 20 racks with 18 disk-heavy storage nodes per rack. Our first generation storage stamps hold approximately 2PB of raw storage each. Our next generation stamps hold up to 30PB of raw storage each.
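The three-part namespace of Section 2 can be exercised with a small parser that splits a WAS-style URI into its AccountName, service type, PartitionName, and ObjectName. This is a minimal sketch under the URI shape given above; `parse_was_uri` is a hypothetical helper, not part of any Azure SDK.

```python
from urllib.parse import urlparse


def parse_was_uri(uri):
    """Split a WAS-style URI into (account, service, partition, object).

    For Blobs the full blob name is the PartitionName and there is no
    separate ObjectName; for Tables and Queues the first path segment is
    the PartitionName and the remainder (if any) is the ObjectName.
    """
    u = urlparse(uri)
    labels = u.hostname.split(".")
    account, service = labels[0], labels[1]  # AccountName.<service>.core...
    path = u.path.lstrip("/")
    if service == "blob":
        return account, service, path, None  # whole path is the PartitionName
    partition, _, obj = path.partition("/")
    return account, service, partition, obj or None
```

Because the AccountName is the first DNS label, resolution alone routes a request to the account's primary stamp; the PartitionName then scales access out across storage nodes within that stamp.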

To provide low cost cloud storage, we need to keep the storage provisioned in production as highly utilized as possible. Our goal is to keep a storage stamp around 70% utilized in terms of capacity, transactions, and bandwidth. We try to avoid going above 80% because we want to keep 20% in reserve for (a) disk short stroking to gain better seek time and higher throughput by utilizing the outer tracks of the disks and (b) to continue providing storage capacity and availability in the presence of a rack failure within a stamp. When a storage stamp reaches 70% utilization, the location service migrates accounts to different stamps using inter-stamp replication (see Section 3.4).

Location Service (LS) – The location service manages all the storage stamps. It is also responsible for managing the account namespace across all stamps. The LS allocates accounts to storage stamps and manages them across the storage stamps for disaster recovery and load balancing. The location service itself is distributed across two geographic locations for its own disaster recovery.

WAS provides storage from multiple locations in each of the three geographic regions: North America, Europe, and Asia. Each location is a data center with one or more buildings in that location, and each location holds multiple storage stamps. To provision additional capacity, the LS has the ability to easily add new regions, new locations to a region, or new stamps to a location. Therefore, to increase the amount of storage, we deploy one or more storage stamps in the desired location's data center and add them to the LS. The LS can then allocate new storage accounts to those new stamps for customers as well as load balance (migrate) existing storage accounts from older stamps to the new stamps.

Figure 1 shows the location service with two storage stamps and the layers within the storage stamps. The LS tracks the resources used by each storage stamp in production across all locations. When an application requests a new account for storing data, it specifies the location affinity for the storage (e.g., US North). The LS then chooses a storage stamp within that location as the primary stamp for the account using heuristics based on the load information across all stamps (which considers the fullness of the stamps and other metrics such as network and transaction utilization). The LS then stores the account metadata information in the chosen storage stamp, which tells the stamp to start taking traffic for the assigned account. The LS then updates DNS to route requests for AccountName.service.core.windows.net to that storage stamp's virtual IP (VIP, an IP address the storage stamp exposes for external traffic).

3.3 Three Layers within a Storage Stamp

Also shown in Figure 1 are the three layers within a storage stamp. From bottom up these are:

Stream Layer – This layer stores the bits on disk and is in charge of distributing and replicating the data across many servers to keep data durable within a storage stamp. The stream layer can be thought of as a distributed file system layer within a stamp. It understands files, called "streams" (which are ordered lists of large storage chunks called "extents"), how to store them, how to replicate them, etc., but it does not understand higher level object constructs or their semantics. The data is stored in the stream layer, but it is accessible from the partition layer. In fact, partition servers (daemon processes in the partition layer) and stream servers are co-located on each storage node in a stamp.

Partition Layer – The partition layer is built for (a) managing and understanding higher level data abstractions (Blob, Table, Queue), (b) providing a scalable object namespace, (c) providing transaction ordering and strong consistency for objects, (d) storing object data on top of the stream layer, and (e) caching object data to reduce disk I/O.

Another responsibility of this layer is to achieve scalability by partitioning all of the data objects within a stamp. As described earlier, all objects have a PartitionName; they are broken down into disjointed ranges based on the PartitionName values and served by different partition servers. This layer manages which partition server is serving what PartitionName ranges for Blobs, Tables, and Queues. In addition, it provides automatic load balancing of PartitionNames across the partition servers to meet the traffic needs of the objects.

Front-End (FE) Layer – The Front-End (FE) layer consists of a set of stateless servers that take incoming requests. Upon receiving a request, an FE looks up the AccountName, authenticates and authorizes the request, then routes the request to a partition server in the partition layer (based on the PartitionName). The system maintains a Partition Map that keeps track of the PartitionName ranges and which partition server is serving which PartitionNames. The FE servers cache the Partition Map and use it to determine which partition server to forward each request to. The FE servers also stream large objects directly from the stream layer and cache frequently accessed data for efficiency.

3.4 Two Replication Engines

Before describing the stream and partition layers in detail, we first give a brief overview of the two replication engines in our system and their separate responsibilities.

Intra-Stamp Replication (stream layer) – This system provides synchronous replication and is focused on making sure all the data written into a stamp is kept durable within that stamp. It keeps enough replicas of the data across different nodes in different fault domains to keep data durable within the stamp in the face of disk, node, and rack failures. Intra-stamp replication is done completely by the stream layer and is on the critical path of the customer's write requests. Once a transaction has been replicated successfully with intra-stamp replication, success can be returned back to the customer.

Inter-Stamp Replication (partition layer) – This system provides asynchronous replication and is focused on replicating data across stamps. Inter-stamp replication is done in the background and is off the critical path of the customer's request. This replication is at the object level, where either the whole object is replicated or recent delta changes are replicated for a given account. Inter-stamp replication is used for (a) keeping a copy of an account's data in two locations for disaster recovery and (b) migrating an account's data between stamps. Inter-stamp replication is configured for an account by the location service and performed by the partition layer.

Inter-stamp replication is focused on replicating objects and the transactions applied to those objects, whereas intra-stamp replication is focused on replicating blocks of disk storage that are used to make up the objects.

We separated replication into intra-stamp and inter-stamp at these two different layers for the following reasons. Intra-stamp replication provides durability against hardware failures, which occur frequently in large scale systems, whereas inter-stamp replication provides geo-redundancy against geo-disasters, which are rare. It is crucial to provide intra-stamp replication with low latency, since that is on the critical path of user requests; whereas the focus of inter-stamp replication is optimal use of network bandwidth between stamps while achieving an acceptable level of replication delay. They are different problems addressed by the two replication schemes.

Another reason for creating these two separate replication layers is the namespace each of these two layers has to maintain. Performing intra-stamp replication at the stream layer allows the amount of information that needs to be maintained to be scoped by the size of a single storage stamp. This focus allows all of the meta-state for intra-stamp replication to be cached in memory for performance (see Section 4), enabling WAS to provide fast replication with strong consistency by quickly committing transactions within a single stamp for customer requests. In contrast, the partition layer combined with the location service controls and understands the global object namespace across stamps, allowing it to efficiently replicate and maintain object state across data centers.

4. Stream Layer

The stream layer provides an internal interface used only by the partition layer. It provides a file system like namespace and API, except that all writes are append-only. It allows clients (the partition layer) to open, close, delete, rename, read, append to, and concatenate these large files, which are called streams. A stream is an ordered list of extent pointers, and an extent is a sequence of append blocks.

Figure 2 shows stream "//foo", which contains (pointers to) four extents (E1, E2, E3, and E4). Each extent contains a set of blocks that were appended to it. E1, E2 and E3 are sealed extents, meaning that they can no longer be appended to; only the last extent in a stream (E4) can be appended to. If an application reads the data of the stream from beginning to end, it would get the block contents of the extents in the order of E1, E2, E3 and E4.

[Figure 2: Example stream with four extents — stream "//foo" holds pointers to sealed extents E1 (blocks B11…B1x), E2 (B21…B2y), and E3 (B31…B3z), plus unsealed extent E4 (B41, B42, B43).]

In more detail these data concepts are:

Block – This is the minimum unit of data for writing and reading. A block can be up to N bytes (e.g. 4MB). Data is written (appended) as one or more concatenated blocks to an extent, where blocks do not have to be the same size. The client does an append in terms of blocks and controls the size of each block. A client read gives an offset to a stream or extent, and the stream layer reads as many blocks as needed at the offset to fulfill the length of the read. When performing a read, the entire contents of a block are read. This is because the stream layer stores its checksum validation at the block level, one checksum per block. The whole block is read to perform the checksum validation, and it is checked on every block read. In addition, all blocks in the system are validated against their checksums once every few days to check for data integrity issues.

Extent – Extents are the unit of replication in the stream layer, and the default replication policy is to keep three replicas within a storage stamp for an extent. Each extent is stored in an NTFS file and consists of a sequence of blocks. The target extent size used by the partition layer is 1GB. To store small objects, the partition layer appends many of them to the same extent and even in the same block; to store large TB-sized objects (Blobs), the object is broken up over many extents by the partition layer. The partition layer keeps track of what streams, extents, and byte offsets in the extents in which objects are stored as part of its index.

Streams – Every stream has a name in the hierarchical namespace maintained at the stream layer, and a stream looks like a big file to the partition layer. Streams are appended to and can be randomly read from. A stream is an ordered list of pointers to extents which is maintained by the Stream Manager. When the extents are concatenated together they represent the full contiguous address space in which the stream can be read in the order they were added to the stream. A new stream can be constructed by concatenating extents from existing streams, which is a fast operation since it just updates a list of pointers. Only the last extent in the stream can be appended to. All of the prior extents in the stream are immutable.

4.1 Stream Manager and Extent Nodes

The two main architecture components of the stream layer are the Stream Manager (SM) and Extent Node (EN) (shown in Figure 3).

[Figure 3: Stream Layer Architecture — a Paxos-replicated Stream Manager creates extents (step A) and allocates extent replica sets on Extent Nodes (step B); the partition layer client writes to the primary EN, which forwards the append to the secondary ENs before acknowledging.]

Stream Manager (SM) – The SM keeps track of the stream namespace, what extents are in each stream, and the extent allocation across the Extent Nodes (EN). The SM is a standard Paxos cluster [13] as used in prior storage systems [3], and is off the critical path of client requests. The SM is responsible for (a) maintaining the stream namespace and state of all active streams and extents, (b) monitoring the health of the ENs, (c) creating and assigning extents to ENs, (d) performing the lazy re-replication of extent replicas that are lost due to hardware failures or unavailability, (e) garbage collecting extents that are no longer pointed to by any stream, and (f) scheduling the erasure coding of extent data according to stream policy (see Section 4.4).

The SM periodically polls (syncs) the state of the ENs and what extents they store. If the SM discovers that an extent is replicated on fewer than the expected number of ENs, a re-replication of the extent will lazily be created by the SM to regain the desired level of replication. For extent replica placement, the SM randomly chooses ENs across different fault domains, so that they are stored on nodes that will not have correlated failures due to power, network, or being on the same rack.
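The fault-domain-spreading rule for extent replica placement can be sketched as follows. This is a simplified model: `choose_replica_set` is a hypothetical helper, and it ignores the upgrade domains and extent node load that the SM also weighs when placing replicas.

```python
import random


def choose_replica_set(nodes_by_fault_domain, count=3, rng=random):
    """Pick `count` extent nodes, at most one per fault domain, so that a
    single rack/power/network failure cannot take out all replicas.

    nodes_by_fault_domain maps a fault-domain id to its list of node ids.
    """
    domains = [d for d, nodes in nodes_by_fault_domain.items() if nodes]
    if len(domains) < count:
        raise ValueError("not enough fault domains for the replication factor")
    # Randomly pick distinct fault domains, then one node within each.
    chosen_domains = rng.sample(domains, count)
    return [rng.choice(nodes_by_fault_domain[d]) for d in chosen_domains]
```

Placing at most one replica per fault domain is what makes the three-replica policy survive a full rack failure: losing any single domain leaves at least two live replicas for the SM to re-replicate from.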

The SM does not know anything about blocks, just streams and extents. The SM is off the critical path of client requests and does not track each block append, since the total number of blocks can be huge and the SM cannot scale to track those. Since the stream and extent state is only tracked within a single stamp, the amount of state can be kept small enough to fit in the SM's memory. The only client of the stream layer is the partition layer, and the partition layer and stream layer are co-designed so that they will not use more than 50 million extents and no more than 100,000 streams for a single storage stamp given our current stamp sizes. This parameterization can comfortably fit into 32GB of memory for the SM.

Extent Nodes (EN) – Each extent node maintains the storage for a set of extent replicas assigned to it by the SM. An EN has N disks attached, which it completely controls for storing extent replicas and their blocks. An EN knows nothing about streams, and only deals with extents and blocks. Internally on an EN server, every extent on disk is a file, which holds data blocks and their checksums, and an index which maps extent offsets to blocks and their file location. Each extent node contains a view about the extents it owns and where the peer replicas are for a given extent. This view is a cache kept by the EN of the global state the SM keeps. ENs only talk to other ENs to replicate block writes (appends) sent by a client, or to create additional copies of an existing replica when told to by the SM. When an extent is no longer referenced by any stream, the SM garbage collects the extent and notifies the ENs to reclaim the space.

4.2 Append Operation and Sealed Extent

Streams can only be appended to; existing data cannot be modified. The append operations are atomic: either the entire data block is appended, or nothing is. Multiple blocks can be appended at once, as a single atomic "multi-block append" operation. The minimum read size from a stream is a single block. The "multi-block append" operation allows us to write a large amount of sequential data in a single append and to later perform small reads. The contract used between the client (partition layer) and the stream layer is that the multi-block append will occur atomically, and if the client never hears back for a request (due to failure) the client should retry the request (or seal the extent). This contract implies that the client needs to expect the same block to be appended more than once in face of timeouts and correctly deal with processing duplicate records. The partition layer deals with duplicate records in two ways (see Section 5 for details on the partition layer streams). For the metadata and commit log streams, all of the transactions written have a sequence number and duplicate records will have the same sequence number. For the row data and blob data streams, for duplicate writes, only the last write will be pointed to by the RangePartition data structures, so the prior duplicate writes will have no references and will be garbage collected later.

4.3 Stream Layer Intra-Stamp Replication

The implementation of the partition layer providing strong consistency is built upon the following guarantees from the stream layer:

1. Once a record is appended and acknowledged back to the client, any later reads of that record from any replica will see the same data (the data is immutable).

2. Once an extent is sealed, any reads from any sealed replica will always see the same contents of the extent.

The data center, Fabric Controller, and WAS have security mechanisms in place to guard against malicious adversaries, so the stream replication does not deal with such threats. We consider faults ranging from disk and node errors to power failures, network issues, bit-flip and random hardware failures, as well as software bugs. These faults can cause data corruption; checksums are used to detect such corruption. The rest of the section discusses the intra-stamp replication scheme within this context.

4.3.1 Replication Flow

As shown in Figure 3, when a stream is first created (step A), the SM assigns three replicas for the first extent (one primary and two secondary) to three extent nodes (step B), which are chosen by the SM to randomly spread the replicas across different fault and upgrade domains while considering extent node usage (for load balancing). In addition, the SM decides which replica will be the primary for the extent. Writes to an extent are always performed from the client to the primary EN, and the primary EN is in charge of coordinating the write to two secondary ENs. The primary EN and the location of the three replicas never change for an extent while it is being appended to (while the extent is unsealed). Therefore, no leases are used to represent the primary EN for an extent, since the primary is always fixed while an extent is unsealed.

When the SM allocates the extent, the extent information is sent back to the client, which then knows which ENs hold the three replicas and which one is the primary. This state is now part of the stream's metadata information held in the SM and cached on the client. When the last extent in the stream that is being appended to becomes sealed, the same process repeats. The SM then allocates another extent, which now becomes the last extent in the stream, and all new appends now go to the new last extent for the stream.

For an extent, every append is replicated three times across the extent's replicas. A client sends all write requests to the primary EN, but it can read from any replica, even for unsealed extents. The append is sent to the primary EN for the
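The duplicate-record contract described in Section 4.2 (a retried append may write the same record twice, and duplicates in the metadata and commit log streams carry the same sequence number) can be illustrated with a short replay routine. The function name and record format here are assumptions for the example, not WAS's actual log format.

```python
def replay_commit_log(records):
    """Replay an append-only commit log in which client retries may have
    appended the same transaction more than once.

    Each record is a (sequence_number, operation) pair. Sequence numbers
    increase monotonically for new transactions, and a retried (duplicate)
    append carries the same sequence number as the original, so any record
    whose sequence number is not greater than the last applied one is a
    duplicate and is skipped.
    """
    applied = []
    last_seq = -1
    for seq, op in records:
        if seq <= last_seq:  # duplicate produced by a timed-out retry
            continue
        applied.append(op)
        last_seq = seq
    return applied
```

This is why at-least-once appends at the stream layer can still yield exactly-once semantics at the partition layer: the log consumer is idempotent with respect to duplicated records.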
