Distributed Database Systems - UiO


Distributed Database Systems
Vera Goebel
Department of Informatics, University of Oslo
2011

Contents
- Review: Layered DBMS Architecture
- Distributed DBMS Architectures
  - DDBMS Taxonomy
- Client/Server Models
- Key Problems of Distributed DBMS
  - Distributed data modeling
  - Distributed query processing & optimization
  - Distributed transaction management
    - Concurrency control
    - Recovery

Functional Layers of a DBMS
Applications
- Interface: data model / view management
- Control: semantic integrity control / authorization
- Compilation: query processing and optimization
- Execution: storage structures
- Data access: buffer management
- Consistency: concurrency control / logging
Database

Dependencies among DBMS components
[Figure: the transaction mgmt, access path mgmt, sorting, lock, and log
(with savepoint mgmt) components all depend on the central components,
including the system buffer mgmt; an arrow indicates a dependency.]

Centralized DBS
- logically integrated
- physically centralized
Traditionally: one large mainframe DBMS + n "stupid" terminals.
[Figure: terminals T1..T5 connected to one DBS over a network.]

Distributed DBS
- Data logically integrated (i.e., access based on one schema)
- Data physically distributed among multiple database nodes
- Processing is distributed among multiple database nodes
Why a distributed DBS?
- Performance via parallel execution: more users, quick response
- More data volume: max disks on multiple nodes
Traditionally: m mainframes for the DBMSs + n terminals.
[Figure: terminals T1..T3 connected over a network to DBS1..DBS3.]

DBMS Implementation Alternatives
[Figure: a taxonomy cube spanning three axes - distribution, autonomy,
and heterogeneity.] The combinations along the axes:
- Centralized homogeneous DBMS; centralized heterogeneous DBMS
- Centralized homog./heterog. federated DBMS; centralized homog./heterog. multi-DBMS
- Distributed homogeneous DBMS (incl. client/server distribution); distributed heterogeneous DBMS
- Distributed homog./heterog. federated DBMS; distributed heterog. multi-DBMS

Common DDBMS Architectural Configurations
- No autonomy (nodes N1..N4): fully integrated nodes with complete
  cooperation on data schema and transaction mgmt; each node is fully
  aware of all other nodes in the DDBS.
- Federated (D1..D4 with a GTM & SM): independent DBMSs that implement
  some "cooperation functions" - transaction mgmt, schema mapping; each
  is aware of the other DBSs in the federation.
- Multi (D1..D4): fully independent DBMSs; an independent, global manager
  tries to coordinate the DBMSs using only existing DBMS services; each
  DBMS is unaware of the GTM and of the other DBSs in the MultiDBS.

Parallel Database System Platforms (Non-autonomous Distributed DBMS)
- Shared everything: processors Proc1..ProcN and memories Mem1..MemM
  access disks Disk1..Diskp over a fast interconnection network.
  Typically a special-purpose hardware configuration, but this can be
  emulated in software.
- Shared nothing: each processor has its own memory and disk, connected
  to the others by a fast interconnection network. Can be a
  special-purpose hardware configuration, or a network of workstations.

Distributed DBMS
Advantages:
- improved performance
- efficiency
- extensibility (addition of new nodes)
- transparency of distribution: storage of data, query execution
- autonomy of individual nodes
Problems:
- complexity of design and implementation
- data consistency
- safety
- failure recovery

Client/Server Database Systems
The "simple" case of distributed database systems?

Client/Server Environments
- data (object) server + n smart clients (workstations)
- objects stored and administered on the server
- objects processed (accessed and modified) on the workstations
  [sometimes on the server too]
- CPU-time intensive applications on the workstations: GUIs, design tools
- use the client systems' local storage capabilities
- in combination with distributed DBS services: distributed server architecture

Clients with Centralized Server Architecture
[Figure: workstations on a local communication network access one data
server (interface + database functions) that manages the database on
Disk 1..Disk n.]

Clients with Distributed Server Architecture
[Figure: workstations on a local communication network access data
servers 1..m; each server runs an interface, the distributed DBMS, and
local data mgmt functions over its own disks.]

Clients with Data Server Approach ("3-tier client/server")
[Figure: the user interface talks to an application server (query
parsing, data server interface), which talks over a communication
channel to a data server (application server interface, database
functions) managing the database on Disk 1..Disk n.]

In the client/server DBMS architecture, how are the DB services organized?
There are several architectural options!

Client/Server Architectures
[Figure: a client process (application) talks to a DBMS server process.
Relational systems ship the query language (SQL) to the server; object
systems add object/page cache management on the client, and the unit of
transfer is objects, pages, or files.]

Object Server Architecture
[Figure: the client process runs the application and an Object Manager
with an object cache; the server process runs a Log/Lock Manager, an
Object Manager, a Page Cache Manager with its page cache, and Storage
Allocation and I/O over the database. Client and server exchange
objects, object references, queries, method calls, locks, and log
records.]

Object Server Architecture - summary
- Unit of transfer: object(s)
- The server understands the object concept and can execute methods
- Most DBMS functionality is replicated on client(s) and server(s)
Advantages:
- server and client can run methods; the workload can be balanced
- simplifies concurrency control design (centralized in the server)
- implementation of object-level locking
- low cost for enforcing "constraints" on objects
Disadvantages/problems:
- remote procedure calls (RPC) for object references
- complex server design
- client cache consistency problems
- page-level locking, objects get copied multiple times, large objects

Page Server Architecture
[Figure: the client process runs the application, an Object Manager with
an object cache, a File/Index Manager, and a Page Cache Manager with a
page cache; the server process runs a Log/Lock Manager, a Page Cache
Manager, and Storage Allocation and I/O over the database. Client and
server exchange pages, page references, locks, and log records.]

Page Server Architecture - summary
- Unit of transfer: page(s)
- The server deals only with pages (understands no object semantics)
- Server functionality: storage/retrieval of page(s), concurrency
  control, recovery
Advantages:
- most DBMS functionality is at the client
- page transfer - server overhead minimized
- more clients can be supported
- object clustering improves performance considerably
Disadvantages/problems:
- method execution only on clients
- object-level locking
- object clustering
- large client buffer pool required

File Server Architecture
[Figure: the client process runs the application, an Object Manager with
an object cache, a File/Index Manager, and a Page Cache Manager with a
page cache; the server process runs a Log/Lock Manager and space
allocation. Pages and page references flow through NFS; locks and log
records go to the server.]

File Server Architecture - summary
- Unit of transfer: page(s)
- Simplification of the page server: clients use a remote file system
  (e.g., NFS) to read and write DB page(s) directly
- Server functionality: I/O handling, concurrency control, recovery
Advantages:
- same as for the page server
- NFS: user-level context switches can be avoided
- NFS is widely used - stable SW, will be improved
Disadvantages/problems:
- same as for the page server
- NFS writes are slow
- read operations bypass the server - no request combination
- object clustering
- coordination of disk space allocation

Cache Consistency in Client/Server Architectures

Avoidance-based algorithms (stale data never enters a client cache):
- Synchronous: the client sends 1 msg per lock to the server; the client
  waits; the server replies with ACK or NACK.
- Asynchronous: the client sends 1 msg per lock to the server; the
  client continues; the server invalidates cached copies at other clients.
- Deferred: the client sends all write lock requests to the server at
  commit time; the client waits; the server replies when all cached
  copies are freed.

Detection-based algorithms (stale access is detected and rejected):
- Synchronous: the client sends an object status query to the server for
  each access; the client waits; the server replies.
- Asynchronous: the client sends 1 msg per lock to the server; the
  client continues; after commit, the server sends updates to all
  cached copies.
- Deferred: the client sends all write lock requests to the server at
  commit time; the client waits; the server replies based on W-W
  conflicts only.

The asynchronous and deferred variants are the best performing algorithms.

Comparison of the 3 Client/Server Architectures
Page & file server:
- simple server design
- complex client design
- fine-grained concurrency control is difficult
- very sensitive to client buffer pool size and clustering
Object server:
- complex server design
- "relatively" simple client design
- fine-grained concurrency control
- reduces data movement, relatively insensitive to clustering
- sensitive to client buffer pool size
Conclusions:
- no clear winner
- depends on object size and the application's object access pattern
- the file server is ruled out by poor NFS performance

Problems in Distributed DBMS Services
- Distributed database design
- Distributed directory/catalogue mgmt
- Distributed query processing and optimization
- Distributed transaction mgmt
  - distributed concurrency control
  - distributed deadlock mgmt
  - distributed recovery mgmt
[Figure: distributed DB design influences directory management, query
processing, concurrency control (lock), reliability (log), transaction
management, and deadlock management.]

Distributed Storage in Relational DBMSs
- horizontal fragmentation: distribution of "rows", selection
- vertical fragmentation: distribution of "columns", projection
- hybrid fragmentation: "projected columns" from "selected rows"
- allocation: which fragment is assigned to which node?
- replication: multiple copies at different nodes; how many copies?
Design factors:
- most frequent query access patterns
- available distributed query processing algorithms
Evaluation criteria:
- cost metrics for network traffic, query processing, and transaction mgmt
- a system-wide goal: maximize throughput or minimize latency
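The three fragmentation kinds above can be sketched over a toy relation; the relation name, attributes, and predicates below are illustrative, not from the lecture.

```python
# Horizontal, vertical, and hybrid fragmentation of a relation,
# modeled as a list of dicts. All names here are made up for illustration.
EMP = [
    {"eno": 1, "name": "Ann",  "dept": "Sales", "salary": 40000},
    {"eno": 2, "name": "Bob",  "dept": "R&D",   "salary": 55000},
    {"eno": 3, "name": "Cleo", "dept": "R&D",   "salary": 61000},
]

# Horizontal fragmentation: distribute rows by a selection predicate.
frag_sales = [r for r in EMP if r["dept"] == "Sales"]
frag_rnd   = [r for r in EMP if r["dept"] != "Sales"]

# Vertical fragmentation: distribute columns by projection; every
# fragment keeps the key (eno) so the relation can be rebuilt by a join.
frag_pay  = [{"eno": r["eno"], "salary": r["salary"]} for r in EMP]
frag_info = [{"eno": r["eno"], "name": r["name"], "dept": r["dept"]} for r in EMP]

# Reconstruction checks: union for horizontal, key-join for vertical.
assert sorted(frag_sales + frag_rnd, key=lambda r: r["eno"]) == EMP
joined = [{**a, **b} for a in frag_info for b in frag_pay if a["eno"] == b["eno"]]
assert sorted(joined, key=lambda r: r["eno"]) == EMP
```

The reconstruction asserts are the design invariant: a correct fragmentation is lossless (union, resp. key-join, yields the original relation).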

Distributed Storage in OODBMSs
- Must fragment, allocate, and replicate object data among nodes
Complicating factors:
- encapsulation hides object structure
- object methods (bound to the class, not the instance)
- users dynamically create new classes
- complex objects
- effects on the garbage collection algorithm and performance
Approaches:
- store objects in relations and use RDBMS strategies to distribute data:
  - all objects are stored in binary relations [OID, attr-value]
  - one relation per class attribute
  - use nested relations to store complex (multi-class) objects
- use object semantics
Evaluation criteria:
- affinity metric (maximize)
- cost metric (minimize)

Horizontal Partitioning using Object Semantics
- Divide the instances of a class into multiple groups:
  - based on selected attribute values, e.g. Node 1: Attr A <= 100,
    Node 2: Attr A > 100
  - based on subclass designation, e.g. Class C: Subcl X on Node 1,
    Subcl Y on Node 2
- Fragment an object's complex attributes from its simple attributes
- Fragment an object's attributes based on the typical method invocation
  sequence: keep sequentially referenced attributes together

Vertical Partitioning using Object Semantics
- Fragment object attributes based on the class hierarchy, e.g.:
  - instances of class X -> fragment #1
  - instances of subclass subX -> fragment #2
  - instances of subclass subsubX -> fragment #3
- Breaks object encapsulation?

Path Partitioning using Object Semantics
- Use the object composition graph for complex objects
- A path partition is the set of objects corresponding to the instance
  variables in the subtree rooted at the composite object
- Typically represented in a "structure index":
  composite object OID -> {SubC#1-OID, SubC#2-OID, ..., SubC#n-OID}

More Issues in OODBMS Fragmentation
- Local methods, local object data: good performance;
  issue: replicate methods so they are local to all instances?
- Local methods, remote object data: fetch the remote data and execute
  the methods; issue: time/cost for the data transfer?
- Remote methods, local object data: send the data to the remote site,
  execute, and return the result, OR fetch the method and execute it;
  issues: time/cost for two transfers? Ability to execute locally?
- Remote methods, remote object data: send additional input values via
  RPC, execute, and return the result; issues: RPC time? Execution load
  on the remote node?
Replication options:
- objects
- classes (collections of objects)
- methods
- class/type specifications

Distributed Directory and Catalogue Management
Directory information:
- description and location of records/objects: size, special data
  properties (e.g., executable, DB type, user-defined type, etc.),
  fragmentation scheme
- definitions for views, integrity constraints
Options for organizing the directory:
- centralized (issues: bottleneck, unreliable)
- fully replicated (issues: consistency, storage overhead)
- partitioned (issues: complicated access protocol, consistency)
- combination of partitioned and replicated (e.g., zoned, replicated zoned)

Distributed Query Processing and Optimization
- Construction and execution of query plans, query optimization
- Goals: maximize parallelism (response time optimization);
  minimize network data transfer (throughput optimization)
Basic approaches to distributed query processing:
- Pipelining (functional decomposition): Node 1 runs "select A.x > 100"
  on relation A and streams the result to Node 2, which joins A and B on y.
- Parallelism (data decomposition): Nodes 1 and 2 each run
  "select A.x > 100" on fragments A.1 and A.2; Node 3 forms the union
  of the results.
[Figure: processing timelines for both approaches.]
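The data-decomposition approach can be sketched with threads standing in for nodes. The comparison operator in the slide's predicate was lost in extraction; "A.x > 100" is assumed here, and the fragment contents are illustrative.

```python
# Data-decomposition parallelism: the same selection runs on each
# horizontal fragment of A (one "node" per fragment), and the partial
# results are unioned, as in the slide's Node1/Node2 -> Node3 plan.
from concurrent.futures import ThreadPoolExecutor

frag_a1 = [{"x": 50}, {"x": 150}]
frag_a2 = [{"x": 120}, {"x": 90}]

def select_x_gt_100(fragment):
    # The per-node operator: a simple selection on the local fragment.
    return [row for row in fragment if row["x"] > 100]

with ThreadPoolExecutor(max_workers=2) as pool:
    partials = pool.map(select_x_gt_100, [frag_a1, frag_a2])

# The "union node": concatenate the partial results.
result = [row for part in partials for row in part]
print(result)   # -> [{'x': 150}, {'x': 120}]
```

Each node touches only its own fragment, so the selection cost is divided among the nodes; only the (usually much smaller) qualifying rows cross the network to the union node.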

Creating the Distributed Query Processing Plan
Factors to be considered:
- distribution of data
- communication costs
- lack of sufficient locally available information
4 processing steps:
(1) query decomposition  (control site, uses global information)
(2) data localization    (control site, uses global information)
(3) global optimization  (control site, uses global information)
(4) local optimization   (local sites, use local information)

Generic Layered Scheme for Planning a Distributed Query
At the control site:
- calculus query on distributed objects
- query decomposition (uses the global schema)
- algebraic query on distributed objects
- data localization (uses the fragment schema)
- fragment query
- global optimization (uses statistics on fragments)
- optimized fragment query with communication operations
At the local sites:
- local optimization (uses the local schema)
- optimized local queries
This approach was initially designed for distributed relational DBMSs.
It also applies to distributed OODBMSs, Multi-DBMSs, and distributed
Multi-DBMSs.

Distributed Query Processing Plans in OODBMSs
Two forms of queries (both reduce to logical algebraic expressions):
- explicit queries written in OQL
- object navigation/traversal
Planning and optimizing:
- additional rewriting rules for sets of objects: union, intersection,
  and selection
- cost function: should consider object size, structure, location,
  indexes, etc. Breaks object encapsulation to obtain this info?
  Objects can "reflect" their access cost estimate.
- volatile object databases can invalidate optimizations:
  - dynamic plan selection (compile N plans; select 1 at runtime)
  - periodic replanning during long-running queries

Distributed Query Optimization
Information needed for optimization (fragment statistics):
- size of objects, image sizes of attributes
- transfer costs
- workload among nodes
- physical data layout
- access paths, indexes, clustering information
- properties of the result (objects) - formulas for estimating the
  cardinalities of operation results
Execution cost is expressed as a weighted combination of I/O, CPU, and
communication costs (mostly dominant):
  total-cost = C_CPU * #insts + C_I/O * #I/Os + C_MSG * #msgs + C_TR * #bytes
Optimization goals: response time of a single transaction, or system
throughput.
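The weighted cost formula is easy to turn into a plan-comparison sketch. The coefficient values and plan statistics below are assumptions chosen only to make communication dominate, as the slide notes it usually does.

```python
# The slide's weighted cost model. The C_* weights are illustrative
# assumptions, not values from the lecture.
def total_cost(insts, ios, msgs, bytes_sent,
               c_cpu=1e-6, c_io=1e-2, c_msg=0.1, c_tr=1e-5):
    """total_cost = C_CPU*#insts + C_I/O*#I/Os + C_MSG*#msgs + C_TR*#bytes"""
    return c_cpu * insts + c_io * ios + c_msg * msgs + c_tr * bytes_sent

# Comparing two hypothetical plans for the same query:
# ship a 1 MB fragment to the query site, or ship the query to the data.
plan_ship_data  = total_cost(insts=10_000, ios=50, msgs=2,  bytes_sent=1_000_000)
plan_ship_query = total_cost(insts=10_000, ios=50, msgs=10, bytes_sent=5_000)
assert plan_ship_query < plan_ship_data   # communication cost dominates
```

With these weights the byte-transfer term dwarfs everything else, which is why global optimization favors moving queries (or small intermediate results) rather than base fragments.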

Distributed Transaction Management
- Transaction management (TM) in a centralized DBS achieves the
  transaction ACID properties by using concurrency control (CC) and
  recovery (logging).
- TM in a DDBS achieves the ACID properties by using:
  - concurrency control (isolation)
  - recovery protocol (durability)
  - commit/abort protocol (atomicity)
  - replica control protocol (mutual consistency)
  supported by the log manager and buffer manager, with strong
  algorithmic dependencies among these components.

Classification of Concurrency Control Approaches (Isolation)
CC approaches fall into two classes:
- Pessimistic: conflicts are prevented before they can occur; phases of
  pessimistic transaction execution: validate - read - compute - write.
- Optimistic: transactions execute freely and are checked afterwards;
  phases of optimistic transaction execution: read - compute -
  validate - write.

Two-Phase Locking Approach (2PL) (Isolation)
- 2PL lock graph: the number of locks held grows as the transaction
  obtains locks (growing phase, from BEGIN up to the lock point) and then
  shrinks as it releases them (shrinking phase, until END).
- Strict 2PL lock graph: locks are obtained during the period of data
  item use, but all of them are released only at transaction end.
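The two-phase rule can be sketched as a tiny bookkeeping class; the class and method names are illustrative, and real lock managers track lock modes and conflicts per data item, which is omitted here.

```python
# Minimal sketch of the 2PL rule: once a transaction has released any
# lock, it may not acquire new ones (the lock point has passed).
# Strict 2PL instead holds every lock until commit/abort.
class TwoPhaseTxn:
    def __init__(self):
        self.locks = set()
        self.shrinking = False   # becomes True at the first release

    def acquire(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violated: acquire after first release")
        self.locks.add(item)

    def release(self, item):
        self.shrinking = True    # growing phase is over
        self.locks.discard(item)

    def commit_strict(self):
        # Strict 2PL: all locks are released only here, at commit.
        self.locks.clear()
        self.shrinking = True

t = TwoPhaseTxn()
t.acquire("a"); t.acquire("b")
t.release("a")               # lock point passed
try:
    t.acquire("c")           # illegal under 2PL
except RuntimeError as e:
    print(e)                 # -> 2PL violated: acquire after first release
```

The difference between the two lock graphs on the slide is exactly which method releases locks: `release` during execution (plain 2PL) versus only `commit_strict` at the end (strict 2PL).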

Communication Structure of Centralized 2PL (Isolation)
Message flow between the coordinating TM, the central-site lock manager
(LM), and the data processors at the participating sites:
1. Lock request (coordinating TM -> central-site LM)
2. Lock granted (central-site LM -> data processors)
3. Operation (executed at the data processors)
4. End of operation (data processors -> coordinating TM)
5. Release locks (coordinating TM -> central-site LM)

Communication Structure of Distributed 2PL (Isolation)
Message flow between the coordinating TM, the participating-site LMs,
and the data processors at the participating sites:
1. Lock request (coordinating TM -> participating LMs)
2. Operation (participating LMs -> their local data processors)
3. End of operation (data processors -> coordinating TM)
4. Release locks (coordinating TM -> participating LMs)

Distributed Deadlock Management (Isolation)
Example: transactions T1 and T2 each run subtransactions on sites X and Y.
- Site X: T1x holds lock Lc and T2x holds lock Lb; T2x waits for T1x to
  release Lc; T1x needs item a and waits for T1 on site Y to complete.
- Site Y: T1y holds lock La and T2y holds lock Ld; T1y waits for T2y to
  release Ld; T2y needs item b and waits for T2 on site X to complete.
The distributed waits-for graph thus contains the cycle
T2x -> T1x -> T1y -> T2y -> T2x: a distributed deadlock.
- requires many messages to update lock status
- combine the "wait-for" tracing message with the lock status update
  messages
- still an expensive algorithm
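Once the waits-for edges have been collected at one site, deadlock detection is a cycle search; the sketch below runs it on the example's four edges, where an edge (u, v) means "u waits for v".

```python
# Deadlock detection over a waits-for graph by depth-first search:
# a back edge to a node still on the DFS stack means a cycle (deadlock).
def has_cycle(edges):
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
    visiting, done = set(), set()

    def dfs(n):
        if n in visiting:
            return True            # back edge: cycle found
        if n in done:
            return False
        visiting.add(n)
        if any(dfs(m) for m in graph.get(n, [])):
            return True
        visiting.remove(n)
        done.add(n)
        return False

    return any(dfs(n) for n in list(graph))

# The cycle from the slide's example: T2x -> T1x -> T1y -> T2y -> T2x.
waits_for = [("T2x", "T1x"), ("T1x", "T1y"), ("T1y", "T2y"), ("T2y", "T2x")]
print(has_cycle(waits_for))        # -> True
```

The expensive part in a DDBS is not this search but assembling the graph: the inter-site edges (T1x waits for T1y, T2y waits for T2x) only become visible when the sites exchange wait-for tracing messages, which is why the slide suggests piggybacking them on lock status updates.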

Communication Structure of the Centralized 2-Phase Commit Protocol (Atomicity)
Between the coordinating TM and the participating sites (who
participates depends on the CC algorithm):
1. Prepare (coordinating TM -> participants)
2. Each participant makes a local commit/abort decision, writes a log
   entry, and replies with Vote-Commit or Vote-Abort.
3. The coordinator counts the votes: if any vote is missing or any vote
   is Vote-Abort, the decision is Global-Abort; otherwise Global-Commit.
   The decision is logged and sent to the participants.
4. The participants write a log entry and send an ACK.
Other communication structures are possible: linear, distributed.
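The coordinator's decision rule in step 3 is small enough to state as code; the vote values and site names below are illustrative.

```python
# The 2PC coordinator's vote count: a missing vote (timeout) or a single
# Vote-Abort forces Global-Abort; only unanimous Vote-Commit commits.
def global_decision(votes, participants):
    """votes: dict site -> 'commit' or 'abort'; absent sites timed out."""
    if any(votes.get(site) != "commit" for site in participants):
        return "Global-Abort"
    return "Global-Commit"

sites = ["S1", "S2", "S3"]
print(global_decision({"S1": "commit", "S2": "commit", "S3": "commit"}, sites))
# -> Global-Commit
print(global_decision({"S1": "commit", "S3": "commit"}, sites))  # S2 silent
# -> Global-Abort
```

Treating a missing vote exactly like a Vote-Abort is what preserves atomicity under participant failure: the coordinator never guesses in favor of commit.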

State Transitions for the 2-Phase Commit Protocol
- Coordinating TM: Initial -> (send Prepare) -> Wait -> (receive votes)
  -> Commit or Abort.
- Participant: Initial -> (receive Prepare, send Vote-Commit or
  Vote-Abort) -> Ready -> (receive Global-Commit or Global-Abort) ->
  Commit or Abort.
Advantages:
- preserves atomicity
- all processes are "synchronous within one state transition"
Disadvantages:
- many, many messages
- if a failure occurs, the 2PC protocol blocks!
Attempted solutions for the blocking problem:
1) termination protocols
2) 3-phase commit
3) quorum 3-phase commit

Failures in a Distributed System
Types of failure:
- transaction failure
- node failure
- media failure
- network failure: partitions, each containing 1 or more sites
Issues to be addressed, and who addresses them:
- how to continue service: termination protocols
- how to maintain the ACID properties while providing continued service:
  modified concurrency control and commit/abort protocols
- how to ensure the ACID properties after recovery from the failure(s):
  recovery protocols, termination protocols, and replica control protocols

Termination Protocols (Atomicity, Consistency)
Use timeouts to detect potential failures that could block protocol
progress. Timeout states in 2PC:
- coordinator: wait, commit, abort
- participant: initial, ready
Coordinator termination protocol:
- timeout in Wait: send Global-Abort
- timeout in Commit or Abort: BLOCKED!
Participant termination protocol:
- timeout in Ready: query the coordinator; if that times out, query the
  other participants; if a Global-Abort or Global-Commit decision is
  learned, proceed and terminate, else BLOCKED!

Replica Control Protocols (Consistency)
Update propagation of committed write operations:
- STRICT UPDATE: the updates are propagated to all copies before the
  commit point, within the 2-phase commit of the writing transaction
  (obtain locks, use the data, run 2PC, release locks).
- LAZY UPDATE: the updates are propagated to the other copies only after
  the writing transaction has committed.

Strict Replica Control Protocol (Consistency)
- Read-One-Write-All (ROWA)
- Part of the concurrency control protocol and the 2-phase commit
  protocol:
  - CC locks all copies
  - 2PC propagates the updated values with the 2PC messages (or an
    update propagation phase is inserted between the wait and commit
    states for those nodes holding an updateable value)

Lazy Replica Control Protocol (Consistency)
- Propagates updates from a primary node.
- The concurrency control algorithm locks the primary copy node (the
  same node as the primary lock node).
- To preserve single-copy semantics, it must be ensured that a
  transaction reads a current copy:
  - changes the CC algorithm for read-locks
  - adds an extra communication cost for reading data
- Extended transaction models may not require single-copy semantics.

Recovery in Distributed Systems (Atomicity, Durability)
Select COMMIT or ABORT (or blocked) for each interrupted subtransaction.
Commit approaches:
- Redo: use the undo/redo log to perform all the write operations again
- Retry: use the transaction log to redo the entire subtransaction
  (reads and writes)
Abort approaches:
- Undo: use the undo/redo log to back out all the writes that were
  actually performed
- Compensation: use the transaction log to select and execute "reverse"
  subtransactions that semantically undo the write operations
Implementation requires knowledge of:
- the buffer manager algorithms for writing updated data from volatile
  storage buffers to persistent storage
- the concurrency control algorithm
- the commit/abort protocols
- the replica control protocol
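The Redo and Undo approaches can be sketched over a toy undo/redo log of write records; the record layout (txn, item, before-image, after-image) is a common textbook form and is assumed here, not taken from the lecture.

```python
# Redo replays a subtransaction's after-images in log order;
# Undo restores its before-images in reverse log order.
log = [
    ("T1", "x", 0, 5),   # (txn, item, before-image, after-image)
    ("T2", "y", 1, 9),
    ("T1", "x", 5, 7),
]

def redo(db, log, txn):
    for t, item, before, after in log:            # forward pass
        if t == txn:
            db[item] = after
    return db

def undo(db, log, txn):
    for t, item, before, after in reversed(log):  # backward pass
        if t == txn:
            db[item] = before
    return db

print(redo({"x": 0, "y": 1}, log, "T1"))   # -> {'x': 7, 'y': 1}
print(undo({"x": 7, "y": 9}, log, "T2"))   # -> {'x': 7, 'y': 1}
```

The directionality is the point: redo must follow log order so later writes win, while undo must walk backwards so each restored before-image is the value that actually preceded the write.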

Network Partitions in Distributed Systems
[Figure: a network of nodes N1..N8 split into three partitions.]
Issues:
- termination of interrupted transactions
- partition integration upon recovery from a network failure
- data availability while the failure is ongoing

Data Availability in Partitioned Networks
- The concurrency control model impacts data availability.
- ROWA: data replicated in multiple partitions is not available for
  reading or writing.
- Primary copy node CC: can execute transactions if the primary copy
  nodes for all of the read-set and all of the write-set are in the
  client's partition.
Availability is still very limited... We need a new idea!

Quorums (ACID)
- Quorum: a special type of majority
- Use quorums in the concurrency control, commit/abort, termination,
  and recovery protocols:
  - CC uses a read-quorum and a write-quorum
  - commit/abort, termination, and recovery use a commit-quorum and an
    abort-quorum
Advantages:
- more transactions can be executed during site failures and network
  failures (and still retain the ACID properties)
Disadvantages:
- many messages are required to establish a quorum
- the necessity of a read-quorum slows down read operations
- not quite sufficient (failures are not "clean")

Read-Quorums and Write-Quorums (Isolation)
- The concurrency control algorithm serializes the valid transactions
  in a partition. It must obtain:
  - a read-quorum for each read operation
  - a write-quorum for each write operation
- Let N = the total number of nodes in the system. Define the size of
  the read-quorum, Nr, and the size of the write-quorum, Nw, as follows:
  - Nr + Nw > N
  - Nw > N/2
  Simple example: N = 8, Nr = 4, Nw = 5.
- When failures occur, it is possible to have a valid read quorum and
  no valid write quorum.
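The two constraints are easy to check mechanically: Nr + Nw > N forces every read quorum to intersect every write quorum, and Nw > N/2 rules out two disjoint write quorums. A minimal sketch using the slide's example values:

```python
# Validity check for read/write quorum sizes, per the slide's rules.
def valid_quorums(n, nr, nw):
    return nr + nw > n and nw > n / 2

assert valid_quorums(8, 4, 5)        # the slide's example: N=8, Nr=4, Nw=5
assert not valid_quorums(8, 4, 4)    # Nr+Nw = N: a read could miss all writes
assert not valid_quorums(8, 5, 4)    # Nw <= N/2: two write quorums could coexist
```

With N = 8, Nr = 4, Nw = 5: if 3 nodes fail, the remaining 5 can still form both quorums; after a 4th failure, the 4 survivors can still form a read quorum but no write quorum, which is exactly the asymmetry the slide points out.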

Commit-Quorums and Abort-Quorums (ACID)
- The commit/abort protocol requires votes from all participants to
  commit or abort a transaction:
  - commit a transaction if the accessible nodes can form a commit-quorum
  - abort a transaction if the accessible nodes can form an abort-quorum
  - elect a new coordinator node (if necessary)
  - try to form a commit-quorum before attempting to form an abort-quorum
- Let N = the total number of nodes in the system. Define the size of
  the commit-quorum, Nc, and the size of the abort-quorum, Na, as follows:
  - Na + Nc > N;  0 <= Na, Nc <= N
  Simple examples: N = 7, Nc = 4, Na = 4;  N = 7, Nc = 5, Na = 3.
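The same kind of arithmetic decides what a partition may do after a failure; the function names below are illustrative.

```python
# Na + Nc > N guarantees that a commit-quorum and an abort-quorum can
# never form in two disjoint partitions, so the sides cannot decide
# differently. Checked for the slide's first example: N=7, Nc=4, Na=4.
def can_commit(accessible, nc):
    return accessible >= nc

def can_abort(accessible, na):
    return accessible >= na

N, Nc, Na = 7, 4, 4
assert Na + Nc > N               # the quorum constraint holds
assert can_commit(5, Nc)         # a 5-node partition may commit
assert not can_commit(3, Nc)     # the 3-node partition can neither commit...
assert not can_abort(3, Na)      # ...nor abort: it stays blocked
```

Note the trade-off the two example configurations make: lowering Na (to 3 in the second example) makes aborting easier in small partitions, at the price of requiring a larger commit-quorum.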

Conclusions
- Nearly all commercial relational database systems offer some form of
  distribution: client/server at a minimum.
- Only a few commercial object-oriented database systems support
  distribution beyond N clients and 1 server.
Future research directions:
- distributed data architecture and placement schemes (app-influenced)
- distributed databases as part of Internet applications
- continued service during disconnection or failure:
  - mobile systems accessing databases
  - relaxed transaction models for semantically-correct continued
    operation

