Quick-and-Easy Deployment Of A Ceph Storage Cluster


Quick-and-Easy Deployment of a Ceph Storage Cluster with SLES
With a look at SUSE Studio, Manager and Build Service

Jan Kalcic, Sales Engineer, jkalcic@suse.com
Flavio Castelli, Senior Software Engineer, fcastelli@suse.com

Agenda

‒ Ceph Introduction
‒ System Provisioning with SLES (SUSE Studio)
‒ System Provisioning with SUMa (SUSE Manager)

Ceph Introduction

What is Ceph

Open-source software-defined storage
‒ It delivers object, block, and file storage in one unified system
‒ It runs on commodity hardware
  ‒ to provide an infinitely scalable Ceph Storage Cluster
  ‒ where nodes communicate with each other to replicate and redistribute data dynamically
‒ It is based upon RADOS
  ‒ Reliable, Autonomic, Distributed Object Store
  ‒ Self-healing, self-managing, intelligent storage nodes
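Since RADOS is the layer everything else builds on, the quickest way to see it in action on a running cluster is the stock rados CLI. A minimal sketch; the pool and object names (demo-pool, hello-object) are placeholders:

# create a pool, store one object in it, then list the pool's contents
ceph osd pool create demo-pool 64                    # 64 placement groups
echo "hello ceph" > /tmp/hello.txt
rados -p demo-pool put hello-object /tmp/hello.txt
rados -p demo-pool ls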

Ceph Components

Ceph Storage Cluster
‒ Monitor
‒ Object Storage Device (OSD)
‒ Ceph Metadata Server (MDS)

Ceph Clients
‒ Ceph Block Device (RBD)
‒ Ceph Object Storage (RGW)
‒ Ceph Filesystem
‒ Custom implementation
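Each client consumes the same RADOS cluster through a different interface. A hedged sketch of how the block and filesystem clients are exercised from a client node (the image, pool and mount-point names are placeholders; RGW is consumed over its S3/Swift-compatible HTTP API rather than a local command):

# Ceph Block Device (RBD): create a 1 GB image and map it to a local block device
rbd create demo-image --size 1024 --pool demo-pool
sudo rbd map demo-image --pool demo-pool             # appears under /dev/rbd*

# Ceph Filesystem: mount via the kernel client (assumes the admin key was
# saved to /etc/ceph/admin.secret)
sudo mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret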

Ceph Storage Cluster

Ceph Monitor
‒ It maintains a master copy of the cluster map (i.e. cluster members, state, changes, and overall health of the cluster)

Ceph Object Storage Device (OSD)
‒ It interacts with a logical disk (e.g. LUN) to store data (i.e. handle the read/write operations on the storage disks)

Ceph Metadata Server (MDS)
‒ It provides the Ceph Filesystem service. Its purpose is to store filesystem metadata (directories, file ownership, access modes, etc.) in high-availability Ceph Metadata Servers
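The maps the monitor maintains can be inspected with the standard ceph CLI on a running cluster; a quick sketch (the mds subcommand reflects the releases current when this deck was written):

ceph mon dump    # monitor map: members and their addresses
ceph osd dump    # OSD map: OSDs, pools, replica counts
ceph mds dump    # MDS map (newer releases use "ceph fs dump")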

Architectural Overview
(two architecture diagrams; no transcribable text)

Deployment Overview

All Ceph clusters require:
‒ at least one monitor
‒ at least as many OSDs as copies of an object stored on the cluster

Bootstrapping the initial monitor is the first step
‒ This also sets important criteria for the cluster (i.e. number of replicas for pools, number of placement groups per OSD, heartbeat intervals, etc.)

Add further Monitors and OSDs to expand the cluster
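Whether a running cluster meets these requirements can be checked with the stock ceph CLI; a small sketch (the pool name rbd is the default data pool on the releases this deck targets):

ceph mon stat                 # monitor count and quorum membership
ceph osd stat                 # how many OSDs exist, are up, and are in
ceph osd pool get rbd size    # replica count the OSD count must satisfy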

Monitor Bootstrapping

On the mon node, create /etc/ceph/ceph.conf; fill in the fsid with a fresh UUID (uuidgen), and set "mon initial members" and "mon host" to your monitor's hostname and IP:

[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon initial members = node1
mon host = 192.168.0.1
public network = 192.168.0.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
filestore xattr use omap = true
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1

Create a keyring for your cluster and generate a monitor secret key:

ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
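A hypothetical two-line helper for the fsid step, assuming ceph.conf already contains an fsid line to overwrite:

FSID=$(uuidgen)                                                  # fresh cluster id
sudo sed -i "s/^fsid = .*/fsid = ${FSID}/" /etc/ceph/ceph.conf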

Monitor Bootstrapping (cont.)

Generate an administrator keyring, generate a client.admin user and add the user to the keyring:

ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid 0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'

Add the client.admin key to the ceph.mon.keyring:

ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring

Generate a monitor map using the hostname(s), host IP address(es) and the FSID, and save it as /tmp/monmap:

monmaptool --create --add node1 192.168.0.1 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
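Before feeding the map to the monitor it can be sanity-checked with monmaptool's standard print flag:

monmaptool --print /tmp/monmap    # shows epoch, fsid and the monitors added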

Monitor Bootstrapping (cont.)

Create a default data directory (or directories) on the monitor host(s):

sudo mkdir /var/lib/ceph/mon/ceph-node1

Populate the monitor daemon(s) with the monitor map and keyring:

ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

Start the monitor(s):

sudo /etc/init.d/ceph start mon.node1

Verify that the monitor is running:

ceph -s
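If ceph -s hangs or reports no quorum, the standard quorum query gives more detail:

ceph quorum_status --format json-pretty    # quorum members, leader, monmap
ceph mon stat                              # one-line monitor summary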

Adding OSDs

Once you have your initial monitor(s) running, you should add OSDs
‒ Your cluster cannot reach an "active + clean" state until it has enough OSDs to store the configured number of object replicas (e.g. osd pool default size = 2 requires at least two OSDs)
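A hedged sketch of the long-form manual OSD add, following the upstream Ceph documentation of this era (the data disk /dev/sdb1, host node1, and CRUSH weight 1.0 are assumptions for illustration):

OSD_ID=$(ceph osd create)                        # allocate the next OSD id
sudo mkdir /var/lib/ceph/osd/ceph-${OSD_ID}
sudo mkfs -t xfs /dev/sdb1                       # assumed dedicated data disk
sudo mount /dev/sdb1 /var/lib/ceph/osd/ceph-${OSD_ID}
ceph-osd -i ${OSD_ID} --mkfs --mkkey             # initialise data dir and key
ceph auth add osd.${OSD_ID} osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-${OSD_ID}/keyring
ceph osd crush add osd.${OSD_ID} 1.0 host=node1  # place the OSD in the CRUSH map
sudo /etc/init.d/ceph start osd.${OSD_ID}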
