A month ago I read the news about the Beta 1 release of XenServer Dundee. This release is based on CentOS 7. In this post I will cover:
*Installing Ceph RBD on one node with XenServer Dundee Beta 1.
*Creating an image and mapping it to the node.
*Activating the mapped image in XenCenter.
Hardware configuration.
Node: one HDD for the OS, one HDD for Ceph storage, 2x Xeon 56XX, 24 GB RAM, 2x 1 Gbit/s Ethernet
Management server: one HDD, 8 GB RAM, 1 Gbit/s Ethernet, Windows 2012 R2
Network configuration.
It doesn't matter; only one Ethernet port was used.
I am using 192.168.5.119/23 for the node and 192.168.4.197 for XenCenter.
Installing XenCenter and XenServer.
Simply install the Windows 2012 R2 OS on the XenCenter management server, configure the network with an IP address, and then install XenCenter.
Also install XenServer to the first HDD using the ISO, and configure the network with an IP address. The hostname is xenserver-test.
Set up the CentOS-Base, CentOS-Updates and CentOS-Extras repositories using real base URLs.
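The exact mirror URLs will vary by location; as a sketch (the baseurl below is an assumption — substitute a mirror that works for you), a base repo entry can be created the same way as the Ceph repository:

```shell
# Hypothetical CentOS-Base entry; the mirror URL is an assumption.
mkdir -p /etc/yum.repos.d
cat << EOT > /etc/yum.repos.d/CentOS-Base.repo
[base]
name=CentOS-7 - Base
baseurl=http://mirror.centos.org/centos/7/os/x86_64/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=1
EOT
```

The Updates and Extras entries follow the same pattern with their own baseurl lines.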
Set up the Ceph repository:
cat << EOT > /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for Citrix
baseurl=http://download.ceph.com/rpm/el7/x86_64/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
EOT
Import the GPG key:
rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
Install the Ceph packages:
yum install ceph-common ceph ceph-deploy ntp -y
Temporarily edit /etc/centos-release for deployment, because ceph-deploy checks the distribution string. See Supported distributions:
cp /etc/centos-release /etc/centos-release.old
echo "CentOS Linux release 7.1.1503 (Core)" > /etc/centos-release
Deploying the monitor:
cd /etc/ceph
ceph-deploy new xenserver-test
ceph-deploy mon create-initial
Check that the monitor is running:
ceph -s
    cluster 37a9abb2-c7ba-45e9-aaf2-6486d7099819
     health HEALTH_ERR
            64 pgs stuck inactive
            64 pgs stuck unclean
            no osds
     monmap e1: 1 mons at {xenserver-test=192.168.5.119:6789/0}
            election epoch 2, quorum 0 xenserver-test
     osdmap e1: 0 osds: 0 up, 0 in
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
Deploying the OSD:
cd /etc/ceph
ceph-deploy gatherkeys xenserver-test
ceph-deploy disk zap xenserver-test:sdb
ceph-deploy osd prepare xenserver-test:sdb
Recreating the rbd pool (with only a single OSD, the replica count must be reduced to 1):
ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
ceph osd pool create rbd 128
ceph osd pool set rbd min_size 1
ceph osd pool set rbd size 1
Check that all OSDs are running:
ceph -s
    cluster 37a9abb2-c7ba-45e9-aaf2-6486d7099819
     health HEALTH_OK
     monmap e1: 1 mons at {xenserver-test=192.168.5.119:6789/0}
            election epoch 2, quorum 0 xenserver-test
     osdmap e12: 1 osds: 1 up, 1 in
      pgmap v19: 128 pgs, 1 pools, 0 bytes data, 0 objects
            36268 kB used, 413 GB / 413 GB avail
                 128 active+clean
Restore the /etc/centos-release file:
cp /etc/centos-release.old /etc/centos-release
Create a 50 GB image in the rbd pool:
rbd -p rbd create testimage --size 50000
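As a quick sanity check, the new image can be inspected with `rbd info` (the exact output will vary with your cluster, so none is shown here):

```shell
rbd -p rbd info testimage
```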
Map the image to the host:
rbd map rbd/testimage --id admin --key AQBSJlNWtX/BHxAAcJ/yNe31rXjzmbX+Uxikug==
The key was taken from /etc/ceph/ceph.client.admin.keyring.
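Rather than copying the key out of the keyring file by hand, the same key can be printed directly (assuming the admin keyring is in place):

```shell
ceph auth get-key client.admin
```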
The following device appeared:
/dev/rbd1
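The mapping can be double-checked with `rbd showmapped`; the output below is illustrative only, assuming the image landed on rbd1 as above:

```shell
rbd showmapped
id pool image     snap device
1  rbd  testimage -    /dev/rbd1
```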