Introduction to Ceph
According to Wikipedia, "Ceph is a free software storage platform designed to present object, block, and file storage from a single distributed computer cluster. Ceph's main goals are to be completely distributed without a single point of failure, scalable to the exabyte level, and freely available."
More information about Ceph can be found on the official Ceph website.
Lab Buildout
In my homelab I am building out a small Ceph cluster for testing and learning purposes. My cluster consists of 4 virtual machines, as shown below. I plan to use this cluster primarily as a backend for OpenStack.
Monitor Servers | |
---|---|
Count | 1 |
CPU | 2 |
Memory (GB) | 2 |
Primary Disk (GB) | 16 |

OSD Servers | |
---|---|
Count | 3 |
CPU | 2 |
Memory (GB) | 2 |
Primary Disk (GB) | 16 |
OSD Disk 1 (GB) | 10 |
OSD Disk 2 (GB) | 10 |
OSD Disk 3 (GB) | 10 |
SSD Journal (GB) | 6 |
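Since the single 6 GB SSD journal is shared by the three OSD disks, each OSD ends up with roughly a 2 GB journal partition, which matches what ceph-disk carves out later in this post. A quick sanity check of the sizing math:

```shell
# Per-OSD journal size: one 6 GB SSD shared across 3 OSDs.
SSD_JOURNAL_GB=6
OSD_COUNT=3
echo $((SSD_JOURNAL_GB / OSD_COUNT))   # prints 2 (GB per journal partition)
```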
Troubleshooting OSD Creation
On my monitor server, which also serves as my admin node, I run the following command to remove all existing partitioning from the disks I intend to use for Ceph.
# for disk in sdb sdc sdd sde; do ceph-deploy disk zap osd01:/dev/$disk; done
Next, I prepare the three OSD disks, using /dev/sde as the shared journal device:
# ceph-deploy osd prepare osd01:sdb:sde osd01:sdc:sde osd01:sdd:sde
When the newly prepared OSDs failed to come up cleanly, I stopped Ceph, unmounted the stale OSD mount points, and re-read the partition tables on each disk:
# /etc/init.d/ceph stop
# umount /var/lib/ceph/osd/ceph-7 /var/lib/ceph/osd/ceph-8 /var/lib/ceph/osd/ceph-9
# for disk in sdb sdc sdd sde; do partx -a /dev/$disk; done
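Loops like the zap loop above are destructive, so before pointing them at real disks I like to dry-run them with echo in place of the actual command to see exactly what would be executed (a sketch; the osd01 host and device names match the lab buildout above):

```shell
# Dry run: print the ceph-deploy zap commands instead of executing them.
for disk in sdb sdc sdd sde; do
  echo ceph-deploy disk zap osd01:/dev/$disk
done
```

Once the printed commands look right, dropping the echo runs them for real.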
Additional Troubleshooting
[root@osd02 ceph]# /etc/init.d/ceph status
=== osd.3 ===
osd.3: not running.
=== osd.13 ===
osd.13: running {"version":"0.94.1"}
=== osd.14 ===
osd.14: running {"version":"0.94.1"}
[root@osd02 ceph]# /etc/init.d/ceph stop
=== osd.14 ===
Stopping Ceph osd.14 on osd02...kill 224396...kill 224396...done
=== osd.13 ===
Stopping Ceph osd.13 on osd02...kill 223838...kill 223838...done
=== osd.3 ===
Stopping Ceph osd.3 on osd02…done
[root@osd02 ceph]# umount /var/lib/ceph/osd/ceph-3
[root@osd02 ceph]# ceph-disk -v prepare --fs-type xfs --cluster ceph -- /dev/sdb /dev/sde
[root@osd02 ceph]# fdisk -l /dev/sde
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sde: 6442 MB, 6442450944 bytes, 12582912 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt

 #      Start        End   Size  Type     Name
 1       2048    4098047     2G  unknown  ceph journal
 2    4098048    8194047     2G  unknown  ceph journal
 3    8194048   12290047     2G  unknown  ceph journal
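The fdisk output can be cross-checked against its own sector ranges: each journal partition spans 4,096,000 sectors of 512 bytes, which works out to 2000 MiB, consistent with the 2G size column:

```shell
# Size of the first journal partition, computed from its fdisk sector range.
START=2048
END=4098047
SECTOR_BYTES=512
echo $(( (END - START + 1) * SECTOR_BYTES / 1024 / 1024 ))   # prints 2000 (MiB)
```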
[root@osd02 ceph]# /etc/init.d/ceph status
=== osd.13 ===
osd.13: running {"version":"0.94.1"}
=== osd.14 ===
osd.14: running {"version":"0.94.1"}
=== osd.18 ===
osd.18: running {"version":"0.94.1"}