CephAdm Deployment (using Ubuntu-Ceph Images)
Cephadm is a utility used to deploy and maintain containerised Ceph. The downside is that the default images used by cephadm are not based on Ubuntu. In this article, we look at how cephadm can be used to deploy containerised Ceph from jammy-quincy images.
NOTE: At the time of writing this document, the (under-testing) ubuntu-ceph images were hosted here.
Prepare a Host
For demonstration we picked an instance with the following specifications; additionally, 3 Cinder volumes were attached for the OSD deployment that follows later:
+------+------+-----------+-------+
| RAM  | Disk | Ephemeral | VCPUs |
+------+------+-----------+-------+
| 4096 | 40   | 40        | 2     |
+------+------+-----------+-------+
Install CephAdm and Bootstrap
sudo apt install cephadm
Bootstrap: we are using custom images hosted on Docker Hub; if desired, any other image can be used.
The complete bootstrap output is available here.
Note: Either Podman or Docker is required to be available on the host.
sudo cephadm --image utkarshhere/ceph:quincy-jammy bootstrap --mon-ip <mon_ip>
NOTE: <mon_ip> should be the IP of the first host of the cluster (as per the Ceph docs).
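The <mon_ip> placeholder must be a routable address of the bootstrap host. One way to list candidate addresses is sketched below (interface names and available tooling vary by host, so treat this as an assumption, not part of the cephadm workflow):

```shell
# List global-scope IPv4 addresses on this host; pick the one the
# other cluster hosts can reach and pass it to --mon-ip.
ip -4 -o addr show scope global | awk '{print $2, $4}'
```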
Verify Shell Access and Currently Running Services
sudo cephadm shell
root@host:/# ceph orch ls
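Right after bootstrap, only the mon, mgr, and a few supporting services will be listed. Two further read-only commands are useful for checking the state of the freshly bootstrapped cluster from the same shell:

```shell
# Overall cluster health and which mons/mgrs are up.
ceph -s

# Per-daemon view: one row per running containerised daemon.
ceph orch ps
```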
Check available devices for OSD addition and provision them
ceph orch device ls
ceph orch apply osd --all-available-devices
Note: using --all-available-devices will automatically provision any storage device added to the host in the future as an OSD. If this is not desired, specific storage devices can be specified instead.
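If automatic consumption of every new device is not wanted, an OSD can instead be created on a named device. A sketch (the host name and device path below are placeholders for illustration):

```shell
# Provision a single OSD on one specific device of one specific host.
ceph orch daemon add osd <host>:/dev/vdb
```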
For Non-HA Deployment (Failure Domain needs to be changed)
The default failure domain for a cephadm deployment is “Host”, so if you deploy a single-host Ceph system the PGs will not become active until the failure domain is changed to “OSD”. The process to do that is as follows:
- Dump the current CRUSH rules and check whether the failure domain is set to Host.
root@host:/# ceph osd crush rule dump
- Create a new default rule with OSD as the failure domain.
root@host:/# ceph osd crush rule create-replicated replicated_rule_osd default osd
- Change the crush_rule for existing pools and delete the older default rule. This prevents the older rule from being set as the default for pools added in the future.
root@host:/# ceph osd pool ls
root@host:/# ceph osd pool set <pool> crush_rule replicated_rule_osd
- Delete the old rule:
root@host:/# ceph osd crush rule rm replicated_rule
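The change can be verified with the same read-only commands used above: list the remaining rules and confirm each pool now points at the new one.

```shell
# The old replicated_rule should no longer appear here.
ceph osd crush rule ls

# Every pool should now report crush_rule: replicated_rule_osd.
for pool in $(ceph osd pool ls); do
    ceph osd pool get "$pool" crush_rule
done
```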
Verification of Cluster (using RGW)
Add RGW service
root@host:/# ceph orch apply rgw <name>
Note: The user is required to configure the zone (created by cephadm) with its endpoints, and to create a user through which the zone can be accessed.
- Configure Zone for endpoints:
root@host:/# radosgw-admin zone modify --rgw-zone default --endpoints http://<fqdn>:80
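To confirm the endpoint was recorded, the zone definition can be dumped back out:

```shell
# The endpoints list in the output should now contain http://<fqdn>:80.
radosgw-admin zone get --rgw-zone default
```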
- Create a new user (the credentials of this user will be used for access):
root@host:/# radosgw-admin user create --uid=master-client --display-name=master
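The create command prints a JSON document containing the generated S3 keys, which the S3 client will need. A minimal sketch of pulling them out; the inlined JSON below is a stand-in for the real radosgw-admin output, with made-up example key values:

```shell
# In practice: out=$(radosgw-admin user info --uid=master-client)
out='{"user_id":"master-client","keys":[{"user":"master-client","access_key":"EXAMPLEACCESS","secret_key":"EXAMPLESECRET"}]}'

# Extract the first access/secret key pair without external JSON tools.
access_key=$(printf '%s' "$out" | sed -n 's/.*"access_key":"\([^"]*\)".*/\1/p')
secret_key=$(printf '%s' "$out" | sed -n 's/.*"secret_key":"\([^"]*\)".*/\1/p')
echo "$access_key $secret_key"
```

For anything beyond a quick check, a proper JSON parser such as jq is a safer choice than sed.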
- Use any S3 client to perform IO.
For demonstration we used the AWS CLI.
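A sketch of such a check with the AWS CLI; the FQDN and bucket name are placeholders, and the keys are the ones from the user created above:

```shell
# Credentials from the radosgw-admin user create output.
export AWS_ACCESS_KEY_ID=<access_key>
export AWS_SECRET_ACCESS_KEY=<secret_key>

# Point the AWS CLI at the RGW endpoint and exercise basic S3 IO.
aws --endpoint-url http://<fqdn>:80 s3 mb s3://test-bucket
echo "hello" > /tmp/obj.txt
aws --endpoint-url http://<fqdn>:80 s3 cp /tmp/obj.txt s3://test-bucket/
aws --endpoint-url http://<fqdn>:80 s3 ls s3://test-bucket/
```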