Using CephAdm to deploy custom ubuntu-ceph images in a containerised manner

CephAdm Deployment (using Ubuntu-Ceph Images)

Overview


Cephadm is a utility used to deploy and maintain containerised Ceph. The downside is that the default images used by cephadm are not based on Ubuntu. In this article, we look into how cephadm can be used to deploy containerised Ceph based on jammy-quincy images.

NOTE: At the time of writing this document, the ubuntu-ceph images (still under testing) were hosted here.

Procedure


Prepare a Host


For demonstration, we picked an instance with the following specifications. Additionally, 3 Cinder volumes were attached for the OSD deployment covered later (a sketch of creating and attaching them appears after the table):


+----------+-----------+----------------+-------+
| RAM (MB) | Disk (GB) | Ephemeral (GB) | VCPUs |
+----------+-----------+----------------+-------+
| 4096     | 40        | 40             | 2     |
+----------+-----------+----------------+-------+
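For reference, creating and attaching the Cinder volumes with the OpenStack CLI looks roughly like this (a sketch only; the volume size, volume name and instance name are illustrative, and the step is repeated for each of the 3 volumes):

openstack volume create --size 10 ceph-osd-vol-1

openstack server add volume <instance-name> ceph-osd-vol-1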

Install CephAdm and Bootstrap


sudo apt install cephadm

Bootstrap: we are using custom images hosted on Docker Hub; any other image can be used if desired.

Complete Output here

Note: Either Podman or Docker is required to be available on the host.


sudo cephadm --image utkarshhere/ceph:quincy-jammy bootstrap --mon-ip <mon_ip>

NOTE: <mon_ip> should be the IP of the first host of the cluster (as per the Ceph docs).

Verify Shell Access and currently running services



sudo cephadm shell

root@host:/# ceph orch ls
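To confirm that the daemons are actually running the custom image, cephadm can also list the daemons on the host together with the container image each one uses. This is a quick sketch, run from the host rather than inside the shell; the grep is just one way of filtering the JSON output:

sudo cephadm ls | grep container_image_name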

Check available devices for OSD addition and provision them



ceph orch device ls

ceph orch apply osd --all-available-devices

Note: using --all-available-devices will automatically turn any storage device added to the host in the future into an OSD. If this is not desired, specific storage devices can be specified instead, as shown below.
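For example, a single device on a given host can be added as an OSD along these lines (a sketch; the host and device names are placeholders for your own):

ceph orch daemon add osd <host>:<device>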

For Non-HA Deployment (Failure Domain needs to be changed)


The default failure domain for a cephadm deployment is “host”. Thus, if you deploy a single-host Ceph system, the PGs will not become active until the failure domain is changed to “osd”. The process to do that is as follows:

  1. Dump current crush rules and check if the failure domain is set to Host.

root@host:/# ceph osd crush rule dump

  2. Create a new default rule with OSD as failure domain.

root@host:/# ceph osd crush rule create-replicated replicated_rule_osd default osd

  3. Change crush_rule for existing pools, then delete the older default rule (next step). This prevents the older rule from being applied to pools added in the future.

root@host:/# ceph osd pool ls

root@host:/# ceph osd pool set <pool> crush_rule replicated_rule_osd

  4. Delete the old rule:

root@host:/# ceph osd crush rule rm replicated_rule
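To confirm the change took effect, the rule assigned to a pool and the PG state can be checked (replace <pool> with one of the pools listed earlier):

root@host:/# ceph osd pool get <pool> crush_rule

root@host:/# ceph pg stat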

Verification of Cluster (using RGW)


Add RGW service


root@host:/# ceph orch apply rgw <name>

Note: The user is required to configure endpoints for the zone (created by cephadm), and to create a user through which the zone can be accessed.

  1. Configure the zone endpoints:

root@host:/# radosgw-admin zone modify --rgw-zone default --endpoints http://<fqdn>:80

  2. Create a new user (the credentials of this user will be used for access):

root@host:/# radosgw-admin user create --uid=master-client --display-name=master

  3. Use any S3 client to perform I/O.

For demonstration, we used the AWS CLI, as sketched below.
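Roughly, that boils down to configuring the CLI with the access and secret keys printed by the user create step above (they can be retrieved again with radosgw-admin user info --uid=master-client) and pointing it at the RGW endpoint; the bucket name below is illustrative:

aws configure

aws --endpoint-url http://<fqdn>:80 s3 mb s3://test-bucket

aws --endpoint-url http://<fqdn>:80 s3 ls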

JFTR, at the beginning you’ll also need to install cephadm itself:

apt install cephadm


Would be great if there were Arm64-based containers available.