File storage

Charmed Ceph supports two types of access to file storage: CephFS and NFS.

The Ceph File System, or CephFS, is a POSIX-compliant file system built on top of Ceph’s distributed object store, RADOS.

NFS Ganesha is an NFS server (refer to Sharing File Systems with NFS) that runs in user space, rather than in the operating system kernel, and is used here to present CephFS shares over NFS.

The ceph-fs charm deploys the Metadata Server daemon (MDS), the component that manages CephFS metadata. The charm is deployed within the context of an existing Charmed Ceph cluster.

Highly available CephFS is achieved by deploying multiple MDS servers (i.e. multiple ceph-fs application units).
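
For example, once the application is deployed (see the next section), extra MDS capacity can be added by scaling it out; placement directives can be supplied as needed:

juju add-unit -n 1 ceph-fs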

The ceph-nfs charm deploys nfs-ganesha, the software used to serve NFS. The charm is deployed alongside CephFS.

CephFS deployment

To deploy a three-node MDS cluster in a pre-existing Ceph cluster:

juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 ceph-fs
juju add-relation ceph-fs:ceph-mds ceph-mon:mds

Here the three ceph-fs units are containerised, with new containers placed on existing machines 0, 1, and 2.

CephFS is now fully set up.
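
As an optional check, the MDS state can be queried from any monitor; with three ceph-fs units, one MDS is typically active and the others act as standbys (exact output varies by release):

juju ssh ceph-mon/0 "sudo ceph mds stat"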

CephFS client usage

This section provides optional instructions for verifying the CephFS service by setting up a simple client environment. Deploy the client using the steps provided in the Client setup appendix.

Note:

These instructions rely on the native CephFS support in the Linux kernel (v4.x or later).
The kernel driver allows the client to mount CephFS as a regular file system.
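
A quick, optional way to confirm the ceph kernel module is available on the client (standard Ubuntu kernels ship it as a loadable module):

juju ssh ceph-client/0 "modinfo ceph | head -n 3"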

An example deployment will have a juju status output similar to the following:

Model  Controller     Cloud/Region     Version  SLA          Timestamp
ceph   my-controller  my-maas/default  2.8.1    unsupported  19:34:16Z

App          Version  Status  Scale  Charm     Store       Rev  OS      Notes
ceph-fs      15.2.3   active      3  ceph-fs   jujucharms   24  ubuntu  
ceph-mon     15.2.3   active      3  ceph-mon  jujucharms   49  ubuntu  
ceph-osd     15.2.3   active      3  ceph-osd  jujucharms  304  ubuntu  
ceph-client  20.04    active      1  ubuntu    jujucharms   15  ubuntu  

Unit            Workload  Agent  Machine  Public address  Ports  Message
ceph-client/0*  active    idle   3        10.0.0.240             ready
ceph-fs/0       active    idle   0/lxd/0  10.0.0.245             Unit is ready
ceph-fs/1       active    idle   1/lxd/0  10.0.0.246             Unit is ready
ceph-fs/2*      active    idle   2/lxd/0  10.0.0.241             Unit is ready
ceph-mon/0      active    idle   0/lxd/1  10.0.0.247             Unit is ready and clustered
ceph-mon/1      active    idle   1/lxd/1  10.0.0.242             Unit is ready and clustered
ceph-mon/2*     active    idle   2/lxd/1  10.0.0.249             Unit is ready and clustered
ceph-osd/0      active    idle   0        10.0.0.229             Unit is ready (2 OSD)
ceph-osd/1*     active    idle   1        10.0.0.230             Unit is ready (2 OSD)
ceph-osd/2      active    idle   2        10.0.0.252             Unit is ready (2 OSD)

The client host is represented by the ceph-client/0 unit.

Verify that the filesystem name set up by the ceph-fs charm is ‘ceph-fs’:

juju ssh ceph-mon/0 "sudo ceph fs ls"

Output:

name: ceph-fs, metadata pool: ceph-fs_metadata, data pools: [ceph-fs_data ]
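
For a more detailed, optional check of pool usage and MDS ranks, the fs status command can also be run:

juju ssh ceph-mon/0 "sudo ceph fs status ceph-fs"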

Create a CephFS user (‘test’) with read/write permissions at the root of the ‘ceph-fs’ filesystem, collect the user’s keyring file, and transfer it to the client:

juju ssh ceph-mon/0 "sudo ceph fs authorize ceph-fs client.test / rw" \
  | tee ceph.client.test.keyring

juju scp ceph.client.test.keyring ceph-client/0:
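
Optionally, inspect the capabilities that were granted to the new user; the mds caps should allow rw access to the ceph-fs filesystem:

juju ssh ceph-mon/0 "sudo ceph auth get client.test"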

Connect to the client:

juju ssh ceph-client/0

From the CephFS client, configure access using the keyring file and set the correct ownership and permissions:

sudo mv ~ubuntu/ceph.client.test.keyring /etc/ceph
sudo chmod 600 /etc/ceph/ceph.client.test.keyring
sudo chown root: /etc/ceph/ceph.client.test.keyring

Note:

The key installed on a client host authorises access to the CephFS filesystem for the host itself, and not to a particular user.
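
The kernel mount used below also needs the cluster's monitor addresses. These are normally read from /etc/ceph/ceph.conf on the client; if that file is not already in place, a minimal sketch using the monitor addresses from the status output above would be:

[global]
mon host = 10.0.0.247,10.0.0.242,10.0.0.249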

Mount the CephFS filesystem and create a test file:

sudo mkdir /mnt/cephfs
sudo mount -t ceph :/ /mnt/cephfs -o name=test
sudo mkdir /mnt/cephfs/work
sudo chown ubuntu: /mnt/cephfs/work
touch /mnt/cephfs/work/test
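
To make the mount persist across reboots, an equivalent /etc/fstab entry can be added (a sketch; adjust options as needed):

:/    /mnt/cephfs    ceph    name=test,noatime,_netdev    0    0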

CephNFS deployment

Note:

The ceph-nfs charm is currently in tech-preview.

To deploy a three-node CephNFS cluster in a pre-existing (Quincy) Ceph cluster:

juju deploy --channel quincy/beta -n 3 --to lxd:0,lxd:1,lxd:2 --config vip=10.0.0.101 ceph-nfs
juju deploy hacluster
juju add-relation ceph-nfs hacluster
juju add-relation ceph-nfs:ceph-client ceph-mon:client

Here the three ceph-nfs units are containerised, with new containers placed on existing machines 0, 1, and 2.

CephNFS is now fully set up.
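
As an optional check, filter juju status on the new applications; all ceph-nfs and hacluster units should eventually report as ready:

juju status ceph-nfs hacluster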

CephNFS client usage

This section provides optional instructions for verifying the CephNFS service by setting up a simple client environment. The only client-side requirement is the nfs-common package.
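
If the package is not already present, it can be installed on the client (shown here against the example ceph-client/0 unit):

juju ssh ceph-client/0 "sudo apt install -y nfs-common"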

An example deployment will have a juju status output similar to the following:

Model  Controller     Cloud/Region     Version  SLA          Timestamp
ceph   my-controller  my-maas/default  2.8.1    unsupported  19:34:16Z

App          Version  Status  Scale  Charm      Channel       Rev  Exposed  Message
ceph-client  20.04    active      2  ubuntu     stable         18  no       
ceph-fs      16.2.7   active      2  ceph-fs    pacific/edge   47  no       Unit is ready
ceph-mon     16.2.7   active      3  ceph-mon   pacific/edge   93  no       Unit is ready and clustered
ceph-nfs              active      2  ceph-nfs                   0  no       Unit is ready
ceph-osd     16.2.7   active      3  ceph-osd   pacific/edge  528  no       Unit is ready (2 OSD)
hacluster             active      2  hacluster  2.0.3/edge     83  no       Unit is ready and clustered

Unit            Workload  Agent  Machine  Public address  Ports  Message
ceph-client/0*  active    idle   3        10.0.0.240             ready
ceph-fs/0*      active    idle   0        10.0.0.229           Unit is ready
ceph-fs/1       active    idle   1        10.0.0.211           Unit is ready
ceph-mon/0*     active    idle   2        10.0.0.85            Unit is ready and clustered
ceph-mon/1      active    idle   3        10.0.0.124           Unit is ready and clustered
ceph-mon/2      active    idle   4        10.0.0.221           Unit is ready and clustered
ceph-nfs/0      active    idle   5        10.0.0.143           Unit is ready
  hacluster/1   active    idle            10.0.0.143           Unit is ready and clustered
ceph-nfs/1*     active    idle   6        10.0.0.99            Unit is ready
  hacluster/0*  active    idle            10.0.0.99            Unit is ready and clustered
ceph-osd/0*     active    idle   7        10.0.0.149           Unit is ready (2 OSD)
ceph-osd/1      active    idle   8        10.0.0.38            Unit is ready (2 OSD)
ceph-osd/2      active    idle   9        10.0.0.100           Unit is ready (2 OSD)

The client host is represented by the ceph-client/0 unit.

Create an NFS share on the leader ceph-nfs unit:

juju run-action --wait ceph-nfs/1 create-share name=test-share allowed-ips=10.0.0.240 size=10

Output:

unit-ceph-nfs-1:
  UnitId: ceph-nfs/1
  id: "18"
  results:
    ip: 10.0.0.101
    message: Share created
    path: /volumes/_nogroup/test-share/b524fc68-7811-4e0d-82a8-889318d010c6
  status: completed
  timing:
    completed: 2022-04-22 07:24:52 +0000 UTC
    enqueued: 2022-04-22 07:24:46 +0000 UTC
    started: 2022-04-22 07:24:48 +0000 UTC
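
The full set of actions supported by the deployed ceph-nfs revision (for further share management) can be listed with:

juju actions ceph-nfs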

Mount the NFS filesystem and create a file:

sudo mkdir /mnt/ceph_nfs
sudo mount -t nfs -o nfsvers=4.1,proto=tcp 10.0.0.101:/volumes/_nogroup/test-share/b524fc68-7811-4e0d-82a8-889318d010c6 /mnt/ceph_nfs
sudo mkdir /mnt/ceph_nfs/work
sudo chown ubuntu: /mnt/ceph_nfs/work
touch /mnt/ceph_nfs/work/test
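
As with CephFS, the NFS mount can be made persistent with an /etc/fstab entry (a sketch based on the share path returned by the create-share action):

10.0.0.101:/volumes/_nogroup/test-share/b524fc68-7811-4e0d-82a8-889318d010c6  /mnt/ceph_nfs  nfs  nfsvers=4.1,proto=tcp,_netdev  0  0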

Note:

At the time of writing, the ceph-nfs charm is only published to beta and edge channels, as can be seen with juju info:

$ juju info ceph-nfs
...
channels: |
  latest/stable:     –
  latest/candidate:  –
  latest/beta:       1  2022-05-12  (1)  6MB
  latest/edge:       1  2022-05-12  (1)  6MB
  quincy/stable:     –
  quincy/candidate:  –
  quincy/beta:       1  2022-05-12  (1)  6MB
  quincy/edge:       1  2022-05-12  (1)  6MB

The deploy command therefore requires an explicit channel, such as --channel quincy/beta or --channel beta. Deploying without one fails:

$ juju deploy ceph-nfs
ERROR selecting releases: no charm or bundle matching channel or platform; suggestions: beta with focal, jammy, edge with focal, jammy, quincy/edge with focal, jammy, quincy/beta with focal, jammy
