Deploying NetApp Trident with Charmed Kubernetes

Duration: 3:00

Trident is an open-source project for persistent storage of application containers, maintained by NetApp. It is implemented as an external provisioner controller that runs as a pod, monitoring volumes and fully automating the provisioning process. It is designed so that your users can take advantage of the underlying capabilities of your storage infrastructure without needing to know anything about it.

What you’ll learn

  • Deploy and configure NetApp OnTap on AWS
  • Access your NetApp OnTap Instance
  • Deploy NetApp Trident
  • Test NetApp storage

What you’ll need

  • Account credentials for AWS
  • A healthy Charmed Kubernetes cluster running on AWS

ⓘ If you do not have a Charmed Kubernetes cluster, you can refer to the following tutorial to spin up one in minutes. Charmed Kubernetes is a production-grade Kubernetes offering from Canonical which is fully compliant with the upstream project.

Prepare your cluster

Duration: 2:00

Make sure that your Kubernetes cluster is running and that the kubectl config is in ~/.kube/config. You can copy it from the master with:

juju scp kubernetes-master/0:config ~/.kube/config

We also want to enable privileged mode:

juju config kubernetes-master allow-privileged=true

Deploy NetApp OnTap

Duration: 5:00

For the purposes of this tutorial, the integration has been tested using NetApp OnTap through AWS. However, the instructions for deploying Trident with OnTap should be very similar for physical devices with SolidFire (Element), ONTAP (AFF/FAS/Select/Cloud), and SANtricity (E/EF-Series).

Launch OnTAP Cloud Manager

To deploy OnTAP we will use NetApp Cloud Manager, which simplifies the management of NetApp OnTAP and other storage products on the public cloud. You can also deploy OnTAP devices and configure them manually using SSH or scripts if you are familiar with the OnTAP commands.
Log in to the AWS console and select the region where you deployed your cluster with Juju; in my case it was us-east-1. To check which region your controller is in:

juju show-controller k8s | grep region

Go to EC2 and make sure you have a private key set up; if not, select ‘Create Key Pair’ and put the downloaded file in your .ssh directory.

Then hit ‘Launch instance’, go to AWS Marketplace, find ‘Cloud Manager - Manual Installation without access keys’ and hit ‘Select’:

Choose a flavor and hit ‘Review and Launch’. We will use the default configurations for now. Hit ‘Launch’ again and make sure you pick the right SSH keypair:

Once the cloud manager is deployed, you can access the web interface using the public IP address. Log in or create an account, put in a site name and you should see this:

Create an IAM user

The next step requires an IAM user for our Cloud Manager. Go to AWS IAM, hit ‘Users’, then ‘Add user’, input a user name, choose the ‘Programmatic access’ type, and hit ‘Next’:

We will need to define the permissions for our Cloud Manager. Fortunately, NetApp has already prepared policy documents for AWS, Azure, and GCP. All we need to do is choose ‘Attach existing policies directly’. Hit ‘Create policy’, go to the JSON tab, and paste the Standard Region policy for Cloud Manager from the AWS Marketplace. Then hit ‘Review policy’ and it should look like this:
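For orientation, the policy you paste has the standard IAM policy-document shape. The fragment below is illustrative only (the `ec2:Describe*` action is a placeholder, not part of NetApp's policy); always paste the full Standard Region policy text published by NetApp:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*"
      ],
      "Resource": "*"
    }
  ]
}
```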

After the policy is created you can choose it in the permissions for the user:

Hit ‘Next’ two times and your new user should look like this:

Hit ‘Create user’ and do not forget to download and save the security credentials.

Configure NetApp OnTap

Duration: 10:00

Subscribe to NetApp Cloud Manager automation platform

In order to continue, we will also need to subscribe to ‘Cloud Manager - Deploy & Manage NetApp Cloud Data Services’ on AWS, which turns on billing for all the services provided by NetApp. Click ‘Subscribe’, then ‘Set up your account’:

You will be redirected to the NetApp website. Name your subscription and save:

Create a new working environment

Now we can return to the Cloud Manager. Hit the ‘Create Cloud Volume ONTAP’ button. The Cloud Manager will automatically create and configure an OnTAP instance for us to use with Kubernetes. This will require the AWS Secret Key and Access Key that we saved in the previous step:

Put in your details and hit the checkbox to confirm that you have verified the IAM policy is associated with your user. If you get errors with your IAM credentials, it is likely that the account you used does not have the correct permissions to provision the OnTAP storage devices. Then put in the storage details; I used the Working Environment Name CDK (the cluster name) and the username admin. I would recommend setting a strong password. After that, hit ‘Continue’:

Disable the additional Cloud Compliance, Backup to S3, and Monitoring services, and hit ‘Continue’. Next, we select a location. My cluster is in us-east-1, Northern Virginia, so I will select that. Make sure you select the same subnet as your Kubernetes cluster so your instances have network connectivity to the OnTAP instances you deploy. Also make sure you use an SSH key pair you have full access to, as we will need to jump onto the OnTAP instance to check some information on the device after deployment:

The next option allows you to enable encryption out of the box for your storage. This is quite useful, especially if you need to meet GDPR requirements. However, for this configuration, we do not need to enable the encryption, so let’s select the ‘None’ option:

The next stage asks which licensing option you would like to use with the Cloud Manager, along with your support account credentials. I chose Pay-As-You-Go and skipped the second part, as we do not need a support account.
The next section asks which package you’d like to configure. For this tutorial, choose the POC/Small workloads package; for production, you may want to consider one of the other package types depending on your workload. For example, the highest-performance production workloads package can handle up to 36TB of storage. It is also possible to create a custom package based on your own requirements:

Now you need to create the volume. You can set its name, size, and protocol: let’s name the volume ‘cdkontap’ with a size of 200GB and the NFS protocol, leaving the other options at their defaults:

In the next section, choose Storage Efficiency, which enables thin provisioning, deduplication, and compression. After that, you will be given an overview of all the options you have entered. Once you are happy, make sure you tick both of the boxes, then hit the ‘Go’ button to start provisioning the storage:

Provisioning and automatic configuration of the ONTAP storage will take up to 25 minutes, so go grab a coffee and come back. Eventually, you should see a screen like this:

Accessing your NetApp OnTap Instance

Duration: 5:00

Your storage should now be accessible, but by default it is deployed onto a private IP range. You will need to SSH into the storage using an existing machine that has an interface on that network, or temporarily expose the management interface to the internet using a public IP address. Be careful when doing this: it is not a good idea to directly expose your services to the internet, especially if they contain production data, as this could potentially breach GDPR.
If you want to assign a public IP to your newly provisioned storage, head over to AWS, go to EC2, and select the region you provisioned your Kubernetes cluster and storage into. Once your storage has been deployed, you should see a machine named after whatever you called the storage at the start of these steps. For me, it’s called ‘cdkstack’:

Right-click the instance with the name of your storage back-end, hit ‘Networking’, and then go to ‘Manage IP addresses’. From here, you can see that the instance has two network interfaces. One is for management of the storage, which we will connect to through SSH; the other is used for cluster management, inter-cluster communication, and the data network, but it has 4 IP addresses and we are not sure which one is used for which purpose.
Copy the management interface name. In my case it was ‘eni-0baaf082efafbc501’:

eth0: eni-0baaf082efafbc501 - Interface for Node Management -

Click ‘Allocate an Elastic IP’; you will be redirected to EC2 Elastic IP addresses. Click ‘Allocate’, then choose the new IP address, hit ‘Actions’, and ‘Associate Elastic IP address’. Choose the ‘Network interface’ type, insert the name of the interface, and associate:

In a default deployment, OnTAP has several interfaces with different IP addresses for different purposes. We need to find out which IP addresses are responsible for the management LIF and the data LIF, and also check the SVM name. This information will be required in the next steps.

Once you are able to SSH to the machine, you should be greeted by the NetApp OnTap CLI. Check the network interfaces:

ssh admin@<your-instance-ip>
cdkstack::> network interface show

You should see an output like this:

            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
                         up/up   cdkstack-01   e0a     true
            cluster-mgmt up/up    cdkstack-01   e0b     true
            intercluster up/up    cdkstack-01   e0b     true
            iscsi        up/up   cdkstack-01   e0b     true
                         up/up   cdkstack-01   e0b     true
                         up/up   cdkstack-01   e0b     true
6 entries were displayed.

Also, check the volumes for the SVM name:

cdkstack::> volume show

Your output should be similar:

Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
          vol0         aggr0_cdkstack_01 
                                    online     RW      73.71GB    63.97GB    8%
          cdkontap     aggr1        online     RW        200GB    190.0GB    0%
                       aggr1        online     RW          1GB    960.1MB    1%
3 entries were displayed.

We can see that:

  • Management interface:
    • cdkstack-01_mgmt1
    • IP -
  • Data LIF:
    • svm_cdkstack_data_lif
    • IP -
  • Svm name:
    • svm_cdkstack
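If the full table is hard to read, the ONTAP CLI can narrow the output to just the addresses. A sketch, assuming the standard -fields parameter of the show commands:

```
cdkstack::> network interface show -fields address
```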

Deploying NetApp Trident

Duration: 10:00

Now that we’ve provisioned our volume and we have a Kubernetes cluster, we can start using Trident. Trident allows you to configure and consume storage with Kubernetes in a similar way to Helm. By default, the NetApp Cloud Manager provisions our storage on private networks, so it will not be internet-accessible. Therefore, we must deploy Trident from one of the nodes in our cluster. Using Juju, we can SSH to one of the kubernetes-worker units or the kubernetes-master server to use Trident:

juju ssh kubernetes-worker/0

Trident itself is shipped as a CLI tool. To install it, grab the latest release from the Trident GitHub releases page and extract it; for example, for version 20.04.0:

wget https://github.com/NetApp/trident/releases/download/v20.04.0/trident-installer-20.04.0.tar.gz
tar -xf trident-installer-20.04.0.tar.gz

Once extracted, we are ready to use Trident. The most important file in the trident-installer directory is the tridentctl command-line tool, which is used both to install Trident and to manage it afterwards. Before we install it, we need to create a JSON file containing information about our storage backend, which Trident uses to configure itself.

cd trident-installer/
vim backend.json

The file we need to create is called backend.json. Inside the directory is a folder called sample-input, which includes a bunch of sample files for configuring things like PVCs/PVs, storage classes, and the various forms of storage Trident can use, like OnTap. Our example is based on the backend-ontap-nas.json file:

{
    "version": 1,
    "storageDriverName": "ontap-nas",
    "backendName": "cdkstack",
    "managementLIF": "",
    "dataLIF": "",
    "svm": "svm_cdkstack",
    "username": "admin",
    "password": "Kubernetes123"
}

Let’s examine the fields in this file and find where the values come from:

  • version: this value should always be 1.
  • storageDriverName: the storage driver to use; in our case ontap-nas.
  • backendName: the name we set earlier for the working environment - cdk
  • managementLIF: an IP address of a logical interface for management of the storage
  • dataLIF: an IP address of a logical interface for data
  • svm: the name of our storage virtual machine, in this case, it is svm_ appended to the backendName. We can also check this by SSHing into the OnTap device.
  • username and password: the credentials we set earlier in the environment creation steps

The managementLIF should be the IP of the first interface on the OnTAP instance, the same IP you used to SSH to the instance to use the CLI. The data LIF can be found using the CLI tool, but it generally seems to be the third IP address on the second interface. We confirmed this using the steps in the previous section.
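Before handing the file to tridentctl, it is worth checking that it parses as valid JSON, since a stray comma is an easy mistake. A minimal sketch (the heredoc simply recreates the example file above so the check is self-contained):

```shell
# Recreate the example backend.json from the tutorial (placeholder credentials)
cat > backend.json <<'EOF'
{
    "version": 1,
    "storageDriverName": "ontap-nas",
    "backendName": "cdkstack",
    "managementLIF": "",
    "dataLIF": "",
    "svm": "svm_cdkstack",
    "username": "admin",
    "password": "Kubernetes123"
}
EOF

# Validate that the file parses as JSON (exits non-zero on a syntax error)
python3 -m json.tool backend.json > /dev/null && echo "backend.json is valid JSON"
```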

The official documentation suggests running the tridentctl install command with the -n flag, which specifies a namespace. This is good practice and keeps Trident isolated in its own namespace:

./tridentctl install -n trident

After the installation, we need to run another command before we can use the storage for our own containers. The install command only sets up the Trident pod; now we need to create a backend storage mechanism for our regular containers to use:

./tridentctl create backend -f backend.json -n trident

Testing our storage on Kubernetes

Duration: 5:00

Once Trident is deployed and working, it is time to configure a default StorageClass and try to create a PVC. When we create a PVC, a PV should be automatically created based on the StorageClass we specify; usually, this is the default StorageClass. First, create a storageclass.yaml based on the OnTap instance we just created:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-nas"

Next, we will use kubectl to apply it:

kubectl apply -f storageclass.yaml

Now we can create a pvc.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: netapp-ontap-pvc
spec:
  storageClassName: default
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

Apply the file to the cluster:

kubectl apply -f pvc.yaml

Finally, you should see the PVC has been created, along with the PV:

kubectl get sc,pvc

And the output should look like this:

NAME      PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
default   csi.trident.netapp.io   Delete          Immediate           false                  58s
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
netapp-ontap-pvc   Bound    pvc-253e8679-628e-4580-84f6-8e3ff0661e1b   2Gi        RWO            default        8s

You should be able to see this inside the CloudManager as well:

If you’re interested in doing some proper testing, try deploying some containers that require a PV. Every time you create a new PV, it should also appear inside the NetApp Cloud Manager, so you can manage the volume, back it up, and so on, using that tool as well.
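As a quick smoke test, a minimal pod that mounts the claim works well. A sketch; the pod name and image below are just examples, not part of the tutorial's repository:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test            # hypothetical name, pick your own
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: netapp-vol
          mountPath: /data
  volumes:
    - name: netapp-vol
      persistentVolumeClaim:
        claimName: netapp-ontap-pvc
```

Once the pod is Running, the volume backing /data should be visible in the Cloud Manager like any other PV.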

Removing Trident

If for any reason you wish to remove Trident from your Kubernetes cluster, just run the following command:

./tridentctl uninstall -n trident

That’s all folks!

Duration: 1:00

Congratulations! In this tutorial, you deployed NetApp OnTap and integrated it with Charmed Kubernetes on AWS. You also deployed simple storage to see how this integration works.

Where to go from here?