How to resize the storage pool for the Anbox appliance on AWS

Hi,
I have been using a 50 GiB EBS volume as a dedicated storage pool for the Anbox Cloud appliance on AWS EC2.
Is there a way to resize the storage pool to 100 GiB?

Thank you.

Best regards.
Alex.

Hello Gary @gary-wzl77,
I look forward to hearing from you if you have any ideas.

Best regards.

Alex.

Hey Alex
Yes, there is.
First, you need to follow the AWS guide to increase the size of the EBS volume attached to your EC2 instance.
Let’s say you increased the EBS volume size to 100 GiB from the AWS console. After this, check the total space of the data storage pool and see whether it matches the 100 GiB:

     $ lxc storage info data
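
As a rough sketch of what to look for, the pool size can be pulled out of that command's output with awk. The sample text below is made up for illustration; only the "total space" field name is assumed from LXD's usual output format.

```shell
# Hypothetical `lxc storage info data` output, hard-coded for illustration;
# the figures are invented, only the "total space" field name is assumed.
sample='info:
  name: data
  space used: 12.35GiB
  total space: 49.99GiB'

# Extract the reported total to compare against the new EBS volume size.
total=$(printf '%s\n' "$sample" | awk -F': ' '/total space/ {print $2}')
echo "total space: $total"
```

If the value printed here still shows the old size after resizing the volume, the pool has not picked up the extra space yet.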

If the total space remains the same, you need to grow the data storage pool by auto-expanding it:

     $ sudo zpool set autoexpand=on data
     $ sudo zpool online -e data  <block_device>
     $ sudo zpool set autoexpand=off data

The zpool utility is provided by the zfsutils-linux deb package.
Then you need to restart AMS manually for the changes to take effect:

     $ /snap/anbox-cloud-appliance/current/bin/juju switch appliance:anbox-cloud
     $ /snap/anbox-cloud-appliance/current/bin/juju exec -u ams/0 -- "snap restart ams"

Then run the following command to check whether the disk size has increased to 100 GiB from the AMS perspective:

     $ amc node show lxd0

Hopefully this helps your case.

BR
Gary

Hi Gary @gary-wzl77,
Thank you very much.
I’ve installed the zfsutils-linux package,
then ran sudo zpool set autoexpand=on data.

Then I tried running juju switch appliance:anbox-cloud,
and it showed juju not found.

So I tried sudo snap install juju,
and it showed:

error: This revision of snap “juju” was published using classic confinement and thus may perform
arbitrary system changes outside of the security sandbox that snaps are usually confined to,
which may put your system at risk.
If you understand and want to proceed repeat the command including --classic.

Should I proceed with sudo snap install juju --classic or is there anything you recommend?

Looking forward to hearing from you.

Best regards.
Alex.

And it showed juju not found?
A: You’re now running the appliance on AWS and have it fully bootstrapped, is that correct? If so, could you please share the output of sudo anbox-cloud-appliance.buginfo? It looks to me like the appliance was not installed successfully.

BR
Gary

Yes, I am running the appliance on AWS.
It is fully bootstrapped, and I’ve been using it for more than 4 months.
Creating applications, containers, and sessions has all worked so far.

Recently I wanted to run more containers, and it asked me to resize the storage pool.

Anyway, the output of sudo anbox-cloud-appliance.buginfo is very long.
Do you need all of it?

Best regards.
Alex.

Do you need all of it?
A: It would be great if you could share all of it.
Thanks.
Meanwhile, would you mind sending me the output of the snap list command too?

Here is the output of sudo anbox-cloud-appliance.buginfo:
https://we.tl/t-QF6Yiz26Sy
Please understand that I’ve masked the public IP address for security reasons.

Please note the end of the output:

     Disk 1:
       NUMA node: 0
       ID: nvme1n1
       Device: 259:1
       Model: Amazon Elastic Block Store
       Type: nvme
       Size: 100.00GiB
       WWN: nvme.1d0f-766f6c3031393737663636626563393534343932-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001
       Read-Only: false
       Removable: false
       Partitions:
       - Partition 1
         ID: nvme1n1p1
         Device: 259:4
         Read-Only: false
         Size: 49.99GiB
       - Partition 9
         ID: nvme1n1p9
         Device: 259:5
         Read-Only: false
         Size: 8.00MiB

Could the problem be with the partitioning?

Additionally, here is the output of snap list:

     Name                   Version        Rev    Tracking         Publisher   Notes
     amazon-ssm-agent       3.1.1927.0     6562   latest/stable/…  aws✓        classic
     amc                    1.18.2         398    latest/stable    canonical✓  -
     ams-node-controller    1.18.2         138    -                canonical✓  -
     anbox-cloud-appliance  1.18.2         428    latest/stable    canonical✓  classic
     core18                 20230530       2788   latest/stable    canonical✓  base
     core20                 20230622       1977   latest/stable    canonical✓  base
     lxd                    5.0.2-838e1b2  24326  5.0/stable/…     canonical✓  -
     snapd                  2.59.5         19459  latest/stable    canonical✓  snapd

Best regards.

Alex.

Hi @gary-wzl77 Gary,
Do you have any idea from the logs?

Best regards.

Alex.

Sorry for the late response.
There are a few things I’ve noticed that may not be right.

  1. First, I need to correct myself on the juju command. We have shipped the juju binary in our appliance snap for quite some time, which is why snap list doesn’t show a juju snap. So use /snap/anbox-cloud-appliance/current/bin/juju instead.

  2. As you can see from the output of amc ls, there are 15 containers that ended up in the stopped status. Each container was allocated 3 GiB of disk capacity, meaning 3 * 15 = 45 GiB of disk capacity is in use in total. However, the disk space for the lxd0 node is:

    disk:
     pool: data
     size: 48GB
    

    So you’re running out of disk capacity. Are you going to reuse those containers at some later date? If not, delete them with the command

    amc delete <container_id>
    

    This will free some disk resources. After this, you could launch a new session again.
    NOTE: Those containers still occupy disk space even after they are stopped. Please refer to the doc for more details.
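
The capacity arithmetic above can be sketched directly; the 15 stopped containers and the 3 GiB allocation per container are the figures from this thread.

```shell
# Figures taken from the thread: 15 stopped containers, 3 GiB allocated each.
containers=15
gib_per_container=3

# Total disk capacity still held by the stopped containers.
used_gib=$((containers * gib_per_container))
echo "${used_gib} GiB of the ~48 GB pool is still allocated"
```

With only ~48 GB in the pool, those 45 GiB of allocations leave almost nothing for a new session, which is why deleting unused containers frees enough space to continue.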

  3. Regarding the EBS volume
    I guess you initially created a dedicated 50 GiB EBS volume for the LXD data storage pool and later increased it to 100 GiB from the AWS console. However, you missed growing the partition and resizing the filesystem. What you need to do is:

    sudo growpart /dev/nvme1n1 1
    sudo resize2fs /dev/nvme1n1p1
    

    After this, following the steps above, you should get the storage pool extended. (In your case, <block_device> is nvme1n1.)
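
To see why growing the partition is needed, a quick back-of-the-envelope check with the sizes from the buginfo output (100 GiB disk, a 49.99 GiB data partition, and an 8 MiB partition 9) shows roughly half the disk is still unallocated:

```shell
# Sizes taken from the buginfo output earlier in the thread.
awk 'BEGIN {
  disk = 100.00       # nvme1n1, GiB
  p1   = 49.99        # nvme1n1p1, the ZFS data partition, GiB
  p9   = 8 / 1024     # nvme1n1p9, 8 MiB converted to GiB
  printf "unallocated: %.2f GiB\n", disk - p1 - p9
}'
```

That unallocated ~50 GiB is what growpart hands to the partition, so the zpool expansion steps can then pick it up.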


Hi @gary-wzl77,
Great, it worked.
Thank you for your support.

Best regards.
Alex.