Deploying the landscape-scalable charm bundle (rev 11) to Ubuntu 22.04 (jammy) on aarch64 (arm64)

Steps to install on an Oracle Public Cloud virtual machine of shape VM.Standard.A1.Flex, with four Ampere Altra OCPUs and 24 GB of RAM:

sudo iptables -F && sudo netfilter-persistent save
sudo snap install juju --classic
juju bootstrap localhost
juju set-model-constraints arch=arm64
juju deploy landscape-scalable --channel latest/edge

Deployment details are as follows:

Located bundle "landscape-scalable" in charm-hub, revision 11
Located charm "haproxy" in charm-store, revision 66
Located charm "landscape-server" in charm-store, revision 56
Located charm "postgresql" in charm-store, revision 246
Located charm "rabbitmq-server" in charm-store, revision 123
Executing changes:
- upload charm haproxy from charm-store for series focal with architecture=amd64
- deploy application haproxy from charm-store on focal
- expose all endpoints of haproxy and allow access from CIDRs 0.0.0.0/0 and ::/0
- set annotations for haproxy
- upload charm landscape-server from charm-store for series focal from channel edge with architecture=amd64
- deploy application landscape-server from charm-store on focal with edge
- set annotations for landscape-server
- upload charm postgresql from charm-store for series focal with architecture=amd64
- deploy application postgresql from charm-store on focal
  added resource wal-e
- set annotations for postgresql
- upload charm rabbitmq-server from charm-store for series focal with architecture=amd64
- deploy application rabbitmq-server from charm-store on focal
- set annotations for rabbitmq-server
- add relation landscape-server - rabbitmq-server
- add relation landscape-server - haproxy
- add relation landscape-server:db - postgresql:db-admin
- add unit haproxy/0 to new machine 0
- add unit landscape-server/0 to new machine 1
- add unit postgresql/0 to new machine 2
- add unit rabbitmq-server/0 to new machine 3
Deploy of bundle completed.

The juju status output reads as follows:

Model    Controller           Cloud/Region         Version  SLA          Timestamp
default  localhost-localhost  localhost/localhost  2.9.35   unsupported  15:44:56Z

App               Version  Status   Scale  Charm             Channel  Rev  Exposed  Message
haproxy                    error        1  haproxy           stable    66  yes      hook failed: "install"
landscape-server           waiting      1  landscape-server  edge      56  no       Waiting on relations: haproxy
postgresql        12.12    active       1  postgresql        stable   246  no       Live master (12.12)
rabbitmq-server   3.8.2    active       1  rabbitmq-server   stable   123  no       Unit is ready

Unit                 Workload  Agent  Machine  Public address  Ports     Message
haproxy/0*           error     idle   0        10.204.197.227            hook failed: "install"
landscape-server/0*  waiting   idle   1        10.204.197.98             Waiting on relations: haproxy
postgresql/0*        active    idle   2        10.204.197.102  5432/tcp  Live master (12.12)
rabbitmq-server/0*   active    idle   3        10.204.197.216  5672/tcp  Unit is ready

Machine  State    Address         Inst id        Series  AZ  Message
0        started  10.204.197.227  juju-4a159b-0  focal       Running
1        started  10.204.197.98   juju-4a159b-1  focal       Running
2        started  10.204.197.102  juju-4a159b-2  focal       Running
3        started  10.204.197.216  juju-4a159b-3  focal       Running

Observations:

  1. landscape-server is deployed from edge at revision 56
  2. every container is running in the focal series
  3. the haproxy charm (stable, revision 66) reports a hook failed: "install" error

Expectations:

  1. landscape-server would be deployed from the latest charm, revision 57
  2. the container series would match the host: jammy

These expectations may be misaligned with what is supposed to happen; is this the expected result?


I attempted to resolve the hook failed: "install" error with this command:

juju resolved haproxy/0

The juju status output shortly afterwards reflected the “maintenance” state:

haproxy/0*           maintenance  executing  0        10.204.197.227            (install) installing charm software

After some time, the error returned:

haproxy/0*           error     idle   0        10.204.197.227            hook failed: "install"
landscape-server/0*  waiting   idle   1        10.204.197.98             Waiting on relations: haproxy

To work around this, I removed the haproxy unit and the haproxy application, then redeployed haproxy into an Ubuntu 18.04 (bionic) container:

juju remove-unit haproxy/0 --force --no-wait
juju remove-application haproxy
juju deploy haproxy \
  --config default_timeouts='queue 60000, connect 5000, client 120000, server 120000' \
  --config services='' \
  --config ssl_cert='SELFSIGNED' \
  --config global_default_bind_options='no-tlsv10' \
  --series bionic
juju relate landscape-server haproxy
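As an aside, the default_timeouts value passed above is simply a comma-separated list of haproxy timeout directives, with values in milliseconds. A quick shell snippet (the variable name is just for illustration) splits it into its four settings:

```shell
# The default_timeouts charm config is a comma-separated list of haproxy
# timeout directives; values are in milliseconds.
TIMEOUTS='queue 60000, connect 5000, client 120000, server 120000'

# Split on commas and strip leading spaces to list one directive per line.
echo "$TIMEOUTS" | tr ',' '\n' | sed 's/^ *//'
```

This prints the queue, connect, client, and server timeouts on separate lines, which makes it easier to see what is being tuned before changing any of the values.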

The juju status output shows everything is as it should be. Port forwarding haproxy to make it visible on the host network, along with the remaining configuration steps, still remains.

Model    Controller           Cloud/Region         Version  SLA          Timestamp
default  localhost-localhost  localhost/localhost  2.9.35   unsupported  16:52:51Z

App               Version  Status  Scale  Charm             Channel  Rev  Exposed  Message
haproxy                    active      1  haproxy           stable    66  no       Unit is ready
landscape-server           active      1  landscape-server  edge      56  no       Unit is ready
postgresql        12.12    active      1  postgresql        stable   246  no       Live master (12.12)
rabbitmq-server   3.8.2    active      1  rabbitmq-server   stable   123  no       Unit is ready

Unit                 Workload  Agent  Machine  Public address  Ports           Message
haproxy/1*           active    idle   4        10.204.197.81   80/tcp,443/tcp  Unit is ready
landscape-server/0*  active    idle   1        10.204.197.235                  Unit is ready
postgresql/0*        active    idle   2        10.204.197.217  5432/tcp        Live master (12.12)
rabbitmq-server/0*   active    idle   3        10.204.197.193  5672/tcp        Unit is ready

Machine  State    Address         Inst id        Series  AZ  Message
1        started  10.204.197.235  juju-f5ad65-1  focal       Running
2        started  10.204.197.217  juju-f5ad65-2  focal       Running
3        started  10.204.197.193  juju-f5ad65-3  focal       Running
4        started  10.204.197.81   juju-f5ad65-4  bionic      Running

A better workaround is to redeploy haproxy on focal. The landscape-scalable bundle deploys haproxy to machine 0, so we forcefully remove the haproxy application, remove the unit, and, to be absolutely certain, attempt to remove the machine. The machine removal will likely produce an error stating that the machine is already gone.

juju remove-application haproxy --force
juju remove-unit haproxy/0 --force --no-wait
juju remove-machine 0

Next, redeploy haproxy with a self-signed certificate. If you wish to use your own Let's Encrypt (or other) SSL certificate, read on.

juju deploy haproxy \
  --config default_timeouts='queue 60000, connect 5000, client 120000, server 120000' \
  --config services='' \
  --config ssl_cert='SELFSIGNED' \
  --config global_default_bind_options='no-tlsv10' \
  --series focal

If you have used certbot to acquire an SSL certificate, convert the full chain and private key .pem files into base64-encoded strings and store them in variables in your bash shell:

FULLCHAIN=$(sudo base64 -w 0 /etc/letsencrypt/live/yourdomain.com/fullchain.pem)
PRIVKEY=$(sudo base64 -w 0 /etc/letsencrypt/live/yourdomain.com/privkey.pem)
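Before running this against the real certificate files, it can help to sanity-check the flags on a scratch file (the path and contents below are made up for illustration). The -w 0 flag disables line wrapping, so the encoded value is a single line that fits safely in a shell variable, and decoding it should reproduce the file exactly:

```shell
# Scratch file standing in for a real .pem (illustrative contents only).
printf 'example-pem-contents\n' > /tmp/scratch.pem

# -w 0 disables line wrapping: the output is one long line with no embedded
# newlines, safe to store in a variable and pass as a single --config value.
ENCODED=$(base64 -w 0 /tmp/scratch.pem)

# Decoding the variable reproduces the original file contents.
echo "$ENCODED" | base64 -d
```

If the decoded output matches the scratch file, the same pattern will work for fullchain.pem and privkey.pem.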

Then deploy haproxy to focal, and provide the certificate variables as parameters:

juju deploy haproxy \
  --config default_timeouts='queue 60000, connect 5000, client 120000, server 120000' \
  --config services='' \
  --config ssl_cert="$FULLCHAIN" \
  --config ssl_key="$PRIVKEY" \
  --config global_default_bind_options='no-tlsv10' \
  --series focal

Lastly, relate the two charms:

juju relate landscape-server haproxy

The nuances of certbot and exposing haproxy on the host network interface go beyond the scope of what I was looking to address in this thread. This deployment information will form the basis of some tutorials and how-to guides.

If you can identify any opportunities to better conform to best practices, or if you can speak to resolving the original hook failed: "install" error that resulted in going down this rabbit hole, please contribute to this thread.


This failure was addressed on April 5 in this git commit: https://github.com/canonical/landscape-scripts/commit/e71200a8fba5669102e96dde0dc70cdb80dc0fba

This HAProxy issue has not been seen since. It is recommended to ignore the installation steps outlined in the original post and instead follow the steps found here: https://ubuntu.com/landscape/install