Unfortunately I can’t reproduce the issue: the microk8s snap starts fine on my Ubuntu Core 22 VM. I’m also using the same snap versions as you:
itrue@ubuntu:~$ snap list
Name          Version           Rev    Tracking            Publisher   Notes
core20        20230801          2015   latest/stable       canonical✓  base
core22        20230801          864    latest/stable       canonical✓  base
htop          3.2.2             3873   latest/stable       maxiberta   -
microk8s      v1.27.5           5892   1.27-strict/stable  canonical✓  -
pc            22-0.3            146    22/stable           canonical✓  gadget
pc-kernel     5.15.0-86.96.1    1433   22/stable           canonical✓  kernel
snapd         2.59.5            19457  latest/stable       canonical✓  snapd
snappy-debug  0.36-snapd2.59.4  704    latest/stable       canonical✓  -
Microk8s starts without issue after a few minutes:
itrue@ubuntu:~$ sudo microk8s status
microk8s is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none
addons:
  enabled:
    dns                  # (core) CoreDNS
    ha-cluster           # (core) Configure high availability on the current node
    helm                 # (core) Helm - the package manager for Kubernetes
    helm3                # (core) Helm 3 - the package manager for Kubernetes
  disabled:
    cert-manager         # (core) Cloud native certificate management
    community            # (core) The community addons repository
    dashboard            # (core) The Kubernetes dashboard
    host-access          # (core) Allow Pods connecting to Host services smoothly
    hostpath-storage     # (core) Storage class; allocates storage from host directory
    ingress              # (core) Ingress controller for external access
    mayastor             # (core) OpenEBS MayaStor
    metallb              # (core) Loadbalancer for your Kubernetes cluster
    metrics-server       # (core) K8s Metrics Server for API access to service metrics
    minio                # (core) MinIO object storage
    observability        # (core) A lightweight observability stack for logs, traces and metrics
    prometheus           # (core) Prometheus operator for monitoring and logging
    rbac                 # (core) Role-Based Access Control for authorisation
    registry             # (core) Private image registry exposed on localhost:32000
    storage              # (core) Alias to hostpath-storage add-on, deprecated
Can you please share some more info on what you’re running it on? Is it amd64 hardware? MicroK8s (and Kubernetes in general) is quite resource-hungry, and it can be tricky to get it running with 4 GB of RAM or less.
Can you also please install the htop snap and check whether any microk8s/kubelite processes are running and consuming resources in the background, so we know the snap hasn’t stopped running entirely? It should be easier to check this with htop than with standard top.
Does sudo microk8s.kubectl get pods -A show anything?
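To make the resource check concrete, here is a small sketch in plain shell. The 4 GiB threshold is the rule of thumb from this thread, not an official minimum, and the process listing is just a quick stand-in for the htop check above:

```shell
# Rough resource check: this thread treats ~4 GiB of RAM as a practical
# minimum for MicroK8s; that's a rule of thumb, not an official figure.
total_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
echo "Total RAM: $((total_kb / 1024)) MiB"
if [ "$((total_kb / 1024))" -lt 4096 ]; then
    echo "Warning: less than 4 GiB of RAM - MicroK8s may struggle"
fi

# List any microk8s/kubelite processes by command name (ps -eo comm avoids
# matching this script's own command line, unlike pgrep -f would).
ps -eo comm | grep -E 'kubelite|microk8s' || echo "no microk8s processes running"
```

If the last line prints nothing but process names, the snap is at least alive; if it prints the fallback message, the services have died outright.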
Well, some of the following logs can still be seen consistently in journalctl even today, showing that the CNI is not getting set up properly. I suspect this is the cause of the failure in my case:
Oct 11 06:45:38 mltr01 microk8s.daemon-kubelite[14107]: E1011 13:45:38.789942 14107 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 11 06:45:38 mltr01 systemd[1]: snapd.snap-repair.service: Deactivated successfully.
Oct 11 06:45:38 mltr01 systemd[1]: Finished Automatically fetch and run repair assertions.
Oct 11 06:45:41 mltr01 microk8s.daemon-apiserver-kicker[578297]: Setting up the CNI
Oct 11 06:45:43 mltr01 microk8s.daemon-kubelite[14107]: E1011 13:45:43.790836 14107 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 11 06:45:46 mltr01 microk8s.daemon-apiserver-kicker[578343]: Setting up the CNI
Oct 11 06:45:48 mltr01 microk8s.daemon-kubelite[14107]: E1011 13:45:48.792230 14107 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 11 06:45:52 mltr01 microk8s.daemon-apiserver-kicker[578393]: Setting up the CNI
Oct 11 06:45:53 mltr01 microk8s.daemon-kubelite[14107]: E1011 13:45:53.793760 14107 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 11 06:45:57 mltr01 microk8s.daemon-apiserver-kicker[578441]: Setting up the CNI
Oct 11 06:45:58 mltr01 microk8s.daemon-kubelite[14107]: E1011 13:45:58.794639 14107 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 11 06:46:03 mltr01 microk8s.daemon-apiserver-kicker[578489]: Setting up the CNI
Oct 11 06:46:03 mltr01 microk8s.daemon-kubelite[14107]: E1011 13:46:03.796303 14107 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 11 06:46:08 mltr01 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 11 06:46:08 mltr01 microk8s.daemon-apiserver-kicker[578538]: Setting up the CNI
Oct 11 06:46:08 mltr01 microk8s.daemon-kubelite[14107]: E1011 13:46:08.798182 14107 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 11 06:46:13 mltr01 microk8s.daemon-kubelite[14107]: E1011 13:46:13.800068 14107 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 11 06:46:13 mltr01 microk8s.daemon-apiserver-kicker[578586]: Setting up the CNI
Oct 11 06:46:18 mltr01 microk8s.daemon-kubelite[14107]: E1011 13:46:18.801026 14107 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 11 06:46:19 mltr01 microk8s.daemon-apiserver-kicker[578637]: Setting up the CNI
Oct 11 06:46:23 mltr01 microk8s.daemon-kubelite[14107]: E1011 13:46:23.802650 14107 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 11 06:46:24 mltr01 microk8s.daemon-apiserver-kicker[578687]: Setting up the CNI
Oct 11 06:46:28 mltr01 microk8s.daemon-kubelite[14107]: E1011 13:46:28.804327 14107 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 11 06:46:30 mltr01 microk8s.daemon-apiserver-kicker[578735]: Setting up the CNI
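Since kubelite keeps reporting `cni plugin not initialized` while the apiserver-kicker keeps re-running "Setting up the CNI", it may be worth checking whether the snap's CNI assets exist on disk at all. The paths below are the usual MicroK8s locations as far as I know (an assumption on my part; adjust if your install differs):

```shell
# Check the usual MicroK8s CNI locations; a "missing" result here would
# explain the NetworkPluginNotReady loop. (Paths assumed, not verified
# against your snap revision.)
for d in /var/snap/microk8s/current/args/cni-network \
         /var/snap/microk8s/current/opt/cni/bin; do
    if [ -d "$d" ]; then
        echo "present: $d"
        ls "$d" | head -n 5
    else
        echo "missing: $d"
    fi
done
```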
What @dilyn-corner mentioned could be a good thing to look into.
Can you please run sudo microk8s.kubectl get pods -A and see if the containers are being initialised? If the calico containers are getting stuck, this could be the cause of the issue you’re seeing.
You might also be able to find out more by running sudo microk8s.kubectl logs -n kube-system -f ds/calico-node. This should show you the log output of the calico containers.
# sudo microk8s.kubectl get pods -A
No resources found
# sudo microk8s.kubectl logs -n kube-system -f ds/calico-node
Error from server (NotFound): daemonsets.apps "calico-node" not found
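"No resources found" across the board (rather than pods stuck in Pending) suggests the kube-system manifests were never applied at all. A quick sketch to see what, if anything, the API server knows about; it is guarded so it is a no-op on a machine without microk8s:

```shell
# If the cluster is empty, check whether ANY workloads or events exist.
# An empty result everywhere points at bootstrap never completing,
# not at individual pods failing.
if command -v microk8s >/dev/null 2>&1; then
    sudo microk8s.kubectl get daemonsets,deployments -A
    sudo microk8s.kubectl get events -A | tail -n 20
else
    echo "microk8s not installed on this machine"
fi
```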
I am attaching the microk8s inspect output to this reply:
# microk8s inspect
Inspecting system
Inspecting Certificates
Inspecting services
Service microk8s.daemon-cluster-agent is running
Service microk8s.daemon-containerd is running
Service microk8s.daemon-kubelite is running
Service microk8s.daemon-k8s-dqlite is running
Service microk8s.daemon-apiserver-kicker is running
Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
Copy processes list to the final report tarball
Copy disk usage information to the final report tarball
Copy memory usage information to the final report tarball
Copy server uptime to the final report tarball
Copy openSSL information to the final report tarball
Copy network configuration to the final report tarball
Inspecting kubernetes cluster
Inspect kubernetes cluster
Inspecting dqlite
Inspect dqlite
WARNING: The memory cgroup is not enabled.
The cluster may not be functioning properly. Please ensure cgroups are enabled
See for example: https://microk8s.io/docs/install-alternatives#heading--arm
Building the report tarball
Report tarball is at /var/snap/microk8s/5976/inspection-report-20231011_101955.tar.gz
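That "WARNING: The memory cgroup is not enabled" line near the end of the inspect output is worth chasing: the kubelet cannot manage containers properly without the memory controller. A hedged sketch for checking it; the `cgroup_enable=memory cgroup_memory=1` boot flags are the fix documented for ARM boards, and your platform may need something different:

```shell
# cgroup v1: column 4 of /proc/cgroups says whether "memory" is enabled.
awk '$1 == "memory" {print "memory cgroup enabled (v1 view):", $4}' /proc/cgroups

# cgroup v2: the controller must appear in cgroup.controllers instead.
if [ -r /sys/fs/cgroup/cgroup.controllers ]; then
    echo "v2 controllers: $(cat /sys/fs/cgroup/cgroup.controllers)"
fi

# Kernel command line flags (ARM boards often need
# "cgroup_enable=memory cgroup_memory=1" added here):
grep -o 'cgroup[^ ]*' /proc/cmdline || echo "no cgroup flags on the kernel cmdline"
```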
Regarding the network connection: I am currently using a VM where MAAS is installed, and it has two ports. Port 1 is connected towards the servers that are brought up by MAAS, and port 2 is connected to the Internet. I am performing NAT in the VM so that the servers can reach the Internet. I am able to download other snaps, fetch VM images for LXD, and upgrade the underlying host OS on the servers without any issue; only the microk8s snap running on these servers has this problem.
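Given that the servers only reach the Internet through NAT on the MAAS VM, it may still be worth confirming on the gateway that forwarding is enabled and that nothing in the FORWARD chain is dropping traffic. A sketch only; interface names and rules will differ on your setup:

```shell
# IPv4 forwarding must be on for the NAT gateway (and for pod traffic).
v=$(cat /proc/sys/net/ipv4/ip_forward 2>/dev/null || echo "unknown")
echo "net.ipv4.ip_forward = $v"

# Show the FORWARD chain policy/rules if possible (needs root; the
# fallback keeps this harmless on machines without iptables access).
sudo -n iptables -S FORWARD 2>/dev/null | head -n 5 || true
```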
I had a similar issue; thankfully I managed to get it sorted a few months back.
Hope it’s okay to post this here (I have tried elsewhere, but have not been able to post it), but I am trying to fix an Ubuntu issue for a friend who is getting a /var/lib/dpkg error. I think he might have made a mistake while running as root; hopefully somebody can help:
E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied)
E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend) are you root?
Now, I’ve been told that entering sudo install -D -m755 /var/lib/dpkg && sudo apt-get update will solve this issue, but this advice does not come from an ‘expert’, so I’d love to hear from anyone who can confirm it or suggest a better approach.
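For what it's worth, that lock error is almost always one of two mundane things: the command simply wasn't prefixed with sudo, or another apt/dpkg process (often unattended-upgrades) is still holding the lock. Clobbered permissions under /var/lib/dpkg are a distant third. Some non-destructive checks to run before anything that rewrites /var/lib/dpkg (the `fuser` call assumes the psmisc package is installed):

```shell
# 1) Permissions: /var/lib/dpkg should be root-owned, mode 0755.
ls -ld /var/lib/dpkg 2>/dev/null || echo "/var/lib/dpkg not present on this system"

# 2) Lock holder: see whether another process holds the frontend lock
#    (requires root; fuser comes from the psmisc package).
sudo -n fuser /var/lib/dpkg/lock-frontend 2>/dev/null || echo "no lock holder visible (or not running as root)"

# 3) Any apt/dpkg processes still alive?
ps -eo comm | grep -E '^(apt|apt-get|dpkg|unattended)' || echo "no apt/dpkg processes running"
```

Only if the permissions in step 1 really are wrong would repairing them make sense, and even then I'd verify the exact command first rather than trust a half-remembered one-liner.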