Kernel Security updates and reboots (Servers)

Just wondering what strategies other people are using when it comes to the reboots required by kernel security updates on servers. This month alone there have been several (3?) kernel updates that require rebooting, even with Livepatch, which I’m using on a few systems. That makes for some late nights trying to get production systems rebooted without causing issues for the business.

I’m not complaining about the patches being released in a timely manner, it’s just ironic that we reboot our Linux systems more than Windows systems these days.

Any suggestions? And how does Canonical actually handle this?


Okay, surely I’m not the only person on this planet frustrated by this; over the past few weeks I’ve grown very tired of reboots. Either no one else is patching and rebooting, or my servers are somehow special and reporting that a reboot is required in error, and I can’t imagine either is correct. A fair few for January already, and there was one in December.

I really do appreciate the security team’s work and fast response, but why do I not see this with, say, AWS EC2 Linux servers based on Red Hat?


I can’t speak to Red Hat or Amazon Linux.

Normally our kernels are released on a three-week cadence. There’s no perfect time scale for new updates – too slow, and we may leave our users exposed to vulnerabilities for long enough for exploit authors to prepare tools. Too fast, and users will grow fatigued.

Sometimes, an issue feels important enough to issue updates out of cycle, or there may be regressions in fixes that should be addressed out of cycle.

So, we’ve tried to balance update timeliness with convenience as well as our own efforts in preparing and testing updates. No one frequency is going to serve everyone.

We offer a Livepatch service that can reduce the need for immediate reboots: not every security issue can be mitigated with a livepatch, but many can, so running Livepatch may cut down on how often you need to reboot.
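As a quick check from a shell, you can see whether the Livepatch client is present and what it reports. This is a minimal sketch, assuming the snap-based canonical-livepatch client; machines without Livepatch enabled simply won’t have the command:

```shell
# Minimal sketch: report whether the Livepatch client is installed and,
# if so, show its status. Assumes the snap-based canonical-livepatch
# client; the status query may require root on some setups.
if command -v canonical-livepatch >/dev/null 2>&1; then
    LIVEPATCH_CLIENT="present"
    canonical-livepatch status || true   # may fail without root; keep going
else
    LIVEPATCH_CLIENT="absent"
    echo "canonical-livepatch not installed on this machine"
fi
```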

Depending upon what services and users your machines have, you may also choose to skip reboots for a while. My laptop is a single-user machine: I keep up on updates daily, use extensive AppArmor profiles, and don’t interact much with untrusted content, so I reboot it perhaps every two months. The server in the basement, which never interacts with untrusted content at all, gets rebooted perhaps every six months.

Determining whether it’s safe to skip rebooting into a new kernel can be difficult; when in doubt, it’s probably better to reboot into the new kernel. (The same goes for new OpenSSL libraries, glibc, and so on.) But you won’t necessarily need to reboot for every kernel we publish.
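One input to that decision is Ubuntu’s reboot marker: packages that want a reboot touch /var/run/reboot-required and record themselves in /var/run/reboot-required.pkgs. A minimal sketch (the MARKER override is only there so the snippet can be exercised outside a live system):

```shell
# Check Ubuntu's reboot-required marker. MARKER defaults to the
# standard path; it is overridable purely for testing.
MARKER="${MARKER:-/var/run/reboot-required}"
if [ -f "$MARKER" ]; then
    STATUS="reboot-required"
    # When present, the .pkgs file lists the packages that asked for it.
    [ -f "${MARKER}.pkgs" ] && sort -u "${MARKER}.pkgs"
else
    STATUS="no-reboot-pending"
fi
echo "$STATUS"
```

Knowing which packages set the flag (a kernel vs., say, dbus) helps you judge whether the reboot can wait.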

You can also try to design your services so that you can perform rolling reboots and still serve your users: take machines out of load balancers during quiet times, reboot them, put them back, adjust DNS entries to move traffic from one load balancer to another, and so on. Not every service can work this way, but many can.



Thanks for the info and suggestions. We are trying to move what we can to serverless, and where possible use load balancers, containers, and so forth. However, some of our requirements call for traditional single-server configurations, and that’s mostly where this causes pain.

Hey @kdp, I definitely understand the frustration, January was an unfortunate month for Linux kernel updates. While we generally do advise applying all Ubuntu Security updates, we do try to give information in our Ubuntu Security Notices about the issues being addressed to help administrators and users judge the impact and risks in their own environments.

So, coming back to the kernel updates in January, we had:

  • January 4th - publication of the normal Stable Release Update (SRU) cycle kernels; these include security and bug fixes from the upstream stable kernel trees, as well as other fixes identified by the Ubuntu Kernel team that warranted addressing. These generally happen on a three-week cadence.
  • January 7th - NVIDIA graphics drivers coordinated security release; in order for these to work in a secure boot environment, they required associated kernel updates. Unfortunately, at the time of this release, NVIDIA only had desktop drivers prepared.
  • January 15th - Linux kernel LIO SCSI vulnerability; this issue was important to address for environments that serve multiple backing stores via, e.g., a single iSCSI host.
  • January 20th - The corresponding update for NVIDIA server graphics drivers, and the associated kernel updates.
  • January 28th - publication of the next three-week SRU cadence kernels.

So, for users that did not rely on NVIDIA drivers or did not have a specific LIO SCSI environment affected by the out-of-cycle kernel updates, it would have been safe to skip rebooting into those updated kernels.

The Ubuntu Security and Kernel teams are cognizant of the impact of these updates, and we try to balance that against the fact that some issues are very severe in some environments. Thanks for raising this issue.

While not a major issue, I find that if I skip a few reboots I end up with old kernels that cannot be removed with a simple apt autoremove.
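In case it helps, you can at least list which installed kernel images are not the one you’re running; those are the usual candidates for apt autoremove --purge once the newest kernel has booted cleanly. A sketch, assuming Ubuntu’s linux-image-<version> package naming:

```shell
# List installed kernel image packages other than the running kernel.
# Assumes Debian/Ubuntu naming (linux-image-<version>); dpkg errors are
# silenced so the sketch degrades gracefully on other systems.
RUNNING="$(uname -r)"
echo "running: linux-image-$RUNNING"
dpkg --list 2>/dev/null | awk -v cur="linux-image-$RUNNING" \
    '/^ii +linux-image-[0-9]/ && $2 != cur {print "stale:", $2}'
```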