Ubuntu EKS images now running on Jammy (22.04) from EKS 1.29 onward

Hi All!
Starting with EKS 1.29, AWS EKS images are based on Jammy (22.04) for both AMD64 and ARM64 (Graviton) instance types.

Changes:

  • kernel 6.2.0 out of the box

As always, you can find them using our image finder, the AWS Marketplace, or the AWS CLI:

aws ssm get-parameters --names /aws/service/canonical/ubuntu/eks/22.04/1.29/stable/current/amd64/hvm/ebs-gp2/ami-id

:warning: Please note that, since this image is based on 22.04 and EKS 1.29, querying any other combination via the AWS CLI will return zero results.
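For Graviton instances, the same lookup should work with the architecture segment of the parameter path swapped to `arm64` (a sketch assuming the arm64 path mirrors the amd64 one above):

```shell
# Look up the current Jammy EKS 1.29 AMI for ARM64/Graviton.
# Assumption: the parameter path mirrors the amd64 one, with arm64 substituted.
aws ssm get-parameters \
  --names /aws/service/canonical/ubuntu/eks/22.04/1.29/stable/current/arm64/hvm/ebs-gp2/ami-id \
  --query 'Parameters[0].Value' \
  --output text
```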

Hi @carlos-bravo
Since there are no public Ubuntu 20.04 EKS AMIs available for versions 1.30 or 1.31, can we use the Ubuntu 20.04 AMI for EKS 1.29 and upgrade kubectl-eks and kubelet-eks to 1.30 using snap? Would there be any issues with this approach?

Hi @akshitsinghal ,

You can try that approach, but it is outside our supported EKS/Ubuntu version matrix.
What’s the reason for not going with a 22.04 EKS 1.30 AMI instead?

We are using the EKS 1.30 Jammy (22.04) images. We noticed that tags are only available for the previous month. Does that mean the underlying AMIs whose tags are no longer presented in EKS are at risk of being removed?

Given our application, it is difficult to upgrade to new AMIs that frequently.

Additionally, will it be standard practice going forward to remove tags for AMIs older than one month? We only noticed this recently.

Hi,

what exactly do you mean by “tags”?
We recently started applying our retention/deprecation policy (documented at https://documentation.ubuntu.com/aws/aws-explanation/ec2-image-retention-policy/).
That means we deprecate all EKS images except the last 3. You can still use deprecated images - they just no longer appear in the output of aws ec2 describe-images (see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ami-deprecate.html for details).
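As a minimal sketch, deprecated AMIs can still be listed by adding the documented --include-deprecated flag; the owner ID (Canonical's well-known account) and the name filter below are assumptions for illustration:

```shell
# Deprecated AMIs are hidden from describe-images by default;
# --include-deprecated brings them back into the results.
# Owner ID and name pattern are illustrative assumptions.
aws ec2 describe-images \
  --owners 099720109477 \
  --filters 'Name=name,Values=ubuntu-eks/*22.04*' \
  --include-deprecated \
  --query 'Images[].{Name:Name,Id:ImageId,Deprecated:DeprecationTime}' \
  --output table
```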

Does that answer your question?

Closing the loop in both locations: the target use case (Terraform) had a resolution discussed here. We are definitely open to considering a wider allowance in the deprecation policy, but intend to wait for more end-user feedback. Thanks for raising this in both locations!

Hey @rpocase and @toabctl, can you comment on how long the images will remain in the deprecated state before they are removed? I am concerned that, now that we are using deprecated images, we won’t know when they are being cleaned up until an image is actually removed, which would break our service because cluster-autoscaler could no longer create new EKS nodes.

Since we have to track this manually now, it would be good to know how long images will remain available after being marked as deprecated.
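One way we could track this is to read the DeprecationTime field that EC2 exposes on each image (the AMI ID below is a placeholder):

```shell
# Print the deprecation timestamp for a specific AMI.
# Replace the placeholder ID with the AMI your node group uses.
aws ec2 describe-images \
  --image-ids ami-0123456789abcdef0 \
  --include-deprecated \
  --query 'Images[0].DeprecationTime' \
  --output text
```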

Thanks in advance.

Have you looked at the retention policy link I mentioned?
As long as both (the Ubuntu version and the EKS version) are supported, we’ll never delete an image.
We start to delete images (except the last 3 serials) when either the Ubuntu release or the EKS version reaches its end of support.

Does that make things clearer?