Get ffmpeg to do AV1 encoding with the Intel Arc GPU inside a docker container

hi @jimbobstpaul were you able to find any solutions for this?

I think I have a similar situation: I am trying to get ffmpeg to do AV1 encoding with the Intel Arc GPU. Worse still, I am trying to do it inside a Docker container, so the methods I need to use are slightly more complicated than what you might need.

However, I saw your thread here and I was wondering: what video output format are you targeting with ffmpeg?

I do not have AV1 encoding working yet; however, I did get h264 and h265 / HEVC encoding working inside my Docker container (which is running on Ubuntu 24.04).

The link to the Dockerfile is here ffmpeg-av1/intel_arc/Dockerfile at master · tazzuu/ffmpeg-av1 · GitHub

I will copy the contents here for clarity

FROM ubuntu:24.04

# actually we are using 0b097ed9f141f57e2b91f0704c721a9eff0204c0 because the latest version breaks with AV1 lib
# ENV FFMPEG_VERSION=7.1.1
ENV FFMPEG_VERSION=0b097ed9f141f57e2b91f0704c721a9eff0204c0
ENV SOURCE_DIR=/opt/ffmpeg_sources
ENV BUILD_DIR=/opt/ffmpeg_build
ENV BIN_DIR=/opt/bin

RUN mkdir -p $SOURCE_DIR $BIN_DIR
ENV PATH=$BIN_DIR:$PATH

RUN apt update && apt upgrade -y
RUN apt -y install \
build-essential \
git-core \
i965-va-driver \
libass-dev \
libfreetype6-dev \
libgnutls28-dev \
libunistring-dev \
libmp3lame-dev \
libsdl2-dev \
libtool \
libva-dev \
libvdpau-dev \
libvorbis-dev \
libfdk-aac-dev \
libopus-dev \
libvpl-dev \
libx264-dev \
libx265-dev \
libnuma-dev \
libvpx-dev \
libmfx-gen1.2 libmfx-tools \
libva-drm2 libva-x11-2 libva-wayland2 libva-glx2 vainfo intel-media-va-driver-non-free \
nasm \
wget

# Intel drivers
# https://dgpu-docs.intel.com/driver/client/overview.html
RUN wget -qO - https://repositories.intel.com/gpu/intel-graphics.key | \
  gpg --yes --dearmor --output /usr/share/keyrings/intel-graphics.gpg
RUN echo \
  "deb [arch=amd64,i386 signed-by=/usr/share/keyrings/intel-graphics.gpg] https://repositories.intel.com/gpu/ubuntu noble unified" | \
  tee /etc/apt/sources.list.d/intel-gpu-noble.list
RUN apt update && apt-get install -y libze-intel-gpu1 libze1 intel-opencl-icd clinfo intel-gsc libze-dev intel-ocloc

# ffmpeg
# # NOTE THE GIT COMMIT USED HERE INSTEAD OF VERSION TAG
# # BECAUSE THE 7.1.1 RELEASE KEPT BREAKING ON SOME LIB
# NOTE: the repo is huge so we need to take drastic measures to not clone the whole thing
# while also keeping our Dockerfile pinned to a specific version
RUN cd $SOURCE_DIR && \
git init ffmpeg && \
cd ffmpeg && \
git remote add origin https://github.com/FFmpeg/FFmpeg.git && \
git fetch --depth=1 origin "$FFMPEG_VERSION" && \
git checkout "$FFMPEG_VERSION" && \
PATH="$BIN_DIR:$PATH" PKG_CONFIG_PATH="$BUILD_DIR/lib/pkgconfig" ./configure \
  --prefix="$BUILD_DIR" \
  --pkg-config-flags="--static" \
  --extra-cflags="-I$BUILD_DIR/include" \
  --extra-ldflags="-L$BUILD_DIR/lib" \
  --extra-libs="-lpthread -lm" \
  --ld="g++" \
  --bindir="$BIN_DIR" \
  --enable-gpl \
  --enable-gnutls \
  --enable-libass \
  --enable-libfdk-aac \
  --enable-libfreetype \
  --enable-libmp3lame \
  --enable-libopus \
  --enable-libvorbis \
  --enable-libvpx \
  --enable-libx264 \
  --enable-libx265 \
  --enable-libvpl \
  --enable-version3 \
  --enable-nonfree && \
PATH="$BIN_DIR:$PATH" make -j $(nproc) && \
make install -j $(nproc) && \
hash -r

# NOTE:
# --enable-libmfx \
# can not use libmfx and libvpl together

Keep in mind this is still a work in progress, so your mileage may vary here. But I think this should have all the packages and libraries needed to get a recent version of ffmpeg working
with Ubuntu 24.04 + an Intel GPU. You mention that you have a NUC with Xe graphics; I believe it should be using the same drivers as my Intel Arc GPU… maybe. Actually, I also have a NUC 13 Pro with Xe graphics, so I guess I will have to try it on there later. (btw good choice on the NUC, they are fantastic machines and punch above their weight for tasks like this)
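In case it helps, this is roughly how I build and smoke-test the image. The tag matches the CONTAINER variable used in the scripts below; the guards are just so the sketch no-ops cleanly on a machine without Docker or the Dockerfile present.

```shell
#!/bin/bash
# Build the image from the Dockerfile above and smoke-test the ffmpeg binary.
# The tag matches the CONTAINER variable used in the helper scripts below.
TAG="ffmpeg-av1:7.1.1-intel"

if command -v docker >/dev/null 2>&1 && [ -f Dockerfile ]; then
    docker build -t "$TAG" .
    # quick check: the compiled ffmpeg prints its version and configure flags
    docker run --rm "$TAG" ffmpeg -version
else
    echo "docker or Dockerfile not found; skipping build"
fi
```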

For reference, all the docs I have used to get to this point are linked in the README of that repo: GitHub - tazzuu/ffmpeg-av1: ffmpeg Dockerfile with AV1 encoding libraries included

I also included some example scripts to show how it works (replace input.mkv with your input file)

  • encode to h264 with Intel QSV: https://github.com/tazzuu/ffmpeg-av1/blob/master/scripts/ffmpeg_h264-qsv.sh
#!/bin/bash
set -euo pipefail

CONTAINER="ffmpeg-av1:7.1.1-intel"
INTEL_PCI_NODE="$(lspci | grep 'VGA compatible controller: Intel Corporation' | cut -d ' ' -f1)"
INTEL_CARD="$(readlink -f /dev/dri/by-path/pci-0000:$INTEL_PCI_NODE-card)"
INTEL_RENDER="$(readlink -f /dev/dri/by-path/pci-0000:$INTEL_PCI_NODE-render)"

set -x
docker run --rm \
--device=$INTEL_CARD \
--device=$INTEL_RENDER \
--group-add video \
-v $PWD:$PWD \
-w $PWD \
-e MFX_ACCEL_MODE=VAAPI \
-e MFX_VAAPI_DEVICE=$INTEL_RENDER \
"$CONTAINER" \
ffmpeg -y \
-loglevel verbose \
-i input.mkv \
-init_hw_device vaapi=va:$INTEL_RENDER \
-c:v h264_qsv -c:a copy -c:s copy output.mkv
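A note on the `lspci | cut` line in that script: it just grabs the PCIe bus address (the first space-delimited field) of the Intel VGA device, which is then used to resolve the stable `/dev/dri/by-path/` symlinks into the actual card/render nodes. A tiny sketch with a made-up `lspci` line (the device string and node paths here are hypothetical examples):

```shell
#!/bin/bash
set -euo pipefail

# Hypothetical lspci output line for an Intel Arc card; yours will differ
SAMPLE="03:00.0 VGA compatible controller: Intel Corporation DG2 [Arc A770] (rev 08)"

# Same parsing as in the script: first space-delimited field = PCI address
PCI_NODE="$(printf '%s\n' "$SAMPLE" | cut -d ' ' -f1)"
echo "$PCI_NODE"    # -> 03:00.0

# which would then resolve device nodes like:
# /dev/dri/by-path/pci-0000:03:00.0-card   -> /dev/dri/card1
# /dev/dri/by-path/pci-0000:03:00.0-render -> /dev/dri/renderD129
```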

  • encode to h265 / HEVC with Intel QSV: https://github.com/tazzuu/ffmpeg-av1/blob/master/scripts/ffmpeg_hevc-qsv.sh
#!/bin/bash
set -euo pipefail

CONTAINER="ffmpeg-av1:7.1.1-intel"
INTEL_PCI_NODE="$(lspci | grep 'VGA compatible controller: Intel Corporation' | cut -d ' ' -f1)"
INTEL_CARD="$(readlink -f /dev/dri/by-path/pci-0000:$INTEL_PCI_NODE-card)"
INTEL_RENDER="$(readlink -f /dev/dri/by-path/pci-0000:$INTEL_PCI_NODE-render)"

set -x
docker run --rm \
--device=$INTEL_CARD \
--device=$INTEL_RENDER \
--group-add video \
-v $PWD:$PWD \
-w $PWD \
-e MFX_ACCEL_MODE=VAAPI \
-e MFX_VAAPI_DEVICE=$INTEL_RENDER \
"$CONTAINER" \
ffmpeg -y \
-loglevel verbose \
-i input.mkv \
-init_hw_device vaapi=va:$INTEL_RENDER \
-c:v hevc_qsv -c:a copy -c:s copy output.mkv

You will note that I had to do some extra shenanigans to get the GPU passed into the Docker container correctly for this to work. If you are not using Docker you wouldn't need this, but you may still need to direct ffmpeg to the address of your GPU on the PCIe bus. Or maybe not, I am not sure yet. Also, these scripts are just tests to make sure the encoding works; you would want to modify the final ffmpeg commands with the encoding arguments for your desired video settings (you already found the docs for that on the official ffmpeg site).
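If you skip Docker, you can usually find the render node directly from `/dev/dri`; a sketch (the `renderD128` path is just an example, yours may differ, and the snippet is guarded so it also runs cleanly on machines without a GPU):

```shell
#!/bin/bash
# List DRM nodes by PCIe address; the renderD* node is what VA-API wants.
DRI_PATH="/dev/dri/by-path"

if [ -d "$DRI_PATH" ]; then
    ls -l "$DRI_PATH"
else
    echo "no $DRI_PATH on this machine"
fi

# then point ffmpeg at the render node directly, e.g.:
# ffmpeg -init_hw_device vaapi=va:/dev/dri/renderD128 ...
```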

If you are not familiar with Docker, do not worry: you can copy most of the commands shown in the Dockerfile and run them on your Ubuntu 24.04 system to get mostly the same effect. Docker is helpful here since it lets you keep the libraries needed for this specific installation & configuration of ffmpeg isolated from your global system, but it's by no means a requirement. I just find it a lot easier to wrangle all these apt packages with Docker instead of trying to manage them globally on the system (where, as you mentioned, messing up the wrong package could have bad side effects and break things).

Let me know if any of this works for you. I am still trying to get AV1 to work (I suspect there is some issue related to https://github.com/intel/cartwheel-ffmpeg/issues/233, but I'm not sure yet), but at the very least this should hopefully get you h264 and h265 encoding on your hardware.

proof that it worked for me;

Quick update: I tried that Dockerfile on my Intel NUC 13 Pro with Xe graphics, and it worked there as well. Screenshots of usage with the included h265 / HEVC script:

Hello @ndyt and welcome to Ubuntu Discourse!

I created a new topic to have these instructions as an alternative in a separate post. You didn't yet have the necessary reading to get permission to do so (you should have received a PM with those instructions). A link to the existing topic is added automatically.

Again, Welcome!

I wrote that dockerfile and helper scripts specifically in response to the issues @jimbobstpaul was having, and did extra testing to try to make sure it works under the conditions he describes. I did not post it to look for help for my own issues. I’m not sure how he’s going to find the posts now that they are separated from his thread. I’m not done debugging the AV1 issues so I don’t think it’s worth making a thread about it yet, which is why I did not do so. But my current progress should be helpful for Jim and anyone else trying to get hardware acceleration from Intel Xe and Arc GPUs to work on ffmpeg.

Thanks, @ndyt, for your extensive note. Unfortunately, so much of it is way over my head that I think it would take me at least a month of it being my primary spare-time activity to figure it out, and I will have very little time to work on such things in the near future. So I don't think you should invest any more effort specifically toward helping me, though there may be others having the same problem whom your notes may help. I'm just letting you know I'm not ungrateful; I just won't be able to take advantage of your input for a long time.

Meanwhile, I have just one question, please: on any given encoding task that you execute, how do you know for sure that it actually did use the graphics hardware (GPU) in the Intel CPU? Maybe you said that somewhere but I don’t see it or don’t understand the significance of the relevant text; can you tell me specifically how to answer that question on my system?

OK, I finally found a solution.

See the details here: Can't get -hwaccel qsv command to work. · Issue #322 · intel/cartwheel-ffmpeg · GitHub

It seems like with this current software stack (Linux kernel, ffmpeg version, Intel AV1 QSV libs, etc.), ffmpeg's av1_qsv is just not working, or buggy, or something; it's not clear to me exactly what the issue is, because I see reports of others who got it to work fine.

Regardless, as per that GitHub issue on the intel/cartwheel-ffmpeg repo, the solution for the time being is to use VA-API instead. I was actually already attempting to use it in my previous commands, but only half-way. Here is the final command that does work, given the software configuration I have described previously:

ffmpeg -v verbose -y \
-init_hw_device vaapi=va:/dev/dri/renderD129 \
-hwaccel vaapi \
-hwaccel_output_format vaapi \
-i input4.mkv -c:v av1_vaapi output.mkv

The key difference here is that I am using av1_vaapi instead of av1_qsv, with the corresponding VA-API hardware initialization args for ffmpeg and the corresponding args for Docker et al. to make the Intel GPU available inside the container where this is all running.

Again, the full script with Docker container and docs is at this repo: ffmpeg-av1/scripts/ffmpeg_av1-qsv.sh at 6607ec30d72e2aa15d6a1c833abe893384c7fb10 · tazzuu/ffmpeg-av1 · GitHub, though I will likely keep changing everything repeatedly to streamline my own usage.

Now that this basic scripted usage is hashed out clearly, it's just a matter of modifying the ffmpeg command to suit individual transcoding video quality needs.

Yes. Here is how I am verifying that the Intel GPU really is doing the encoding, instead of the CPU.


First off, you have the ffmpeg command itself. At the risk of being overwhelming, I think it's extremely helpful to read through the docs about Linux hardware acceleration and how it relates to ffmpeg

and to compare that to the actual commands and cli parameters being used here

# the one that worked for me
ffmpeg -v verbose -y \
-init_hw_device vaapi=va:$INTEL_RENDER \
-hwaccel vaapi \
-hwaccel_output_format vaapi \
-i "$INPUT_FILE"  -c:v av1_vaapi "$OUTPUT_FILE"

# this arg did not work
# -c:v av1_qsv

# however this did work for h265 / HEVC
# I think the vaapi args here might not actually be needed
ffmpeg -y \
-loglevel verbose \
-i "$INPUT_FILE" \
-init_hw_device vaapi=va:$INTEL_RENDER \
-c:v hevc_qsv -c:a copy -c:s copy "$OUTPUT_FILE"

and this is in contrast with SVT-AV1, the AV1 library used for CPU-based software encoding

ffmpeg ... -c:v libsvtav1 ...
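One quick sanity check, if you want to see which of these encoders your ffmpeg build actually shipped with (this is plain stock ffmpeg usage, nothing specific to my setup; guarded so the snippet also runs where ffmpeg is not on PATH):

```shell
#!/bin/bash
# List the relevant encoders compiled into this ffmpeg build; av1_vaapi,
# hevc_qsv, and libsvtav1 should all appear if the build above worked.
if command -v ffmpeg >/dev/null 2>&1; then
    ENCODERS="$(ffmpeg -hide_banner -encoders | grep -E 'av1|qsv|vaapi' || true)"
    printf '%s\n' "$ENCODERS"
else
    ENCODERS=""
    echo "ffmpeg not found on PATH"
fi
```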

So: the ffmpeg plugin libsvtav1 (SVT-AV1) only runs on the CPU in software; hevc_qsv only runs on Intel hardware for h265 / HEVC encoding (and works); av1_vaapi only runs on GPUs for hardware-based encoding (and works); and av1_qsv only runs on Intel GPUs for hardware-based encoding and does not work (yet). From this I conclude that the GPU is indeed being used here, because this software does not function any other way.

It's basically circular logic, or a tautology, or whatever: the fact that the plugin works indicates that it is indeed using the GPU hardware, because I trust the docs for these tools; when they say something works only on CPU or only on GPU, it's doing what it says. This is backed up by the trial and error I went through getting this all set up, which included a lot of debugging to verify that the GPUs themselves were being correctly detected as PCIe devices, both by the host and inside the Docker containers (I have scripts in the repo that use clinfo and vainfo and nvidia-smi to verify this).


That is not the answer you wanted to hear, though, so there are other, simpler methods.

The other way is to look at nvtop. You can get this from apt install nvtop; however, I actually had to use the Snap version instead, since it had updates I needed for compatibility with the Intel GPU

# you might have to futz with snap if you have not used it previously
snap install nvtop

Then, when I run sudo nvtop, I can see the Intel GPU. This is how it looks at idle:

  • the idle power draw is about 1-5W
  • note I need to use sudo here to see the memory usage; nvtop still lacks a lot of functionality for Intel Arc GPUs, so the activity graphs and process monitoring do not actually populate (yet)

When I run the ffmpeg script in a second terminal, I can see the GPU stats change:

  • memory usage and power draw have increased
  • for an Nvidia GPU it would even tell you the exact process that is running on the GPU; unfortunately that is lacking for Intel GPUs in nvtop currently, so you just gotta guess, I guess
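(Side note: intel_gpu_top from the intel-gpu-tools apt package is supposed to show per-engine busyness for Intel GPUs and might fill that gap; I have not folded it into my scripts yet, so treat this as a pointer rather than a tested recipe.)

```shell
#!/bin/bash
# intel_gpu_top (apt package: intel-gpu-tools) reports per-engine busyness
# (Render, Video, etc.) on Intel GPUs. The actual run is interactive and
# needs a real GPU, so it is left commented out here.
if command -v intel_gpu_top >/dev/null 2>&1; then
    TOOL_STATUS="available"
else
    TOOL_STATUS="missing (install with: sudo apt install intel-gpu-tools)"
fi
echo "intel_gpu_top: $TOOL_STATUS"

# sudo intel_gpu_top    # watch the Video engine climb during a transcode
```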

And there is one final method to verify that the GPU is indeed being used. Going back to the circular logic of "the fact that the GPU-based plugin is working means it's using the GPU", you have ffmpeg itself.

When I run the command described above, which uses av1_vaapi, I can see some of this in the console output.

This verifies that ffmpeg is indeed using the av1_vaapi plugin for the encoding, and it shows the encoding running at ~450-500 frames per second (with no quality settings applied).

This is in stark contrast to running with SVT-AV1, which looks more like this

# snippet of the SVT-AV1 command
ffmpeg -y -i "input.mkv" \
-pix_fmt yuv420p10le \
-g 240 -svtav1-params tune=0 -map 0:0 \
-c:v libsvtav1 -crf 25 -preset 4 ...

You can see here that using the CPU, with some quality presets applied, I only get about 15-50 frames per second

This is using a Ryzen 9900X, which is a very capable CPU for software-based encoding; you can see that it has high usage under btop while doing the transcode


so tl;dr:

  • the fact that the ffmpeg plugins for hardware-based encoding are working indicates that the GPU hardware is indeed being used, because these plugins only work on that hardware
  • I can see increased GPU activity in nvtop
  • the performance of the encoding is massively accelerated when using the hardware plugins (450fps vs. 50fps)

Let me know if you have any questions :slight_smile: I hope all this is helpful

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.