Anbox Cloud supports managing GPUs and can provide them to individual containers for rendering and video encoding.
If no GPU is available, Anbox Cloud automatically falls back to software rendering and video encoding. This makes it possible to run entirely without a GPU.
Anbox Cloud allows access to GPUs from Intel, AMD and Nvidia inside the Anbox container. Support for an individual GPU depends on the platform being used for Anbox. The included
webrtc platform currently supports the following GPUs:
|Vendor|Model|Render|Hardware Video Encode|
|------|-----|------|---------------------|
For GPUs for which Anbox Cloud doesn't support hardware video encoding, a software-based video encoding fallback is available.
Enable Support for GPUs in Anbox Cloud
Anbox Cloud automatically detects GPU devices on deployment and configures the cluster for them. You can't mix GPUs from different vendors in a single deployment.
Configure and Use Available GPU Slots
GPUs have limited capacity but can be shared between containers, so AMS provides functionality to restrict their usage. To allow a set of containers to use one or multiple GPUs, AMS manages GPU slots as a scheduling primitive. The container scheduler inside AMS uses GPU slots to decide on which LXD node a container should be placed, which allows granting only a limited number of containers access to the available GPUs.
Each LXD node has a number of GPU slots configured. You can see the number of currently configured GPU slots for a node with the following command:
$ amc node show lxd0
name: lxd0
status: online
disk:
  size: 8GB
network:
  address: 10.188.62.13
  bridge-mtu: 1500
config:
  public-address: 10.188.62.13
  cpus: 4
  cpu-allocation-rate: 4
  memory: 3GB
  memory-allocation-rate: 2
  gpu-slots: 0
In this case, the node lxd0 has zero GPU slots. You can change the number of GPU slots of each node with the following command:
$ amc node set lxd0 gpu-slots 10
This gives the node lxd0 10 GPU slots.
The gpu-slots value declared in the resources field takes precedence over the one implied by instance-type when both are defined in the application manifest file. Hence, a container launched from the following android application will use 3 GPU slots:
name: android
instance-type: g4.3
resources:
  gpu-slots: 3
Containers can be configured to use a hardware or software video encoder. This can be done through the video-encoder field declared in the manifest file when creating an application. See Managing applications for more details.
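For illustration, a manifest that requests software video encoding might look like the following sketch. Treat the field value as an assumption: the exact set of accepted values for video-encoder (for example, software or gpu) depends on your Anbox Cloud version, so check the Managing applications reference for your release.

```yaml
# Hypothetical manifest sketch; the video-encoder value is an assumption.
name: my-application
instance-type: a4.3
video-encoder: software
```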
If all GPU slots are used by existing containers no more containers requiring a GPU can be launched. Containers not requiring a GPU can still be launched.
Determine the Number of GPU Slots
The correct number of GPU slots for a specific GPU model depends on several factors. The following should drive the decision:
- The amount of memory the GPU provides
- The amount of memory each container uses
- The number of parallel encoding pipelines the GPU offers
Finding the right number of GPU slots requires benchmarking and testing of the intended workload.
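As a rough back-of-envelope sketch of how these factors combine, the slot count is bounded by whichever resource runs out first. All numbers below are illustrative assumptions, not specifications of any real GPU; replace them with values measured while benchmarking your workload.

```shell
#!/bin/sh
# Illustrative estimate only: every value here is an assumption.
GPU_MEMORY_MB=16384        # memory the GPU provides
CONTAINER_MEMORY_MB=1024   # memory a single container is expected to use
ENCODER_PIPELINES=12       # parallel encoding pipelines the GPU offers

# Slots are limited by whichever resource is exhausted first.
BY_MEMORY=$((GPU_MEMORY_MB / CONTAINER_MEMORY_MB))
if [ "$BY_MEMORY" -lt "$ENCODER_PIPELINES" ]; then
  GPU_SLOTS=$BY_MEMORY
else
  GPU_SLOTS=$ENCODER_PIPELINES
fi
echo "$GPU_SLOTS"
```

With these example numbers, memory would allow 16 containers but the encoder pipelines cap the result at 12, so the node would be configured with 12 GPU slots.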
Using GPUs inside a Container
AMS configures a LXD container to pass through a GPU device from the host. Currently, all GPUs available on a machine are passed to every container owning a GPU slot. For Nvidia GPUs, LXD uses the Nvidia container runtime to make the host's GPU driver available to the container. When GPUs from Intel or AMD are used, no GPU driver is made available automatically; it has to be provided by an addon.
If a GPU driver is available inside the container, using it is no different from using it in a regular environment.
If you want to let an application use the GPU but are not interested in streaming its visual output, you can simply launch a container with the webrtc platform. The platform automatically detects the underlying GPU and makes use of it.
$ amc launch -p webrtc my-application
Force Software Rendering and Video Encoding
Note: Software rendering and video encoding utilize the CPU. This means you can run fewer containers on a system than you can with a GPU.
It is possible to force a container to run with software rendering. For that, simply launch a container with:
$ amc launch -p swrast my-application
This starts the container with the swrast platform, which forces software-based rendering.
If you want to force an application to use software rendering and video encoding when streaming via the Anbox Stream Gateway, you can simply set an instance type which doesn't require a GPU slot. For example:
$ amc application set my-app instance-type a4.3