GPUs can be shared among multiple Docker containers by using the NVIDIA Container Toolkit.
First, install the NVIDIA Container Toolkit on your host machine, following the instructions in the official documentation.
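On a Debian/Ubuntu host, the installation typically looks like the following (a sketch based on the official instructions; this assumes NVIDIA's apt repository is already configured for your distribution):

```shell
# Install the toolkit package
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# Register the nvidia runtime with Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```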
Once installed, create a docker-compose file for your containers. In it, set the runtime to nvidia; you can also map GPU device nodes explicitly (for example /dev/nvidia0:/dev/nvidia0), although the toolkit normally injects the required devices for you. This enables the container to use the GPU.
For each container, set the environment variable NVIDIA_VISIBLE_DEVICES (either in the docker-compose file or in the container's Dockerfile) to the comma-separated list of GPU indices that the container needs access to.
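The toolkit interprets this variable itself, but its raw value is just a comma-separated list of indices, with the documented special value "all" meaning every GPU. A minimal sketch of how such a value could be parsed (the helper name is hypothetical, for illustration only):

```python
import os

def visible_gpu_indices(default="all"):
    # Hypothetical helper: parse NVIDIA_VISIBLE_DEVICES into a list of GPU indices.
    # The toolkit treats "all" as "expose every GPU" and "none" as "expose none".
    value = os.environ.get("NVIDIA_VISIBLE_DEVICES", default)
    if value in ("all", "none", ""):
        return value or "none"
    return [int(i) for i in value.split(",")]

os.environ["NVIDIA_VISIBLE_DEVICES"] = "0,1"
print(visible_gpu_indices())  # → [0, 1]
```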
You can then start the containers with docker-compose up, and each container will see only the GPUs it was assigned.
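In practice, starting the services and checking the GPU assignment might look like this (the service names match the compose example below; `nvidia-smi` must be available in the images):

```shell
# Start both services in the background
docker-compose up -d

# Verify that each container sees only its assigned GPUs
docker-compose exec gpu-container-1 nvidia-smi
docker-compose exec gpu-container-2 nvidia-smi
```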
Example docker-compose file:
version: '3'
services:
  gpu-container-1:
    build: ./gpu-container-1
    runtime: nvidia
    devices:
      - /dev/nvidia0:/dev/nvidia0
      - /dev/nvidia1:/dev/nvidia1
    environment:
      - NVIDIA_VISIBLE_DEVICES=0,1
  gpu-container-2:
    build: ./gpu-container-2
    runtime: nvidia
    devices:
      - /dev/nvidia2:/dev/nvidia2
    environment:
      - NVIDIA_VISIBLE_DEVICES=2
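The Dockerfiles referenced by the build: entries can stay minimal. A hypothetical sketch for gpu-container-1 (the CUDA base-image tag is an assumption; pick one matching your host driver's CUDA version):

```dockerfile
# Hypothetical Dockerfile for gpu-container-1
FROM nvidia/cuda:12.2.0-base-ubuntu22.04

# NVIDIA_VISIBLE_DEVICES can be baked in here instead of in docker-compose;
# the compose environment entry takes precedence if both are set
ENV NVIDIA_VISIBLE_DEVICES=0,1

CMD ["nvidia-smi"]
```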