Deployment with Docker
Using Docker provides a consistent, isolated environment for running nvtop, ensuring all dependencies are correctly managed. This is particularly useful on systems where installing the necessary libraries might be complex.
Prerequisites
- Docker installed.
- For NVIDIA GPUs, the NVIDIA Container Toolkit must be installed to allow containers to access the GPU hardware.
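On Debian or Ubuntu hosts, installing and enabling the toolkit typically looks like the following sketch. It assumes NVIDIA's container toolkit apt repository has already been added; package names and steps may differ on other distributions.
sudo apt-get install -y nvidia-container-toolkit
# Register the NVIDIA runtime with Docker, then restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker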
Building the Docker Image
The repository includes a Dockerfile for building an nvtop image.
- Clone the Repository:
  git clone https://github.com/Syllo/nvtop.git
  cd nvtop
- Build the Image:
  sudo docker build --tag nvtop .
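If you want to confirm the build succeeded, you can list the resulting image (the nvtop repository name matches the --tag passed above):
sudo docker image ls nvtop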
Customizing the Base Image
The Dockerfile allows you to specify a different base image using the IMAGE build argument. This is useful if you need to align with a specific CUDA or OS version.
docker build . -t nvtop --build-arg IMAGE=nvcr.io/nvidia/cudagl:11.4.2-base-ubuntu20.04
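This works because a build argument declared before the FROM instruction can parameterize the base image. The lines below are a minimal sketch of that pattern, not the repository's actual Dockerfile; its default base image and build steps may differ.
# Sketch only: an ARG declared before FROM can be overridden with --build-arg IMAGE=...
ARG IMAGE=ubuntu:22.04
FROM ${IMAGE}
# ... install build dependencies, compile nvtop, and set it as the entrypoint ...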
Running the Container
To run nvtop inside the container and monitor the host's GPUs and processes, use the following command:
sudo docker run -it --rm --gpus=all --pid=host nvtop
Understanding the Command Flags
- -it: Runs the container in interactive mode with a pseudo-TTY, allowing you to interact with nvtop.
- --rm: Automatically removes the container when it exits, keeping your system clean.
- --gpus=all: Exposes all available host GPUs to the container. This requires the NVIDIA Container Toolkit.
- --pid=host: This is a crucial flag. It shares the host's PID namespace with the container. Without it, nvtop would only see processes running inside the container (which would be just nvtop itself) and not the GPU processes running on the host machine.
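If you only need to monitor a subset of GPUs, Docker also accepts a device list in place of all. For example, to expose only the first GPU (the device index 0 here is illustrative):
sudo docker run -it --rm --gpus '"device=0"' --pid=host nvtop
For convenience, you can wrap the full command in a shell alias so that typing nvtop on the host launches the containerized version:
alias nvtop='sudo docker run -it --rm --gpus=all --pid=host nvtop'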