The Atos HPCF features 32 special GPIL nodes with GPUs for experimentation and testing of Machine Learning and AI workloads by authorised users. These nodes are present in only one of the complexes (AC), and each is equipped with 4 NVIDIA A100 40GB cards. They can be used in batch through the special "ng" QoS in the SLURM Batch System. Interactive jobs are also possible with ecinteractive -g.
Note: Since the number of GPUs is limited, be mindful of your usage and do not leave your jobs or sessions on GPU nodes idle. Cancel your jobs when you are done so that others can make use of the resources.
Info: The number of requested GPUs will be reserved exclusively, and only those will be visible within the job.
Submitting a batch job
You can run a batch job asking for GPUs using the following SBATCH directives:
```
#SBATCH --qos=ng
#SBATCH --gpus=1
```
You may request more than one GPU in the same job if your workload requires it. All the rest of the SLURM options may be used as well to configure your job to fit your needs.
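Putting the directives together, a minimal GPU batch script might look like the sketch below. The job name, time and memory values, and the train.py script are illustrative placeholders, not part of the documented setup:

```shell
#!/bin/bash
#SBATCH --qos=ng                 # GPU QoS, automatically runs on AC
#SBATCH --gpus=1                 # number of GPUs to reserve
#SBATCH --job-name=ml-train      # placeholder job name
#SBATCH --time=02:00:00          # example wall-clock limit; adjust to your workload
#SBATCH --mem=32G                # example memory request

module load python3/new
python3 train.py                 # hypothetical training script
```

Submit it as usual with sbatch; any other standard SLURM directives can be added in the same way.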
Note: You can submit the job from any Atos HPCF complex, but note that it will be automatically redirected to AC. If you are logged into a different complex, you will not be able to query the state of the job or cancel it with the standard SLURM commands.
Working interactively
You may also open an interactive session on one of the GPU nodes with ecinteractive using the -g option, which will allocate one GPU for your interactive job. All the other options still apply when it comes to requesting other resources such as CPUs, memory or TMPDIR space.
Only one interactive session on a GPU node may be active at any time. This means that if you rerun ecinteractive -g from a different terminal, you will be attached to the same session, using the same resources.
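For illustration, a GPU session combined with other resource requests might look like the sketch below. The option spellings for CPUs, memory and time are assumptions based on ecinteractive's usual usage; run ecinteractive -h to confirm them, and the resource values are arbitrary examples:

```shell
# Request a GPU session with extra CPUs, memory and a longer time limit
# (-c, -m and -t are assumed option names; check `ecinteractive -h`)
ecinteractive -g -c 8 -m 32G -t 12:00:00

# When finished, cancel the session instead of leaving the GPU idle
ecinteractive -k
```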
Tip: You can run a Jupyter Lab on a node with a GPU. More details on JupyterLab with ecinteractive can be found here.
Software stack
Most AI/ML tools and libraries are Python based, so in most cases you can use one of the following methods:
Readily available tools
A number of standard Data Science and AI/ML Python packages such as TensorFlow or PyTorch are available out of the box, as part of the standard Python 3 offering via modules. For best results, use the newest version of the Python3 module:
```
module load python3/new
```
Custom Python Virtual environments
If you need to customise your Python environment, you may create a virtual environment based on the installations provided. This may be useful if you need a newer version of a specific Python package but still want to benefit from the rest of the managed Python environment:
```
module load python3/new
mkdir -p $PERM/venvs
cd $PERM/venvs
python3 -m venv --system-site-packages myvenv
```
Then you can activate it when you need it with:
```
source $PERM/venvs/myvenv/bin/activate
```
And then install any packages you need.
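For example, to pull in a newer version of a single package while keeping the rest of the managed stack (the package name here is just an example):

```shell
source $PERM/venvs/myvenv/bin/activate
# the copy installed in the venv shadows the system-provided one
pip install --upgrade xarray
```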
You can also create a completely standalone environment from scratch, by removing the --system-site-packages
option above.
Conda-based stack
You may create your own conda environments with all the AI/ML tools you need.
For example, to create a conda environment with PyTorch:
```
module load conda/new
conda create -n mymlenv pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
conda activate mymlenv
```
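Once the environment is active on a GPU node, a quick sanity check can confirm that PyTorch sees the allocated GPUs. This is a minimal sketch assuming the PyTorch installation from the conda environment above:

```python
# Sanity check: run inside the activated conda environment on a GPU node.
# Assumes PyTorch is importable from the environment created above.
import torch

print(torch.version.cuda)          # CUDA version PyTorch was built against
print(torch.cuda.is_available())   # True when a GPU is allocated to the job
print(torch.cuda.device_count())   # should match the number of GPUs requested
```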
Monitoring your GPU usage
You may use the following commands to monitor the usage of the GPUs you have access to. If you want to do it interactively, you may open a new shell on the node running your job and run the corresponding monitoring tool. You can get the name of the node running your job with squeue.
If running an ecinteractive job, just call ecinteractive
from another terminal to get a shell on the relevant node.
nvidia-smi
nvidia-smi
provides monitoring and management capabilities for the GPUs from the command line and will give you instantaneous information about your GPUs.
```
$ nvidia-smi
Wed Mar  8 14:39:45 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 520.61.05    Driver Version: 520.61.05    CUDA Version: 11.8     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA A100-SXM...  On   | 00000000:84:00.0 Off |                    0 |
| N/A   38C    P0    59W / 400W |      0MiB / 40960MiB |      0%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
```
This command has a number of advanced command options. If you want to log the usage of the GPUs by your processes in a batch job you could use the following strategy:
```
nvidia-smi pmon -o DT -d 5 --filename gpu_usage.log &
monitor_pid=$!

# ... your GPU workload goes here ...

kill $monitor_pid
```
In this example, nvidia-smi will log the processes using the GPU and their resource usage to gpu_usage.log every 5 seconds, adding the date and time to each line for better tracking.
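Once the job has finished, a short script can summarise the resulting log, for example averaging GPU utilisation per process. This is a sketch assuming the usual pmon column layout with -o DT (Date, Time, gpu, pid, type, sm, mem, enc, dec, command); check the '#' header lines of your own log before relying on the column positions:

```python
# Summarize a gpu_usage.log produced by:
#   nvidia-smi pmon -o DT -d 5 --filename gpu_usage.log
# Assumed columns: Date Time gpu pid type sm mem enc dec command
from collections import defaultdict

# Small sample in the assumed pmon format, for illustration
SAMPLE_LOG = """\
#Date       Time        gpu        pid  type    sm   mem   enc   dec   command
#YYYYMMDD   HH:MM:SS    Idx          #   C/G     %     %     %     %   name
 20230308   14:40:05      0      12345     C    85    40     0     0   python
 20230308   14:40:10      0      12345     C    91    42     0     0   python
 20230308   14:40:15      0      12345     C     -     -     -     -   -
"""

def summarize(log_text):
    """Return {command: (num_samples, avg_sm_percent)} from pmon log text."""
    sm_by_cmd = defaultdict(list)
    for line in log_text.splitlines():
        if not line.strip() or line.lstrip().startswith("#"):
            continue  # skip blank and header lines
        fields = line.split()
        # expect 10 fields; '-' in the sm column means no activity sampled
        if len(fields) < 10 or fields[5] == "-":
            continue
        sm_by_cmd[fields[9]].append(float(fields[5]))
    return {cmd: (len(v), sum(v) / len(v)) for cmd, v in sm_by_cmd.items()}

if __name__ == "__main__":
    for cmd, (n, avg) in summarize(SAMPLE_LOG).items():
        print(f"{cmd}: {n} samples, avg GPU utilisation {avg:.1f}%")
```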
See man nvidia-smi for more information.
nvtop
Nvtop stands for Neat Videocard TOP, an (h)top-like task monitor for GPUs. It can handle multiple GPUs and prints information about them in a familiar, htop-like way. It is useful if you want to monitor GPU usage interactively and watch its evolution live. To make the command available, run:
```
module load nvtop
```
See man nvtop
for all the options.