Determined provides three main methods to take advantage of multiple GPUs:
Parallelism across experiments. Schedule multiple experiments at once: more than one experiment can proceed in parallel if there are enough GPUs available.
Parallelism within an experiment. Schedule multiple trials of an experiment at once: a hyperparameter search may train more than one trial at once, each of which will use its own GPUs.
Parallelism within a trial. Use multiple GPUs to speed up the training of a single trial (distributed training). Determined can coordinate across multiple GPUs on a single machine or across multiple GPUs on multiple machines to improve the performance of training a single trial.
This guide will focus on the third approach, demonstrating how to perform distributed training with Determined to speed up the training of a single trial.
In the Experiment Configuration, the slots_per_trial option controls the number of GPUs that will be used to train a single trial. The default value is 1, which disables distributed training. Setting slots_per_trial to a larger value enables multi-GPU training automatically. Note that these GPUs might be on a single machine or across multiple machines; the experiment configuration simply defines how many GPUs should be used for training, and the Determined job scheduler decides whether to schedule the task on a single agent or multiple agents, depending on the machines in the cluster and the other active workloads.
When the slots_per_trial option is changed, the per-slot batch size is set to global_batch_size // slots_per_trial. The per-slot (per-GPU) and global batch sizes should be accessed via the trial context using context.get_per_slot_batch_size() and context.get_global_batch_size(), respectively. If global_batch_size is not evenly divisible by slots_per_trial, the remainder is dropped.
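For example, the following is a minimal sketch, assuming Determined's PyTorchTrial API; the trial class and the dataset helper are hypothetical placeholders used only to illustrate where the per-slot batch size is consumed:

    from determined.pytorch import DataLoader, PyTorchTrial, PyTorchTrialContext

    class MyTrial(PyTorchTrial):
        def __init__(self, context: PyTorchTrialContext) -> None:
            self.context = context

        def build_training_data_loader(self) -> DataLoader:
            # With slots_per_trial > 1, each slot (GPU) trains on batches of size
            # global_batch_size // slots_per_trial, so the data loader is built
            # with the per-slot batch size rather than the global one.
            per_slot_batch_size = self.context.get_per_slot_batch_size()
            return DataLoader(
                build_training_dataset(),  # hypothetical dataset helper
                batch_size=per_slot_batch_size,
            )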
Example configuration with distributed training:
    resources:
      slots_per_trial: N
To use distributed training,
slots_per_trial must be set to a multiple
of the number of GPUs per machine. For example, if you have a cluster of machines with
8 GPUs each, you should set
slots_per_trial to a multiple of 8, such as 8, 16, or 24.
This ensures that the full network and interconnect bandwidth are available to
multi-GPU workloads and results in better performance.
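For instance, a sketch of a configuration for a cluster with 8 GPUs per machine; the batch size value here is hypothetical and chosen only to illustrate the per-slot arithmetic:

    hyperparameters:
      global_batch_size: 512   # each of the 16 slots receives 512 // 16 = 32 samples per batch
    resources:
      slots_per_trial: 16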
Distributed training is designed to maximize performance by training with all the resources of a machine. This can lead to situations where an experiment is created but never starts running on the cluster, for example if the number of GPUs requested is not a multiple of the number of GPUs per machine. Similarly, if a task is running on a multi-GPU machine and using one or more of its GPUs, that will prevent a distributed training job from starting on that machine.
If a multi-GPU experiment does not become active after a minute or so, please confirm that slots_per_trial is a multiple of the number of GPUs available on a machine. You can also use the CLI command det task list to check whether any other tasks are using GPUs and preventing your experiment from using all the GPUs on a machine.