Train a Network on Amazon Web Services Using MATLAB Deep Learning Container

This example shows how to train a deep learning model in the cloud using MATLAB® on an Amazon EC2® instance.

This workflow helps you speed up your deep learning applications by training neural networks in the MATLAB Deep Learning Container in the cloud. Using MATLAB in the cloud lets you choose machines that take full advantage of high-performance NVIDIA® GPUs. You can access the MATLAB Deep Learning Container remotely through a web browser or a VNC connection, and then run the MATLAB desktop in the cloud on a GPU-enabled Amazon EC2 instance to benefit from the available computing resources.

To start training a deep learning model on Amazon Web Services (AWS®) using MATLAB Deep Learning Container, you must:

  • Check the requirements to use the MATLAB Deep Learning Container

  • Prepare your AWS account

  • Launch the Docker host instance

  • Pull and run the container

  • Run MATLAB in the container

  • Test the container using the deep learning example, "MNISTExample.mlx", included in the default folder of the container

For step-by-step instructions for this workflow, see MATLAB Deep Learning Container on NVIDIA GPU Cloud for Amazon Web Services.

To learn more and see screenshots of the same workflow, see the blog post https://blogs.mathworks.com/deep-learning/2021/05/03/ai-with-matlab-ngc/.

Semantic Segmentation in the Cloud

To show the compute capability available in the cloud, results are shown for a semantic segmentation network trained using the MATLAB Deep Learning Container cloud workflow. On AWS, the training was verified on a GPU-enabled p3.2xlarge EC2 instance with an NVIDIA Tesla® V100 SXM2 GPU with 16 GB of GPU memory, and took around 70 minutes to meet the validation criterion, as shown in the Training Progress plot. To learn more about the semantic segmentation network example, see Semantic Segmentation Using Deep Learning.

Note: To train the semantic segmentation network using the Live Script example, change doTraining to true.
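As a sketch, the relevant settings look like the following. The option values here are illustrative assumptions, not the example's exact settings; the variable names follow the Semantic Segmentation Using Deep Learning example.

```matlab
% Illustrative sketch only: the specific values below are assumptions.
% doTraining gates whether the Live Script example trains the network.
doTraining = true;

% Single-GPU training options; "gpu" uses the instance's NVIDIA GPU.
options = trainingOptions("sgdm", ...
    ExecutionEnvironment="gpu", ...
    InitialLearnRate=1e-3, ...       % assumed base learning rate
    MiniBatchSize=4, ...             % assumed base mini-batch size
    MaxEpochs=30, ...                % assumed epoch budget
    Plots="training-progress");      % shows the Training Progress plot
```

With doTraining set to true, the example trains the network on the instance's GPU instead of loading a pretrained network.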

Semantic Segmentation in the Cloud With Multiple GPUs

Train the network on a machine with multiple GPUs to improve performance.

When training with multiple GPUs, each image batch is distributed between the GPUs. This effectively increases the total GPU memory available, allowing larger batch sizes. A recommended practice is to scale up the mini-batch size linearly with the number of GPUs, in order to keep the workload on each GPU constant. Because increasing the mini-batch size improves the significance of each iteration, also increase the initial learning rate by an equivalent factor.

For example, to run this training on a machine with 4 GPUs:

  1. In the semantic segmentation example, set the ExecutionEnvironment to multi-gpu in the training options.

  2. Increase the mini-batch size by a factor of 4, to match the number of GPUs.

  3. Increase the initial learning rate by a factor of 4, to match the number of GPUs.
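Applied to trainingOptions, the three adjustments above might look like this. The base values are assumptions carried over from the single-GPU setup, not the example's exact settings.

```matlab
% Illustrative sketch: base values are assumptions. Scale the mini-batch
% size and initial learning rate by the number of GPUs.
numGPUs = 4;
baseMiniBatchSize = 4;               % assumed single-GPU mini-batch size
baseLearnRate = 1e-3;                % assumed single-GPU learning rate

options = trainingOptions("sgdm", ...
    ExecutionEnvironment="multi-gpu", ...          % step 1
    MiniBatchSize=numGPUs*baseMiniBatchSize, ...   % step 2: 4 -> 16
    InitialLearnRate=numGPUs*baseLearnRate, ...    % step 3: 1e-3 -> 4e-3
    MaxEpochs=20, ...
    Plots="training-progress");
```

With ExecutionEnvironment set to "multi-gpu", MATLAB distributes each mini-batch across the local GPUs, so each GPU processes the same per-device workload as in the single-GPU run.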

The following Training Progress plot shows the improvement in performance when using multiple GPUs. The results show the semantic segmentation network trained on 4 NVIDIA Titan Xp GPUs with 12 GB of GPU memory. The example used the multi-gpu training option with the mini-batch size and initial learning rate scaled by a factor of 4. This network trained for 20 epochs in around 20 minutes.

As shown in the following plot, using 4 GPUs and adjusting the training options as described above trains the network to approximately the same validation accuracy about 3.5x faster.

Related Topics