How to estimate the GPU memory needed when training a detector network like Faster R-CNN?
jingxue chen
on 9 Jul 2020
Commented: Mahesh Taparia
on 17 Jul 2020
I have a GPU with only 6 GB of total memory. When training a Faster R-CNN detector, even though I have set the input size to 224*224*3, the MiniBatchSize can only be set to 2. If I set MiniBatchSize to 4, 8, or larger, I get "out of memory on device" errors and the data no longer exists on the device. Now I want to buy a new GPU that lets me train on 1920*1080 images with MiniBatchSize set to 64 or 128, but I don't know how to compute the memory and other parameters the GPU would need. How can I decide which GPU to choose?
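As a rough illustration of why this is hard to satisfy with hardware alone: activation memory during training scales roughly linearly with the mini-batch size and with the number of input pixels. This linear scaling is only a heuristic assumption (actual usage depends on the network architecture, gradients, and the convolution algorithms cuDNN picks), but it gives a useful order-of-magnitude sketch:

```matlab
% Back-of-envelope estimate (heuristic assumption: memory grows roughly
% linearly with batch size and input pixel count).
memUsed   = 6;            % GB, approximately filled by the current setup
curPixels = 224*224;      % current input resolution
newPixels = 1920*1080;    % target input resolution
curBatch  = 2;            % current MiniBatchSize
newBatch  = 64;           % target MiniBatchSize

scale = (newPixels/curPixels) * (newBatch/curBatch);
estGB = memUsed * scale;
fprintf('Rough memory estimate: %.0f GB\n', estGB);
% The estimate comes out in the thousands of GB, which is why large
% images are normally resized before training rather than handled by
% buying a bigger GPU.
```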
0 Comments
Accepted Answer
Mahesh Taparia
on 14 Jul 2020
Hi
I think a 6 GB GPU is enough for your code. Check whether the code is running on the CPU or the GPU. If you are using the trainingOptions and trainNetwork functions for training, set 'ExecutionEnvironment' to 'gpu'; you can refer to the trainingOptions documentation for that. For a custom training loop, you need to convert the arrays to gpuArray. For more information, refer to the gpuArray documentation.
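For the custom-training-loop case mentioned above, a minimal sketch of moving data onto the GPU with gpuArray (the array sizes here are illustrative assumptions, not from the question):

```matlab
% Create an example mini-batch on the CPU (sizes are hypothetical).
X = rand(224,224,3,2,'single');

% Transfer it to GPU memory; subsequent operations on X run on the GPU.
X = gpuArray(X);
disp(class(X));   % shows the array is now a gpuArray

% When results are needed back on the CPU, use gather.
Xcpu = gather(X);
```

This requires Parallel Computing Toolbox and a supported GPU; gather is only needed when you want the data back in host memory.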
2 Comments
Mahesh Taparia
on 17 Jul 2020
Hi
If you are using 1920*1080 (HD) images, try reducing their size before starting the training; larger images require more memory. Also, set 'ExecutionEnvironment' to 'gpu' in the training options in your code, i.e.
options = trainingOptions('sgdm',...
    'MaxEpochs',4,...
    'MiniBatchSize',2,...
    'InitialLearnRate',1e-3,...
    'CheckpointPath',tempdir,...
    'ValidationData',validationData,...
    'ExecutionEnvironment','gpu');
and check whether the problem is resolved.
More Answers (0)