RTX 2080 TI GPU Acceleration for Deep Learning Toolbox?
Hi,
We have one laptop and one workstation in our lab. The laptop has an NVIDIA Quadro P600 and the workstation has an NVIDIA RTX 2080 Ti GPU. I know that the MATLAB R2018b Deep Learning Toolbox uses single-precision operations on the GPU by default. Single-precision throughput is 11,750.40 GFLOPS for the RTX 2080 Ti and 1,195 GFLOPS for the Quadro P600 (as defined here). Yet for the same program, training on the Quadro P600 is about 4x faster than on the RTX 2080 Ti in the Deep Learning Toolbox. The RTX 2080 Ti should be much, much faster than the Quadro P600, so I'm really confused. How can I accelerate training on the RTX 2080 Ti? Which MATLAB settings do I need to change?
Could you give me some advice, please?
Best Regards...
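One quick way to rule out a device-selection or driver problem is to check which GPU MATLAB actually picks and time a raw single-precision operation on it. This is a minimal sketch using the standard Parallel Computing Toolbox functions gpuDevice and gputimeit; the 4096x4096 matrix size is just an illustrative choice:

```matlab
% Confirm which GPU MATLAB selects (should report the RTX 2080 Ti
% on the workstation, not an integrated or secondary device)
g = gpuDevice
gpuDeviceCount        % number of CUDA devices MATLAB can see

% Rough single-precision throughput check: time a large matrix multiply
A = gpuArray.rand(4096, 'single');
t = gputimeit(@() A*A);
fprintf('4096x4096 single-precision matmul: %.4f s\n', t);
```

If this raw benchmark is much faster on the RTX 2080 Ti than on the Quadro P600, the hardware is fine and the slowdown is elsewhere (e.g. data loading or settings), which profiling should reveal.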
8 Comments
This is interesting (I also have an RTX 2080 Ti!). Have you checked with profile to see whether something else is going on? Also make sure that MATLAB recognizes your GPU (gpuDevice), and if so, set the ExecutionEnvironment option of trainingOptions to 'gpu'.
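To make that concrete, here is a minimal sketch of forcing GPU execution through trainingOptions. It assumes you already have a layer array `layers` and a training datastore `imdsTrain`; the solver and MiniBatchSize value are illustrative, not a recommendation:

```matlab
% Force GPU training instead of relying on 'auto' selection
opts = trainingOptions('sgdm', ...
    'ExecutionEnvironment', 'gpu', ...  % run on the selected GPU
    'MiniBatchSize', 256, ...           % larger batches keep a big GPU busy
    'Verbose', true);

net = trainNetwork(imdsTrain, layers, opts);
```

A too-small mini-batch is a common reason a large GPU shows little speedup: the kernels finish so quickly that per-iteration overhead and data loading dominate.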
ahmet emir
on 2 Jan 2021
Edited: ahmet emir
on 2 Jan 2021
Oops, I misspelled it, it's profile. Put it around your code block like this:
profile on
% your training snippet
profile viewer
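One caveat when timing GPU code yourself: GPU calls in MATLAB are asynchronous, so tic/toc around them can be misleading unless you synchronize first. A small sketch using the documented wait function:

```matlab
% GPU calls return before the work finishes; wait for the device
% before stopping the timer, or the measurement is too optimistic
d = gpuDevice;
tic
% ... your GPU work here ...
wait(d)   % block until all queued GPU operations complete
toc
```

gputimeit handles this synchronization automatically, which is why it is preferred for benchmarking individual operations.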
Answers (0)