Can I use my GPU to speed up my multiobjective optimization with gamultiobj?

I am running a multiobjective optimization program using gamultiobj with parallel computing (CPU cores). Can I use a GPU to speed up this process?

Accepted Answer

Walter Roberson
Walter Roberson on 30 Sep 2024
To use a GPU inside parallel computations, you would need one distinct GPU for each parallel worker.
It is not possible for parallel workers to "share" a single GPU.
Probably your best bet is to set UseVectorized and not set UseParallel. Then have your evaluation routine transfer the block of input data to the GPU, work with it there, and gather() the result back from the GPU.
It is not possible to keep the population on GPU; you need to transfer to GPU, work with it, transfer back.
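The vectorized approach described above might be sketched as follows. The two objective functions here are hypothetical placeholders for your own computation, and `nVars`, `lb`, and `ub` stand in for your problem's dimensions and bounds:

```matlab
% Solver setup: vectorized evaluation on the whole population, no UseParallel.
opts = optimoptions('gamultiobj', 'UseVectorized', true);
[x, fval] = gamultiobj(@gpuFitness, nVars, [], [], [], [], lb, ub, opts);

% Vectorized fitness function. With UseVectorized set, gamultiobj passes the
% entire population as an nPop-by-nVars matrix, so the GPU transfer cost is
% paid once per generation instead of once per individual.
function f = gpuFitness(x)
    xg = gpuArray(x);              % transfer the population block to the GPU
    f1 = sum(xg.^2, 2);            % objective 1 (placeholder), computed on GPU
    f2 = sum((xg - 1).^2, 2);      % objective 2 (placeholder), computed on GPU
    f  = gather([f1, f2]);         % bring results back as a normal array
end
```

The transfer-compute-gather pattern only pays off when the per-generation GPU work is large enough to outweigh the two transfers.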
  2 Comments
Shiv
Shiv on 1 Oct 2024
Thanks Walter.
I can't use vectorized input anyway, due to the complex objective function definition, and gpuArray is used for only a minor part of that calculation, so there would not be much benefit. I had read that GPU computations can't be shared across parallel workers, but I was hoping to be wrong and to find some other way to use the GPU to speed up my optimization.
I am using parallel programming with some gpuArray calculations in my code. However, the set of functions that work with gpuArray is very limited, so the analysis time is still too high. Can an ODE solver like ode45 (or a modified version of it) be used on gpuArrays?
Walter Roberson
Walter Roberson on 4 Oct 2024
ode45() has the same limitation: the input and output must be normal arrays.
If you have auxiliary variables, they can be passed in as gpuArrays through the parameters of your ODE function.
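A minimal sketch of that pattern, assuming a hypothetical large coefficient matrix `A` as the auxiliary data: the state vector stays a normal array, while the heavy matrix work happens on the GPU inside each derivative evaluation.

```matlab
% ode45's state vector y and outputs must be normal (CPU) arrays, but
% auxiliary data captured by the function handle can live on the GPU.
A = gpuArray(rand(1000));                  % auxiliary data, resident on GPU

% Transfer the state in, do the large multiply on the GPU, gather the result.
odefun = @(t, y) gather(A * gpuArray(y));

y0 = ones(1000, 1);                        % initial condition: normal array
[t, y] = ode45(odefun, [0 1], y0);         % t and y are normal arrays
```

Because the transfer of `y` happens at every derivative evaluation, this only helps when the GPU computation per step is substantial.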



Release

R2023b
