
gpucoder.atomicSub

Atomically subtract a specified value from a variable in global or shared memory

Since R2021b

    Description

    [A,oldA] = gpucoder.atomicSub(A,B) subtracts B from the value of A in global or shared memory and writes the result back into A. The output oldA is the value that A held before the subtraction. The operation is atomic in the sense that the entire read-modify-write operation is guaranteed to be performed without interference from other threads. The order of the input and output arguments must match the syntax provided.
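
    For example, a minimal entry-point sketch that keeps the original value of each element might look like the following (the function name myAtomicSubOld is illustrative and not part of this reference page):

    function [a,old] = myAtomicSubOld(a,b)
    
    coder.gpu.kernelfun;
    old = zeros(size(a),'like',a);   % storage for the values of a before the subtraction
    for i = 1:numel(a)
        [a(i),old(i)] = gpucoder.atomicSub(a(i),b);   % old(i) is the value a(i) held before b was subtracted
    end
    
    end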


    Examples


    Perform a simple atomic subtraction operation by using the gpucoder.atomicSub function and generate CUDA® code that calls the corresponding CUDA atomicSub() API.

    In one file, write an entry-point function myAtomicSub that accepts matrix inputs a and b.

    function a = myAtomicSub(a,b)
    
    coder.gpu.kernelfun;   % map the loop below to a CUDA kernel
    for i = 1:numel(a)
        [a(i),~] = gpucoder.atomicSub(a(i),b);   % the original value of a(i) is discarded
    end
    
    end
    

    To create types for the input arguments for use in code generation, use the coder.newtype function. Here, A is a variable-size 1-by-:30 row vector of int32 values and B is an int32 scalar.

    A = coder.newtype('int32', [1 30], [0 1]);
    B = coder.newtype('int32', [1 1], [0 0]);
    inputArgs = {A,B};
    

    To generate a CUDA library, use the codegen function.

    cfg = coder.gpuConfig('lib');
    cfg.GenerateReport = true;
    
    codegen -config cfg -args inputArgs myAtomicSub -d myAtomicSub

    The generated CUDA code contains the myAtomicSub_kernel1 kernel, which calls the CUDA atomicSub() API.

    //
    // File: myAtomicSub.cu
    //
    ...
    
    static __global__ __launch_bounds__(1024, 1) void myAtomicSub_kernel1(
        const int32_T b, const int32_T i, int32_T a_data[])
    {
      uint64_T loopEnd;
      uint64_T threadId;
    ...
    
      loopEnd = static_cast<uint64_T>(i - 1);
      for (uint64_T idx{threadId}; idx <= loopEnd; idx += threadStride) {
        int32_T b_i;
        b_i = static_cast<int32_T>(idx);
        atomicSub(&a_data[b_i], b);
      }
    }
    ...
    
    void myAtomicSub(int32_T a_data[], int32_T a_size[2], int32_T b)
    {
      dim3 block;
      dim3 grid;
    ...
    
        cudaMemcpy(gpu_a_data, a_data, a_size[1] * sizeof(int32_T),
                   cudaMemcpyHostToDevice);
        myAtomicSub_kernel1<<<grid, block>>>(b, i, gpu_a_data);
        cudaMemcpy(a_data, gpu_a_data, a_size[1] * sizeof(int32_T),
                   cudaMemcpyDeviceToHost);
    ...
    
    }
    ...
    
    }
    

    Input Arguments


    A, B — Operands, specified as scalars, vectors, matrices, or multidimensional arrays. Inputs A and B must satisfy the following requirements:

    • Have the same data type.

    • Have the same size or have sizes that are compatible. For example, A can be an M-by-N matrix and B a scalar or a 1-by-N row vector (see the sketch below).

    Data Types: int32 | uint32
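
    Based on this compatibility rule, an entry-point function can pass whole arrays in a single call. The following sketch assumes an M-by-N int32 matrix A and a 1-by-N int32 row vector B; the function name subtractRows is illustrative and not part of this reference page.

    function A = subtractRows(A,B)
    
    coder.gpu.kernelfun;
    % A is M-by-N and B is 1-by-N; the sizes are compatible, so B is
    % subtracted atomically from every row of A.
    [A,~] = gpucoder.atomicSub(A,B);
    
    end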

    Limitations

    • Function handle inputs to gpucoder.stencilKernel cannot contain calls to atomic functions. For example, the following call is not supported:

      out1 = gpucoder.stencilKernel(@myAtomicSub,A,[3 3],'same',B);
      

    Version History

    Introduced in R2021b