Mask R-CNN error: DataSource

mohd akmal masud on 6 May 2023
Commented: mohd akmal masud on 30 Aug 2023
Dear All,
I want to apply the Mask R-CNN deep learning method for object detection (the full data set is attached), but I got this error:
Dot indexing is not supported for variables of this type.
Error in MASKRCNN_PC_PHONE (line 50)
boxmaskpc= table(gTruthnouveau.DataSource.Source(1:9,1), gTruthnouveau.LabelData.ordinateur(1:9,1),...
Below is my code:
%% ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
% __________________________________________MaskRCNN_Voiture implemented with a mask head______________
% ______________________________________________Runs without displaying the masks and boxes_____________________________
% ________________________________________________Even after creating the logical images________________________________
%% Load data set ******************************************************************
clc
clear all
close all
imds1= imageDatastore('BDD','IncludeSubfolders',true,'LabelSource','Foldernames');
%% Convert to logical images
% OutputFolder4= fullfile('PixelLabelData_12');
% imdsboxmask= imageDatastore(fullfile(OutputFolder4),'LabelSource','Foldernames');
% n = numel(imdsboxmask.Files);
% for p = 1:n
% pp=imread(imdsboxmask.Files{p});
% pp=logical(pp);
% imshow(pp);
% imsave;
% end
% %% this block writes each binary slice (from imshow3D) to a single DICOM image
% % make sure outt22(:,:,ii) = imopen(outt2,st2); imshow(outt22(:,:,ii)) has
% % been run above first
% for k = 1:18
% metadata = dicominfo('1.3.46.670589.5.2.10.2.4.46.678.1.2428.1625227336625.dcm');
% dicomwrite(b(:,:,k),sprintf('I-13125610N1256%03d.dcm',k), metadata, 'CreateMode', 'copy');
% % dicomwrite(P(:,:,k),sprintf('I-13125610N125666666%03d.dcm',k));
% % imwrite(BW1(:,:,k), sprintf(['bwaftersegmentationpngI13125610N1' ...
% % '%03d.png'], k)); %allBW is from gradientweight imshow3D
% end
%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
OutputFolder1= fullfile('logique_masks');
logique_image= imageDatastore(fullfile(OutputFolder1),'LabelSource','Foldernames');
%% For files "Masks"
I1 = readimage(imds1,1);
imshow(I1)
classNames1 = {'PC', 'PHONE'};
pixelLabelID1 = [1 2];
load gTruthnouveau.mat;
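% Note: gTruthnouveau must be a groundTruth object whose DataSource is a
% groundTruthDataSource; otherwise the dot indexing in the table() calls
% below fails with "Dot indexing is not supported for variables of this type".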
%% _____________________Create a Table of Data___________________________________
% Create Box Label and Masks from gTruthbox
% boxmaskpc= table(gTruthnouveau.DataSource.Source, gTruthnouveau.LabelData.ordinateur,...
% 'VariableNames',{'files','box'});
% boxmaskphone= table(gTruthnouveau.DataSource.Source, gTruthnouveau.LabelData.portable,...
% 'VariableNames',{'files','box'});
boxmaskpc= table(gTruthnouveau.DataSource.Source(1:9,1), gTruthnouveau.LabelData.ordinateur(1:9,1),...
'VariableNames',{'files','box'});
boxmaskphone= table(gTruthnouveau.DataSource.Source(10:18,1), gTruthnouveau.LabelData.portable(10:18,1),...
'VariableNames',{'files','box'});
z=[boxmaskpc;boxmaskphone];
bldsmask = boxLabelDatastore(z(:,'box'));
% pxdsboxmask = pixelLabelDatastore(logique_image.Files,classNames1,pixelLabelID1);
pxdsboxmask = pixelLabelDatastore(gTruthnouveau.LabelData.PixelLabelData,classNames1,pixelLabelID1);
databoxmask=combine(imds1,bldsmask);
databoxmask=combine(databoxmask,pxdsboxmask);
%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%Create Faster R-CNN Detection Network%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
imageSize = [300 300 3];
numClasses= 2;
classNames1=[classNames1 {'background'}];
%% configuration parameters
params = createMaskRCNNConfig(imageSize, numClasses, classNames1);
disp(params);
%% $$$$$$$$$$$$$$$$$$$$ Create the maskRCNN network$$$$$$$$$$$$$$$$$$$$$$
%
dlnet = createMaskRCNN(numClasses, params);
%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
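% Clamp the stored batch-normalization variances to a small positive floor
% so that zero (or negative) variances do not break training numerically.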
isVariance = strcmp(dlnet.State.Parameter, "TrainedVariance");
dlnet.State.Value(isVariance) = cellfun(@(x) max(x, 1e-10), dlnet.State.Value(isVariance), 'UniformOutput', false);
%%
% augmenter = imageDataAugmenter('RandXReflection',true,...
% 'RandXTranslation',[-10 10],'RandYTranslation',[-10 10]);
% pximds = pixelLabelImageDatastore(imdsTrain,pxds1Train, ...
% 'DataAugmentation',augmenter);
%% Extract Mask segmentation sub-network
executionEnvironment = "cpu";
% SGDM learning parameters
initialLearnRate = 0.01;
momentum = 0.9;
decay = 0.0001;
velocity = [];
maxEpochs = 5;
minibatchSize = 2;
% Configure the data dispatcher
% Create the batching function. The images are concatenated along the 4th
% dimension to get an HxWxCxminiBatchSize batch. The other ground truth
% data is configured as a cell array of length minibatchSize.
trainDS = transform(databoxmask, @(x)helper.preprocessData(x, imageSize));
myMiniBatchFcn = @(img, boxes, labels, masks) deal(cat(4, img{:}), boxes, labels, masks);
mb = minibatchqueue(trainDS, 4, "MiniBatchFormat", ["SSCB", "", "", ""],...
"MiniBatchSize", minibatchSize,...
"OutputCast", ["single","","",""],...
"OutputAsDlArray", [true, false, false, false],...
"MiniBatchFcn", myMiniBatchFcn,...
"OutputEnvironment", [executionEnvironment,"cpu","cpu","cpu"]);
doTraining= true;
numEpoch = 1;
numIteration = 1;
start = tic;
if doTraining
% Create subplots for the learning rate and mini-batch loss.
fig = figure;
[lossPlotter] = helper.configureTrainingProgressPlotter(fig);
% Initialize verbose output
helper.initializeVerboseOutput([]);
% Custom training loop.
while numEpoch <= maxEpochs
mb.reset();
mb.shuffle();
while mb.hasdata()
% get next batch from minibatchqueue
[X, gtBox, gtClass, gtMask] = mb.next();
% Evaluate the model gradients and loss using dlfeval
[gradients, loss, state] = dlfeval(@networkGradients, X, gtBox, gtClass, gtMask, dlnet, params);
dlnet.State = state;
% compute the learning rate for current iteration
learnRate = initialLearnRate/(1 + decay*numIteration);
if(~isempty(gradients) && ~isempty(loss))
[dlnet.Learnables, velocity] = sgdmupdate(dlnet.Learnables, gradients, velocity, learnRate, momentum);
else
continue;
end
helper.displayVerboseOutputEveryEpoch(start,learnRate,numEpoch,numIteration,loss);
% Plot loss/ accuracy metric
D = duration(0,0,toc(start),'Format','hh:mm:ss');
addpoints(lossPlotter,numIteration,double(gather(extractdata(loss))))
subplot(2,1,2)
title("Epoch: " + numEpoch + ", Elapsed: " + string(D))
drawnow
numIteration = numIteration + 1;
end
numEpoch = numEpoch + 1;
end
end
net = dlnet;
maskSubnet = helper.extractMaskNetwork(net);
modelDateTime = string(datetime('now','Format',"yyyy-MM-dd-HH-mm-ss"));
save(strcat("trainedMaskRCNN-",modelDateTime,"-Epoch-",num2str(numEpoch),".mat"),'net');
% Read the image for inference
img = imread('teste_1.jpg');
% Define the target size of the image for inference
targetSize = [300 300 3];
% Resize the image maintaining the aspect ratio and scaling the largest
% dimension to the target size.
% imgSize = size(img);
% [~, maxDim] = max(imgSize);
% resizeSize = [NaN NaN];
% resizeSize(maxDim) = targetSize(maxDim);
% img = imresize(img, resizeSize);
if size(img,1) > size(img,2)
img = imresize(img,[targetSize(1) NaN]);
else
img = imresize(img,[NaN targetSize(2)]);
end
% detect the objects and their masks
[boxes, scores, labels, masks] = detectMaskRCNN3(net, maskSubnet, img, params, executionEnvironment);
if(isempty(masks))
overlayedImage = img;
else
overlayedImage = insertObjectMask(img,masks);
end
figure;
imshow(overlayedImage);
showShape("rectangle",gather(boxes),"Label",labels,"LineColor",'r')
function layer = roiAlignLayer(outputSize, NameValueArgs)
% roiAlignLayer Non-quantized ROI pooling layer for Mask R-CNN.
%
% layer = roiAlignLayer(outputSize) creates an ROI align layer.
% outputSize is a 2-element vector, [height width], defining the
% pooled output size.
%
% The ROI align layer outputs fixed size feature maps for every
% rectangular ROI within the input feature map. The fixed sized outputs
% are computed by pooling the ROI into fixed size bins without quantizing
% the grid points. Each bin is sampled at SamplingRatio locations. The
% value at each sampled point is inferred using bilinear interpolation.
% The sampled values in each bin is averaged. Given an input feature
% map of size [H W C N], where C is the number of channels and N is the
% number of observations, the output feature map size is [height width C
% sum(M)], where M is a vector of length N and M(i) is the number of ROIs
% associated with the i-th input feature map. This layer is used in
% Mask-RCNN instance segmentation networks.
%
% [...] = roiAlignLayer(..., Name, Value) specifies additional
% name-value pair arguments described below:
%
% 'Name' A name for the layer.
%
% Default: ''
%
% 'ROIScale' Ratio of scale of input feature map to that of the
% ROI coordinates. This specifies the factor used
% to scale input ROIs to the input feature map size.
% This value is a scalar.
%
% Default: 1.0
%
% 'SamplingRatio' Number of samples in each pooled bin along height
% and width expressed as a 1x2 vector. The
% interpolated values of these points are used to
% determine the output value of each pooled bin.
%
% Default: 'auto'. This calculates adaptive number
% of samples as ceil(roiWidth/outputWidth) along the
% X axis and similarly for samples along Y axis.
%
% An ROI align layer has the following inputs:
%
% 'in' - Input feature map.
% 'roi' - A list of ROIs to pool.
%
% The ROIs are formatted as 5xM, where M is the number of ROIs and each
% ROI is formatted as [x1 y1 x2 y2 batchIdx]. Here, (x1,y1) and (x2,y2)
% are the coordinates of the top-left and bottom-right corners of the ROI. The
% ROIs are expected either as pixel coordinates or as normalized
% coordinates.
% For a default value of 'ROIScale', the rois must be in the feature
% coordinate space.
% Use <a href="matlab:help nnet.cnn.LayerGraph/connectLayers">connectLayers</a> to connect other layers to these inputs.
%
% Example: Create an ROI Align layer
% ----------------------------------
% % Specify output grid size.
% outputSize = [7 7];
%
% % Create ROI Align layer.
% roiPool = roiAlignLayer(outputSize,'Name','roiAlign')
%
% See also trainFastRCNNObjectDetector, trainFasterRCNNObjectDetector,
% roiMaxPooling2dLayer, nnet.cnn.layer.ROIAlignLayer.
% Copyright 2020 The MathWorks, Inc.
arguments
outputSize (1,2) double {nnet.cnn.layer.ROIAlignLayer.validateOutputSize(outputSize,'roiAlignLayer.m','OutputSize')}
NameValueArgs.Name char {nnet.internal.cnn.layer.paramvalidation.validateLayerName} = ''
NameValueArgs.ROIScale (1,1) double {nnet.cnn.layer.ROIAlignLayer.validateROIScale(NameValueArgs.ROIScale,'roiAlignLayer.m','ROIScale')} = 1
NameValueArgs.SamplingRatio {nnet.cnn.layer.ROIAlignLayer.validateSamplingRatio(NameValueArgs.SamplingRatio,'roiAlignLayer.m','SamplingRatio')} = 'auto'
end
if(isstring(NameValueArgs.SamplingRatio))
NameValueArgs.SamplingRatio = validatestring(NameValueArgs.SamplingRatio,{'auto'}, mfilename, 'SamplingRatio');
end
% Create an internal representation of the layer.
internalLayer = nnet.internal.cnn.layer.ROIAlignLayer(...
NameValueArgs.Name, outputSize, NameValueArgs.ROIScale, NameValueArgs.SamplingRatio);
% Pass the internal layer to a function to construct a user visible layer.
layer = nnet.cnn.layer.ROIAlignLayer(internalLayer);
end
function layers = createMaskHead(numClasses, params)
if(params.ClassAgnosticMasks)
numMaskClasses = 1;
else
numMaskClasses = numClasses;
end
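% The head upsamples the 14x14 ROI-aligned features to 28x28 with a
% transposed convolution; a 1x1 convolution followed by a sigmoid then
% produces per-pixel mask probabilities for each mask class.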
tconv1 = transposedConv2dLayer(2, 256,'Stride',2, 'Name', 'mask_tConv1' );
conv1 = convolution2dLayer(1, numMaskClasses, 'Name', 'mask_Conv1','Padding','same' );
sig1 = sigmoidLayer('Name', 'mask_sigmoid1');
layers = [tconv1 conv1 sig1];
end
function params = createMaskRCNNConfig(imageSize, numClasses, classNames1)
% createMaskRCNNConfig creates the maskRCNN training and detection
% configuration parameters
% Copyright 2020 The MathWorks, Inc.
% Network parameters
params.ImageSize = imageSize;
params.NumClasses = numClasses;
params.ClassNames = classNames1;
params.BackgroundClass = 'background';
params.ROIAlignOutputSize = [14 14]; % ROIAlign outputSize
params.MaskOutputSize = [14 14];
params.ScaleFactor = [0.0625 0.0625]; % Feature size to image size ratio
params.ClassAgnosticMasks = true;
% Target generation params
params.PositiveOverlapRange = [0.6 1.0];
params.NegativeOverlapRange = [0.1 0.6];
% Region Proposal network params
params.AnchorBoxes = [[32 16];
[64 32];
[128 64];
[256 128];
[32 32];
[64 64];
[128 128];
[256 256];
[16 32];
[32 64];
[64 128];
[128 256]];
params.NumAnchors = size(params.AnchorBoxes,1);
params.NumRegionsToSample = 200;
% NMS threshold
params.OverlapThreshold = 0.7;
params.MinScore = 0;
params.NumStrongestRegionsBeforeProposalNMS = 3000;
params.NumStrongestRegions = 1000;
params.BoxFilterFcn = @(a,b,c,d)fasterRCNNObjectDetector.filterBBoxesBySize(a,b,c,d);
params.RPNClassNames = {'Foreground', 'Background'};
params.RPNBoxStd = [1 1 1 1];
params.RPNBoxMean = [0 0 0 0];
params.RandomSelector = vision.internal.rcnn.RandomSelector();
params.StandardizeRegressionTargets = false;
params.MiniBatchPadWithNegatives = true;
params.ProposalsOutsideImage = 'clip';
params.BBoxRegressionNormalization = 'valid';
params.RPNROIPerImage = params.NumRegionsToSample;
params.CategoricalLookup = reshape(categorical([1 2 3],[1 2],params.RPNClassNames),[],1);
% Detection params
params.DetectionsOnBorder = 'clip';
params.Threshold = 0.5;
params.SelectStrongest = true;
params.MinSize = [1 1];
params.MaxSize = [inf inf];
end
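As a side note on the configuration above: rather than hard-coding params.AnchorBoxes, the anchors can be estimated from the training boxes. Below is a minimal sketch using estimateAnchorBoxes from Computer Vision Toolbox, run separately before training (bldsmask is the boxLabelDatastore created above; keeping numAnchors = 12 to match the hard-coded list is an assumption):
% Estimate anchor boxes by clustering the ground-truth boxes.
numAnchors = 12;                                    % assumption: keep 12 anchors
[anchors, meanIoU] = estimateAnchorBoxes(bldsmask, numAnchors)
Note that the boxes in bldsmask are in original image coordinates, before the resize to imageSize, so the estimated anchors may need rescaling to the 300x300 training size.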

Answers (1)

Sanjana on 23 Aug 2023
Hi mohd,
I understand that you are facing an issue accessing the “groundTruth” object. From the data provided, the “DataSource” of the “groundTruth” object was not created properly, so the dot indexing into gTruthnouveau.DataSource.Source fails.
As per the documentation, a “DataSource” can be created in the following way; you can then create a “groundTruth” object from the “DataSource”, “LabelDefinitions”, and “LabelData”.
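To verify this first, here is a minimal diagnostic sketch (it assumes the MAT-file contains a variable named gTruthnouveau, as in your script):
% Load into a struct so the stored variable names are explicit.
S = load('gTruthnouveau.mat');
disp(fieldnames(S))                        % list what the MAT-file actually contains
if isa(S.gTruthnouveau,'groundTruth')
    class(S.gTruthnouveau.DataSource)      % should be 'groundTruthDataSource'
end
If the DataSource does not come back as a groundTruthDataSource, recreate it: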
%Creating the dataSource
dataSource = groundTruthDataSource(gTruthnouveau.DataSource)
%Creating the groundTruth object
groundTruthObject = groundTruth(dataSource, gTruthnouveau.LabelDefinitions, gTruthnouveau.LabelData)
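If the error instead says the image files cannot be found (for example because the data set was moved to a different folder), the groundTruth method changeFilePaths can remap the paths stored in the object. A hedged sketch follows; both folder names are placeholders for illustration:
% Remap stale absolute paths stored in the groundTruth object.
alternativePaths = {["C:\old\location\BDD_ESSAI", "C:\new\location\BDD_ESSAI"]};
unresolved = changeFilePaths(gTruthnouveau, alternativePaths)  % lists any paths it could not remap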
Please refer to the groundTruthDataSource and groundTruth documentation for further information.
Hope this helps!
Regards,
Sanjana.
  1 Comment
mohd akmal masud on 30 Aug 2023
Hi @Sanjana, I got this error:
%Creating the dataSource
>> dataSource = groundTruthDataSource(gTruthnouveau.DataSource);
Error using groundTruthDataSource/parseAndPopulateInputs
Cannot find files or folders matching:
'C:\Users\NGSI\Desktop\FasterRCNN_and_MaskRCNN_PC\BDD_ESSAI\02.jpg'.
Error in groundTruthDataSource (line 194)
parseAndPopulateInputs(this, varargin{:});
>> groundTruthObject = groundTruth(dataSource, gTruthnouveau.LabelDefintions ,gTruthnoveau.LabelData);
Unrecognized function or variable 'dataSource'.

