MATLAB Answers

How to set dimension of input data for LSTM?

Ather Abbas on 5 Jun 2020
Edited: Ather Abbas on 11 Jun 2020
I come from a TensorFlow background and want to use MATLAB for time-series prediction problems because my colleagues are using MATLAB. I know that in TensorFlow, the input to an LSTM for each batch has the dimensions (batch_size, lookback, input_features). The term lookback is taken from Francois Chollet's book; similar terms such as sequence length or num steps are also used. It represents how long a sequence is fed to the LSTM to predict the next value. I do not see an option to set this lookback in the function lstmLayer. My question is: how can we set this sequence length/lookback? Is it possible to view the data being fed to the LSTM at each step, or for each batch?
My colleague has split the input data into train and test sets by randomly choosing points from the input data as test points and keeping the remainder as training points. I have difficulty understanding how this works, because removing certain points from the input data will break the "sequence" nature of the data. The code to split the input data into train and test is as follows:
```MATLAB
[data] = xlsread('example.xlsx','Sheet1');
X = (data(:,1:3))';
Y = (data(:,4))';
input = num2cell(X,1);
output = num2cell(Y,1);
data_size = 1:size(input,2);
%% Separation of training data and validation data
% validation ratio: 0.3
ratio = 0.3;
tst_num = floor(size(input,2)*ratio);
% randomly separate
idx_te = randperm(size(input,2), tst_num);
idx_te2 = sort(idx_te);
idx_tr = setdiff(data_size, idx_te2);
% training/test
XTrain = input(:,idx_tr);
YTrain = output(:,idx_tr);
```
The file "example.xlsx" contains input and output data in different columns.
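For readers comparing the two frameworks: TensorFlow feeds the LSTM a numeric tensor of shape (batch_size, lookback, input_features), whereas MATLAB's trainNetwork takes a cell array in which each cell is one sequence stored as a numFeatures-by-numTimeSteps matrix. A small sketch (all sizes are illustrative):

```MATLAB
% Sketch: how sequence data is laid out for trainNetwork (sizes illustrative).
numFeatures = 3;  numTimeSteps = 100;
seq = rand(numFeatures, numTimeSteps);   % one sequence: features x time steps

% One long sequence -> a 1-by-1 cell array:
XOneSeq = {seq};

% Several independent sequences, possibly of different lengths:
XManySeq = {rand(3,100); rand(3,80); rand(3,120)};
```

Note that `num2cell(X,1)` in the snippet above turns every column (i.e. every time step) into its own cell, so the network sees many length-1 sequences rather than one long sequence.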


Answers (1)

Asvin Kumar on 10 Jun 2020
For the first part of your question, on the number of steps in an LSTM, I am going to redirect you to an earlier answer of mine. Essentially, the LSTM layer unrolls to fit the entire length of the sequence.
Have a look at the Japanese Vowel Classification example. It has multiple illustrative cases.
  1. It covers the case where you have multiple sequences of varying length that share a common feature size. JVC: Load Sequence Data
  2. You also have control over the SequenceLength in each mini-batch. JVC: Prepare Data for Padding
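As a sketch of the second point (the option values here are placeholders, not a recommendation), the padding/truncation behaviour of each mini-batch is controlled through trainingOptions:

```MATLAB
% Sketch: control how sequences in each mini-batch are padded or truncated.
% 'SequenceLength' can be 'longest' (default), 'shortest', or a positive
% integer; with an integer, long sequences are split into chunks of that size.
options = trainingOptions('adam', ...
    'MiniBatchSize', 27, ...                 % placeholder value
    'SequenceLength', 'longest', ...         % or 'shortest', or e.g. 58
    'SequencePaddingDirection', 'left', ...  % pad at the start of sequences
    'Shuffle', 'never');
```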
If these don’t meet your requirements, you can always construct your dataset manually in a forward moving-window fashion for the required number of steps and associate each window with a prediction. This would come under the data-preparation step.
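A minimal sketch of that manual moving-window preparation (the data, lookback value, and variable names are all illustrative): each window of lookback past steps becomes one training sequence, paired with the next value as its target.

```MATLAB
% Sketch: build lookback-step input windows with one-step-ahead targets.
% X and y here are illustrative stand-ins for your own data.
numSteps = 200;  numFeatures = 3;
X = rand(numSteps, numFeatures);     % rows = time steps
y = rand(numSteps, 1);               % target series
lookback = 10;                       % illustrative window length
numObs = numSteps - lookback;
XWin = cell(numObs, 1);
YWin = zeros(numObs, 1);
for i = 1:numObs
    % Each cell: numFeatures-by-lookback, as sequenceInputLayer expects.
    XWin{i} = X(i:i+lookback-1, :)';
    YWin(i) = y(i + lookback);       % the value right after the window
end
```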
For the second part of the question, it is hard for me to comment on the approach to splitting the dataset since I am not aware of the problem setup. If the data along the second dimension of the ‘input’ variable belongs to the same time sequence, I would agree with you that randomly removing data points / time steps would interfere with the temporal information.
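If the rows do form one time series, a chronological hold-out is the usual alternative to random sampling. A sketch (the 'input'/'output' cell arrays mimic the question's variables, and the 0.3 ratio is carried over from it):

```MATLAB
% Sketch: chronological hold-out instead of random sampling
% ('input'/'output' mimic the cell arrays from the question; sizes illustrative).
input  = num2cell(rand(3,300), 1);   % 300 time steps, 3 features each
output = num2cell(rand(1,300), 1);
ratio  = 0.3;                        % same hold-out ratio as the question
n      = size(input, 2);
nTest  = floor(n * ratio);
idx_tr = 1:(n - nTest);              % earliest 70% for training
idx_te = (n - nTest + 1):n;          % most recent 30% for testing
XTrain = input(idx_tr);   YTrain = output(idx_tr);
XTest  = input(idx_te);   YTest  = output(idx_te);
```

This keeps every test point strictly later in time than every training point.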

  1 Comment

Ather Abbas on 11 Jun 2020
Thank you for your answer. My problem relates to pollutant prediction in surface water: the input data are environmental variables affecting the pollutant, measured at different time steps, and the output is the measurement of the target pollutant at the same time steps. I have written a running example as follows:
```MATLAB
clc; clear;
n = 300;
a = linspace(1, 10, n)';
b = rand(1,n)';
c = rand(1,n)';
% a hypothetical function. In reality y is a very complex function of a, b and c
y = sin(a) .* c + cos(b);
Y = y';
% a, b, c are input variables affecting y.
% Both input and output are measured at specific time steps.
X = [a b c];
X_min = min(X); X_max = max(X);
Y_min = min(Y); Y_max = max(Y);
Normal_Input = zeros(size(X));   % preallocate
for j = 1:size(X,2)
    Normal_Input(:,j) = (X(:,j)-X_min(j))./(X_max(j)-X_min(j))*2;
end
input = num2cell(Normal_Input',1);
output = num2cell(Y,1);
%% Separation of training data and validation data
% validation ratio: 0.3
ratio = 0.3;
tst_num = floor(size(input,2)*ratio);
tr_ind = 1:size(input,2);
val_ind = randperm(size(input,2),tst_num);
tr_ind(val_ind) = [];
%% training/test
XTot = input;
YTot = output;
XTrain = input(:,tr_ind);
YTrain = output(tr_ind);
XTest = input(:,val_ind);
YTest = output(val_ind);
date_value = datetime(2013,01,01,1,0,0) + hours(0:n-1)';
%% Define network structure
numResponses = size(YTrain{1},1);
featureDimension = size(XTrain{1},1);
numHiddenUnits = 128;
layers = [ ...
    sequenceInputLayer(featureDimension)
    lstmLayer(numHiddenUnits,'OutputMode','sequence')
    fullyConnectedLayer(featureDimension)
    dropoutLayer(0.3)
    fullyConnectedLayer(numResponses)
    regressionLayer];
maxepochs = 300;
options = trainingOptions('adam', ...
    'MaxEpochs',maxepochs, ...
    'MiniBatchSize',4, ...
    'GradientThreshold',1, ...
    'InitialLearnRate',1e-2, ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropPeriod',125, ...
    'LearnRateDropFactor',1, ...
    'Verbose',0, ...
    'Plots','training-progress');
%% Training
net = trainNetwork(XTrain,YTrain,layers,options);
YPred = predict(net,XTrain);
YPred_n = cell2mat(YPred(:));
% test
net = resetState(net);
YPred_t = predict(net,XTest);
YPred_tn = cell2mat(YPred_t(:));
%% Plotting
obs = Y';
sim = YPred_n;
sim_t = YPred_tn;
figure
plot(date_value,obs,'k-o','LineWidth',1)
hold on
plot(date_value(tr_ind),sim,'ro','LineWidth',0.9)
% plot(date_value(1:length(sim)),sim,'r--','LineWidth',0.9)
plot(date_value(val_ind),sim_t,'bo','LineWidth',0.9)
legend(["Observation" "Training" "Test"],'FontName','Times New Roman','FontSize',12)
ylabel('ARGs (copies/mL)','FontName','Times New Roman','FontSize',14)
datetick('x','yyyy-mm-dd','keeplimits','keepticks')
title("Prediction of ARGs",'FontName','Times New Roman','FontSize',16)
hold off
```
The output of the above code after 300 epochs is in the attached file 'test_me.png'.
My question is: if this method of randomly selecting points from the data as training and test is flawed, why do I still get reasonably good results?



Release

R2019b
