read inconsistent ascii file to matrix
I'd like to obtain maximum performance when reading a file containing both numeric and non-numeric lines. The files typically look like this:
% comment
text 1.49
1.52 -5.3 8.9710
3.629 -5.77 9
another text and numbers
% comment again
1 2 3
and so on
The file can easily contain 1 million lines.
I would like to obtain two cell arrays:
- One that contains all rows that match %f %f %f, i.e. a numeric triplet, already parsed as doubles. Invalid lines should show up as empty entries or NaN.
- Another that contains all rows that did not match cell array 1, still as a cellstr, preferably with trimmed whitespace.
Obtaining cell array 2 is fairly simple once you have cell array 1: run textscan and set all rows that did not match 1 to empty. However, I struggle with obtaining cell array 1, because textscan stops reading once it encounters an invalid line.
In a working example I used sscanf and parsed everything line by line. This took about 15 s for 1 million lines. Since textscan can read the whole file in less than a second, I am confident that there is room for improvement...
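Roughly, the line-by-line version I used looks like the sketch below (the file name and variable names are only placeholders for illustration):
fid = fopen('data.txt', 'r');            % 'data.txt' is a placeholder name
numRows = {};                            % parsed [x y z] triplets
txtRows = {};                            % lines that did not parse as three numbers
while true
    tline = fgetl(fid);
    if ~ischar(tline), break; end        % fgetl returns -1 at end of file
    [vals, cnt] = sscanf(tline, '%f %f %f');
    if cnt == 3
        numRows{end+1} = vals.';         % store as a 1x3 row
    else
        txtRows{end+1} = strtrim(tline); % keep the trimmed text line
    end
end
fclose(fid);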
Accepted Answer
Jan on 27 Mar 2019 (edited 9 Apr 2019)
Data = fileread(FileName);        % read the whole file into one char vector
C = strsplit(Data, char(10));     % split into lines at LF characters
% [EDITED] Remove comments:
C(strncmp(C, '%', 1)) = [];
match = true(size(C));            % true = text line, set to false for numeric triplets
NumC = cell(size(C));             % parsed triplets are collected here
for iC = 1:numel(C)
    % [EDITED2] Small shortcut: only call sscanf if the line starts like a number
    aC = C{iC};
    if ~isempty(aC) && any(aC(1) == '1234567890-.')
        [Num, n] = sscanf(aC, '%g %g %g');
        if n == 3                 % exactly three numbers found
            NumC{iC} = Num;       % 3x1 column vector
            match(iC) = false;
        end
    end
end
TextC = C(match);                 % all lines that were not numeric triplets
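If you also need the triplets as one N-by-3 matrix instead of a cell array, something like this (untested) should work with the variables above:
NumMat = cat(2, NumC{~match}).';  % each cell holds a 3x1 vector from sscanf, so this yields N-by-3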
Is this your current version using a loop? How long does it take?
5 Comments
Jan on 10 Apr 2019
@Tom: textscan is fast for valid inputs, and I'd expect fscanf to be even faster there. But as soon as the input cannot be caught by a simple format specifier, the processing gets much slower.
Some C code would be faster still, but it is very tedious to write. It has to read the file line by line, and you need a buffer large enough to hold the longest line. Unfortunately you do not know that length in advance, and the same holds for the number of outputs. Re-allocating the output array dynamically is a mess in C. So maybe the code runs some seconds faster, but you need a lot of hours for writing and testing. Therefore I like MATLAB.
More Answers (1)
Guillaume on 27 Mar 2019 (edited 27 Mar 2019)
Unfortunately, there is no "ignore invalid lines" option for textscan, so you're going to have to parse the file line by line, or implement the parsing in a mex file.
The following takes about 10s on my machine for a million lines. It's probably similar to what you've done already:
function [num, text] = parsefile(path)
    % Split the whole file into lines, then try to parse each line as numbers
    lines = strsplit(fileread(path), '\n');
    num = cellfun(@(l) sscanf(l, '%f %f %f')', lines, 'UniformOutput', false);
    % Lines where sscanf found nothing are the text lines
    text = lines(cellfun(@isempty, num)); % could use cellfun('isempty', num) for a marginal speed gain
end
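If you'd rather have the numeric rows as a plain N-by-3 matrix, excluding lines that did not contain exactly three numbers, you could post-process the output, e.g. (untested sketch):
triplets = num(cellfun(@numel, num) == 3); % keep only rows that parsed as exactly three numbers
nummat = vertcat(triplets{:});             % N-by-3 double matrix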