How to represent gray scale images as affine subspaces?

M
M on 23 Oct 2023
Edited: M on 11 Dec 2023
How to represent gray scale images as affine subspaces?

Answers (4)

Image Analyst
Image Analyst on 23 Oct 2023
I don't know what you mean. What's the context? What do you mean by "model"? What do you mean by "affine subspaces"? Do you just want to warp or spatially transform the image?
If you have any more questions, then attach your data and code to read it in with the paperclip icon after you read this:
  1 Comment
M
M on 23 Oct 2023
Edited: M on 23 Oct 2023
@Image Analyst @Matt J In the attached paper they represented the images as affine subspaces. I am asking generally whether there is a popular method/code for representing an image in an affine subspace.
My data is huge; attached is a sample.



Matt J
Matt J on 23 Oct 2023
Edited: Matt J on 23 Oct 2023
One way, I suppose, would be to train an affine neural network with the Deep Learning Toolbox, e.g.,
layers=[imageInputLayer([120,160,1])
        convolution2dLayer([120,160],N)   % one whole-image filter per output, i.e. an affine map
        regressionLayer];
XTrain=images;
YTrain=zeros(1,1,N,size(XTrain,4));
net=trainNetwork(XTrain, YTrain, layers, ....)   % training options elided
but you would need a truly huge number of images and good regularization for the training to be well-posed. You should probably look at augmentedImageDatastore.
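A minimal sketch of what that last step might look like (my addition; it assumes XTrain is a 120x160x1-by-M array, YTrain has been reshaped into an M-by-N response matrix, and the training options shown are placeholders):
YTrainMat = squeeze(YTrain)';                                  % M-by-N response matrix
ds   = augmentedImageDatastore([120 160], XTrain, YTrainMat);  % batches (and optionally augments) the images
opts = trainingOptions('adam','MaxEpochs',10,'MiniBatchSize',64);
net  = trainNetwork(ds, layers, opts);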
  34 Comments
M
M on 23 Nov 2023
Hi @Matt J, I have a question please. Why did you decide here to use regressionLayer in your network? And what is the advantage of using this layer? Thanks
Matt J
Matt J on 23 Nov 2023
As opposed to what? What else might we have used?



Matt J
Matt J on 27 Oct 2023
Edited: Matt J on 27 Oct 2023
Well, in general, we can write the estimation of A, b as the norm minimization problem
   minimize over A, b:   || X*A.' + ones(size(X,1),1)*b.' ||_F ,
with a normalization constraint on [A, b] to exclude the trivial zero solution. If X can fit in RAM, you could just use svd() to solve it:
N=14;
X=images(:,:)';                              % one vectorized image per row
vn=vecnorm(X,inf,1);                         % per-column scale factors (pre-normalization)
[~,~,V]=svd([X./vn, ones(height(X),1)] , 0);
Abt=V(:,end+1-N:end)./[vn';1];               % trailing singular vectors; undo the column scaling
A=Abt(1:end-1,:)';
b=Abt(end,:)';
s=vecnorm(A,2,2);
[A,b]=deal(A./s, b./s);                      % rescale so each row of A has unit 2-norm
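As a usage sketch (my addition, not part of the answer above): a vectorized image x lies on the fitted affine subspace when A*x + b = 0, so the residual norm for a new image gives a rough measure of how close it is to that subspace. The file name below is hypothetical.
xnew  = double(imread('testImage.png'));   % hypothetical grayscale test image, same size as the training images
r     = A*xnew(:) + b;                     % residuals of the N affine constraints
score = norm(r);                           % small score => xnew lies close to the fitted subspace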
  43 Comments
Torsten
Torsten on 5 Dec 2023
Edited: Torsten on 5 Dec 2023
So you say you have 3 classes for your images that are known right from the beginning.
Say you determine, for each of the 49000 images, the affine subspace that best represents it, and you compute the mutual distances by SVD between these 49000 affine subspaces (which would give a 49000x49000 matrix). Now say you cluster your images into 3 (similarity) clusters derived from this distance matrix.
The question you should ask yourself is: would these three clusters resemble the 3 classes that you think the images belong to right at the beginning?
If the answer is no, and if you consider the 3 classes from the beginning as fixed, then the distance measure via SVD is not adequate for your application.
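A minimal sketch of that check (my addition; it assumes D is a symmetric n-by-n matrix of pairwise subspace distances with a zero diagonal and trueLabels holds the 3 known class labels; for n = 49000 you would need something more memory-friendly):
Z   = linkage(squareform(D,'tovector'), 'average');  % hierarchical clustering on the pairwise distances
idx = cluster(Z, 'maxclust', 3);                     % cut the tree into 3 clusters
crosstab(idx, trueLabels)                            % compare clusters with the known classes (up to relabeling)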
M
M on 6 Dec 2023
Edited: M on 6 Dec 2023
@Matt J unfortunately the pre-normalization and increasing N didn't improve the performance.
I am sure the problem is in the indicator (the norm); other indicators, such as the Grassmann kernel and Grassmann distance, may give better results, because that has been shown in the literature. But I am still looking into how to apply them.



Matt J
Matt J on 5 Dec 2023
Edited: Matt J on 5 Dec 2023
So how can I get the directions and origins so I can compute the Grassmann distance?
Here is another variant that gives a Basis/Origin description of the subspace.
N=100; %estimated upper bound on subspace dimension
X=reshape(XTrain,[], size(XTrain,4)); %<---- no transpose: one vectorized image per column
mu=mean(X,2);
X=X-mu;
[Q,~,~]=qr(X , 0);
Basis=Q(:,1:N); %basis direction vectors (orthonormal columns)
Origin=mu-Basis*(Basis.'*mu); %component of the mean orthogonal to the basis
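For what it's worth (my addition, not from the answer above): once you have orthonormal Basis matrices for two subspaces of the same dimension, the usual geodesic Grassmann distance is computed from the principal angles, e.g.:
% Grassmann (geodesic) distance between span(Basis1) and span(Basis2),
% assuming both have orthonormal columns and the same number of columns.
s      = svd(Basis1.'*Basis2);   % cosines of the principal angles
s      = min(max(s,0),1);        % clamp round-off outside [0,1]
theta  = acos(s);                % principal angles
dGrass = norm(theta);            % geodesic Grassmann distance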
  18 Comments
Matt J
Matt J on 7 Dec 2023
Edited: Matt J on 7 Dec 2023
Is there a criterion here for selecting the N final columns of V in its SVD, as you suggested for the N initial columns of Q?
The dimension of the subspace that you fit to X will be NumPixels-N, where NumPixels is the total number of pixels in one image (e.g., for 120x160 images and N=14, that is 19200-14 = 19186).
Also, regarding the Basis/Origin description, can you give me an idea of how this information is usually used for classification?
No, pursuing it was your idea. You said it would help you compute the Grassmann distance, whatever that is.
M
M on 11 Dec 2023
Edited: M on 11 Dec 2023
Dear @Matt J, thank you for your suggestions and clarifications.
I have reached the conclusion that representing the images as-is, as vectors in a subspace, is not a good idea!
Especially if the test images are not typical of the training set. (Stacking the images as vectors causes problems!)
I think I have to do some feature extraction on regions of interest first and then represent the features as subspaces (whether as matrices or vectors). I am still thinking about how to do that.

