Conversion of a decimal number matrix into a binary matrix using an optimal bit length

How do I convert the decimal numbers in a matrix into binary, column-wise, with a user-defined number of bits?

15 Comments

Have you tried using the "de2bi" command? Does it not solve your purpose?
Optimal means user-defined... For example, within the same column of a matrix, I want to store elements with different numbers of bits.
Please provide a short example of input and desired output.
So, I have read this multiple times and I still don't understand what the rules are for the output. It seems you want a different number of bits for different spots. Can you clarify?
I do not understand how we get from 5 needing 3 bits to represent, to using 5 bits for the other items in the same column.
In all of the columns except the first, the rule you used is consistent with using 8 bits for the largest item, and using the number of bits needed for the second-largest item in the column for all of the other items in that column. But the rule for the first column is different.
Ruchi Agarwal's "Answer" moved here:
Is there any rule for representing different spots with different numbers of bits?
The logic is simple: I want to represent the largest element of each column with 8 bits. Then I will calculate the number of bits required to represent the second-largest element of that column, and reserve that many bits for every other element of the column.
The first column contains
1
2
5
10
The largest of those is 10; you allocate 8 bits for that entry. The second largest is 5, which requires 3 bits (101 in binary), so according to your description you should allocate 3 bits for every element of the column other than 10. That would make it:
1 -> 3 bits -> 001
2 -> 3 bits -> 010
5 -> 3 bits -> 101
10 -> 8 bits -> 00001010
However, that is not what your diagram shows. Your diagram for the first column contains:
1 -> 5 bits -> 00001
2 -> 5 bits -> 00010
5 -> 5 bits -> 00101
10 -> 8 bits -> 00001010
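To make the rule concrete, here is a rough sketch in Python (the function name `encode_column` and the details of tie-handling are my own illustration, not something fixed by the thread; in MATLAB, `dec2bin` or `de2bi` would play the formatting role): the largest value in a column is written with 8 bits, and every other value is written with the bit width needed by the second-largest value.

```python
def encode_column(col):
    """Encode one column: the largest value gets 8 bits; every other
    value gets the bit width of the second-largest value."""
    ordered = sorted(col, reverse=True)
    largest, second = ordered[0], ordered[1]
    width = max(second.bit_length(), 1)  # at least 1 bit, even for 0
    out = []
    used_max = False  # widen only the first occurrence of the maximum
    for v in col:
        if v == largest and not used_max:
            out.append(format(v, '08b'))
            used_max = True
        else:
            out.append(format(v, f'0{width}b'))
    return out

print(encode_column([1, 2, 5, 10]))
# ['001', '010', '101', '00001010']
```

Applying this to each column independently reproduces the scheme described above for the first column.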
Yes, I want exactly what you said... in the diagram it was a mistake.
Sir, is there any way to perform the above action?
What is the data type of the required output? Some rows could require as few as 1 bit per column, but other rows could require 8 bits per column. A regular numeric matrix cannot hold this.
What is your plan for decoding this? You can create a row-by-row vector of bits easily enough, but to decode it you have to know one of a small number of things:
  1. A row-by-row list of sizes for each element, probably encoded in binary. If you were to keep such a row-by-row list, then you might as well use just enough bits to encode each value. Except for the case of 0 itself, you could use a hidden-bit scheme: if you know that you need 3 bits to represent 5, binary [1 0 1], then once that "3" is stored you can omit the leading 1, because for any value other than 0 the leading bit of the minimum representation is always 1. So you could store "3" and [0 1]. The number of bits required to store the length itself would range from 0 (for the value 0) to 4 (the length 8 needs 4 bits). With the number of bits for the length being variable, you have to consider whether you want some kind of variable-width scheme to encode the length, or a fixed width. A fixed width would add 4 bits per element, making the total 4 bits (for 0) to 11 bits (for 128 to 255, since the leading 1 is hidden). You should ask yourself what the average cost of representation would be in such a system; OR
  2. Separately, keep a column-by-column list of the secondary widths, along with a per-column row number indicating which row the maximum occurred at (the one for which the full 8 bits was used). The secondary widths could be 0 (for 0) to 8 (for 128 to 255), requiring 0 to 4 bits per column. If you used a fixed-width scheme, you could pack the widths for C columns into C/2 8-bit bytes. You would then need C row numbers to indicate where the maximum occurs in each column; the number of bits needed per row number depends on how many rows the table has. Total space for the row numbers would be ceiling(log2(number_of_rows)) * number_of_columns; OR
  3. Instead of keeping a list of row numbers for each column, each entry could be preceded by a bit that indicates whether it is full width or the column-specific narrower width. Total space for this information over the entire table would be 1 (bit) * number_of_rows * number_of_columns. However, this setup permits you to partition the column entries into "needs 8 bits" and "needs fewer than 8 bits". For example, if a column contained 197, 183, 29, 5, and 17, then with your existing scheme the maximum, 197, would be allocated 8 bits, and all other entries in the column would be allocated the number of bits required to store the second largest, 183 — which is also 8 bits, so every entry in the column would end up as 8 bits. If you instead partitioned into "needs 8 bits" vs. "not", then 197 and 183 would both take 8 bits, and the width for the remaining entries would be determined by the maximum of the remaining values, 29, so only 5 bits each would be needed for those rather than 8.
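The hidden-bit idea from option 1 above can be sketched in Python (the names `hb_encode` and `hb_decode` are hypothetical, chosen for illustration): a fixed 4-bit length field is followed by the value's bits with the implicit leading 1 dropped.

```python
def hb_encode(v):
    """Hidden-bit encoding (a sketch of option 1): a fixed 4-bit length
    field, then the value's bits with the implicit leading 1 dropped."""
    n = v.bit_length()                         # 0 for 0, up to 8 for 255
    payload = format(v, 'b')[1:] if n else ''  # drop the leading 1
    return format(n, '04b') + payload

def hb_decode(bits):
    """Decode one hidden-bit value from the front of a bit string;
    return (value, remaining_bits)."""
    n = int(bits[:4], 2)
    if n == 0:
        return 0, bits[4:]          # zero has no payload
    payload = bits[4:4 + n - 1]     # n-1 explicit bits; leading 1 is implied
    return int('1' + payload, 2), bits[4 + n - 1:]

stream = ''.join(hb_encode(v) for v in [5, 0, 255, 1])
decoded = []
while stream:
    v, stream = hb_decode(stream)
    decoded.append(v)
print(decoded)  # [5, 0, 255, 1]
```

Under this fixed-width-length variant, 0 costs only the 4-bit length field, and 255 costs 4 + 7 = 11 bits, since its leading bit is implied rather than stored.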


Answers (0)

Asked on 25 Apr 2019

Commented on 1 May 2019
