When a manufacturer makes two devices of different precision, it often "left justifies" the lower-precision values to the width of the higher-precision device, so that the value range is the same between the two devices and the higher-precision device simply returns more precise values.
For the same reason, it is not uncommon for manufacturers to output 12-bit values as the left-most 12 bits (the 12 most significant) of a 16-bit field, leaving room for a future device with the same value range but higher precision. This avoids the need to rewrite processing software.
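As a small sketch of that convention (the shift of 4 bits is an assumption, chosen for the 12-bit-in-16-bit case described above):

```python
# Sketch: left-justifying a 12-bit sample into a 16-bit field.
# The shift amount (16 - 12 = 4) is specific to this example.
def left_justify(sample_12bit: int) -> int:
    """Place a 12-bit value in the 12 most significant bits of 16."""
    return sample_12bit << 4

print(left_justify(0))     # 0
print(left_justify(4095))  # 65520, near the 16-bit maximum of 65535
```

A 16-bit device with the same value range would use the low bits too, so software written against the 16-bit range keeps working for both.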
You should look at min(diff(unique(a))), the smallest step between distinct values. If it is 4 (i.e. a 2-bit left shift), you have 10 bits' worth of data in a 12-bit field; if it is 1, you have 12 actual bits' worth of data.
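That check can be written directly with NumPy; here is a minimal sketch, using a synthetic array `a` that simulates 10-bit data left-justified into a 12-bit field (the array is an assumption for illustration):

```python
import numpy as np

# Simulate 10-bit data stored left-justified in a 12-bit field:
# every 10-bit value is shifted left by 2 bits, so values step by 4.
a = np.arange(1024) << 2

# Smallest gap between distinct values reveals the left shift.
step = int(np.min(np.diff(np.unique(a))))
effective_bits = 12 - int(np.log2(step))  # 12-bit field minus the shift

print(step, effective_bits)  # 4 10
```

On real 12-bit data the step would be 1 and `effective_bits` would come out as 12.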
Another thing that can happen is that 10-bit data recorded raw with a Bayer mosaic gets left-shifted and interpolated (demosaiced) to 12 bits of RGB.