Guest
This is probably something stupid, or I am missing some fundamental concept, but
I can't figure this one out.
Consider the following code:
byte[] bd = new byte[2];
bd[0] = 0x00;
bd[1] = 0x01;
System.Int16 a;
a = System.BitConverter.ToInt16(bd, 0);
MessageBox.Show(a.ToString());
BitConverter.ToInt16 returns a 16-bit signed integer read from the specified
position in a byte array.
So one would think the hex value 0x0001 is 1 in decimal, yet the code above
produces 256, which is 0x0100. Now switch the values around in the byte array:
byte[] bd = new byte[2];
bd[0] = 0x01;
bd[1] = 0x00;
System.Int16 a;
a = System.BitConverter.ToInt16(bd, 0);
MessageBox.Show(a.ToString());
The code above does produce 1. But hex 0x0100 is 256.
Can someone explain this?
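For reference, here is a minimal, self-contained console version of the same experiment. This is only a sketch; the expected outputs in the comments assume a typical little-endian x86/x64 machine, where BitConverter.IsLittleEndian is true.

using System;

class Program
{
    static void Main()
    {
        // BitConverter reads bytes in the machine's native byte order.
        Console.WriteLine(BitConverter.IsLittleEndian); // True on x86/x64

        byte[] bd = { 0x00, 0x01 };

        // Native (little-endian) read: bd[0] is the LEAST significant
        // byte, so the bytes 00 01 are interpreted as 0x0100 = 256.
        short native = BitConverter.ToInt16(bd, 0);
        Console.WriteLine(native); // 256

        // Reversing the array first gives the big-endian reading,
        // 0x0001 = 1.
        if (BitConverter.IsLittleEndian)
            Array.Reverse(bd);
        Console.WriteLine(BitConverter.ToInt16(bd, 0)); // 1
    }
}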