I am not even sure that I understand why memcpy was used in C.
tmpshort = (short)((pTmpDataIn[2] << 8) | pTmpDataIn[1]);
There's a potential endianness difference here. memcpy is guaranteed to
interpret the bytes in the buffer using the local processor's endianness,
which is only correct if the data was originally stored in that same
endianness.
That wouldn't be guaranteed if, for example, the data was previously
stored in a file or transmitted over a network or...
So let's assume that we can guarantee the endianness is always the same,
and let's assume the reason is that the origin of the data is internal to
the application (I know, that's a lot of assumptions). Then the memcpy
version works with whatever the native byte order happens to be, while
yours is guaranteed to need LSB-first.
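To make the contrast concrete, here's a minimal sketch (the buffer name and the offset of 1 are borrowed from the snippet above; the unsigned char pointer type and everything else are my assumptions):

```c
#include <string.h>

/* memcpy version: the result depends on the machine's native byte
 * order, so it's correct whenever the buffer was written in that
 * same order. */
short via_memcpy(const unsigned char *pTmpDataIn)
{
    short tmpshort;
    memcpy(&tmpshort, pTmpDataIn + 1, sizeof tmpshort);
    return tmpshort;
}

/* shift version: always treats pTmpDataIn[1] as the LSB and
 * pTmpDataIn[2] as the MSB, no matter what machine it runs on. */
short via_shift(const unsigned char *pTmpDataIn)
{
    return (short)(((unsigned)pTmpDataIn[2] << 8) | pTmpDataIn[1]);
}
```

On a little-endian machine the two agree; on a big-endian machine only the shift version still reads the bytes LSB-first.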
Does .NET run on any big-endian systems? If the data originates from
within the application, are there true .NET scenarios where code that
assumes little-endian wouldn't work?
Until Mono has 100% parity with .NET (which I assume will never
happen...another assumption, I know), I wouldn't consider it a
legitimate concern, even if it does run on a big-endian system (being
open-source, I assume it eventually will even if it doesn't now...does
Mono work on PowerPC-based Macs?).
Also, you need some unsigned casts in that expression to prevent
sign-extension of the LSB from overwriting the MSB. Or just masking
would work too.
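For example, a sketch of both fixes (hypothetical byte values; I've used signed char explicitly because plain char's signedness is implementation-defined):

```c
/* Buggy: if p[1] is 0x80 or above, it promotes to a negative int
 * (e.g. 0xFFFFFF80) and the extended high bits clobber the MSB. */
short make_short_bad(const signed char *p)
{
    return (short)((p[2] << 8) | p[1]);
}

/* Fixed with unsigned casts (casting the MSB too also avoids
 * left-shifting a negative int, which is undefined behavior)... */
short make_short_cast(const signed char *p)
{
    return (short)(((unsigned char)p[2] << 8) | (unsigned char)p[1]);
}

/* ...or equivalently by masking. */
short make_short_mask(const signed char *p)
{
    return (short)(((p[2] & 0xFF) << 8) | (p[1] & 0xFF));
}
```

With bytes { 0x00, 0x80, 0x12 } the buggy version yields (short)0xFF80 instead of 0x1280.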
That said, in C something as simple as "tmpshort = *((short *)(pTmpDataIn
+ 1));" should have worked fine. There wasn't really a need to call
memcpy _or_ to manipulate the individual bytes of the original data.
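A sketch of that direct-cast approach, for comparison (same hypothetical buffer as before; one caveat is that, unlike memcpy, it assumes pTmpDataIn + 1 is suitably aligned for a short, which x86 tolerates but some architectures don't):

```c
/* Reinterprets two buffer bytes in place. Like the memcpy version,
 * the value you get follows the local processor's byte order. */
short via_cast(const unsigned char *pTmpDataIn)
{
    return *((const short *)(pTmpDataIn + 1));
}
```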
If we're going to deconstruct the original C, we might as well do a
complete job.
Pete