Neil Zanella
Hello all,
In C and C++, a primitive data type is represented by a minimum number
of bits as defined by the corresponding standard. For instance, an int
is guaranteed to be at least 16 bits wide on all platforms; in practice
an int is 16 bits on the older 16-bit machines and 32 bits on the more
commonly used 32-bit machines.
However, my C# reference says that an int in C# is always 32 bits wide.
It also specifies an exact number of bits for each of the other primitive
data types, namely bool, char, sbyte, byte, short, ushort, int, uint,
long, ulong, float, double, and decimal, whereas some of the corresponding
data types in C and C++ have minimum widths but no fixed widths. However,
I have seen some C and C++ manuals make incorrect claims about the widths
of C and C++ data types as well, so I would like to be sure.
So, I was just wondering whether anyone could confirm that the C#
standard defines fixed widths for all primitive types, or whether this
is not the case.
Thanks,
Neil