I understand how and why this happens now, thank you guys. But I still
think this code is really unintuitive:
// error
short x = 5, y = 12;
short z = x + y;
// ok
short x = 5, y = 12;
short z = (short)(x + y);
Apparently (I had no idea). This is the IL for the OK code:
.method private hidebysig static void Main() cil managed
{
.entrypoint
// Code size 11 (0xb)
.maxstack 2
.locals init (int16 V_0, int16 V_1, int16 V_2)
IL_0000: ldc.i4.5
IL_0001: stloc.0
IL_0002: ldc.i4.s 12
IL_0004: stloc.1
IL_0005: ldloc.0
IL_0006: ldloc.1
IL_0007: add
IL_0008: conv.i2
IL_0009: stloc.2
IL_000a: ret
} // end of method ShortApp::Main
And in this case I don't see the point either. Since IL does support the
int16 type, the C# compiler might as well use it without further ado; there
doesn't seem to be a good reason to expand it to int32. Possibly it was
designed this way because it performs better in the majority of cases.
Constants seem to be dealt with using int32 as a minimum, and integer
arithmetic doesn't come smaller than 32 bits these days anyway. Note that
after the addition the result is converted back to 16 bits, so even though
16-bit values are read from the stack, it is assumed they will be processed
in 32-bit registers or wider. By using shorts you save space on the stack,
but not in the processor.
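For what it's worth, the promotion is also observable from C# itself,
without reading the IL. A small sketch (the class name PromotionDemo is
mine, not from the original code):

```csharp
using System;

class PromotionDemo
{
    static void Main()
    {
        short x = 5, y = 12;

        // The compile-time type of x + y is int, so the boxed result
        // reports System.Int32, not System.Int16.
        Console.WriteLine((x + y).GetType()); // System.Int32

        // The explicit cast narrows the 32-bit result back to 16 bits.
        short z = (short)(x + y);
        Console.WriteLine(z); // 17

        // In an unchecked context the narrowing can silently wrap,
        // which is part of why the compiler insists on the cast.
        short max = short.MaxValue;           // 32767
        short wrapped = (short)(max + 1);
        Console.WriteLine(wrapped);           // -32768
    }
}
```

The wraparound in the last lines shows what the required cast is really
acknowledging: the int-sized intermediate result may not fit back into a
short.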
The compiler seems to like 32-bit integers a lot, as the following shows.
If we change the code to this:
using System;

class LongApp
{
    static void Main()
    {
        long x = 5;
        long y = 12;
        long z = x + y;
    }
}
the compiler doesn't complain about the implicit conversion anymore, but it
too starts off by treating the supplied constants as 32-bit values, only
converting them when it needs to, as is the case here when they must
actually be stored into the local variables:
.method private hidebysig static void Main() cil managed
{
.entrypoint
// Code size 12 (0xc)
.maxstack 2
.locals init (int64 V_0, int64 V_1, int64 V_2)
IL_0000: ldc.i4.5
IL_0001: conv.i8
IL_0002: stloc.0
IL_0003: ldc.i4.s 12
IL_0005: conv.i8
IL_0006: stloc.1
IL_0007: ldloc.0
IL_0008: ldloc.1
IL_0009: add
IL_000a: stloc.2
IL_000b: ret
} // end of method LongApp::Main
It is very stubborn in this: appending an L to the constants will not change
this behavior. Only when we supply really big numbers does the compiler give
in and pick up a 64-bit value immediately. This code
using System;

class LongApp
{
    static void Main()
    {
        long x = 0x100000000; // doesn't fit in 32 bits
        long y = 0x200000000; // doesn't fit in 32 bits
        long z = x + y;
    }
}
compiles to
.method private hidebysig static void Main() cil managed
{
.entrypoint
// Code size 25 (0x19)
.maxstack 2
.locals init (int64 V_0, int64 V_1, int64 V_2)
IL_0000: ldc.i8 0x100000000
IL_0009: stloc.0
IL_000a: ldc.i8 0x200000000
IL_0013: stloc.1
IL_0014: ldloc.0
IL_0015: ldloc.1
IL_0016: add
IL_0017: stloc.2
IL_0018: ret
} // end of method LongApp::Main
And here we see that the i4's and the conversions are gone.
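Related, and also visible without ildasm: the compile-time type of an
integer literal is determined by its magnitude (the first of int, uint,
long, ulong it fits in). A quick sketch (the class name LiteralDemo is
mine):

```csharp
using System;

class LiteralDemo
{
    static void Main()
    {
        // 5 fits in 32 bits, so it is an int literal even when it is
        // about to be assigned to a long.
        Console.WriteLine((5).GetType());           // System.Int32

        // 0x100000000 does not fit in 32 bits, so the compiler types
        // it as long directly, matching the ldc.i8 in the IL above.
        Console.WriteLine((0x100000000).GetType()); // System.Int64

        // The L suffix forces the long type at the language level, but
        // a small suffixed constant can still be emitted as an i4 load
        // followed by conv.i8, as the earlier listing showed.
        Console.WriteLine((5L).GetType());          // System.Int64
    }
}
```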
Martin.