theoretical question

fleimeris

hello,
why do we need an explicit cast here?

short i = 0;
//i = i + 1; // Cannot implicitly convert type 'int' to 'short'
//i = i + (short)1; // Cannot implicitly convert type 'int' to 'short'
i = (short)(i + 1);

thanks,
fleimeris
 
Wessel Troost

short i = 0;
//i = i + 1; // Cannot implicitly convert type 'int' to 'short'
//i = i + (short)1; // Cannot implicitly convert type 'int' to 'short'
i = (short)(i + 1);
The 1 literal is an int, so the expression on the right-hand side evaluates
to int. This has to be converted back to short before it can be stored in
the variable i. The conversion may throw an overflow exception, so it makes
sense to require an explicit cast to make you aware of that possibility.
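A minimal sketch of that promotion and narrowing (class name invented for illustration; note that in an unchecked context, which is the compiler default, the cast truncates silently rather than throwing):

```csharp
using System;

class CastDemo
{
    static void Main()
    {
        short i = short.MaxValue;     // 32767
        int sum = i + 1;              // the addition is performed in int
        short back = (short)(i + 1);  // explicit narrowing back to short

        Console.WriteLine(sum);       // 32768
        Console.WriteLine(back);      // -32768: truncated, no exception by default
    }
}
```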

Greetings,
Wessel
 
Bob Grommes

Fleimeris,

An integer constant evaluates to an int. An int *could* overflow a short.
It is safer to explicitly state your intentions, which not only makes the
code more self-evident but ensures that the developer is aware of what they
are doing. C# has some implicit conversions when they are overflow-safe,
but otherwise tends to favor explicitness over being "too clever by half".

Of course the compiler could know that a constant that evaluates to less
than or equal to System.Int16.MaxValue would not result in an overflow, but I
suppose they decided consistency and performance were more important than
trying to check for this.

I am away from my development machine and don't recall for sure but I think
you can do:

long lngFoo = 0L;

If so, I wonder whether there is an equivalent notation for shorts, e.g.,
"0S"? You might check it out in the help. That would save the runtime
cast. Or you could:

const short shOne = (short)1;
short i = 0;
i = i + shOne;

Of course:

short i = 0;
i++;

.... would be best of all if you just want to increment it, but I assume you
are just contriving an example.
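(A follow-up sketch on these ideas, with invented names: C# defines no short literal suffix, only L/U/UL and the floating-point suffixes, and even a const short operand is promoted to int by the + operator, so the cast is still needed on the addition.)

```csharp
using System;

class ConstShortDemo
{
    static void Main()
    {
        const short shOne = 1;   // the constant 1 fits, so no cast needed here
        short i = 0;
        // i = i + shOne;        // still an error: short + short yields int
        i = (short)(i + shOne);  // explicit narrowing is required anyway
        i++;                     // the increment operator works directly on short
        Console.WriteLine(i);    // 2
    }
}
```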

As a final note, there is generally no advantage to using byte or short at
runtime rather than int; indeed, int can be faster. You only use short if
you are building a large array or collection of values and want to conserve
memory at some expense to access speed. My understanding is that the CLR is
optimized for int operations, at least on 32-bit hardware.
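To illustrate the memory point (array length and class name are just illustrative assumptions):

```csharp
using System;

class MemoryDemo
{
    static void Main()
    {
        // A million shorts occupy about 2 MB of element data,
        // while a million ints occupy about 4 MB.
        short[] compact = new short[1000000];
        int[] roomy = new int[1000000];
        Console.WriteLine(sizeof(short) * compact.Length); // 2000000
        Console.WriteLine(sizeof(int) * roomy.Length);     // 4000000
    }
}
```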

--Bob
 
Jakob Nielsen

short i = 0;
//i = i + 1; // Cannot implicitly convert type 'int' to 'short'
//i = i + (short)1; // Cannot implicitly convert type 'int' to 'short'
i = (short)(i + 1);

Because short + 1 can be too big to fit inside a short.
If we use bytes instead you get:

byte i = 0;
i = i + 1; // if i is 255 now, then i + 1 is 256, which doesn't fit inside a byte. It will wrap and become 0, since there is no sign
i = i + (byte)1; // same goes here. It doesn't matter that the 1 is cast to byte; byte + byte can be too big for a byte
i = (byte)(i + 1); // now you state that you simply just *want* the result to fit into a byte even though it might be truncated. At your own risk
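A compilable sketch of that byte wrap-around (class name invented for illustration):

```csharp
using System;

class ByteWrapDemo
{
    static void Main()
    {
        byte i = byte.MaxValue; // 255
        i = (byte)(i + 1);      // 256 doesn't fit in 8 bits; wraps to 0
        Console.WriteLine(i);   // 0
    }
}
```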
 
fleimeris

try
{
#if foo
int a = int.MaxValue;
int sum1 = a + 3;
#else
short b = short.MaxValue;
short sum2 = (short)(b + 7);
#endif
}
catch
{
Console.WriteLine("Got one !");
}

none of these throws an exception :\

i understand why i get the error, but i can't understand why you have to
cast the sum of two shorts back to short (not to mention that the reason is
that the framework told you to :] ). the sum (or product) of two ints may also
cause overflow (that is my main point). so some built-in data types are
"better" than the others :]

p.s. i know that this is not a big problem, but i like to understand
everything down to the smallest details :]
p.p.s. sorry for my english
 
Martin Maat

try
{
#if foo
int a = int.MaxValue;
int sum1 = a + 3;
#else
short b = short.MaxValue;
short sum2 = (short)(b + 7);
#endif
}
catch
{
Console.WriteLine("Got one !");
}

none of these throws an exception :\

i understand why i get the error, but i can't understand why you have to
cast the sum of two shorts back to short (not to mention that the reason is
that the framework told you to :] ). the sum (or product) of two ints may also
cause overflow (that is my main point). so some built-in data types are
"better" than the others :]

p.s. i know that this is not a big problem, but i like to understand
everything down to the smallest details :]
p.p.s. sorry for my english

Yes, overflow can also occur when using a single type. That is why we have
range checking, and if you want the performance gain you may switch it off
with the compiler switch or an unchecked section. But when 32 bits are to be
crammed into 16 bits it is obvious: it doesn't fit, and no matter what the
value is, data will be lost. It may be zeros that are lost, yet data is lost.
A value is a matter of semantics, and the compiler doesn't deal with
semantics, it deals with types. After all, 3 + 3 is only 6 because you and I
agreed on that. As far as the compiler is concerned, you may be using the
individual bits in the type as flags telling who showed up at the latest
"Nerds United" meeting. Dropping the high-order bits, you would never know
whether the 16 most valued nerds showed up or not.

Martin. (EBL)


Before anyone asks: EBL stands for eenvoudige boeren lul (Dutch, roughly: "a simple country hick").
 
fleimeris

Yes, overflow can also occur when using a single type. That is why we have
range checking, and if you want the performance gain you may switch it off
with the compiler switch or an unchecked section. But when 32 bits are to be
crammed into 16 bits it is obvious: it doesn't fit, and no matter what the
value is, data will be lost. It may be zeros that are lost, yet data is lost.
A value is a matter of semantics, and the compiler doesn't deal with
semantics, it deals with types. After all, 3 + 3 is only 6 because you and I
agreed on that. As far as the compiler is concerned, you may be using the
individual bits in the type as flags telling who showed up at the latest
"Nerds United" meeting. Dropping the high-order bits, you would never know
whether the 16 most valued nerds showed up or not.

Martin. (EBL)


Before anyone asks: EBL stands for eenvoudige boeren lul.

as far as i understand, you are explaining why it is important to use an
explicit cast when casting from int to short. i think that's quite obvious.
what i want to find out is what's the point of transforming "short + short"
into "(short)((int)short + (int)short)". either way, in case of overflow data
will be lost, right? wessel's explanation was that this additional cast will
throw an exception in case of overflow, but it doesn't (or am i doing something
wrong?). that would be really meaningful. bob's first explanation was that
it's some sort of "self-documentation": when a programmer meets such a cast,
he will be warned about the overflow danger. but "int + int" does not have
such protection. bob also noted that the clr is probably optimized for int
operations. that would explain much. but is it less computationally expensive
to cast short to int, then add (subtract, multiply, ...) and then convert it
back to short?

either way, thank you for your thoughts
 
Martin Maat [EBL]

Fleimeris,
wessel's explanation was that this additional cast will throw an exception
in case of overflow, but it doesn't (or am i doing something wrong?).

Yes. The example code you posted does throw an exception if you have range
checking turned on; it is not on by default.

Try this:

checked // check this part for overflow
{
    try
    {
        int a = int.MaxValue;
        int sum1 = a + 3;
    }
    catch (Exception exception)
    {
        Console.WriteLine(exception.Message);
    }
}

Alternatively, use /checked+ when compiling the program.
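checked also works as an expression operator, which covers the narrowing conversion from the earlier short example without switching the whole program over; a sketch with invented names:

```csharp
using System;

class CheckedDemo
{
    static void Main()
    {
        short b = short.MaxValue;
        try
        {
            short sum = checked((short)(b + 7)); // conversion is range-checked here
            Console.WriteLine(sum);
        }
        catch (OverflowException e)
        {
            Console.WriteLine(e.Message);        // the overflow is reported
        }
    }
}
```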

Martin.
 
Wessel Troost

in case of overflow data will be lost, right? wessel's explanation was
that this additional cast will throw an exception in case of overflow,

You're right-- like Martin posted, it doesn't throw an exception by
default.

However, the code doesn't do what it looks like it does. The short will "wrap
around", so adding 1 to short.MaxValue will leave you with a large negative
number. The casting requirement is just to make sure you're aware of that.

Greetings,
Wessel
 
Jeroen Smits

fleimeris said:
as far as i understand, you are explaining why it is important to use an
explicit cast when casting from int to short. i think that's quite
obvious. what i want to find out is what's the point of transforming
"short + short" into "(short)((int)short + (int)short)". either way,
in case of overflow data will be lost, right? wessel's explanation

Ah, I think this is the answer you're looking for.

Internally, IL only computes on ints. So when adding two shorts, C# will cast
them internally to int. The compiler knows when adding two shorts that
the user expects the result to be a short again, so you don't notice that IL
computes on ints.
But when adding a constant value like 1, C# thinks you're adding a short and
an int. To avoid losing data, the result will be an int. Now you manually have
to cast to a short.
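The promotion can be observed directly from the static type the compiler assigns to the expression; a small sketch (class name invented for illustration):

```csharp
using System;

class PromotionDemo
{
    static void Main()
    {
        short a = 5, b = 12;
        object c = a + b;                // both operands are promoted to int
        Console.WriteLine(c.GetType());  // System.Int32, not System.Int16
        short d = (short)(a + b);        // narrowing back needs an explicit cast
        Console.WriteLine(d);            // 17
    }
}
```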
 
fleimeris

i understand how and why this happens now. thank you guys. but still, i
think that this code is really unintuitive:

// error
short x = 5, y = 12;
short z = x + y;

// ok
short x = 5, y = 12;
short z = (short)(x + y);
 
Martin Maat [EBL]

i understand how and why this happens now. thank you guys. but still, i
think that this code is really unintuitive:

// error
short x = 5, y = 12;
short z = x + y;

// ok
short x = 5, y = 12;
short z = (short)(x + y);

Apparently (I had no idea). This is the IL for the OK code.

.method private hidebysig static void Main() cil managed
{
.entrypoint
// Code size 11 (0xb)
.maxstack 2
.locals init (int16 V_0, int16 V_1, int16 V_2)
IL_0000: ldc.i4.5
IL_0001: stloc.0
IL_0002: ldc.i4.s 12
IL_0004: stloc.1
IL_0005: ldloc.0
IL_0006: ldloc.1
IL_0007: add
IL_0008: conv.i2
IL_0009: stloc.2
IL_000a: ret
} // end of method ShortApp::Main

And in this case I don't see the point either. Since IL does support the
int16 type, the C# compiler might as well use it without further ado; there
doesn't seem to be a good reason to expand it to int32. Possibly it was
designed this way because it performs better in the majority of cases.
Constants seem to be dealt with using int32 as a minimum, and integer
arithmetic doesn't come smaller than 32 bits these days anyway. Note that
after the addition of the values the result is converted to 16 bits, so even
though 16-bit values are read from the stack, it is assumed they will be
processed in 32-bit registers or wider. By using shorts you save space on the
stack but not in the processor.

The compiler seems to like 32 bit integers a lot as the following shows. If
we change the code to this.

using System;

class LongApp
{
static void Main()
{
long x = 5;
long y = 12;
long z = x + y;
}
}

the compiler doesn't complain about the implicit conversion anymore, but it
too will start off by treating the supplied constants as 32-bit, only
converting them when it needs to, as is the case when they must actually be
stored into the local variables:

.method private hidebysig static void Main() cil managed
{
.entrypoint
// Code size 12 (0xc)
.maxstack 2
.locals init (int64 V_0, int64 V_1, int64 V_2)
IL_0000: ldc.i4.5
IL_0001: conv.i8
IL_0002: stloc.0
IL_0003: ldc.i4.s 12
IL_0005: conv.i8
IL_0006: stloc.1
IL_0007: ldloc.0
IL_0008: ldloc.1
IL_0009: add
IL_000a: stloc.2
IL_000b: ret
} // end of method LongApp::Main

It is very stubborn in this: appending an L to the constants will not change
this behavior. It is only after we supply really big numbers that the
compiler gives in and immediately picks up a 64-bit value. This code

using System;

class LongApp
{
static void Main()
{
long x = 0x100000000; // doesn't fit in 32 bits
long y = 0x200000000; // doesn't fit in 32 bits
long z = x + y;
}
}

compiles to

.method private hidebysig static void Main() cil managed
{
.entrypoint
// Code size 25 (0x19)
.maxstack 2
.locals init (int64 V_0, int64 V_1, int64 V_2)
IL_0000: ldc.i8 0x100000000
IL_0009: stloc.0
IL_000a: ldc.i8 0x200000000
IL_0013: stloc.1
IL_0014: ldloc.0
IL_0015: ldloc.1
IL_0016: add
IL_0017: stloc.2
IL_0018: ret
} // end of method LongApp::Main

And here we see that the i4 loads and the conversions are gone.

Martin.
 
