xlar54
I have some code that ends up looking a lot like:
ushort x = (ushort)(m + b);
where both m and b are also ushorts. This is due to the implicit
promotion of the operands to int. Some quick questions:
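For reference, here is a minimal snippet illustrating the promotion rule (the values are chosen to show the wraparound on the cast back down):

```csharp
using System;

class PromotionDemo
{
    static void Main()
    {
        ushort m = 40000, b = 30000;

        int sum = m + b;               // fine: both operands are promoted to int, sum is 70000
        // ushort x = m + b;           // compile error CS0266: cannot implicitly convert int to ushort
        ushort x = (ushort)(m + b);    // in an unchecked context the cast wraps modulo 65536: 4464

        Console.WriteLine(sum);        // 70000
        Console.WriteLine(x);          // 4464
    }
}
```

Note that in a `checked` context the same cast throws an `OverflowException` instead of wrapping.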
1 - Why? Why would they design it this way? Why not automatically cast
the result to the size of the recipient variable? I have not benchmarked
this, but I can't imagine that scattering these casts all over the place -
in order to keep memory usage low - would be faster than the compiler
doing this automatically. And frankly, it's very ugly. I can see lazy
developers using ints wherever they want since it's quicker (and then
writing unnecessary validation code to check their values).
2 - Would it be possible to write up some operator overloads that
handle this, and create a custom type? Sure, you're still stuck with
the casting on your own, but the effect would be cleaner and more
readable.
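To sketch what question 2 might look like: a small wrapper struct can hide the cast inside its operator overloads, so call sites stay clean. The type name `UShort16` and its members are purely illustrative, not a standard API, and this version silently wraps on overflow (matching the explicit cast above), which may or may not be what you want:

```csharp
using System;

// Hypothetical wrapper type; the cast back to ushort lives in one place.
public readonly struct UShort16
{
    private readonly ushort value;
    public UShort16(ushort v) => value = v;

    // The addition is done in int (per the promotion rules), then cast
    // back down here, so callers never write the cast themselves.
    public static UShort16 operator +(UShort16 a, UShort16 b)
        => new UShort16(unchecked((ushort)(a.value + b.value)));

    // Implicit conversions both ways keep the type interchangeable
    // with plain ushort at call sites.
    public static implicit operator UShort16(ushort v) => new UShort16(v);
    public static implicit operator ushort(UShort16 u) => u.value;

    public override string ToString() => value.ToString();
}

class Demo
{
    static void Main()
    {
        UShort16 m = (ushort)40000;
        UShort16 b = (ushort)30000;
        UShort16 x = m + b;            // no cast needed; wraps to 4464
        Console.WriteLine(x);
    }
}
```

The trade-off is that every operation pays a (small) method-call cost unless the JIT inlines it, and you would have to overload each operator you use (-, *, comparisons, and so on).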
Thanks in advance for the replies