///////////////////////////////////////////////////////////////////////////////////////////////
/// [1] CONSTRAINTS ON GENERICS
////////////////////////////////////////////////////
public class Node<T> where T:IComparable<T>
I don't like the syntax, and would prefer something that groups the
constraint together with the type parameter it governs, as in this
suggestion:
**(Should be: public class Node<T(IComparable,ISerializable)>)
The reason is that if you simply group them, you can reuse the same
semantics everywhere else you're going to want to define parameterized types.
This promotes stability against an unforeseeable future and improves code
readability. Probably someone will come up with a reason to refute this.
Also, I don't like introducing a new "where" keyword with a special
purpose that shows up in 0.1% of the code. (I do like the LINQ stuff however, ;-)).
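For comparison, here is the grouped form next to the syntax C# actually shipped. The grouped form is only this post's proposal and does not compile; the where clause below is real, working code:

```csharp
using System;

// The proposal (hypothetical syntax -- it does not compile):
//   public class Node<T(IComparable, ISerializable)> { ... }

// The syntax C# actually shipped: a trailing where clause,
// listing one or more constraints per type parameter.
public class Node<T> where T : IComparable<T>
{
    public T Value;

    // The constraint is what makes CompareTo available on T.
    public bool Precedes(Node<T> other) => Value.CompareTo(other.Value) < 0;
}

class Demo
{
    static void Main()
    {
        // int implements IComparable<int>, so it satisfies the constraint.
        var a = new Node<int> { Value = 1 };
        var b = new Node<int> { Value = 2 };
        Console.WriteLine(a.Precedes(b));  // True
    }
}
```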
///////////////////////////////////////////////////////////////////////////////////////////////
/// [2] NULLABLE TYPES
//////////////////////////////////////////////////////////////////
int? alpha = null;
Obvious comments...
1) Special-purpose operators/symbols/keywords etc. should be avoided at all
costs, or we'll be swimming in them after several iterations of language
enhancements.
2) If all types were nullable it would just make life easier with reflection
and other things, but we'd lose extra memory per nullable local (the
has-value flag plus padding) and a few extra bits of precious time.
3) The compiler should figure out from usage whether a variable needs to be
nullable, especially if you protect operations from returning null, which you
should, but you don't!!! It's a stupid feed-forward of a bad idea from long
ago.
Primary comment...
4) **(NULL-valued value types should be treated as ZERO, or FALSE, for
purposes of mathematical and logical operations!) -- This gives people an
intuitive understanding of program behavior and prevents the code from
producing values that will make other code crash.
I am supposed to be an expert in this area especially. So somebody please
explain why we have the behavior that:
I) Comparison operators return FALSE if either operand is NULL.
**(The comparison and mathematical operators should treat NULL as ZERO.)
II) The bitwise binary operators & and | are overloaded on the bool type to
serve as boolean logical operators with NULL as a third truth value...
**(NULL should be treated as FALSE and work accordingly in all comparison
operators.)
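For what it's worth, the framework does let you opt into the treat-NULL-as-zero behavior explicitly, via Nullable&lt;T&gt;.GetValueOrDefault() or the ?? operator; a minimal sketch:

```csharp
using System;

class NullAsZeroDemo
{
    static void Main()
    {
        int? n = null;

        // GetValueOrDefault() yields default(T): 0 for int, false for bool.
        Console.WriteLine(n.GetValueOrDefault());      // 0
        Console.WriteLine(n.GetValueOrDefault() < 5);  // True

        // The ?? operator substitutes a fallback of your choosing.
        Console.WriteLine(n ?? 0);                     // 0

        bool? b = null;
        // With the flag coerced to false, & behaves two-valued again.
        Console.WriteLine(b.GetValueOrDefault() & true);  // False
    }
}
```

So the behavior the post asks for is available, but only opt-in per expression rather than as the default.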
Here is the corresponding truth table for the & and | operators on the bool?
(that is, nullable bool) datatype:
A B   &   |
1 1   1   1
1 0   0   1
1 N   N   1
0 1   0   1
0 0   0   0
0 N   0   N
N 1   N   1
N 0   0   N
N N   N   N
This makes nullable types dangerous to use in expressions, because they
resolve to non-intuitive null values that will make downstream code crash.
For example:
true & null == null
false & null == false
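The rows above can be reproduced directly (a small sketch; note that Console.WriteLine prints an empty line for a null bool?):

```csharp
using System;

class TernaryLogicDemo
{
    static void Main()
    {
        bool? t = true, f = false, n = null;

        // & is null unless one operand is false (which forces false).
        Console.WriteLine(t & n);  // empty line: the result is null
        Console.WriteLine(f & n);  // False

        // | is null unless one operand is true (which forces true).
        Console.WriteLine(t | n);  // True
        Console.WriteLine(f | n);  // empty line: the result is null
    }
}
```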
Consider looking for the lowest int? in a collection of int? (nullable int)...
foreach (int? n in collection) {
    if (n < curr) curr = n;
}
Not only would that code crash if collection == null, but it would lock onto
NULL as the lowest value (if curr started out NULL), because a comparison
always returns FALSE when NULL is an operand.
Moreover, it yields non-intuitive and asymmetrical behavior... Consider
looking for the maximum int? in a collection:
foreach (int? n in collection) {
    if (n > curr) curr = n;
}
Again, this finds NULL as the maximum value if curr is initially NULL.
Okay, that might seem a contrived example. Hopefully you see the point.
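To make that loop behave today, you have to guard the comparison by hand so curr never locks onto null; a null-safe version (the sample data is made up for illustration):

```csharp
using System;
using System.Collections.Generic;

class NullSafeMinDemo
{
    static void Main()
    {
        var collection = new List<int?> { 7, null, 3, null, 5 };

        // Skip null entries and seed curr from the first non-null value,
        // so the lifted < operator is never asked to compare against null.
        int? curr = null;
        foreach (int? n in collection)
        {
            if (n.HasValue && (!curr.HasValue || n.Value < curr.Value))
                curr = n;
        }

        Console.WriteLine(curr);  // 3
    }
}
```

The extra HasValue bookkeeping is exactly the burden the post is complaining about.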