C# Language Specification - Generics

Guest

///////////////////////////////////////////////////////////////////////////////////////////////
/// [1] CONSTRAINTS ON GENERICS
////////////////////////////////////////////////////

public class Node<T> where T:IComparable<T>

I don't like the syntax, and would prefer something that groups the
constraints together with the type parameter they govern, as in this
suggestion:

**(Should be: public class Node<T(IComparable,ISerializable)>)

The reason is that if you simply group them, then you can reuse those
semantics everywhere else you're going to want to define parameterized types.
This promotes stability toward an unforeseeable future and promotes code
readability. Probably someone will come up with a reason to refute this.

Also, I don't like introducing a new "where" keyword that has a special
purpose used in 0.1% of the code. (I do like the LINQ stuff, however. ;-))
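For comparison, the shipped syntax already groups all of a type parameter's constraints in one place, just after the parameter list rather than inside the angle brackets. A small sketch (the Pair class is a made-up example, not from the spec):

```csharp
using System;
using System.Runtime.Serialization;

// All of T's constraints are grouped behind a single "where T :".
public class Node<T> where T : IComparable<T>, ISerializable
{
    public T Value;
}

// With several type parameters, each one gets its own where clause.
public class Pair<TKey, TValue>
    where TKey : IComparable<TKey>
    where TValue : class
{
    public TKey Key;
    public TValue Value;
}
```

So the grouping the suggestion asks for is available; the debate is really only about where the constraint list sits syntactically.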

///////////////////////////////////////////////////////////////////////////////////////////////
/// [2] NULLABLE TYPES
//////////////////////////////////////////////////////////////////

int? alpha = null;

Obvious comments...
1) Special-purpose operators/symbols/keywords etc. should be avoided at all
costs or we'll be swimming in them after several iterations of language
enhancements.
2) If all types were nullable it would just make life easier with reflection
and other things, but we'd lose 32 bits of memory per 32 local nullable types
and a few extra bits of precious time.
3) The compiler should figure out based on usage whether it's nullable or
not, especially if you protect operations from returning null, which you
should, but you don't!!! It's a stupid feed-forward of a bad idea from long
ago.

Primary comment...
4) **(NULL-valued value types should be treated as ZERO, or FALSE, for
purposes of mathematical and logical operations!) -- This gives people an
intuitive understanding of the program behavior and prevents the code from
resulting in values that will make other code crash.

I am supposed to be an expert in that area especially. So would somebody
please explain why we have the following behavior:

I) Comparative operators return FALSE if either operand is NULL.
**(The comparative and mathematical operators should treat NULL as ZERO)

II) The bitwise binary operators & and | are overloaded on the bool? type to
serve as boolean logical operations with NULL as a ternary value...
**(NULL should be treated as FALSE and work accordingly in all comparative
operators).

Here is the corresponding truth table for the & and | operators on the bool?
(that is, nullable bool) datatype:
A  B   A&B  A|B
1  1    1    1
1  0    0    1
1  N    N    1
0  1    0    1
0  0    0    0
0  N    0    N
N  1    N    1
N  0    0    N
N  N    N    N

This makes nullable types dangerous to use in expressions, because they'll
resolve to non-intuitive null values that will make the code crash.

For example:
true & null == null
false & null == false
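The truth table and the examples above can be checked directly; a minimal console sketch of the lifted operators:

```csharp
using System;

class NullableLogicDemo
{
    static void Main()
    {
        bool? t = true, f = false, n = null;

        // & and | are "lifted" to three-valued logic on bool?.
        Console.WriteLine(t & n);  // prints a blank line: the result is null
        Console.WriteLine(f & n);  // False
        Console.WriteLine(t | n);  // True
        Console.WriteLine(f | n);  // blank line: null

        // Lifted comparisons on int? return a plain bool, and are
        // false whenever either operand is null.
        int? x = null;
        Console.WriteLine(x < 5);    // False
        Console.WriteLine(x > 5);    // False
        Console.WriteLine(x == null); // True
    }
}
```

Note that equality with null is the one comparison that does succeed: `x == null` is true for a null int?, unlike the ordering operators.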

Consider looking for the lowest int? in a collection of int? (nullable int)...

foreach (int? n in collection) {
    if (n < curr) curr = n;
}

Not only would that code crash if collection == null, but it would lock onto
NULL as the lowest value (if curr was initially NULL), because the comparison
always returns FALSE when NULL is an operand.

Moreover, it yields non-intuitive and asymmetrical behavior... Consider
looking for the maximum int? in a collection:

foreach (int? n in collection) {
    if (n > curr) curr = n;
}

Again, this finds NULL as the maximum value if curr is initially NULL.

Okay, that might seem a contrived example. Hopefully you see the point.
 
Guest

While I think comments this late in the process will be ignored or
postponed, you really should address these concerns directly to Microsoft.

> Primary comment...
> 4) **(NULL-valued value types should be treated as ZERO, or FALSE, for
> purposes of mathematical and logical operations!) -- This gives people an
> intuitive understanding of the program behavior and prevents the code from
> resulting in values that will make other code crash.

I don't like that at all. NULL usually means "unknown value" or "value
not specified", something which might come in handy when you are
recording data from a user in a database. What typically happens if you
drop NULL is that you pick one value as a "magic value" that means "the
user did not specify this value", and that effectively removes that
value as something that is valid. In other words, if the user types in 0
you suddenly don't know whether the user left the field empty or typed
in 0, something which will be important for a lot of applications. In
cases where the value is used to refer to an object in a different table
this might be crucial, because the database will typically not allow you
to refer to non-existent rows. In such a case you end up adding dummy
rows to tables so that you can use 0 as a reference and not have the
database engine balk at your database updates.

> I) Comparative operators return FALSE if either operand is NULL.
> **(The comparative and mathematical operators should treat NULL as ZERO)

Again, since this meaning of NULL typically comes from the database
world where NULL means "unknown", not even two NULL values are
considered to be equal.

> II) The bitwise binary operators & and | are overloaded on the bool? type to
> serve as boolean logical operations with NULL as a ternary value...
> **(NULL should be treated as FALSE and work accordingly in all comparative
> operators).

Again, see above.

> Here is the corresponding truth table for the & and | operators on the bool?
> (that is, nullable bool) datatype:

> This makes nullable types dangerous to use in expressions, because they'll
> resolve to non-intuitive null values that will make the code crash.

It'll only make your code crash if you're using nullable boolean types
and don't cater for a NULL result.
> For example:
> true & null == null
> false & null == false
>
> Consider looking for the lowest int? in a collection of int? (nullable int)...
>
> foreach (int? n in collection) {
>     if (n < curr) curr = n;
> }
>
> Not only would that code crash if collection == null, but it would lock onto
> NULL as the lowest value (if curr was initially NULL), because the comparison
> always returns FALSE when NULL is an operand.

In this loop you're effectively not catering for NULL, so again, this is
a bug in your code.

> Moreover, it yields non-intuitive and asymmetrical behavior... Consider
> looking for the maximum int? in a collection:
>
> foreach (int? n in collection) {
>     if (n > curr) curr = n;
> }
>
> Again, this finds NULL as the maximum value if curr is initially NULL.
>
> Okay, that might seem a contrived example. Hopefully you see the point.

I think you'll find that adding support for nullable types in your
software will require more than just adding a few question marks here
and there; it will require more extensive coding to support it.

foreach (int? n in collection) {
    if (n > curr && n != null) curr = n;
}

This does not solve the problem of picking the first value, so something
like:

foreach (int? n in collection) {
    if (n != null) {
        if (curr == null || n > curr) curr = n;
    }
}
 
Guest

1)

Although I concur that it might be easier to read, it does appear to
introduce an ambiguity into the language. Also remember that many C#
developers come from C++, where this syntax could be mistaken for an inline
function pointer definition, which is confusing. If reuse is your goal, then
wouldn't an interface that combines IComparable and ISerializable be a
better solution than reusing multiple constraints?

As for the where statement: it was not a keyword originally, but it is used
in other languages as a keyword, so using where as a variable name or
something probably isn't the best idea anyway. Besides, you said you liked
LINQ, and it is a keyword there as well. I think your 0.1% code estimation
is a little low. I suspect that generics will become very common and
popular, just like templates did in C++. I don't think it'll be long before
every project is using generics at least somewhere. I wouldn't be surprised
if a grep revealed more where usages than overrides.

2)

There seem to be two main camps of people: those who think C# and .NET are
the same and those who think they are completely different. The reality lies
in the middle. C# is a "clean" implementation of a .NET language. Almost
every feature in C# is based in .NET. Nullable types are no exception. A
nullable type maps to System.Nullable<T>. You can use this full type name if
you like and the compiler won't complain. But since nullable types are going
to be pretty common, the developers thought it'd be easier to use ?. I concur
that it might initially be confusing, since there are now three meanings for
?, but it sort of makes sense when you read it in context (int? - it might be
an int). Permitting any object to be nullable defeats the purpose of having
value types, so you can't just assume that any type can be null. As already
stated, null represents an unspecified value. What should DateTime be when it
is null? How about a float? If you say zero then you are specifying a value.
It would seem that NaN would be better, but then again maybe not. Nullable
types were added primarily to support DB work. When I retrieve a value from
the DB it might be important to know whether the value was zero or
unspecified. In general code I don't suspect that nullable types are going
to be common.
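The mapping described above is easy to see in code: int? is just shorthand for System.Nullable<int>, with HasValue and Value exposing the state. A short sketch:

```csharp
using System;

class NullableMappingDemo
{
    static void Main()
    {
        int? shorthand = 5;
        Nullable<int> longhand = 5;   // the same type, spelled out

        // Both box to the same underlying System.Int32 value.
        Console.WriteLine(shorthand.GetType() == longhand.GetType()); // True
        Console.WriteLine(shorthand.HasValue); // True
        Console.WriteLine(shorthand.Value);    // 5

        int? missing = null;
        Console.WriteLine(missing.HasValue);   // False
        Console.WriteLine(missing ?? 0);       // 0 (the ?? null-coalescing operator)
    }
}
```

The HasValue flag is exactly the "zero vs. unspecified" distinction the DB scenario needs: a retrieved 0 and a retrieved NULL are different values of the same int?.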

As for the compiler figuring out the usage: that would require type
inference. Unfortunately, type inference only works when the compiler can
see all the code, and that just isn't a realistic assumption. Take the
following code:

//Assembly 1
int Foo ( int x )
{
    //Do some work
    return x;
}

//Assembly 2
int result = Foo(10);

Is result a nullable int? From the function signature it can't be. So how
can we tell the compiler that the function could return null? Catch-22.

I've noticed that you've been posting a couple of comments about the spec.
I gather that you're reading through it and posting your comments. As
someone already said, it is a little too late. We compiler writers went to
MS back at the beginning of the year to discuss the changes in Whidbey as
they impact compiler writers, and all of this was already set in stone. If
you want to have any influence in the decision process you should get
started far earlier. MS has recently released the 3.0 spec for C#. I
recommend that you start looking at it now if you want to be able to argue
any points they are making. Also be aware that most of the stuff they're
adding comes from other research projects, so use those resources as well.
For example, LINQ is derived from C Omega, and the functional programming
stuff like lambda expressions comes from F# (if I remember correctly).

Michael Taylor - 9/19/05

Marshal said:
> [original post quoted in full; snipped -- see above]
 
Michael S

Marshal said:
> ///////////////////////////////////////////////////////////////////////////////////////////////
> /// [1] CONSTRAINTS ON GENERICS
> ////////////////////////////////////////////////////
>
> public class Node<T> where T:IComparable<T>
>
> I don't like the syntax, and would prefer something that groups the
> constraints together with the type parameter they govern, as in this
> suggestion:
>
> **(Should be: public class Node<T(IComparable,ISerializable)>)

I think your suggestion is harder to read.

> Also, I don't like introducing a new "where" keyword that has a special
> purpose used in 0.1% of the code. (I do like the LINQ stuff, however. ;-))

But with LINQ the keyword will be used a lot.

> ///////////////////////////////////////////////////////////////////////////////////////////////
> /// [2] NULLABLE TYPES
> //////////////////////////////////////////////////////////////////
>
> int? alpha = null;
>
> Obvious comments...
> 1) Special-purpose operators/symbols/keywords etc. should be avoided at all
> costs or we'll be swimming in them after several iterations of language
> enhancements.

Why should they be avoided at all costs? Who died and made someone god, and
where are his commandments?
As long as operators can't be used in weird ways to create silly stuff like
the old C gotcha int i = i+++++i, I don't see any problems.

> 2) If all types were nullable it would just make life easier with reflection
> and other things, but we'd lose 32 bits of memory per 32 local nullable types
> and a few extra bits of precious time.

I don't think life would be easier if all types were nullable.
But from what I read below, you seem to have a hard time understanding what
null is.

> 3) The compiler should figure out based on usage whether it's nullable or
> not, especially if you protect operations from returning null, which you
> should, but you don't!!! It's a stupid feed-forward of a bad idea from long
> ago.

Really? Figure it out from usage? How would the compiler go about doing
that? What you are really saying is that all types should be nullable.

> Primary comment...
> 4) **(NULL-valued value types should be treated as ZERO, or FALSE, for
> purposes of mathematical and logical operations!) -- This gives people an
> intuitive understanding of the program behavior and prevents the code from
> resulting in values that will make other code crash.

Are you mad? null is not zero nor false in .NET.

> I am supposed to be an expert in that area especially. So somebody please
> explain why we have the behavior that:

Because you are obviously not an expert! =)

> This makes nullable types dangerous to use in expressions, because they'll
> resolve to non-intuitive null values that will make the code crash.

Yes, nulls are dangerous to use in expressions. You got something right.

> Hopefully you see the point.

Nope, I don't see the point in much of what you wrote.
Anyway, soon you'll be able to fix your null values without much hassle:

string s = null;
int i = Convert.ToInt32(s ?? "0");

Happy Nulling
- Michael S
 
