Boxing and Unboxing ??


Bruce Wood

Peter said:
Are you referring to Generics? Does this address this issue of passing a struct
by (address) reference?

No. "ref" and "out" address the issue of passing a struct by reference
(to a method).
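
A minimal sketch of that difference, using a hypothetical Point struct (any
small struct would do):

struct Point { public int X; public int Y; }

class RefDemo
{
    // Without "ref" the struct is copied, so changes stay local to the method.
    static void MoveCopy(Point p) { p.X += 10; }

    // With "ref" the method works on the caller's struct itself.
    static void MoveRef(ref Point p) { p.X += 10; }

    static void Main()
    {
        Point pt = new Point();
        MoveCopy(pt);             // pt.X is still 0
        MoveRef(ref pt);          // pt.X is now 10
        System.Console.WriteLine(pt.X);
    }
}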

Generics address the problem of writing a type that contains another,
arbitrary type that could be a value type or a reference type, without
incurring boxing overhead and while maintaining compile-time type
safety. For example, in .NET 1.1 if I wanted a "vector" of Persons, I
would write:

ArrayList personList = new ArrayList();
personList.Add(new Person("Frank"));
Person frank = (Person)personList[0];

Here, Person is a reference type (as most user-defined types are) and
so the ArrayList now contains one entry, which is a reference to a
Person that has space allocated for it on the heap. However, I've lost
(compile-time) type checking: ArrayList is a collection of Object, and
so to get Frank back I had to use a cast, which involves a run-time
type check.

If I want a "vector" of ints, I say this:

ArrayList intList = new ArrayList();
intList.Add(15);
int i = (int)intList[0];

Here, 15 is an integer value type. Since ArrayList is a collection of
Object, the value type has to be boxed onto the heap and a reference to
it placed in the ArrayList. In order to get the value back, I have to
unbox the value, the unbox operation here represented by the cast.

Generics eliminate both problems. You'll recognize the template syntax
of C++:

List<Person> personList = new List<Person>();
personList.Add(new Person("Frank"));
Person frank = personList[0];

There's no need for a cast or for a run-time type check because the
compiler already knows that every reference in the list refers to a
Person.

Similarly,

List<int> intList = new List<int>();
intList.Add(15);
int i = intList[0];

No boxing, no unboxing. The list holds native integers, not Objects.

I think that it is possible to take the concept of C# further along: to
provide every required feature of a language such as C++, yet to do this in
an entirely type-safe way, with essentially no additional execution-time
overhead, and to drastically reduce the number of details that must be
handled by the programmer. I think that C# has done an excellent job of
achieving these goals up to this point. I think that there are two
opportunities for improvement:

(1) Little tweaks here and there to eliminate more of the additional execution
time overhead.

(2) Abstract out the distinction between reference types and value types so that
the programmer will not even need to know the difference. The underlying
architecture can still have separate reference types and value types, yet, this
can be handled entirely transparently by the compiler and CLR.

How would you begin to achieve this?

I'm sorry, but I have to ask this... I don't mean to be combative or
snobby, but I'm a bit confused. It appears to me that you're trying to
get your head around C#'s version of value types, how they work, what
is boxing and unboxing and when does it happen, what are generics, and
some basic plumbing issues about how C# works. Correct me if I'm wrong,
but you seem knowledgeable about the inner workings of the machine, but
you're still trying to map what's going on in C# back to what happens
under the covers. If I can presume to sum up your questions, you're
experienced and intelligent, but some aspects of C# haven't quite
"clicked" for you yet.

And yet... you claim that C# is somehow lacking and needs improvement,
and I'm just dying to ask... based on what? I guess something just
isn't "clicking" for ME here.... I find myself very productive in C#.
There are some areas of the .NET Framework that I think could use
improvement, but the language itself works just fine for me. I find the
difference between value types and reference types very clear and
logical. I don't see where the "great rewards in increased programmer
productivity" will come from by trying to unify the two of them into...
what? A C++ type model? I'm sorry, but I found more people utterly
confounded by C++ than I have ever found confounded by Java or (now)
C#. C++ is a difficult language to master; C# by comparison is much
simpler, at least IMHO.

My problem is that I can't see where you're going with this idea. Could
you outline what's wrong with C#'s type system, how it should be
improved, and how those improvements would increase programmer
productivity?
 

Bruce Wood

Peter said:
There are some cases where this suggestion would be bad design. The case in mind
is similar to the cases where a "friend" function is used in C++.

What on earth has "friend" to do with value versus reference types?

By the way, C# has no concept of "friend", and there's really no way to
fake it. Again, you have to think differently when you design in C#. If
you start your design assuming that there is some sort of "friend"
mechanism and then try to fudge it in, you'll end up with nothing but a
mess, just as if you start your design in C++ by creating a class
called "Object" and then try to derive everything from it (a la C#)
what you end up with is a mess.

Different language, different idioms.
 
 

Jesse McGrew

Peter said:
Jesse McGrew said:
Why should the programmer have to change his code just to enable an
optimization? Why not just say "your methods will run faster if you
don't modify large value-type parameters"?

This aspect is not a matter of optimization, it is a matter of preventing
programming errors. By using the [const] parameter qualifier it is impossible to
inadvertently change a function parameter that was not intended to be changed.

It's already impossible! Those errors don't exist today: if you pass a
value type into a method, any changes the method makes will only affect
the local copy inside that method; the original value in the caller
won't be changed.

The potential errors are purely a side effect of the *optimization*
you've been proposing, which is to pass "in" value types by reference
even though you don't expect to get a changed value back from the
method (which is the purpose of "ref" and "out" parameters). If you
weren't passing them by reference, there'd be no need for those
parameters to be read-only, because any changes would be local to the
method being called.

Jesse
 

Peter Olcott

Bruce Wood said:
Peter said:
Are you referring to Generics? Does this address this issue of passing a
struct by (address) reference?

No. "ref" and "out" address the issue of passing a struct by reference
(to a method).

Generics address the problem of writing a type that contains another,
arbitrary type that could be a value type or a reference type, without
incurring boxing overhead and while maintaining compile-time type
safety. For example, in .NET 1.1 if I wanted a "vector" of Persons, I
would write:

ArrayList personList = new ArrayList();
personList.Add(new Person("Frank"));
Person frank = (Person)personList[0];

Here, Person is a reference type (as most user-defined types are) and
so the ArrayList now contains one entry, which is a reference to a
Person that has space allocated for it on the heap. However, I've lost
(compile-time) type checking: ArrayList is a collection of Object, and
so to get Frank back I had to use a cast, which involves a run-time
type check.

If I want a "vector" of ints, I say this:

ArrayList intList = new ArrayList();
intList.Add(15);
int i = (int)intList[0];

Here, 15 is an integer value type. Since ArrayList is a collection of
Object, the value type has to be boxed onto the heap and a reference to
it placed in the ArrayList. In order to get the value back, I have to
unbox the value, the unbox operation here represented by the cast.

Generics eliminate both problems. You'll recognize the template syntax
of C++:

List<Person> personList = new List<Person>();
personList.Add(new Person("Frank"));
Person frank = personList[0];

There's no need for a cast or for a run-time type check because the
compiler already knows that every reference in the list refers to a
Person.

Similarly,

List<int> intList = new List<int>();
intList.Add(15);
int i = intList[0];

No boxing, no unboxing. The list holds native integers, not Objects.

I think that it is possible to take the concept of C# further along: to
provide every required feature of a language such as C++, yet to do this in
an entirely type-safe way, with essentially no additional execution-time
overhead, and to drastically reduce the number of details that must be
handled by the programmer. I think that C# has done an excellent job of
achieving these goals up to this point. I think that there are two
opportunities for improvement:

(1) Little tweaks here and there to eliminate more of the additional
execution-time overhead.

(2) Abstract out the distinction between reference types and value types so
that the programmer will not even need to know the difference. The
underlying architecture can still have separate reference types and value
types, yet, this can be handled entirely transparently by the compiler and
CLR.

How would you begin to achieve this?

I'm sorry, but I have to ask this... I don't mean to be combative or
snobby, but I'm a bit confused. It appears to me that you're trying to
get your head around C#'s version of value types, how they work, what
is boxing and unboxing and when does it happen, what are generics, and
some basic plumbing issues about how C# works. Correct me if I'm wrong,
but you seem knowledgeable about the inner workings of the machine, but
you're still trying to map what's going on in C# back to what happens
under the covers. If I can presume to sum up your questions, you're
experienced and intelligent, but some aspects of C# haven't quite
"clicked" for you yet.

And yet... you claim that C# is somehow lacking and needs improvement,
and I'm just dying to ask... based on what? I guess something just
isn't "clicking" for ME here.... I find myself very productive in C#.

There will always be a need for improvement in computer programming languages
until computers reach the point where they can anticipate every possible need in
advance. What I am saying is that C# has made great strides in providing
essentially all of the capability of C++, but at a reduced cost of programmer
effort. C# is in many respects an improved C++.

All of the benefits of advances in programming languages are derived in terms of
reduced programming effort to achieve the desired result. C# can go one more
step further with this and eliminate the need for programmers to ever pay any
attention to the differences between value types and reference types.
There are some areas of the .NET Framework that I think could use
improvement, but the language itself works just fine for me. I find the
difference between value types and reference types very clear and
logical. I don't see where the "great rewards in increased programmer
productivity" will come from by trying to unify the two of them into...
what? A C++ type model? I'm sorry, but I found more people utterly

No, not into a C++ type model.
C# unified [Varname.FieldName and Varname->FieldName]
into the single [Varname.FieldName]
Take this same C# idea to its logical conclusion.
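
A tiny sketch of what I mean, with hypothetical types just for illustration:

struct PointStruct { public int X; }
class  PointClass  { public int X; }

class DotDemo
{
    static void Main()
    {
        PointStruct s = new PointStruct();   // value type
        PointClass  c = new PointClass();    // reference type (lives on the heap)

        // C# uses "." for both, even though c is really a reference;
        // C++ would write s.X but would need "->" for a pointer to the class object.
        s.X = 1;
        c.X = 2;
        System.Console.WriteLine(s.X + " " + c.X);
    }
}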
 

Bruce Wood

Jesse said:
Peter said:
Jesse McGrew said:
Why should the programmer have to change his code just to enable an
optimization? Why not just say "your methods will run faster if you
don't modify large value-type parameters"?

This aspect is not a matter of optimization, it is a matter of preventing
programming errors. By using the [const] parameter qualifier it is impossible to
inadvertently change a function parameter that was not intended to be changed.

It's already impossible! Those errors don't exist today: if you pass a
value type into a method, any changes the method makes will only affect
the local copy inside that method; the original value in the caller
won't be changed.

The potential errors are purely a side effect of the *optimization*
you've been proposing, which is to pass "in" value types by reference
even though you don't expect to get a changed value back from the
method (which is the purpose of "ref" and "out" parameters). If you
weren't passing them by reference, there'd be no need for those
parameters to be read-only, because any changes would be local to the
method being called.

Jesse, I think that you misunderstand. C++ has a concept of "const" by
which you can pass a reference type, but can indicate that none of its
fields can be changed.

To my knowledge, the C# team is considering such a construct, but they
still haven't figured out how to fit it cleanly into the language (and
possibly the CLR) taking into account security concerns, etc.
 

Jon Skeet [C# MVP]

Jesse, I think that you misunderstand. C++ has a concept of "const" by
which you can pass a reference type, but can indicate that none of its
fields can be changed.

To my knowledge, the C# team is considering such a construct, but they
still haven't figured out how to fit it cleanly into the language (and
possibly the CLR) taking into account security concerns, etc.

For what it's worth, this has been debated around Java for *ages* (well
before .NET came on the scene) - C# is moving somewhat quicker in
general, but there are certainly lots of thorny issues :(
 

Bruce Wood

Peter said:
Bruce Wood said:
Peter said:
It might be possible to design a language that has essentially all of the
functional capabilities of the lower-level languages, without the
requirement of ever directly dealing with pointers.

True, but one question I know the C# team constantly asks is whether a
feature is worth the additional complexity it adds to the language. (I
know this because they frequently cite that as a reason for not
including certain features.) What does it really buy you being able to
take the address of an arbitrary variable (in safe code... I know that
you can do it in unsafe code)? As I said, I think that Java (and now
C#) have demonstrated that it doesn't buy you much. You mentioned
boxing overhead, but in .NET 2.0 you can pretty-much avoid boxing...
all you have to do is learn a new idiom: a new way to do what you've
always done, but now in a new language.

Are you referring to Generics? Does this address this issue of passing a
struct by (address) reference?

No. "ref" and "out" address the issue of passing a struct by reference
(to a method).

Generics address the problem of writing a type that contains another,
arbitrary type that could be a value type or a reference type, without
incurring boxing overhead and while maintaining compile-time type
safety. For example, in .NET 1.1 if I wanted a "vector" of Persons, I
would write:

ArrayList personList = new ArrayList();
personList.Add(new Person("Frank"));
Person frank = (Person)personList[0];

Here, Person is a reference type (as most user-defined types are) and
so the ArrayList now contains one entry, which is a reference to a
Person that has space allocated for it on the heap. However, I've lost
(compile-time) type checking: ArrayList is a collection of Object, and
so to get Frank back I had to use a cast, which involves a run-time
type check.

If I want a "vector" of ints, I say this:

ArrayList intList = new ArrayList();
intList.Add(15);
int i = (int)intList[0];

Here, 15 is an integer value type. Since ArrayList is a collection of
Object, the value type has to be boxed onto the heap and a reference to
it placed in the ArrayList. In order to get the value back, I have to
unbox the value, the unbox operation here represented by the cast.

Generics eliminate both problems. You'll recognize the template syntax
of C++:

List<Person> personList = new List<Person>();
personList.Add(new Person("Frank"));
Person frank = personList[0];

There's no need for a cast or for a run-time type check because the
compiler already knows that every reference in the list refers to a
Person.

Similarly,

List<int> intList = new List<int>();
intList.Add(15);
int i = intList[0];

No boxing, no unboxing. The list holds native integers, not Objects.
That, in the end, is what it comes down to: C# works very well. It's
just that it does things differently than does C++, and you can't take
C++ idioms and concepts and start writing C# as though it were C++. In
a few domains, C++ is much better suited to the problems than is C#,
but in most domains C# gives you all the functionality you need while
helping keep you out of trouble.

I think that it is possible to take the concept of C# further along: to
provide every required feature of a language such as C++, yet to do this in
an entirely type-safe way, with essentially no additional execution-time
overhead, and to drastically reduce the number of details that must be
handled by the programmer. I think that C# has done an excellent job of
achieving these goals up to this point. I think that there are two
opportunities for improvement:

(1) Little tweaks here and there to eliminate more of the additional
execution-time overhead.

(2) Abstract out the distinction between reference types and value types so
that the programmer will not even need to know the difference. The
underlying architecture can still have separate reference types and value
types, yet, this can be handled entirely transparently by the compiler and
CLR.

How would you begin to achieve this?

I'm sorry, but I have to ask this... I don't mean to be combative or
snobby, but I'm a bit confused. It appears to me that you're trying to
get your head around C#'s version of value types, how they work, what
is boxing and unboxing and when does it happen, what are generics, and
some basic plumbing issues about how C# works. Correct me if I'm wrong,
but you seem knowledgeable about the inner workings of the machine, but
you're still trying to map what's going on in C# back to what happens
under the covers. If I can presume to sum up your questions, you're
experienced and intelligent, but some aspects of C# haven't quite
"clicked" for you yet.

And yet... you claim that C# is somehow lacking and needs improvement,
and I'm just dying to ask... based on what? I guess something just
isn't "clicking" for ME here.... I find myself very productive in C#.

There will always be a need for improvement in computer programming languages
until computers reach the point where they can anticipate every possible need in
advance. What I am saying is that C# has made great strides in providing
essentially all of the capability of C++, but at a reduced cost of programmer
effort. C# is in many respects an improved C++.

All of the benefits of advances in programming languages are derived in terms of
reduced programming effort to achieve the desired result. C# can go one more
step further with this and eliminate the need for programmers to ever pay any
attention to the differences between value types and reference types.

I disagree. The only way I can think of to interpret your suggestion is
by way of comparison, viz:

C# has two fundamental classes of types: value types and reference
types. They act very differently: value types being... well, values such
as integers, doubles, floats. Also included here are date/times and
some other types such as System.Drawing.Point and
System.Drawing.Rectangle. You can also make your own value types, but
you should have very little cause to do so. Reference types are used
for almost everything else. Java takes a similar approach to this.

One could unify the two classes of types by making everything a
reference type, or at least act like a reference type. Every integer,
double, float, or decimal would live (or appear to live) on the heap,
with a reference pointing to it. I believe that Smalltalk took an
approach similar to this, although I can't be sure because I've never
actually used Smalltalk.

One could also unify the two classes of types by making everything a
value type. This is essentially what C++ does. If you don't explicitly
take the address of something, it's a value. C# could do this, but then
you would have to have some syntax to indicate that you were passing a
reference to a method, for example, instead of passing the entire
object on the stack, which most of the time you don't want to do.
Because C++ has pointers, this is all very easy in C++. However, I
claim that this is one place where C# got it right: MOST OF THE TIME
you want to pass a reference to an object, and you want to pass what C#
considers value types by value. In C#, if you say nothing special, this
is what you get. If you say "ref", then you can pass a value type by
reference (or change the reference to a reference type--a pointer to a
pointer, in C++ terms). The C# team _could_ create a "copy" keyword
that would allow you to pass a reference type "by value" as it were, as
you can do in C++, rather than forcing you to say
"(Person)frank.Clone()" which is rather clunky.

So, again, I don't see how unifying the two kinds of types would
"improve programmer productivity". It would certainly make it easier
for people coming from the C++ world to understand what was going on,
but I think that it would confuse most newbies and if it were done
wrong would lead to newbies copying huge objects onto the stack by
mistake, which undermines claims to "improved productivity."
There are some areas of the .NET Framework that I think could use
improvement, but the language itself works just fine for me. I find the
difference between value types and reference types very clear and
logical. I don't see where the "great rewards in increased programmer
productivity" will come from by trying to unify the two of them into...
what? A C++ type model? I'm sorry, but I found more people utterly

No, not into a C++ type model.
C# unified [Varname.FieldName and Varname->FieldName]
into the single [Varname.FieldName]
Take this same C# idea to its logical conclusion.

Again, I'm confused. C# has no Varname->FieldName syntax. That's C++.
The only ways I can see to "unify" the model is put everything on the
heap (or make it act as though it were on the heap), or make everything
a value type (which is essentially C++'s approach). Neither alternative
makes the language easier to understand or use, IMHO.

And it's not really very difficult to start with. There's only one rule
that's different from C++, which is this: "Reference types (which is to
say, most types) in C# act natively like pointer types in C++. That is,
if Person is a reference type, passing a Person to a method passes a
reference; every Person variable holds a reference; and every Person
instance lives on the heap." All other types (that is, value types) act
like C++ types. In the end, it's C# reference types, not value types,
that differ from C++.
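
In code, that one rule looks like this (with a hypothetical Person class):

class Person { public string Name; }

class RefSemantics
{
    static void Rename(Person p) { p.Name = "Renamed"; }   // receives a copy of the reference

    static void Main()
    {
        Person a = new Person();
        a.Name = "Frank";
        Person b = a;                          // copies the reference, not the Person
        Rename(a);                             // the method sees the same heap object
        System.Console.WriteLine(b.Name);      // prints "Renamed"
    }
}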

And you know what? I like that better. No more "&", no more "*"... you
just use the types and they do what you probably wanted to do anyway
without having to pepper your code with additional syntax. Oh, and
you're not allowed to do patently silly things like pass an 80Kb
instance on the stack, even if 0.01% of the time that really is what
you would have wanted to do....
 

Bruce Wood

Jon said:
For what it's worth, this has been debated around Java for *ages* (well
before .NET came on the scene) - C# is moving somewhat quicker in
general, but there are certainly lots of thorny issues :(

Indeed. C++ sidesteps the problems by providing an un-const cast (const_cast) by
which you can remove the "constness" of something. In other words, C++
provides "const", but the language doesn't guarantee that none of the
called methods will modify the object. As such, "const" in C++ is more
a suggestion to the programmer that carries weight only if the
programmer decides to abide by it.

I believe that the C# team's concern is that they want a tight
contract: they want a "const" that guarantees "const", not just a
guideline for programmers.

As well, there's the ugly C++ cascading-const problem. As a programmer,
even if you want to abide by a "const" contract imposed upon you, it's
not uncommon to find yourself in the situation in which "if a is
'const' then that means that parameter b to function F must be 'const',
but then if that's 'const' then parameter c to function G must also be
'const', but then...." If you haven't carefully used 'const' throughout
your libraries and code from the very start, it can get ugly pretty
quickly. Doubly so if you buy a library from someone and _they_ haven't
bothered using 'const' everywhere they could.

Anyway, I trust the C# team to introduce this (very useful) feature
only when they have it sorted out and not before. As Anders Hejlsberg
said, "It's easy to add new features to a language later; it's very
difficult to change them if you include them and then later decide that
you didn't get it right" or words to that effect. Wise man.
 

Jon Skeet [C# MVP]

As well, there's the ugly C++ cascading-const problem. As a programmer,
even if you want to abide by a "const" contract imposed upon you, it's
not uncommon to find yourself in the situation in which "if a is
'const' then that means that parameter b to function F must be 'const',
but then if that's 'const' then parameter c to function G must also be
'const', but then...." If you haven't carefully used 'const' throughout
your libraries and code from the very start, it can get ugly pretty
quickly. Doubly so if you buy a library from someone and _they_ haven't
bothered using 'const' everywhere they could.

That's one way of making it cascade in an ugly way - another is in
terms of generics. If people think Map<Foo,List<Bar>> is bad, what
about Map<const Foo, const List<const Bar>>?

Bruce said:
Anyway, I trust the C# team to introduce this (very useful) feature
only when they have it sorted out and not before. As Anders Hejlsberg
said, "It's easy to add new features to a language later; it's very
difficult to change them if you include them and then later decide that
you didn't get it right" or words to that effect. Wise man.

Well, that's true from a *language* point of view - but much harder in
terms of a framework. Applying "const" retrospectively is an infeasibly
resource-intensive process, I suspect - and working out backwards
compatibility will be nightmarish.

One easier thing to do with reliability which I think the C# team is
interested in is the opposite of nullable value types: non-nullable
reference types. Basically a compile-time way of preventing null
references from being passed in. I suspect it has similar gotchas, but
might be somewhat simpler.
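
(As it turned out, C# 8.0 eventually shipped nullable reference types, which
approximate this idea with compile-time warnings. A minimal sketch:)

#nullable enable

class NullableDemo
{
    // Here "string" means non-nullable; "string?" would allow null.
    static int Length(string s) { return s.Length; }

    static void Main()
    {
        string? maybe = null;
        // Length(maybe);                  // compiler warning: possible null argument
        System.Console.WriteLine(Length(maybe ?? "fallback"));
    }
}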
 

Peter Olcott

Bruce Wood said:
I disagree. The only way I can think of to interpret your suggestion is
by way of comparison, viz:

C# has two fundamental classes of types: value types and reference
types. They act very differently: value types being... well, values such
as integers, doubles, floats. Also included here are date/times and
some other types such as System.Drawing.Point and
System.Drawing.Rectangle. You can also make your own value types, but
you should have very little cause to do so. Reference types are used
for almost everything else. Java takes a similar approach to this.

One could unify the two classes of types by making everything a
reference type, or at least act like a reference type. Every integer,
double, float, or decimal would live (or appear to live) on the heap,
with a reference pointing to it. I believe that Smalltalk took an
approach similar to this, although I can't be sure because I've never
actually used Smalltalk.

One could also unify the two classes of types by making everything a
value type. This is essentially what C++ does. If you don't explicitly
take the address of something, it's a value. C# could do this, but then
you would have to have some syntax to indicate that you were passing a
reference to a method, for example, instead of passing the entire
object on the stack, which most of the time you don't want to do.
Because C++ has pointers, this is all very easy in C++. However, I
claim that this is one place where C# got it right: MOST OF THE TIME
you want to pass a reference to an object, and you want to pass what C#
considers value types by value. In C#, if you say nothing special, this
is what you get. If you say "ref", then you can pass a value type by
reference (or change the reference to a reference type--a pointer to a
pointer, in C++ terms). The C# team _could_ create a "copy" keyword
that would allow you to pass a reference type "by value" as it were, as
you can do in C++, rather than forcing you to say
"(Person)frank.Clone()" which is rather clunky.

So, again, I don't see how unifying the two kinds of types would
"improve programmer productivity". It would certainly make it easier
for people coming from the C++ world to understand what was going on,
but I think that it would confuse most newbies and if it were done
wrong would lead to newbies copying huge objects onto the stack by
mistake, which undermines claims to "improved productivity."

What I am saying is to raise the C# language to a whole other level of
abstraction where programmers writing programs in this language never have any
reason to know the first thing about separate value types and reference types,
there is only one type, and it simply works correctly and efficiently without
paying any attention at all to the underlying details.

There may still be separate value types and reference types under the covers,
but, the programmer using the language will care about these to the same degree
that they care about which operand is located in which machine register, not at
all.
There are some areas of the .NET Framework that I think could use
improvement, but the language itself works just fine for me. I find the
difference between value types and reference types very clear and
logical. I don't see where the "great rewards in increased programmer
productivity" will come from by trying to unify the two of them into...
what? A C++ type model? I'm sorry, but I found more people utterly

No, not into a C++ type model.
C# unified [Varname.FieldName and Varname->FieldName]
into the single [Varname.FieldName]
Take this same C# idea to its logical conclusion.

Again, I'm confused. C# has no Varname->FieldName syntax. That's C++.

C# has no VarName->FieldName syntax specifically because it took the two C++
syntaxes for working with reference types and value types and combined them
into a single syntax.
 

Peter Olcott

Bruce Wood said:
Indeed. C++ sidesteps the problems by providing an un-const cast (const_cast) by
which you can remove the "constness" of something. In other words, C++
provides "const", but the language doesn't guarantee that none of the
called methods will modify the object. As such, "const" in C++ is more
a suggestion to the programmer that carries weight only if the
programmer decides to abide by it.

I believe that the C# team's concern is that they want a tight
contract: they want a "const" that guarantees "const", not just a
guideline for programmers.
The idea of [const] is to prevent accidental errors. Until C# has some form of
[const] it will be either somewhat more error prone than C++ or somewhat slower
than C++. If you really want to enforce [const] at run-time, then there is no
way to prevent its cascading effect.
 

Jesse McGrew

Bruce said:
Jesse said:
Peter said:
Why should the programmer have to change his code just to enable an
optimization? Why not just say "your methods will run faster if you
don't modify large value-type parameters"?

This aspect is not a matter of optimization, it is a matter of preventing
programming errors. By using the [const] parameter qualifier it is impossible to
inadvertently change a function parameter that was not intended to be changed.

It's already impossible! Those errors don't exist today: if you pass a
value type into a method, any changes the method makes will only affect
the local copy inside that method; the original value in the caller
won't be changed.

The potential errors are purely a side effect of the *optimization*
you've been proposing, which is to pass "in" value types by reference
even though you don't expect to get a changed value back from the
method (which is the purpose of "ref" and "out" parameters). If you
weren't passing them by reference, there'd be no need for those
parameters to be read-only, because any changes would be local to the
method being called.

Jesse, I think that you misunderstand. C++ has a concept of "const" by
which you can pass a reference type, but can indicate that none of its
fields can be changed.

Maybe I am misunderstanding Peter's suggestion, then. I thought he was
talking about providing this for value types, such that 'void Foo(in
LargeStruct bar)' would pass bar by reference like the "ref" keyword,
but it would then be read-only from within Foo.
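
(For what it's worth, C# 7.2 later added an "in" parameter modifier that does
roughly this: the argument is passed by reference but is read-only inside the
method. A small sketch, with a hypothetical LargeStruct:)

struct LargeStruct { public long A, B, C, D; }

class InDemo
{
    static long Sum(in LargeStruct bar)
    {
        // bar.A = 1;                      // compile-time error: 'in' parameters are read-only
        return bar.A + bar.B + bar.C + bar.D;
    }

    static void Main()
    {
        LargeStruct s;
        s.A = 1; s.B = 2; s.C = 3; s.D = 4;
        System.Console.WriteLine(Sum(in s));   // "in" at the call site is optional
    }
}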

I can see the merits of applying a "const" attribute to reference types
- although read-only wrappers seem to be a decent solution too.

Jesse
 

Jesse McGrew

Peter said:
What I am saying is to raise the C# language to a whole other level of
abstraction where programmers writing programs in this language never have any
reason to know the first thing about separate value types and reference types,
there is only one type, and it simply works correctly and efficiently without
paying any attention at all to the underlying details.

There may still be separate value types and reference types under the covers,
but, the programmer using the language will care about these to the same degree
that they care about which operand is located in which machine register, not at
all.

Perhaps you could share your thoughts as to how this might be done.
Bruce has explained two ways to do it, but they both suffer from some
key problems. It seems to me that the distinction between value and
reference types is one that has evolved out of necessity over the
years, building on the experience of Smalltalk, C++, and Java, and that
if there were an obvious better way to handle these types, we'd already
be using it.

Jesse
 

Peter Olcott

Jesse McGrew said:
Perhaps you could share your thoughts as to how this might be done.
Bruce has explained two ways to do it, but they both suffer from some
key problems. It seems to me that the distinction between value and
reference types is one that has evolved out of necessity over the
years, building on the experience of Smalltalk, C++, and Java, and that
if there were an obvious better way to handle these types, we'd already
be using it.

Jesse
I am not saying that this would be easy, but then the advances from machine
language to object-oriented programming weren't easy either. The end result
that I am proposing is far simpler than the prior example.

I was thinking along the lines of treating everything as a reference type and
mostly doing away with value types under the covers. There would be three
parameter qualifiers: {in, out, io}. [out] would be the same as it already is,
[io] would take the place of the deprecated [ref], and [in] would be for
read-only input parameters.
 

Barry Kelly

Peter said:
The idea of [const] is to prevent accidental errors. Until C# has some form of
[const] it will be either somewhat more error prone than C++

That's only true if, absent a const modifier, C++ and C# are equal in
error liability. (And I strongly believe they aren't.)
or somewhat slower
than C++.

Ditto re speed, but "speed" arguments are meaningless without context.
There are architectural limitations in both .NET and C++ that constrain
compilers / runtimes differently.
If you really want to enforce [const] at run-time, then there is no
way to prevent its cascading effect.

And does this not result in a net increase in error liability?

-- Barry
 

Bruce Wood

Peter said:
What I am saying is to raise the C# language to a whole other level of
abstraction where programmers writing programs in this language never have any
reason to know the first thing about separate value types and reference types,
there is only one type, and it simply works correctly and efficiently without
paying any attention at all to the underlying details.

OK, then we disagree. I prefer the way that it is now.

The issue is that abstracting away details can be good or bad. It can
be good when the details are getting in the way of understanding a
problem; abstract away the details, and the problem becomes clearer. It
can be bad when the details _matter_; abstract away the details and you
obscure the problem because the details were an important part of the
problem or its solution.

C++ has a unified type system that essentially lets the programmer do
anything they want. This includes both elegant and efficient solutions
to problems, as well as horribly inefficient misapplications of said
features. It gives you, as one author put it, "Enough rope to shoot
yourself in the foot with."

C# divides the type system into two groups, based on what you would
normally use the type for, and makes the types act (by default) in the
way you would usually want to use them. It reduces flexibility, but,
IMHO, makes the code easier to understand.

So is the unified abstraction that C++ offers a "bad abstraction,"
then? Not for those who have the high IQ necessary to understand how it
works, and understand which uses of types are good solutions and which
are misapplications. C# takes a lot of the guesswork out of it: how you
apply each kind of type (value vs reference) is baked into the
language. It makes it hard to do patently stupid things, but at the
cost of making the language a little less flexible.

I say that unifying C# value and reference types (a la C++) would make
the language harder to use well, not easier, and would reduce
programmer productivity.
There may still be separate value types and reference types under the covers,
but, the programmer using the language will care about these to the same degree
that they care about which operand is located in which machine register, not at
all.

A specious comparison. Changing the register in which an operand is
located doesn't change the semantics of my program. Passing something
on the stack and copying it on assignment, versus passing a reference
on the stack and copying a reference radically changes the semantics of
my program.

When I'm programming in C#, I don't "care" about the distinction in the
sense that it gets in my way. I know how the types work and I use them,
in the same way that when I drive my car I don't "care" about changing
gears. I've driven standard shift cars for so long that I no longer
even think about the process... I just do it.

I _do_ think about value versus reference types in the sense that I
think about the semantics of my program. Do I want a particular type to
act like a value, or act like a (reference) object? That's an important
design decision that I would have to make even in C++, when I decide
whether to pass an object on the stack, or pass its address. In a way,
it's worse in C++ because I have to make that decision every time I
call a method or assign a variable. In C# I make a design decision when
I write the class and then I get the "usual" behaviour by default from
then on.

As I said, we disagree. :)
 

Peter Olcott

Bruce Wood said:
OK, then we disagree. I prefer the way that it is now.

The issue is that abstracting away details can be good or bad. It can
be good when the details are getting in the way of understanding a
problem; abstract away the details, and the problem becomes clearer. It
can be bad when the details _matter_; abstract away the details and you
obscure the problem because the details were an important part of the
problem or its solution.

Abstracting away details is the primary means of every advance in programming
technology. The advance from machine language to assembly language abstracted
away the details of having to memorize opcodes. The advance from assembly
language to 3GLs abstracted away the details of dealing with unstructured
control flow.

The great advance from machine code to assembly language to "C" to C++ is that
there is very little loss of performance and much greater ease of use. C#
seems to have effectively accomplished this same thing, and taken it another
quarter step in ease of use through a reduction in details. I think that C# could
progress along the same path that it is on and abstract out the {value type,
reference type} detail with no loss to performance.
 

Bruce Wood

Peter said:
Abstracting away details is the primary means of every advance in programming
technology. The advance from machine language to assembly language abstracted
away the details of having to memorize opcodes. The advance from assembly
language to 3GLs abstracted away the details of dealing with unstructured
control flow.

Motherhood and apple pie. Of course, of course, but you seem to have
missed the point....

OK, if all abstraction is progress, then let me recommend another
abstraction that will lead to tremendous progress: why not abstract
away the difference between byte, short, int, long, long long, float,
double, and decimal? Why not just have one "numeric" type that
encompasses all of them. This will clearly abstract away a lot of
trivial housekeeping and vastly improve programmer productivity.

Except that it won't, because those details, or at least the difference
between int, double, and decimal, are very important considerations
when writing code. Abstracting away the difference doesn't lead to
progress: it just leads to bloated programs and, in all likelihood,
buggy programs that don't deal with numerical values properly. Why?
Because we've abstracted away a critical detail.

You see how silly the argument is? Some abstraction is progress, but
some is not. Some abstraction makes things worse. Sorry, Peter, but
I've been a programmer for far too long to be impressed by people
labeling their ideas "progressive" and trying to hang an argument on
that. I need you to demonstrate why C++'s type system (which is
essentially what you're promoting) makes programmers more productive
than C#'s type system, not play with words like "progress".
The great advance from machine code to assembly language to "C" to C++ is that
there is very little loss of performance and much greater ease of use. C#
seems to have effectively accomplished this same thing, and taken it another
quarter step in ease of use through a reduction in details. I think that C# could
progress along the same path that it is on and abstract out the {value type,
reference type} detail with no loss to performance.

OK, let's try this again.... I STILL have to worry about value type vs
reference type in C++. Every time I decide to call Foo(x) instead of
Foo(&x), I've made a value-type-versus-reference-type decision. On
every call. Every time I write a copy constructor. Every time I declare
int x instead of int *x. I'm making those decisions all of the time in
C++.

In C#, I make that decision _once_: when I create the type. Then I
forget about it, because C# almost always does what I want to do after
that without any further verbiage or decision-making.

Which language, then, makes it easier for me to write code? Which
abstraction, then, leads to greater programmer productivity? I claim
that it's C#'s way of doing things, which, as unfamiliar as it may be
to you, makes my life a whole lot easier and my code a whole lot
simpler. That, in fact is the bottom line: _the code is a lot simpler_.
At an abstract level it may not be as elegant, but in the trenches it's
simpler.

Heck, if you love abstraction for abstraction's sake, I can point you
to the most elegant language I ever used: LISP. A whole whack of people
in my uni failed the course, but for the few of us who "got it" it was
beautiful.

C# is designed to be usable by most programmers, all the way from
dabblers up to geniuses. You don't have to be terribly detail oriented
(which you must be in order to write C++) and you don't need to be a
great thinker (which you must be in order to write LISP). It's a
workmanlike language that gives up a little expressive power in order
to be accessible to the masses.

I claim that the abstraction you propose would sacrifice the latter.
 

Peter Olcott

Bruce Wood said:
Motherhood and apple pie. Of course, of course, but you seem to have
missed the point....

OK, if all abstraction is progress,

That all progress comes from abstraction does not entail that all abstraction
results in progress; that is a simple non sequitur error.
 
