christopher diggins
Rob Teixeira said: Ideal - if everything were in managed space. However, it
isn't, and I find this to be one of Java's weaknesses: its interface to
existing code (most of which is, of course, unmanaged). Working with
unmanaged code is sometimes better suited to unsafe code blocks. However,
it's not true that the GC is circumvented by unsafe code. Unsafe code
requires pinned variables in a local scope. You have to do something
horribly wrong to undermine the GC, and that takes conscious effort on the
part of the developer in most cases I can think of. The crux of this issue
is flexibility vs. forced robustness, and keep that in mind, because I'll
revisit it soon. In the meantime, realize that unsafe code requires a
special compiler switch. If you really don't want your team working on
large projects to use it, then don't allow that switch to be used. Problem
solved.
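To make the pinning point concrete, here is a minimal sketch (not from the
original post): the fixed statement pins the array only for the duration of
the block, and the file compiles only when the /unsafe switch is passed to
csc.

using System;

class UnsafeExample
{
    static unsafe void Main()
    {
        int[] numbers = { 1, 2, 3, 4 };

        // 'fixed' pins 'numbers' so the GC cannot move it while p is live;
        // once the block exits, the GC manages the array as usual.
        fixed (int* p = numbers)
        {
            for (int i = 0; i < numbers.Length; i++)
                Console.WriteLine(p[i]);
        }
    }
}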
You make good points. I am working on an improved and more educated argument
with regards to unsafe contexts.
It may seem that way to you, but that may be due to your familiarity with
large branches of visible code doing all the work, and this is a different
paradigm. Many languages now use declarative elements, and those that didn't
are recently seeing their adoption. However, the obfuscation part hardly
seems like a justifiable case for your argument against attributes, because
if we were to replace "attributes" with a compiler or library function, you
would essentially end up with the exact same results - a small tag of code
whose explicit details you are not privy to. In that respect, any language
other than assembly with no macros, whether it has declarative elements or
not, would be guilty of the same crime.
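As a hedged illustration of the "small tag of code" point, here is a sketch
with a made-up LoggedAttribute: applying it is no more opaque than calling a
library routine whose body you also cannot see.

using System;

[AttributeUsage(AttributeTargets.Method)]
sealed class LoggedAttribute : Attribute
{
    private readonly string category;
    public LoggedAttribute(string category) { this.category = category; }
    public string Category { get { return category; } }
}

class Service
{
    // Declarative form: a tag that tools or the runtime can find via reflection.
    [Logged("billing")]
    public void Charge() { /* work elided */ }
}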
Attributes allow modification of aspects of the language such as parameter
passing. I argue that this obfuscates the software, but it does definitely
increase the flexibility and expressiveness of the language. I also need to
mention that I don't have a problem with fixed or built-in attributes, but
rather with custom attributes. I will try to improve this section of the
critique.
...garbage collected and non-garbage...
I see several problems with this argument. First of all is the inconsistency
of your arguments. Remember the first point I made above that I said I would
revisit? You wanted the language to be less flexible in favor of enforcing
robustness (which you are happy to point out elsewhere in your dissertation
as well). Here, you are completely reversing your position. What MS has
done, is evaluate the top reasons in existing languages (primarily C/C++)
for bugs that both slip notice and cause tremendous problems. As you might
already have guessed, manual memory management is at the top of the list. GC
is the only way (that I know of, or that has been proven to date) to solve
this issue. This is a major point of robustness, and I'm unclear as to why
you would be in favor of robustness above, but not here. Also, there is no
way to "turn off" GC in C# (otherwise, you wouldn't be forced to use it,
right? - seems like a contradictory statement there), and switching to
unsafe mode will not turn off the GC. I have a feeling you are also
confusing the issues between managed and unmanaged code here (which is
completely different from safe vs. unsafe code). As Inigo said to Vizzini,
"I do not think this means what you think it means."
You are oversimplifying by treating the critique of a language as a simple
case of robustness versus flexibility, where we must choose one over the
other all the time for every language feature. I was definitely confused
with regard to unsafe contexts and unmanaged code. I will be revising this.
...at unpredictable moments...
That is the nature of GC. The thing to remember is: you shouldn't use
destructors for deterministic management (e.g. clean-up) of resources (that's
what Dispose is for). Destructors are there as a safety net in case a
consumer doesn't explicitly call Dispose. The Dispose pattern in my opinion
is valid, because the only way to get deterministic finalization is if the
consumer knows he needs something to be managed "now" - so he MUST call
something... whether that something is a memory deallocation keyword or a
Dispose function is rather irrelevant. Bottom line: if you want to know that
clean-up code is being called "now", use Dispose. That is deterministic. Of
course, C# isn't the only language that uses a GC and deals with the same
issue, but I'm betting you don't like any of them either ;-)
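For reference, a minimal sketch of the pattern Rob describes (the type name
is just a placeholder): Dispose gives deterministic clean-up "now", and the
destructor is only a safety net if Dispose is never called.

using System;

class ResourceHolder : IDisposable
{
    private IntPtr handle;           // stands in for some unmanaged resource
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);   // the safety net is no longer needed
    }

    ~ResourceHolder()                // runs at an unpredictable time, if at all
    {
        Dispose(false);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        // release 'handle' here; touch other managed objects only when disposing
        disposed = true;
    }
}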
Not all GC languages support non-deterministic destructors. This was a
choice of the language designers. Yes, we can avoid using them, but will Bob
in the next cubicle over remember to?
...implementation...
I'm not sure your example fits the problem in this case. For starters, if
that's the only problem with objects being limited to the heap, then hell..
what's the problem? Your example only illustrates that you shouldn't design
class libraries poorly, and the importance of not carrying implementation
details across boundaries (which is why we deal in interfaces
at that point). Secondly, I'd like to see an example of how this impacts the
consumer of said class library. As far as the user knows, he is passing a
parameter of a certain type to a member. If that parameter changes from
reference type to value type, does that really impact him? His code is going
to more or less look exactly the same from his point of view for that member
invocation.
But let's look at the flip side of the coin. Why would you want objects on the
stack to begin with? The primary reason for C/C++ programmers is that the
speed of heap allocation sucks in comparison to stack allocation. That
limitation doesn't exist in .NET. Heap allocation in .NET requires only the
increment of the allocation pointer, making it almost exactly the same as
stack allocation in terms of performance. To make reference types available
on the stack, you'd have to change GC infrastructure, which would
effectively make reference type allocation on the stack SLOWER. In addition,
value types are rather small in comparison with reference types, especially
if you consider strings, which make up the majority of data in a program. Do
you really want a programmer to accidentally put that on the stack? IMHO,
this isn't a sticking point at all for or against C#.
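A short sketch of the difference being argued over (names are placeholders):
a struct copies by value and needs no heap object of its own, while a class
instance lives on the GC heap and copies only the reference - yet the call
sites look the same.

struct PointValue { public int X, Y; }     // value type
class  PointRef   { public int X, Y; }     // reference type

class Demo
{
    static void Main()
    {
        PointValue a = new PointValue();   // no separate heap object
        PointValue b = a;                  // copies the whole value
        b.X = 10;                          // a.X is still 0

        PointRef c = new PointRef();       // allocated on the GC heap
        PointRef d = c;                    // copies only the reference
        d.X = 10;                          // c.X is now 10 as well
    }
}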
Not a very strong point that I made, I agree.
...a programmer has to be [aware of whether] they are passing things by
[value or by reference]...
Actually, types don't reside on the stack or heap, variables do. Secondly,
types don't decide where the allocation goes, the runtime does.
Again, I'm curious as to what sorts of bugs this would cause.
I will try and update this as well.
...a problem of needing to ... simple assignment. Boxing ... used ... by
reference.
Quite the contrary, IMO. Boxing/unboxing allows the programmer to need to
know a lot LESS about whether a type is a reference type or not in order to
use it in the context of a reference type (for example, the common
assignment of value types to a more general reference type reference). If it
were not for boxing/unboxing, the programmer would have to be far MORE aware
of the details of the type implementations and code such scenarios
SEPARATELY.
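A small sketch of that point: assigning an int to object boxes it
automatically, which is what lets value types flow through code written
against references (the pre-generics ArrayList is the classic case).

using System;
using System.Collections;

class BoxingDemo
{
    static void Main()
    {
        int i = 42;
        object o = i;            // boxing: the int is wrapped in a heap object
        int j = (int)o;          // unboxing: explicit cast back to the value type

        ArrayList list = new ArrayList();   // stores object references
        list.Add(i);                        // boxed automatically on the way in
        list.Add("hello");                  // the same Add call takes a reference type
        Console.WriteLine(j);
    }
}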
Hmm... I will need to look into that argument more deeply.
...objects.
I understand what you want to do here - create a const instance of a class.
Fair enough, but the semantics could get ugly. First of all, this wouldn't
truly be CONST unless the compiler had a way to create literal instance data
embedded in the program (as it can easily do for numbers and strings because
it's programmed to know how to do that). You can argue that serialization
might do the trick, but then you'd be limited to only serializable objects
being const. If the compiler is unable to do this, it would have to write
code behind the scenes to create an instance at runtime, which isn't truly
const and you'd be in no better position than using the "final" modifier in
Java, which only prevents you from modifying the value. But assuming you
were able to freeze-dry a literal instance (and have a mechanism for the
programmer to specify the data for it in the first place), you could then
run into issues of possibly bypassing class behaviors associated with
constructors - and if you didn't, would it be truly const? This is one of
the reasons that value types get literals and reference types don't - value
types have no constructors.
I don't understand what you mean by "truly const". How could you bypass
class behaviours?
Also, the lack of literal declarations doesn't mean you can't have immutable
objects. You can certainly program immutable objects in C#. Again, you might
want to check your verbiage here. One thing doesn't necessarily mean the
other. And for the sake of a good critique, this can show a lack of
understanding of the principles you are critiquing against.
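A hedged sketch of what "programming immutable objects in C#" can look like
(the Money type is invented for illustration): readonly fields set only in
the constructor, so an instance cannot change after creation even without a
const-object literal.

class Money
{
    private readonly decimal amount;
    private readonly string currency;

    public Money(decimal amount, string currency)
    {
        this.amount = amount;
        this.currency = currency;
    }

    public decimal Amount   { get { return amount; } }
    public string  Currency { get { return currency; } }

    // "Mutation" returns a new instance instead of changing this one.
    public Money Add(decimal extra) { return new Money(amount + extra, currency); }
}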
Yes I realize the verbiage is weak there.
...a module or a program. It is not directly related to objects...
Not sure I agree completely with you here. First of all, classes and modules
are both holding containers for members (whether data or functions). The
difference is that modules cannot have distinct instances, and classes can
(or not, if you're dealing with static members) - in effect being perfectly
capable of performing both roles. Your gripe seems to be more about the
invocation syntax, which is purely compiler syntactic sugar and a few rules
about name resolution. For example, VB.NET allows you to create a "Module"
(thusly named), which (for the reasons I stated earlier) is really just a
static class that contains only static members (and you don't have to
specify that, and they can't be changed to instance members either). The VB
compiler also allows you to invoke said members without qualifying it to the
module's (class') name - in essence letting you "export" what appears to be
a global function to the consumer. That could certainly be in C# as well,
but honestly, it's a matter of personal preference at that point. And there
are a lot of people that prefer it the way it is rather than the way VB.NET
does it. For starters, you are introducing a level of ambiguity and issues
revolving around name resolution that otherwise wouldn't be there... and for
a guy who doesn't like ambiguity, this argument seems rather contradictory
with your previous line of thinking.
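As a rough sketch of the comparison (MathUtil is a made-up name): a C# class
containing only static members plays the module role; the visible difference
from VB's Module is that every call must be qualified with the class name.

using System;

// 'static class' is C# 2.0 syntax; in earlier C# you would seal the class
// and give it a private constructor to get the same effect.
static class MathUtil
{
    public static double Square(double x) { return x * x; }
}

class Program
{
    static void Main()
    {
        // Qualified invocation; C# offers no way to expose Square as a bare
        // global function the way a VB.NET Module does.
        Console.WriteLine(MathUtil.Square(3.0));
    }
}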
Invocation syntax has to do with a language specification rather than the
compiler. VB is not what I am discussing either. My argument is not clear
and actually splits between two points. I will be fixing that. There are
separate issues with the lack of a module concept and with the inability to
export functions into the global namespace.
...inheritance does not encapsulate code ... delegated to member fields.
Every ... the functions.
That is a completely false statement as worded. C# only allows you to
inherit from one class (no multiple inheritance is even possible).
Everything else *is* interface implementation. It is perfectly legitimate to
inherit an interface (or many) and not inherit any class at all. However,
I'll grant you that the member delegation is more limited than some other
languages, including VB.NET, which allows more explicit control over which
members map to which interfaces. Again, this argument as stated appears to
be extremely misinformed, lessening the validity of the critique.
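A short illustration of the rule as Rob states it (type names are
placeholders): at most one base class, any number of interfaces, and
implementing interfaces with no base class at all is perfectly legal.

interface IReader { string Read(); }
interface IWriter { void Write(string s); }

class StreamBase { /* shared behaviour */ }

// One base class plus several interfaces:
class FileLikeStream : StreamBase, IReader, IWriter
{
    public string Read() { return ""; }
    public void Write(string s) { }
}

// Interfaces only, no base class:
class MemoryBuffer : IReader, IWriter
{
    public string Read() { return ""; }
    public void Write(string s) { }
}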
Yes that statement is false.
C# Missing Template Template Parameters
In C#, a generic type parameter cannot itself be a generic, although
constructed types may be used as generics.
Remember that generics in C# aren't macro-type replacements as is the case
in C++. There are resolution and type safety issues with regards to allowing
this. There was a rather good explanation in one of the blogs (probably
reachable via C# Corner).
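To make the restriction concrete, a hedged sketch (Wrapper is an invented
name): a type parameter must be a complete type, so a constructed generic
like List<int> works as an argument, but an unapplied generic does not.

using System.Collections.Generic;

class Wrapper<T>                    // T must be a complete (constructed) type
{
    public T Value;
}

class Demo
{
    // Legal: List<int> is a closed, constructed generic type.
    Wrapper<List<int>> ok = new Wrapper<List<int>>();

    // Not expressible in C#: a declaration such as
    //     class Outer<C<T>> { ... }
    // or a type argument that is just 'List' with no argument of its own -
    // which is exactly the missing "template template parameter".
}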
Late Added Generics
By adding generics so late, among other things there is the pointless redundancy of
having two kinds of arrays: int a[]; and array<int> a;
Not a good argument at all. First of all, few languages support generics to
begin with... and some roughly equated to C# have had generics on the
drawing board for years and haven't even gotten there yet. Considering the
priorities and the grand scope of .NET in its entirety (not just C#), the
progression has actually been very fast, especially if you take all the
functionality into account. Note that the language designers actually had a
draft of how generics would work within the scope of the CLR/CLI during
the .NET Beta. But where the argument really falls through is
that having two arrays is *not* pointlessly redundant. Why go through the
generics mechanism if you don't have to - especially for types which the
compiler already knows how to build type-safe arrays for?
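A small sketch of the coexistence Rob defends (array<int> is not real C#
syntax; List<int> is the nearest generic equivalent): the built-in T[] stays
type-safe without touching the generics machinery, while the generic
collection adds growth and other behaviour on top of it.

using System.Collections.Generic;

class ArrayDemo
{
    static void Main()
    {
        int[] builtIn = new int[] { 1, 2, 3 };   // fixed length, checked by the compiler directly
        List<int> generic = new List<int>();     // resizable, built on the generics mechanism
        generic.Add(4);

        int x = builtIn[0] + generic[0];         // both are fully type-safe
    }
}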
Special Primitives
Special rules exist for so-called "primitive" types. This significantly reduces the
expressiveness and flexibility of the language. The programmer cannot add new
types that are fully at par with existing primitives.
What on earth are you trying to do with user-defined types that can only be
done with primitives?
Remember that the compiler has no earthly idea what to do with types nobody
has written yet...
You're going to need one hell of a good example to justify this one (in
fact, the whole page could do with a LOT more concrete examples to explain
what you're trying to get at). And again, C# isn't the only language to do
this. Most languages are like this... though again, I'm betting you don't
like any of them either.
Just because "other languages do this" doesn't make a feature a good idea. I
can still take up issue with the feature.
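To ground the disagreement, a hedged sketch (Meters is invented): a
user-defined struct can overload operators and conversions like a primitive,
but it still gets no literal form and cannot be a const, which is the kind
of gap the critique is pointing at.

struct Meters
{
    private readonly double value;
    public Meters(double value) { this.value = value; }

    public static Meters operator +(Meters a, Meters b)
    {
        return new Meters(a.value + b.value);
    }

    public static implicit operator double(Meters m) { return m.value; }
}

class Demo
{
    // const double d = 1.5;     // fine: built-in primitive
    // const Meters m = ...;     // not possible: no literal form for Meters
    static void Main()
    {
        Meters total = new Meters(2.0) + new Meters(3.5);
        double asDouble = total;  // user-defined implicit conversion
    }
}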
I'm sure you find it unreadable, but I'm betting C# developers find it
perfectly readable ;-)
This is like COBOL programmers saying they think Pascal files are
unreadable, or vice-versa.
I'm not sure your cause-effect statement holds much water either.
And if it's truly that unreadable, what on earth is your suggestion? C# is
about as readable as any C-derived language will ever get. If you don't like
any C languages, why are you even critiquing C# specifically?
Not quite the same thing as COBOL vs Pascal readability. C# is virtually the
same (syntactically) as C++ but without any kind of header files. This saves
typing, but requires auto-generation of headers through one form or another.
Is not having header files an improvement of any kind? One may argue that C#
saves some typing and that auto-generation is sufficient, but there is
clearly no real advantage of not having header files.
...encapsulation principle of objects.
Oh Dear Lord! The humanity!!! In all seriousness, if C# didn't allow it,
someone just like you would come in and complain that it didn't. All the
encapsulation overkill aside, there are legitimate reasons for exposing
public fields - primary of which is the ability to expose a field whose
value is unlimited in the scope of its range, without taking the performance
hit of going through an accessor/mutator method. And before you cry
"HERASY!", remember, best practices are there to stop people who can't think
from doing dumb things - not there to prevent people from thinking.
Best practices are usually ignored most by the people they are intended for.
Nonetheless, properties make public fields redundant.
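For the record, a minimal sketch of the two forms under discussion (Counter
is a placeholder): the call sites look identical, but only the property
routes through accessor code.

class Counter
{
    public int RawCount;          // public field: direct access, no call overhead

    private int count;
    public int Count              // property: compiles to get_Count/set_Count methods
    {
        get { return count; }
        set { count = value; }
    }
}

class Usage
{
    static void Main()
    {
        Counter c = new Counter();
        c.RawCount = 5;           // plain field store
        c.Count = 5;              // actually a call to set_Count(5)
    }
}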
...leads to a function which can [throw an exception].
If a tree falls in the woods, and nobody is around to hear it, does it make
a sound? If you use proper exception handling, does it matter?
It doesn't really matter that much I suppose if one consistently assumes
that x.m = y is actually a function call.
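A minimal sketch of that assumption in action (Account is invented): because
x.m = y runs a setter, it can throw, and ordinary exception handling covers
it like any other call.

using System;

class Account
{
    private decimal balance;
    public decimal Balance
    {
        get { return balance; }
        set
        {
            if (value < 0) throw new ArgumentOutOfRangeException("value");
            balance = value;
        }
    }
}

class Demo
{
    static void Main()
    {
        Account a = new Account();
        try
        {
            a.Balance = -10;      // looks like an assignment, runs set_Balance
        }
        catch (ArgumentOutOfRangeException)
        {
            Console.WriteLine("Rejected by the setter.");
        }
    }
}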
As for the overall "critique", you need to cut out the personal opinion
issues, and stick to good empirical data analysis and facts. The problem
with programmers and techies in general is that they fall into an almost
religious stupor about defending or bashing technology, facts and scientific
analysis be damned. Sadly, that just makes hordes of fodder for marketers.
While I could see some potential in a few of your points, eliminating the
personal preference issues for the sake of a truly unbiased and worthwhile
critique would leave you with a rather slim list. Your page stopped being a
critique and started becoming a soapbox RANT at your last paragraph, so I
won't even justify that part's pathetic existence by commenting on it. So in
essence, your fear of being called a flame-baiting troll is not only
possible, but extremely likely. When you add to that the fact that you
posted it in this group, one can only wonder why else you would even bring
it up? If you "honestly want to improve on the critique", then consider
carefully what I said - especially about the ranting. You ran off the tech
rails and right into the blind-pundit/marketer path (take your pick). People
can argue over a true critique, but its merits will weather those arguments
unabashed and faultless when it is founded in pure fact. I can't say that
about
yours. And if you are promoting a product under these ideals, I certainly
wouldn't do business with you. If you have a bone to pick with MS, you
certainly have the right to do so, but don't disguise it as anything less.
-Rob Teixeira [MVP]
I see your point, and I will remove the crap about M$ at the end of my
article.
Overall thanks for the huge amount of work put into this letter. I will be
significantly overhauling the critique and will post when it is done. Thanks
again Rob.