Another C# critique

  • Thread starter christopher diggins

christopher diggins

Rob Teixeira said:
Ideal - if everything were in managed space. However (and I find this to be
one of Java's weaknesses), its interface to existing code (most of which
is, of course, unmanaged) is a weak point. Working with unmanaged code is sometimes better
suited to unsafe code blocks. However, it's not true that the GC is
circumvented by unsafe code. Unsafe code requires pinned variables in a
local scope. You have to do something horribly wrong to undermine the GC,
and that takes conscious effort on the part of the developer in most cases I
can think of. The crux of this issue is flexibility vs. forced robustness,
and keep that in mind, because I'll revisit it soon. In the meantime,
realize that unsafe code requires a special compiler switch. If you really
don't want your team working on large projects to use it, then don't allow
that switch to be used. Problem solved.
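Rob's point about pinning can be sketched in a few lines. This is a minimal, hypothetical example (the method name and values are invented), compiled with the /unsafe switch he mentions:

```csharp
using System;

class PinningDemo
{
    // Reading a managed array through a raw pointer requires pinning it
    // with 'fixed'; the GC cannot move the buffer while the pointer is live.
    public static unsafe int SumFirstTwo(int[] numbers)
    {
        // pins 'numbers' for the scope of this block; when the block
        // ends, the array is unpinned and the GC may relocate it again
        fixed (int* p = numbers)
        {
            return p[0] + p[1]; // raw pointer access, bounds unchecked
        }
    }

    static void Main()
    {
        Console.WriteLine(SumFirstTwo(new int[] { 3, 4, 5 })); // prints 7
    }
}
```

The GC keeps running normally; only the pinned array is temporarily immovable, which is why unsafe code does not "circumvent" collection.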

You make good points. I am working on an improved and more educated argument
with regards to unsafe contexts.
It may seem that way to you, but that may be due to your familiarity with
large branches of visible code doing all the work, and this is a different
paradigm. Many languages now use declarative elements, and those that didn't
are recently seeing their adoption. However, the obfuscation part hardly
seems like a justifiable case for your argument against attributes, because
if we were to replace "attributes" with a compiler or library function, you
would essentially end up with the exact same result - a small tag of code
whose explicit details you are not privy to. In that respect, any language
other than macro-free assembly, whether it has declarative elements or
not, would be guilty of the same crime.

Attributes allow modification of aspects of the language such as
parameter passing. I argue that this obfuscates the software, but it
definitely increases the flexibility and expressiveness of the language. I
also need to mention that I don't have a problem with fixed or built-in
attributes, but rather with custom attributes. I will try to improve this
section of the critique.
garbage
collected and non-garbage

I see several problems with this argument. First of all is the inconsistency
of your arguments. Remember the first point I made above that I said I would
revisit? You wanted the language to be less flexible in favor of enforcing
robustness (which you are happy to point out elsewhere in your dissertation
as well). Here, you are completely reversing your position. What MS has
done is evaluate the top reasons in existing languages (primarily C/C++)
for bugs that both slip notice and cause tremendous problems. As you might
already have guessed, manual memory management is at the top of the list. GC
is the only way (that I know of, or that has been proven to date) to solve
this issue. This is a major point of robustness, and I'm unclear as to why
you would be in favor of robustness above, but not here. Also, there is no
way to "turn off" GC in C# (otherwise, you wouldn't be forced to use it,
right? - seems like a contradictory statement there), and switching to
unsafe mode will not turn off the GC. I have a feeling you are also
confusing the issues between managed and unmanaged code here (which is
completely different from safe vs. unsafe code). As Inigo said to Vezzini,
"I do not think this means what you think it means."

You are oversimplifying by treating the critique of a language as a simple
case of robustness versus flexibility, where we must choose one over the
other all the time for every language feature. I was definitely confused
with regards to unsafe contexts and unmanaged code. I will be revising this.
at
unpredicatable moments

That is the nature of GC. The thing to remember is: you shouldn't use
destructors for deterministic management (e.g. clean-up) of resources (that's
what Dispose is for). Destructors are there as a safety net in case a
consumer doesn't explicitly call Dispose. The Dispose pattern in my opinion
is valid, because the only way to get deterministic finalization is if the
consumer knows he needs something to be managed "now" - so he MUST call
something... whether that something is a memory deallocation keyword or a
Dispose function is rather irrelevant. Bottom line: if you want to know that
clean-up code is being called "now", use Dispose. That is deterministic. Of
course, C# isn't the only language that uses a GC and deals with the same
issue, but I'm betting you don't like any of them either ;-)
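The split Rob describes - deterministic Dispose, finalizer as a safety net - looks roughly like this minimal sketch (class and member names are invented):

```csharp
using System;

// Dispose gives deterministic clean-up "now"; the finalizer only runs
// at some unpredictable time, as a safety net if Dispose was forgotten.
class ResourceHolder : IDisposable
{
    private bool disposed = false;

    public void Dispose()
    {
        if (!disposed)
        {
            // release the underlying resource deterministically here
            disposed = true;
            GC.SuppressFinalize(this); // safety net no longer needed
        }
    }

    ~ResourceHolder()
    {
        Dispose(); // only reached if the consumer never called Dispose
    }

    public bool IsDisposed { get { return disposed; } }
}

class Demo
{
    static void Main()
    {
        ResourceHolder r = new ResourceHolder();
        using (r)
        {
            // work with r
        } // 'using' guarantees Dispose runs here, deterministically
        Console.WriteLine(r.IsDisposed); // prints True
    }
}
```

The `using` statement is just syntactic sugar for a try/finally that calls Dispose, which is what makes the clean-up point predictable.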

Not all GC languages support non-deterministic destructors. This was a
choice of the language designers. Yes, we can avoid using them, but will Bob
in the next cubicle remember to?
implementation.

I'm not sure your example fits the problem in this case. For starters, if
that's the only problem with objects being limited to the heap, then hell...
what's the problem? :) Your example only illustrates that you shouldn't
poorly design class libraries, and the importance of not carrying
implementation details across boundaries (which is why we deal in interfaces
at that point). Secondly, I'd like to see an example of how this impacts the
consumer of said class library. As far as the user knows, he is passing a
parameter of a certain type to a member. If that parameter changes from
reference type to value type, does that really impact him? His code is going
to look more or less exactly the same from his point of view for that member
invocation.
But let's look at the flip side of the coin. Why would you want objects on the
stack to begin with? The primary reason for C/C++ programmers is that the
speed of heap allocation sucks in comparison to stack allocation. That
limitation doesn't exist in .NET. Heap allocation in .NET requires only the
increment of the allocation pointer, making it almost exactly the same as
stack allocation in terms of performance. To make reference types available
on the stack, you'd have to change the GC infrastructure, which would
effectively make reference type allocation on the stack SLOWER. In addition,
value types are rather small in comparison with reference types, especially
if you consider strings, which make up the majority of data in a program. Do
you really want a programmer to accidentally put that on the stack? IMHO,
this isn't a sticking point at all for or against C#.

Not a very strong point that I made, I agree.
a
programmer has to be they
are passing things by

Actually, types don't reside on the stack or heap; variables do. Secondly,
types don't decide where the allocation goes; the runtime does.
Again, I'm curious as to what sorts of bugs this would cause.

I will try and update this as well.
a
problem of needing to simple
assignment. Boxing used
by reference.

Quite the contrary, IMO. Boxing/unboxing allows the programmer to need to
know a lot LESS about whether a type is a reference type or not in order to
use it in the context of a reference type (for example, the common
assignment of value types to a more general reference type reference). If it
were not for boxing/unboxing, the programmer would have to be far MORE aware
of the details of the type implementations and code such scenarios
SEPARATELY.
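Rob's point can be illustrated with a short sketch (names are invented); a boxed value round-trips through `object` without the caller writing any type-specific code:

```csharp
using System;

class BoxingDemo
{
    public static int RoundTrip(int i)
    {
        object boxed = i;   // boxing: the int is copied into a heap object
        return (int)boxed;  // unboxing: the value is copied back out
    }

    static void Main()
    {
        Console.WriteLine(RoundTrip(42)); // prints 42

        // Without boxing, any API that takes 'object' (pre-generics
        // collections, String.Format, etc.) would need a separate code
        // path for every value type.
        Console.WriteLine(String.Format("{0}", 42)); // 42 is boxed implicitly
    }
}
```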

Hmm... I will need to look into that argument more deeply.
objects.

I understand what you want to do here - create a const instance of a class.
Fair enough, but the semantics could get ugly. First of all, this wouldn't
truly be CONST unless the compiler had a way to create literal instance data
embedded in the program (as it can easily do for numbers and strings, because
it's programmed to know how to do that). You can argue that serialization
might do the trick, but then you'd be limited to only serializable objects
being const. If the compiler is unable to do this, it would have to write
code behind the scenes to create an instance at runtime, which isn't truly
const, and you'd be in no better position than using the "final" modifier in
Java, which only prevents you from modifying the reference. But assuming you
were able to freeze-dry a literal instance (and have a mechanism for the
programmer to specify the data for it in the first place), you could then
run into issues of possibly bypassing class behaviors associated with
constructors - and if you didn't, would it be truly const? This is one of
the reasons that value types get literals and reference types don't - value
types have no constructors.

I don't understand what you mean by "truly const". How could you bypass
class behaviours?
Also, the lack of literal declarations doesn't mean you can't have immutable
objects. You can certainly program immutable objects in C#. Again, you might
want to check your verbiage here. One thing doesn't necessarily mean the
other. And for the sake of a good critique, this can show a lack of
understanding of the principles you are critiquing.
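A hand-rolled immutable type in C# looks roughly like this sketch (the class and member names are invented): readonly fields set once in the constructor, getter-only properties, and no mutators.

```csharp
public sealed class Point
{
    private readonly int x;
    private readonly int y;

    public Point(int x, int y)
    {
        this.x = x;
        this.y = y;
    }

    public int X { get { return x; } }
    public int Y { get { return y; } }

    // "modification" produces a new instance; the original never changes
    public Point WithX(int newX)
    {
        return new Point(newX, y);
    }
}
```

Nothing in the language forces this discipline, which is presumably the critique's complaint, but nothing prevents it either.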

Yes I realize the verbiage is weak there.
a
module or a program. It
is not directly related to objects

Not sure I agree completely with you here. First of all, classes and modules
are both holding containers for members (whether data or functions). The
difference is that modules cannot have distinct instances, and classes can
(or not, if you're dealing with static members) - in effect being perfectly
capable of performing both roles. Your gripe seems to be more about the
invocation syntax, which is purely compiler syntactic sugar and a few rules
about name resolution. For example, VB.NET allows you to create a "Module"
(thusly named), which (for the reasons I stated earlier) is really just a
static class that contains only static members (and you don't have to
specify that, and they can't be changed to instance members either). The VB
compiler also allows you to invoke said members without qualifying them with
the module's (class's) name - in essence letting you "export" what appears to be
a global function to the consumer. That could certainly be in C# as well,
but honestly, it's a matter of personal preference at that point. And there
are a lot of people that prefer it the way it is rather than the way VB.NET
does it. For starters, you are introducing a level of ambiguity and issues
revolving around name resolution that otherwise wouldn't be there... and for
a guy who doesn't like ambiguity, this argument seems rather contradictory
with your previous line of thinking.
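The "module" role Rob describes can be approximated in C# with a class holding only static members; a minimal sketch with invented names:

```csharp
using System;

// A class playing the "module" role: only static members, and no way to
// instantiate it. Call sites must still qualify the name (MathUtil.Square),
// which is the syntactic difference from a true module with global exports.
public sealed class MathUtil
{
    private MathUtil() { } // no instances

    public static int Square(int n)
    {
        return n * n;
    }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(MathUtil.Square(5)); // prints 25
    }
}
```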

Invocation syntax has to do with a language specification rather than the
compiler. VB is not what I am discussing neither. My argument is not clear
and actually splits between two points. I will be fixing that. There are
separate issues with the lack of a module concept and with the inability to
export functions into the global namespace.
inheritance
does not encapsulate code
delegated to member fields. Every
the functions.

That is a completely false statement as worded. C# only allows you to
inherit from one class (multiple inheritance isn't even possible).
Everything else *is* interface implementation. It is perfectly legitimate to
inherit an interface (or many) and not inherit any class at all. However,
I'll grant you that the member delegation is more limited than in some other
languages, including VB.NET, which allows more explicit control over which
members map to which interfaces. Again, this argument as stated appears to
be extremely misinformed, lessening the validity of the critique.
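The single-class, many-interfaces rule Rob describes, including explicit mapping of members to interfaces, can be sketched like this (interface and class names are invented):

```csharp
using System;

interface ILogger { void Log(string message); }
interface IAuditor { void Log(string message); }

// At most one base class, but any number of interfaces; explicit
// implementation maps each interface to its own method body.
class Service : ILogger, IAuditor
{
    void ILogger.Log(string message)
    {
        Console.WriteLine("log: " + message);
    }

    void IAuditor.Log(string message)
    {
        Console.WriteLine("audit: " + message);
    }
}

class Demo
{
    static void Main()
    {
        Service s = new Service();
        ((ILogger)s).Log("hello");  // prints "log: hello"
        ((IAuditor)s).Log("hello"); // prints "audit: hello"
    }
}
```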

Yes that statement is false.
C# Missing Template Template Parameters
In C#, a generic type parameter cannot itself be a generic, although
constructed types may be used as generics.

Remember that generics in C# aren't macro-type replacements as is the case
in C++. There are resolution and type safety issues with regards to allowing
this. There was a rather good explanation in one of the blogs (probably
reachable via C# Corner).
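The distinction being discussed can be shown in a short sketch: constructed generic types are allowed as type arguments, but an unapplied generic cannot itself be a type parameter (names invented for illustration).

```csharp
using System.Collections.Generic;

class TemplateTemplateDemo
{
    // Constructed generic types are fine as type arguments:
    public static List<List<int>> Nested = new List<List<int>>();

    // But an unapplied generic cannot itself be a type parameter; there
    // is no C# counterpart to C++'s template-template parameters, i.e.
    // nothing like:
    //
    //   class Wrapper<C> { C<int> items; }   // does not compile
}
```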
Late Added Generics
By adding generics so late, among other things there is the pointless redundancy of
having two kinds of arrays: the int[] a; and array<int> a; forms.

Not a good argument at all. First of all, few languages support generics to
begin with... and some roughly equated to C# have had generics on the
drawing board for years and haven't even gotten there yet. Considering the
priorities and the grand scope of .NET in its entirety (not just C#), the
progression has actually been very fast, especially if you take all the
functionality into account. Note that the language designers actually had a
draft of how generics would work within the scope of the CLR/CLI during
the .NET Beta. But where the argument really falls through is
that having two arrays is *not* pointlessly redundant. Why go through the
generics mechanism if you don't have to - especially for types for which the
compiler already knows how to build type-safe arrays?
Special Primitives
Special rules exist for so-called "primitive" types. This significantly reduces the
expressiveness and flexibility of the language. The programmer cannot add
new types that are fully on par with existing primitives.

What on earth are you trying to do with user-defined types that can only be
done with primitives?
Remember that the compiler has no earthly idea what to do with types nobody
has written yet...
You're going to need one hell of a good example to justify this one (in
fact, the whole page could do with a LOT more concrete examples to explain
what you're trying to get at). And again, C# isn't the only language to do
this. Most languages are like this... though again, I'm betting you don't
like any of them either.

Just because "other languages do this" doesn't make a feature a good idea. I
can still take up issue with the feature.
I'm sure you find it unreadable, but I'm betting C# developers find it
perfectly readable ;-)
This is like COBOL programmers saying they think Pascal files are
unreadable, or vice-versa.
I'm not sure your cause-effect statement holds much water either.
And if it's truly that unreadable, what on earth is your suggestion? C# is
about as readable as any C-derived language will ever get. If you don't like
any C languages, why are you even critiquing C# specifically?

Not quite the same thing as COBOL vs Pascal readability. C# is virtually the
same (syntactically) as C++ but without any kind of header files. This saves
typing, but requires auto-generation of headers in one form or another.
Is not having header files an improvement of any kind? One may argue that C#
saves some typing and that auto-generation is sufficient, but there is
clearly no real advantage to not having header files.
encapsulation principle of objects.

Oh Dear Lord! The humanity!!! In all seriousness, if C# didn't allow it,
someone just like you would come in and complain that it didn't. All the
encapsulation overkill aside, there are legitimate reasons for exposing
public fields - primary of which is the ability to expose a field whose
value is unlimited in the scope of its range, without taking the performance
hit of going through an accessor/mutator method. And before you cry
"HERESY!", remember, best practices are there to stop people who can't think
from doing dumb things - not there to prevent people from thinking.

Best practices are usually ignored most by the people they are intended for.
Nonetheless, properties make public fields redundant.
leads
to a function which can

If a tree falls in the woods and nobody is around to hear it, does it make
a sound? If you use proper exception handling, does it matter?

It doesn't really matter that much I suppose if one consistently assumes
that x.m = y is actually a function call.
As for the overall "critique", you need to cut out the personal opinion
issues and stick to good empirical data analysis and facts. The problem
with programmers and techies in general is that they fall into an almost
religious stupor about defending or bashing technology, facts and scientific
analysis be damned. Sadly, that just makes hordes of fodder for marketers.
While I could see some potential in a few of your points, eliminating the
personal preference issues for the sake of a truly unbiased and worthwhile
critique would leave you with a rather slim list. Your page stopped being a
critique and started becoming a soapbox RANT at your last paragraph, so I
won't even justify that part's pathetic existence by commenting on it. So in
essence, your fear of being called a flame-baiting troll is not only
possible, but extremely likely. When you add to that the fact that you
posted it in this group, one can only wonder why else you would even bring
it up. If you "honestly want to improve on the critique", then consider
carefully what I said - especially about the ranting. You ran off the tech
rails and right into the blind-pundit/marketer path (take your pick). People
can argue over a true critique, but its merits will weather those arguments
unabashed and faultless when it is founded in pure fact. I can't say that
about yours. And if you are promoting a product under these ideals, I certainly
wouldn't do business with you. If you have a bone to pick with MS, you
certainly have the right to do so, but don't disguise it as anything less.

-Rob Teixeira [MVP]

I see your point, and I will remove the crap about M$ at the end of my
article.

Overall thanks for the huge amount of work put into this letter. I will be
significantly overhauling the critique and will post when it is done. Thanks
again Rob.
 

Elder Hyde

It would be quite easy for you to come up with a Java critique as
well - they're quite similar now. Although the current version of Java
doesn't have some of the features you dislike, it will have them soon
(J2SE 1.5).

In fact, C# is "better" if you compare it against Java in some of your
points in that article. You have to use structs if you want to create
objects that exist on the stack? You can't even have structs on the
stack in Java! Everything is a class, too bad.

BTW, since you didn't criticize Java... have you ever wondered why they
added the C# features you didn't like to Java (metadata/attribute,
boxing-unboxing)? That, by itself, should tell you that a lot of people
see value in having those features.

I do agree with you on mutability, source file layout, and the property
stuff though.
 

Jon Skeet [C# MVP]

Don't bet on that "definitely not!".
Hackers are very resourceful people, you know.

Okay, let's just say that if the CLR doesn't have any bugs, you won't
get a buffer overflow. If the CLR has bugs in this respect, however,
nothing's guaranteed anyway.
I very rarely need these things like that so far.
I avoid custom-made structs; I prefer to use classes instead, far more
flexible in use.

Yes, I rarely create my own value types.
Ints, DateTime and other generic structs are logical to use, but so far even
in C++ I do not need custom structs unless using it as a memory mapper.
Everyone programs in a different way. ;-)

All I'm trying to point out is that everyone *does* use value types,
even if they don't create their own ones.
 

Andreas Huber

In my opinion, constructors and destructors should only perform
actions that are guaranteed not to crash.

You must be disappointed by the .NET framework then. There are many types,
e.g. FileStream, where the constructors can throw and the finalizers
perform actions that can fail. Why exactly do you think a constructor should
not propagate exceptions? Why is it better to first construct a type and
then call Open/Init/Whatever?

Regards,

Andreas
 

Andreas Huber

christopher said:
The argument that all features that increase flexibility are
necessarily equally good falsely assumes that all potential for
programmer error is also equivalent.

I did not suggest that, did I? I also think that some features are hard to
abuse while others blow up at least once in just about everybody's face.
However, often the most dangerous features are also the most useful (e.g.
some programs would be extremely hard to write without multi-threading
support).
Some language features lead to relatively insignificant bugs, while others
lead to potentially disastrous consequences. For more on the differences
between classes of programmer error, see my article on the scale of
magnitude for Programmer Error in Programming Languages at
http://www.heron-language.com/scale-of-errors.html.

On your scale some multi-threading bugs would classify as second-worst. Are
you suggesting that a "proper" language should not support MT because it is
so dangerous?

Regards,

Andreas
 

Rob Teixeira [MVP]

C# (or more correctly, .NET CLR) doesn't allow two types of buffer overrun
issues, which to my knowledge are THE two exploits. I've yet to find a third
scenario that doesn't fit into one of the two.

Buffer overflows exist because "extra" data is allocated on the stack, and
overwrites the return execution address of the stack frame, causing the
function to return to an incorrect address that either crashes the program,
or more malevolently, goes to execute malicious code, or system code using
inappropriate thread security permissions.

However, .NET stack frames are FIXED in size. Only variables of known type
(and therefore size) are allowed on the stack. You cannot allocate ambiguous
or anonymous data chunks on the stack. Also, throughout the execution of the
function, the stack frame size cannot change. All variable slots are
pre-allocated and never changed.

The second kind of buffer overrun is induced by index parameters to buffers
(or more generically, arrays of "stuff") that aren't correctly checked for
exceeding array bounds. This prompts the code to overwrite data in some
unspecified segment of memory (if the buffer is on the stack, you get the
idea). So, basically, Hacker Joe sets your index param to some obscene high
or negative number, and your code overwrites all sorts of data it shouldn't.
.NET protects against this in two ways: first, all array access is
bounds-checked by default, but you can turn this off. Secondly, arrays
aren't allowed on the stack. Therefore, even if you were able to pass a bad
index off AND array bounds checking was turned off, AND you were using
unsafe code to access the array, you would still be completely unable to
overwrite the return execution pointer.

Given that, I highly doubt someone can come up with a good example of how to
cause a buffer overflow using managed code.

-Rob Teixeira [MVP]
 

Rob Teixeira [MVP]

inline:

christopher diggins said:
Attributes allow modification of aspects of the language such as
parameter passing. I argue that this obfuscates the software, but it
definitely increases the flexibility and expressiveness of the language. I
also need to mention that I don't have a problem with fixed or built-in
attributes, but rather with custom attributes. I will try to improve this
section of the critique.

Does it help any that custom attributes can't change core behaviours of the
compiler? In other words, if I add a custom attribute, I need to provide a
library that "does" something using reflection against those attributes. The
core runtime/compiler have no clue and ignore the custom attributes. As the
library provider, I document the custom attribute usage. I don't see that as
being much different from providing library functionality without
attributes, except in the fact that it's declarative vs. imperative code.
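Rob's point - a custom attribute is inert metadata until some library reads it back via reflection - can be sketched like this (all names are invented for illustration):

```csharp
using System;

// The compiler records this attribute but attaches no behaviour to it.
[AttributeUsage(AttributeTargets.Class)]
class CategoryAttribute : Attribute
{
    private readonly string name;
    public CategoryAttribute(string name) { this.name = name; }
    public string Name { get { return name; } }
}

[Category("demo")]
class Widget { }

class Demo
{
    static void Main()
    {
        // Only this reflective lookup (typically inside a library that
        // documents the attribute) gives the tag any meaning at all.
        object[] attrs = typeof(Widget).GetCustomAttributes(
            typeof(CategoryAttribute), false);
        CategoryAttribute cat = (CategoryAttribute)attrs[0];
        Console.WriteLine(cat.Name); // prints "demo"
    }
}
```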
You are oversimplifying by treating the critique of a language as a simple
case of robustness versus flexibility, where we must choose one over the
other all the time for every language feature. I was definitely confused
with regards to unsafe contexts and unmanaged code. I will be revising this.

Not necessarily. I'm saying that you value robustness, yet the GC, which is
arguably one of the greatest tools of robustness, is something you want to
avoid. That puzzles me. Manual allocation/deallocation of variables has got
to be one of, if not THE worst culprit of bugs.
Not all GC languages support non-deterministic destructors. This was a
choice of the language designers. Yes, we can avoid using them, but will Bob
in the next cubicle remember to?

I agree with this statement. In fact, the .NET developers published two
lengthy articles in 2001 covering the GC (very insightful, if you haven't
read them). They actually did document how one could have both a GC and
deterministic finalizers. The problem is the complexity and performance
overhead added to the GC infrastructure for arguably little return.

The second part of what I'm saying is that while the "finalizer" itself is
not deterministic in its execution, the Dispose method is - so you are not
losing a deterministic clean-up method. A finalizer is basically there as a
safety net in case Bob from the next cubicle forgets to call Dispose. But
let's pretend for a moment that Bob is working in a language with manual
deallocation... now he forgets to call the deallocation keyword or function.
Bob's program is now in deep S#!t, and there is no safety net of the GC
calling a finalizer to help him lessen the impact of his mistake. His
process will leak resources until it is either shut down or crashes.
I don't understand what you mean by "truly const". How could you bypass
class behaviours?

To make a const statement, you have to do several things. First, you have to
provide the programmer with some sort of syntax that describes a literal
"value" of the object instance. This can be quite complicated in and of
itself. How are you going to deal with a single object that potentially has
an extremely deep object graph (and potentially circular references) as an
example? Secondly, you have to "freeze-dry" this literal value data into the
compiled program. If you use serialization to do this, then you can only
have literal const statements using serializable objects. Not a good
limitation IMO. To remove that limitation, you would have to find some other
mechanism of storing the literal data, but if that bypasses class
constructors, you are potentially bypassing important class behaviors built
into the constructors. If you don't bypass the constructor, then you are
creating a new instance of the class at runtime through normal means, which
makes the reference only "sort-of-immutable". This is one reason primitives
can have literal const values - they don't have constructors, and can be
reconstituted easily from compiled data, and thus don't need to worry about
these issues.
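The "sort-of-immutable" situation Rob describes is exactly what a readonly reference gives you; a minimal sketch (names are invented):

```csharp
using System;
using System.Text;

class ReadonlyDemo
{
    // 'readonly' (like Java's 'final') fixes only the reference, not the
    // object behind it - the "sort-of-immutable" case.
    static readonly StringBuilder Greeting = new StringBuilder("hello");

    static void Main()
    {
        // Greeting = new StringBuilder("hi"); // compile error: reference is fixed
        Greeting.Append(", world");            // but the instance is still mutable
        Console.WriteLine(Greeting);           // prints "hello, world"
    }
}
```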
Invocation syntax has to do with a language specification rather than the
compiler. VB is not what I am discussing either. My argument is not clear
and actually splits between two points. I will be fixing that. There are
separate issues with the lack of a module concept and with the inability to
export functions into the global namespace.

I cited VB.NET because it too uses the .NET CLR/CLI and compiles to the
same set of IL instructions. This proves that C# could indeed have the
features as you described them. However, my main point is that there are
people, most developers and users of C# in particular it seems, that
*prefer* not to have things done that way. To me, this is entirely a matter of
personal opinion.
Just because "other languages do this" doesn't make a feature a good idea. I
can still take up issue with the feature.

I agree, but I'm still curious about what you are trying to do with UDTs
that can only be done with primitives? You still haven't provided an
example. There are very legitimate reasons why C# and other languages impose
these limits :)
Not quite the same thing as COBOL vs Pascal readability. C# is virtually the
same (syntactically) as C++ but without any kind of header files. This saves
typing, but requires auto-generation of headers in one form or another.
Is not having header files an improvement of any kind? One may argue that C#
saves some typing and that auto-generation is sufficient, but there is
clearly no real advantage to not having header files.

Is there a clear advantage to *having* a header file? The C# devs and users
don't seem to think so.
The metadata is available and readable, just not in Notepad (or your text
editor of choice), but I think that's hardly a good reason to enforce the
usage of header files :)
Could be this is just another case of personal preference, which goes back
to the analogy of programmers of one language not liking how the other
language reads.
It doesn't really matter that much I suppose if one consistently assumes
that x.m = y is actually a function call.

Or if one writes code that uses structured exception handling. I hope you
aren't implying that people would write a Try block, put code in it, end the
Try block just to assign a field, then start another Try block after the
assignment :)
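The point under discussion - that x.m = y may really be a method call that can throw - can be sketched like this (class and member names are invented):

```csharp
using System;

class Account
{
    private decimal balance = 0;

    public decimal Balance
    {
        get { return balance; }
        set
        {
            // the "assignment" a.Balance = x is really a call to this
            // method, so it can throw like any other function
            if (value < 0)
                throw new ArgumentOutOfRangeException("value");
            balance = value;
        }
    }
}

class Demo
{
    static void Main()
    {
        Account a = new Account();
        try
        {
            a.Balance = -5; // looks like a field write, behaves like a call
        }
        catch (ArgumentOutOfRangeException)
        {
            Console.WriteLine("setter rejected the value");
        }
    }
}
```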
I see your point, and I will remove the crap about M$ at the end of my
article.

Overall thanks for the huge amount of work put into this letter. I will be
significantly overhauling the critique and will post when it is done. Thanks
again Rob.

You're welcome Christopher. I'm glad to see the more cooperative attitude.
Now if I could only convince the other open-sourcers to do the same, we
might actually get somewhere :)

Cheers,
-Rob
 

Rob Teixeira [MVP]

In my opinion, constructors and destructors should only perform actions that
are guaranteed not to crash.
Mainly initializing variables, allocating memory, copying,....

Then we have a second set of methods that actually open and close
files/resources/connections/...
This way, a file is closed, memory freed, connections closed, etc. outside the
destructor, and it doesn't matter when the destructor is then called.

I agree with the Destructor comments, and that's precisely what C# does (or
more correctly, prefers that programmers do).
The destructor should never raise exceptions, and should handle them
privately if they occur, but more importantly, destructors are minimal pieces of
code used as a safety net in case somebody accidentally forgets to call a
Dispose or Close method. If you don't do that, then contained resources are
never *properly* freed during the execution of your process, or worse yet -
even after your process is terminated in some cases. If you are dealing
exclusively with managed code, there is really no need to have destructors
IMO.

However, do be careful in how you word things. Initializing variables,
allocating memory, and copying *can* in fact raise exceptions.

-Rob Teixeira [MVP]
 

Jon Skeet [C# MVP]

You're welcome Christopher. I'm glad to see the more cooperative attitude.
Now if I could only convince the other open-sourcers to do the same, we
might actually get somewhere :)

Actually, open source folks tend to be very co-operative - otherwise
they don't get anywhere. That's certainly true in my experience,
anyway.
 

Guest

In my opinion, Constructors and Destructors should only perform
You must be disapointed by the .NET framework then. There are many types
like e.g. FileStream, where the constructors can throw and the finalizers
perform actions that can fail. Why exactly do you think a constructor should
not propagate exceptions? Why is it better to first construct a type and
then call Open/Init/Whatever?
Ok, with the use of exceptions constructors could do more things that might
fail.
The big problem lies in the fact that if something goes wrong, you have no
access to internal error information, since the object does not exist
because its construction failed.
You have to rely on specialized exception classes that pass on enough
information. For small classes that generate only one unique error, this is
not a problem, but classes that could generate multiple errors (e.g.
copying multiple files, or loading a big document and parsing multiple fields
that might contain errors) could be a big problem.

Since I do not like to program in a mixed style, I try to use this general
rule to create constructors that normally do not fail. But sometimes you
have exceptions of course.

But destructors should not (e.g. closing files).
The class that opens files should have at least one exposed method to
release the resources used, for example to close a file.
Some .NET components do not have this, and I had the very annoying problem
that when I opened the same file twice in a row, it sometimes failed
the second time because the file was still locked from the first time. The
destructor had not been called yet, so the file was not closed when I tried to
reopen it. It did not show up in the debug version, only in the release version.
 
G

Guest

Is there a clear advantage to *having* a header file? The C# devs and users
don't seem to think so.
The metadata is available and readable, just not in Notepad (or your text
editor of choice), but I think that's hardly a good reason to enforce the
usage of header files :)
Could be this is just another case of personal preference, which goes back
to the analogy of programmers of one language not liking how the other
language reads.
Header files suck: there are always synchronization problems with them, wrong
search paths, two locations to edit...
The Delphi way is better: you have your class declaration and then the
implementation section, but that is still two locations to edit.
The C# way is far better in use: only one location to edit, and you have an
immediate overview of what is actually inside each method.

One note: Visual Studio .NET's C# editor has the outlining function and the
#region trick to hide code, so your classes become as readable as header
files originally were, hiding the implementation details. C# may be harder
to read in Notepad, but these outlining functions in Visual Studio make
programming a lot easier.
 
G

Guest

You DON'T need to duplicate method signatures in this day and age; the reason
C++ has that is bad design in the compilers.

Function prototypes are bad and exist only to work around that bad compiler
design. Tools should work for us, not us for them.

You are just stuck in the dark ages and refuse to budge. I guess there's no
job here for you :D We need better software designers, not ones that stick
to old ways for the sake of it.
 
G

Guest

Are we in a flaming mood?????
Please make your point: why do you think that we are stuck in old ways?
Both the original writer and I wrote that we don't like header files at all.

And my message below clearly indicates that I prefer the C# way, without
function prototyping.
 
G

Guest

There is no reason for function prototypes; they were only there to work
around a bad compiler design.

And yeah, I've got my flameproof suit on.

Every day I hear "oh, I miss the C .h function prototypes in the header
files". That's the OLD way; there is NO point to them except bad compiler
design. They're not there to make it easier to hand out developer
information, because they contain private and protected members as well, so
that argument is out of the window. It's plainly and simply a bad compiler
design issue.
 
G

Guest

Yes, you are right about this, but why flame me????
I am the good guy, not the bad guy!!!!

I hate header files!
 
G

Guest

As for preprocessors, they're evil, hard to debug, and it's just not good in
any way to have mixed compiled code.

I prefer to use a const value rather than conditional compilation. At least
C# has reduced the need by limiting defines to true or false values, but I
would still prefer it not be there at all.

The lack of .lib static linking was a bad omission; thankfully it's back
in 2.0.
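A sketch of the alternative the poster prefers (the symbol and class names here are made up): a #if block removes the code from compilation entirely, while a const bool branch is always compiled and type-checked, then optimized away.

```csharp
#define VERBOSE            // C# symbols carry no value, only defined/undefined

using System;

static class Logging
{
    // Preprocessor style: the body exists only when VERBOSE is defined.
    public static void LogIfDefined(string msg)
    {
#if VERBOSE
        Console.WriteLine(msg);
#endif
    }

    // const style: the dead branch is still type-checked, then eliminated.
    private const bool Verbose = false;

    public static void LogIfConst(string msg)
    {
        if (Verbose)
            Console.WriteLine(msg);   // compiler warns: unreachable code
    }
}
```

The const version keeps disabled code visible to the compiler, which is exactly the debuggability argument made above.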
 
A

Andreas Huber

[snip]
Ok, with the use of exceptions, constructors could do more things that might
fail.
The big problem lies in the fact that if something goes wrong, you have no
access to internal error information, since the object does not exist: it
failed during construction.

That's why you can define your own exception classes. Put all error
information in the exception and there won't be a problem.
You have to rely on specialized exception classes that pass on enough
information.

Exactly. What's so bad about that?
For small classes that generate only one unique error, this is
not a problem, but classes that could generate multiple errors (e.g.
copying multiple files, or loading a big document and parsing multiple
fields that might contain errors) could be a big problem.

Why? Do it just like they did with the FileStream class. Define a new
exception for each error you want to give your clients the opportunity
to discriminate on. Expose all necessary info with properties and
you're done. I don't see any problems at all.
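A sketch of what this suggests (the class and property names are invented for illustration): the exception carries the discriminating details as properties, so a constructor that throws loses no error information.

```csharp
using System;

// The exception exposes the failure details that the half-built object
// would otherwise have kept internal.
class FieldParseException : Exception
{
    public string FieldName { get; private set; }
    public int LineNumber { get; private set; }

    public FieldParseException(string fieldName, int lineNumber)
        : base(string.Format("Cannot parse field '{0}' on line {1}",
                             fieldName, lineNumber))
    {
        FieldName = fieldName;
        LineNumber = lineNumber;
    }
}

class Document
{
    public Document(string source)
    {
        // Parsing elided; on a bad field, report exactly which one and where.
        if (!source.Contains("title="))
            throw new FieldParseException("title", 1);
    }
}
```

A caller catches FieldParseException and reads FieldName/LineNumber, rather than poking at a failed object for its error state.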
Since I do not like to program in a mixed style, I try to follow this general
rule and write constructors that normally do not fail. But sometimes there
are exceptions, of course.

What's mixed about throwing from a constructor?
But destructors should not fail! (e.g. when closing files).

I didn't say that, did I? I just said that finalizers sometimes have
to call functions that can fail (e.g. FileStream). If the function
fails the finalizer simply swallows the error. You claimed that
destructors should not call functions that can fail.
A class that opens files should expose at least one method to release the
resources it uses, for example to close a file.
Some .NET components do not have this, ...

Ah, really? Exactly which classes are you talking about?
... and I had a very annoying problem
where, when I opened the same file twice in a row, it sometimes failed the
second time because it was still locked from the first time. The destructor
had not been called yet, so the file was not closed when I tried to reopen
it. It did not show up in the debug build, only in the release build.

That's one of the reasons why you have to call Dispose() on all
objects of classes that implement IDisposable...
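The fix for the file-locking story above, sketched with FileStream: the using statement calls Dispose deterministically at the end of the block, so the handle is released before the second open, in debug and release builds alike.

```csharp
using System.IO;

class Program
{
    static void Main()
    {
        string path = Path.GetTempFileName();   // creates an empty temp file

        // Deterministic cleanup: Dispose (and thus the file close) runs at
        // the end of the block, not whenever the finalizer gets around to it.
        using (FileStream first = new FileStream(path, FileMode.Open))
        {
            // ... work with the file ...
        }   // first.Dispose() has run here; the lock is gone

        // Reopening immediately now succeeds.
        using (FileStream second = new FileStream(path, FileMode.Open))
        {
        }

        File.Delete(path);
    }
}
```

Without the using blocks, the second open could fail exactly as described: the first stream would stay open until its finalizer happened to run.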

Regards,

Andreas
 
C

christopher diggins

Thanks to the excellent response from this group, I have re-written the
critique significantly. It is much less of a soapbox now, and has fixed up
many technical flaws and inconsistencies. This new version is going to need
picking apart as well, and I would appreciate more comments.

The new version is at the same location as the old one :
http://www.heron-language.com/C-sharp-critique.html.

In regards to the common suggestion that I should write a comparison between
C# and Heron, perhaps I will in the future but I don't think it negates a
stand-alone C# critique.

Thanks to all for your time and input.
 
J

Jon Skeet [C# MVP]

christopher diggins said:
Thanks to the excellent response from this group, I have re-written the
critique significantly. It is much less of a soapbox now, and has fixed up
many technical flaws and inconsistencies. This new version is going to need
picking apart as well, and I would appreciate more comments.

The new version is at the same location as the old one :
http://www.heron-language.com/C-sharp-critique.html.

In regards to the common suggestion that I should write a comparison between
C# and Heron, perhaps I will in the future but I don't think it negates a
stand-alone C# critique.

Thanks to all for your time and input.

Two things spring to mind after only a very cursory glance:

1) Garbage collection - you've got a sentence which doesn't finish
2) User-defined type values can't be used as constant expressions, but
you can certainly have readonly variables:

readonly MyValueType foo = new MyValueType(...);
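Expanding Jon's one-liner into a compilable sketch (MyValueType is a placeholder name): const is restricted to built-in primitives and strings, while readonly accepts any type and is assigned once, at runtime.

```csharp
struct MyValueType
{
    public readonly int X;
    public MyValueType(int x) { X = x; }
}

class Holder
{
    // const MyValueType Bad = new MyValueType(1);  // does not compile:
    // user-defined types cannot be declared const.

    // readonly works for any type; assignment happens once, at runtime.
    public static readonly MyValueType Foo = new MyValueType(42);
}
```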
 
A

Andreas Huber

christopher said:
Thanks to the excellent response from this group, I have re-written
the critique significantly. It is much less of a soapbox now, and has
fixed up many technical flaws and inconsistencies. This new version
is going to need picking apart as well, and I would appreciate more
comments.

The new version is at the same location as the old one :
http://www.heron-language.com/C-sharp-critique.html.

In regards to the common suggestion that I should write a comparison
between C# and Heron, perhaps I will in the future but I don't think
it negates a stand-alone C# critique.

Thanks to all for your time and input.

AFAICT, array<complex<float>> will be legal ("although constructed types may
be used as generics").

Regards,

Andreas
 
