C# coding guidelines: use "this." or not when referring to member fields/properties within the object

  • Thread starter: Dave
It all depends on the situation, but I think it is rare that the class itself
needs to access the variable through the property. There is no point turning
a high-speed variable access into a slower function call every time just in
case something might change in the future. If it does change and it becomes
appropriate to use the property, then modify the code.

That's a recipe for disaster, and doesn't make a lot of sense to boot.
If the setter is trivial, it will be inlined -- hence there won't be a
slow function call at all! And if the setter isn't trivial, you had
better make sure to call it anyway.

You should only ever directly set a backing variable in the constructor,
or if you specifically wish to bypass whatever the setter does, and in
that case this purpose should be documented.
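As a concrete sketch of that rule (the class, property, and validation below are illustrative, not from the thread): the only direct field write sits in the constructor, with a comment saying why the setter is bypassed.

```csharp
using System;

// Hypothetical example: a setter with validation. Bypassing it is
// confined to the constructor, and the bypass is documented.
public class Account
{
    private decimal balance;

    public decimal Balance
    {
        get { return balance; }
        set
        {
            if (value < 0)
                throw new ArgumentOutOfRangeException("value");
            balance = value;
        }
    }

    public Account(decimal openingBalance)
    {
        // Direct field write: deliberately skips validation because
        // the opening balance was already checked by the caller.
        balance = openingBalance;
    }
}
```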
 
| > For me, a good reason is: "there may not be any validation now, but I
| > suspect there might be in the future".
|
| It all depends on the situation but I think it is rare that the class
| itself needs to access the variable through the property. There is no
| point turning a high-speed variable access into a slower function call
| every time just in case something might change in the future. If it
| does change and it becomes appropriate to use the property then modify
| the code.
|
| Michael

But that's the whole point: there is no slower access when you access the
property. The JIT will inline the property getter, so the result is a simple
direct field access (memory access).
Consider the following sample:

using System;

class Test
{
    int i;
    int v;

    int I
    {
        get { return i; }
    }

    static void Main()
    {
        Random rnd = new Random();

        Test t = new Test();
        t.i = rnd.Next();
        t.v = rnd.Next();
        int iii = t.I;
        int ii = t.v;
        int sum = ii + iii;
        Console.WriteLine(sum);
    }
}

The JIT produces the following when run in the debugger...


' move OBJREF t into eax
00cb0115 8b45f0 mov eax,[ebp-0x10]
ss:0023:0012f470=01275ae4
' move field i value into eax !!!!
00cb0118 8b4004 mov eax,[eax+0x4]
' move eax into edx
00cb011b 8bd0 mov edx,eax
' move OBJREF into eax
00cb011d 8b45f0 mov eax,[ebp-0x10]
' move field v into eax
00cb0120 8b4008 mov eax,[eax+0x8]
' add i to v store result in eax
00cb0123 03c2 add eax,edx
' move result into esi
00cb0125 8bf0 mov esi,eax
' use value in esi as argument for Console.WriteLine..


You see, the property getter and setter (in this simple case) are nothing
else than syntactic sugar; there is no such thing at the machine-code level
as a (slower) function call.
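For anyone who wants to check the inlining claim on their own machine, here is a rough Stopwatch sketch (my construction, not Willy's test): in a release build run without a debugger attached, the two loops should time about the same, because the trivial getter is inlined to a direct field read.

```csharp
using System;
using System.Diagnostics;

class InlineCheck
{
    private int x = 42;
    public int X { get { return x; } }   // trivial getter, candidate for inlining

    static void Main()
    {
        var t = new InlineCheck();
        long sum = 0;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 100_000_000; i++) sum += t.x;   // direct field access
        sw.Stop();
        Console.WriteLine("field:    " + sw.ElapsedMilliseconds + " ms");

        sw.Restart();
        for (int i = 0; i < 100_000_000; i++) sum += t.X;   // property access
        sw.Stop();
        Console.WriteLine("property: " + sw.ElapsedMilliseconds + " ms");

        Console.WriteLine(sum);   // use the result so the loops aren't eliminated
    }
}
```

Note that running under the debugger (as in Willy's disassembly session) can change what the JIT does, so measure in a plain release run.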


Willy.
 
I prefer an underscore prefix to using "this." for fields. The problem I had
was that I'd forget to use it consistently, and I'd see a field and think it
was a local variable.

I also like the underscore to identify private fields. It lets you use the
same name for things.

private int _ndx;

public void SomeMethod(int ndx)
{
    _ndx = ndx;
}

public int Ndx
{
    get
    {
        return _ndx;
    }
    set
    {
        _ndx = value;
    }
}
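For what it's worth, later C# versions (3.0 onwards) sidestep part of this naming debate with auto-implemented properties, where the compiler generates the backing field; a sketch:

```csharp
// With an auto-implemented property there is no visible backing field,
// so the underscore-vs-"this." question only arises when you need an
// explicit field (custom logic in the accessors).
public class Indexed
{
    public int Ndx { get; set; }

    public void SomeMethod(int ndx)
    {
        Ndx = ndx;   // always goes through the (trivial) property
    }
}
```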


Good luck with your project,

Otis Mukinfus
http://www.arltex.com
http://www.tomchilders.com
 
That said, I'm not sure I agree with Jon. The property setter may
include functionality that you explicitly *don't* want to invoke from
within the class, like raising ...Changed events and things like that.

And if you *explicitly* don't want that, then that's fine. I just think
it's worth making that decision specifically, but defaulting to using
the properties unless you've got a reason not to.

It all depends upon what the setter does and what your class is trying
to do.

Agreed.
 
Jon Skeet said:
Whereas I believe it's rare that it would need any potential speed
increase from *not* accessing the variable through the property.

I try to make things quick straight off the bat (without going overboard
obviously). If there are 2 methods that are practically the same I will go
for the faster method.
What makes you think that the property access would be slower? The JIT
is quite capable of inlining property access - and if the property
setter does significant work which would prevent inlining, then chances
are you want that validation etc to take place on every set anyway. I
believe that bypassing the property for the sake of micro-optimisation
is a bad idea.

Have you confirmed this? If the property doesn't get inlined then the speed
difference, with all the pushes and pops, could be significant.
The time when it makes sense to bypass the property is when you really
*want* to bypass validation, for instance if you want to set multiple
interdependent variables, and changing any one of them alone to the new
value would be invalid. That should be the exception, not the rule.
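That interdependent-values case might look like this sketch (the Range type and its invariants are illustrative, not from the thread):

```csharp
using System;

public class Range
{
    private int min, max;

    public int Min
    {
        get { return min; }
        set { if (value > max) throw new ArgumentOutOfRangeException("value"); min = value; }
    }

    public int Max
    {
        get { return max; }
        set { if (value < min) throw new ArgumentOutOfRangeException("value"); max = value; }
    }

    public void SetRange(int newMin, int newMax)
    {
        if (newMin > newMax) throw new ArgumentException("min > max");
        // Direct field writes: going through either setter alone could
        // reject a valid combined change (e.g. moving 0..10 to 20..30,
        // where Min = 20 would fail against the old Max of 10).
        min = newMin;
        max = newMax;
    }
}
```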


Why store up work for later? It's just as *easy* to use the property as
the variable when you first write the code, and gives immediate
advantages in terms of simplicity of debugging, IMO.

I find that by writing the faster code to start with I save work optimising
later, which is going to be a significant saving over a few rare changes
(pretty much never in the last 10 years since I first wrote a property).

Michael
 
Michael C said:
I try to make things quick straight off the bat (without going overboard
obviously). If there are 2 methods that are practically the same I will go
for the faster method.

Whereas I go for the more maintainable, readable method. I prefer to
have code which clearly works and which maintenance engineers will be
able to understand.

Of course, if the difference proves itself to be *significant* and a
problem, that's a different matter - but otherwise the optimisation is
just premature in this kind of case.
Have you confirmed this? If the property doesn't get inlined then the speed
difference, with all the pushes and pops, could be significant.

Yes, I've confirmed it, several times.
I find that by writing the faster code to start with I save work optimising
later, which is going to be a significant saving over a few rare changes
(pretty much never in the last 10 years since I first wrote a property).

Premature micro-optimisation is widely regarded as a bad idea. In this
case, you aren't even actually optimising anything anyway.

See http://www.interact-sw.co.uk/iangblog/2005/11/09/profiling for a
good article about this kind of thing.
 
Whereas I go for the more maintainable, readable method. I prefer to
have code which clearly works and which maintenance engineers will be
able to understand.

Of course, if the difference proves itself to be *significant* and a
problem, that's a different matter - but otherwise the optimisation is
just premature in this kind of case.


The very best tip about optimization is: don't optimize.

If you compile from the command line, you can set a switch for whether you
want the code optimized for size (small but s--l--o--w) or for speed (BIG
and lightning fast). I deliberately take it to the extreme; your code will
be quick and not too bulky. But the point is that the compiler can do your
optimizations better, smarter and less error-prone than you can, while
you, as a programmer, can organize and document your code better than the
computer can.

So focus on what you're best at, and leave the risky business of
optimization to the computer that, again, will do a better job than you.
 
Martijn Mulder said:
The very best tip about optimization is: don't optimize.

Well, don't optimise before you know there's a problem. Optimise the
*architecture* rather than optimising the code.
If you compile from the command line, you can set a switch for whether you
want the code optimized for size (small but s--l--o--w) or for speed (BIG
and lightning fast).

That's true (or used to be) for C/C++, but it isn't true for C#.
There's just "optimise - yes or no" basically (and another switch for
how much debugging information you want, if any).
 
Jon Skeet said:
Whereas I go for the more maintainable, readable method. I prefer to
have code which clearly works and which maintenance engineers will be
able to understand.

You're assuming fast is unmaintainable, which can be the case if you let it
be but doesn't have to be.
Yes, I've confirmed it, several times.

Fair enough, it appears that is true.
Premature micro-optimisation is widely regarded as a bad idea.

Again, it can be a bad idea if you do it wrong, but *every* programmer will
choose some methods because they are faster than others. I find that because
I take these speed issues into account, my code runs faster straight off the
bat than others' at my work, and I don't have to spend time optimising it later.
In this
case, you aren't even actually optimising anything anyway.

We're going to have to agree to disagree. I've been favouring the direct
access for 10 years and have had absolutely zero problems with it. You're
not going to convince me a problem exists when one doesn't.

Michael
 
Michael C said:
You're assuming fast is unmaintainable, which can be the case if you let it
be but doesn't have to be.

I'm not assuming it is always unmaintainable - I'm saying that where
there's a choice between maintainable and fast, I will go for
maintainable until it's been proven that that piece of code is a
bottleneck.
Fair enough, it appears that is true.


Again, it can be a bad idea if you do it wrong, but *every* programmer will
choose some methods because they are faster than others. I find that because
I take these speed issues into account, my code runs faster straight off the
bat than others' at my work, and I don't have to spend time optimising it later.

My reasons for favouring readability over speed until the need for
micro-optimisation has been proven are the following:

1) Code spends much more time being maintained than written. Saving
time when someone maintains code is important.

2) Only a very small proportion of the code we write ends up being
performance critical.

3) The easier it is to understand the code, the easier it is to modify
the performance-critical sections when they've been identified.
We're going to have to agree to disagree. I've been favouring the direct
access for 10 years and have had absolutely zero problems with it. You're
not going to convince me a problem exists when one doesn't.

I believe that unless your code is as readable as it can be except in
places where performances has *proven* to be critical, that's a problem
in itself.

Your position is the kind of one which leads people to avoid using
exceptions anywhere, based on the myth that if you occasionally throw
exceptions, your performance will plummet. It's that kind of thing that
really hurts code quality.

Your position goes against encapsulation, too - there are often times
where you could possibly get an extra little bit of performance if you
didn't care about the object model purity. Your statements suggest that
you would willingly sacrifice design for performance in that case, even
when it hasn't been shown that the performance improvement will be
significant.

You should be aware that your position is a fairly widely criticised
one, too. It's not just me that thinks it's a bad idea. See
http://billharlan.com/pub/papers/A_Tirade_Against_the_Cult_of_Performance.html
for another post against it, including the famous quote of Donald
Knuth: "We should forget about small efficiencies, about 97% of the
time. Premature optimization is the root of all evil."
 
Jon Skeet said:
I'm not assuming it is always unmaintainable - I'm saying that where
there's a choice between maintainable and fast, I will go for
maintainable until it's been proven that that piece of code is a
bottleneck.

I'm sure you do some optimisations as you code. I'm not suggesting you
should always write everything to make it as fast as possible, just take it
into account on the way. A good example would be to put more commonly used
items at the top of a switch statement.
My reasons for favouring readability over speed until the need for
micro-optimisation has been proven are the following:

1) Code spends much more time being maintained than written. Saving
time when someone maintains code is important.

2) Only a very small proportion of the code we write ends up being
performance critical.

3) The easier it is to understand the code, the easier it is to modify
the performance-critical sections when they've been identified.

Don't get me wrong, I don't write everything for speed at the expense of
maintainability. In my opinion there is no maintainability lost accessing
the variable directly, so I would choose the faster method. In fact I think
it is more maintainable. How many times would you accidentally step into a
get and then a set while debugging just to increment an int?
Your position is the kind of one which leads people to avoid using
exceptions anywhere, based on the myth that if you occasionally throw
exceptions, your performance will plummet. It's that kind of thing that
really hurts code quality.

You're using a common newsgroup technique where you take something someone
says to the extreme and then conclude it is a bad idea because it is taken
to the extreme.
You should be aware that your position is a fairly widely criticised
one, too.

I agree, but that is with your exaggeration of my position. I'm sure
most good programmers take speed into account to some degree when writing
code. Certainly an experienced coder will produce a faster app first time
than a beginner.

Michael
 
Michael C said:
I'm sure you do some optimisations as you code. I'm not suggesting you
should always write everything to make it as fast as possible, just take it
into account on the way.

There are *some* things I take account of, but not very many. I use
StringBuilder when I don't know how many things I'll be appending,
because I know the results of not doing so can be catastrophic to
performance, for example. I don't try to put things in big methods to
avoid making method calls though - indeed, I positively split big
methods up into smaller ones, even if those smaller ones won't be
called by anything else. That hurts performance a tiny amount (if the
methods are still too big to be inlined) but improves readability.
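The StringBuilder point is worth spelling out, since repeated concatenation copies the whole string so far on every "+", which is quadratic overall; a minimal sketch:

```csharp
using System;
using System.Text;

class Concat
{
    // O(n^2): each += allocates a new string and copies everything so far
    public static string Slow(int n)
    {
        string s = "";
        for (int i = 0; i < n; i++) s += "x";
        return s;
    }

    // O(n): StringBuilder appends into a growable buffer
    public static string Fast(int n)
    {
        var sb = new StringBuilder();
        for (int i = 0; i < n; i++) sb.Append("x");
        return sb.ToString();
    }

    static void Main()
    {
        Console.WriteLine(Slow(1000) == Fast(1000));   // True: same result, very different cost
    }
}
```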
A good example would be to put more commonly used
items at the top of a switch statement.

I certainly don't do that. I put them in the most logical way for the
reader to see them. Have you ever come across a situation (in C# - not
a different language which may have different characteristics) where
the order of the cases has made a significant difference? Have you
benchmarked it?
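One way to answer the benchmarking question is a harness like the following (my construction, not anything from the thread). For dense integer cases the C# compiler typically emits a jump table, so reordering the cases should make no measurable difference:

```csharp
using System;
using System.Diagnostics;

class SwitchOrder
{
    public static int First(int v)
    {
        switch (v)
        {
            case 0: return 10;
            case 1: return 20;
            case 2: return 30;
            default: return 0;
        }
    }

    // Same mapping, cases listed in the opposite order
    public static int Last(int v)
    {
        switch (v)
        {
            case 2: return 30;
            case 1: return 20;
            case 0: return 10;
            default: return 0;
        }
    }

    static void Main()
    {
        long a = 0, b = 0;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 50_000_000; i++) a += First(i % 3);
        Console.WriteLine("common case first: " + sw.ElapsedMilliseconds + " ms");

        sw.Restart();
        for (int i = 0; i < 50_000_000; i++) b += Last(i % 3);
        Console.WriteLine("common case last:  " + sw.ElapsedMilliseconds + " ms");

        Console.WriteLine(a == b);   // True: identical results either way
    }
}
```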
Don't get me wrong, I don't write everything for speed at the expense of
maintainability. In my opinion there is no maintainability lost accessing
the variable directly so I would chose the faster method. In fact I think it
is more maintainable. How many times would you accidentally step into a get
and then a set while debugging just to increment an int?

I wouldn't accidentally do it, because I'd be doing step over instead
of step into. However, I've gone into why you *would* want to use
properties earlier in the thread. If you've provided some validation
etc in the property setter, not performing that validation should be
the exception rather than the rule.
You're using a common newsgroup technique where you take something someone
says to the extreme and then conclude it is a bad idea because it is taken
to the extreme.

Well, I'm going by what you've said. Things like:

<quote>
I find that by writing the faster code to start with I save work
optimising later, which is going to be a significant saving over a few
rare changes
</quote>

That suggests that speed is the major contributor to your design
decisions, rather than maintainability. If that's not your actual
position, then we may well have less to argue about - but I hope you
can see how statements like the one above have led me to my belief.
Perhaps I should have taken more notice of your "two methods which are
practically the same" to start with.

However, I do still believe that things like rearranging switch cases for
the sake of performance emphasises the wrong thing, and even if you
don't take it to an extreme, it can lead others into an extreme
position.
I agree, but that is with your exaggeration of my position. I'm sure
most good programmers take speed into account to some degree when writing
code. Certainly an experienced coder will produce a faster app first time
than a beginner.

Not necessarily. The difference may often be that the beginner's app
will be very quick - but not actually work properly. I'd hope that an
experienced coder would concentrate on getting something working and
maintainable, only worrying about the speed of individual sections when
those sections proved to be significant.

The areas of speed I take into account are ones of complexity - I will
try to avoid doing something O(n^2) instead of O(n) unless I know that
n will be very small. However, that is usually a case of design and
architecture rather than actual implementation, IME.
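A typical instance of that O(n^2)-versus-O(n) choice (illustrative code, not from the thread): duplicate detection with List.Contains, which scans the list for each item, versus a HashSet, which does a constant-time lookup per item.

```csharp
using System;
using System.Collections.Generic;

class Dupes
{
    // O(n^2): each Contains call walks the list
    public static bool HasDuplicateSlow(int[] items)
    {
        var seen = new List<int>();
        foreach (int x in items)
        {
            if (seen.Contains(x)) return true;
            seen.Add(x);
        }
        return false;
    }

    // O(n): HashSet.Add returns false if the value is already present
    public static bool HasDuplicateFast(int[] items)
    {
        var seen = new HashSet<int>();
        foreach (int x in items)
            if (!seen.Add(x)) return true;
        return false;
    }

    static void Main()
    {
        Console.WriteLine(HasDuplicateSlow(new[] { 1, 2, 1 }));   // True
        Console.WriteLine(HasDuplicateFast(new[] { 1, 2, 3 }));   // False
    }
}
```

Notably, this is the kind of change that falls out of design (choosing the right data structure) rather than micro-tweaking the implementation.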
 
Jon Skeet said:
Premature micro-optimisation is widely regarded as a bad idea. In this
case, you aren't even actually optimising anything anyway.

Sadly, reading through any of the C#/.NET groups easily proves you wrong
(about the wide regard, not about it being a bad idea).

Or maybe it is widely regarded as being a bad thing but the people who
regard it that way have given up trying to convince the rest :-(
 
I might point out as well that optimization, in a general sense, is the
process of conserving resources. Not only are processor usage and memory
consumption resources, but resources also encompass the money that it
costs to develop and maintain software. That is, a software vendor can only
make money if it costs less to develop and maintain the software than the
income generated by the software. We do live in the real world, where
everything has limits, including time and money. In terms of money, human
resources are a cost factor over time. The more time it takes to develop and
maintain software, the more it costs in terms of money.

So, when speaking of optimization, one is always talking about a net gain.
One cannot logically separate or prioritize one resource over another. It is
the accumulated combination of all resources that factors into the net gain.
If one does not factor in the human resources, one is highly likely to go
out of business.

This is the basis for Jon's assertions regarding maintainability and
readability. These are factors which influence the net gain, and are
therefore legitimate optimizations as well.

It's all a balancing game. Achieving the correct balance of optimizations
results in the maximum profitability of the enterprise.

The article Jon points to is spot on. I have long maintained that extra time
spent up-front in the design process ultimately leads to an overall savings
of total resources spent in the development of software. I am usually slow
to start on a project, and like a train, pick up speed as I go along. But I
will often spend several days (at least, depending upon the size of the
project) just *thinking* about it, researching, sketching ideas and talking
about it with my peers before I write a lick of code. Once I get started, I
never stop *thinking* about optimization, and admittedly perhaps spend a wee
bit too much time over it (can't be sure), but tend to not sweat the small
stuff, and I also keep an eye on the amount of time I spend doing so, in
order to pull myself away from it when necessary (which is admittedly
all-too-often).

A good developer has this innate desire to write "elegant" code. But we must
never over-indulge that desire. This is one reason I put in so many hours
that I am not paid for. I have my pride. But I don't want my company to have
to pay for it!

--
HTH,

Kevin Spencer
Microsoft MVP
Professional Numbskull

Hard work is a medication for which
there is no placebo.
 
Sadly, reading through any of the C#/.NET groups easily proves you wrong
(about the wide regard, not about it being a bad idea).

You're making an assumption that because Jon discusses the low-level
characteristics of resource usage and details of technology in the platform,
that he is also overly concerned about them. This is not a logical
conclusion. While it might seem to imply this, it does not necessarily imply
this. Jon is a C# MVP, and this is a newsgroup about C#. People ask
questions about these issues, and Jon answers them. The fact that he answers
them, and that he is diligent about the details, does not mean that he
believes one should be overly-anal about optimization. In fact, I believe
that it's important to understand these low-level details of the technology
in order to excel at it, not because one is necessarily going to be
overly-anal in the development process, but because understanding the
technology intimately lends to the overall understanding and therefore
efficiency in using it.

--
HTH,

Kevin Spencer
Microsoft MVP
Professional Numbskull

Hard work is a medication for which
there is no placebo.
 
Jon Skeet said:
There are *some* things I take account of, but not very many. I use
StringBuilder when I don't know how many things I'll be appending,
because I know the results of not doing so can be catastrophic to
performance, for example. I don't try to put things in big methods to
avoid making method calls though - indeed, I positively split big
methods up into smaller ones, even if those smaller ones won't be
called by anything else. That hurts performance a tiny amount (if the
methods are still too big to be inlined) but improves readability.

I would never write a big function for performance reasons :-)
I certainly don't do that. I put them in the most logical way for the
reader to see them. Have you ever come across a situation (in C# - not
a different language which may have different characteristics) where
the order of the cases has made a significant difference? Have you
benchmarked it?

If there was a genuine reason to put them in a certain order, such as the
event handler for a menu then I would. But quite often there is no real
order.
I wouldn't accidentally do it, because I'd be doing step over instead
of step into. However, I've gone into why you *would* want to use
properties earlier in the thread. If you've provided some validation
etc in the property setter, not performing that validation should be
the exception rather than the rule.

The problem is you can't always step over. If the code is

DoSomething(MyProp);

then you can't step over MyProp and into DoSomething (afaik). Besides I
always forget to step over until I'm in.
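One workaround for the step-into annoyance (an assumption on my part that it fits the poster's Visual Studio workflow): trivial accessors can be marked with [DebuggerStepThrough], so the debugger skips them even when you forget to step over.

```csharp
using System.Diagnostics;

public class Wrapper
{
    private int myProp;

    public int MyProp
    {
        [DebuggerStepThrough]   // debugger will not stop inside this accessor
        get { return myProp; }
        [DebuggerStepThrough]
        set { myProp = value; }
    }
}
```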
<quote>
I find that by writing the faster code to start with I save work
optimising later, which is going to be a significant saving over a few
rare changes
</quote>

I've been programming in VB6 for many years and it's much easier in VB6 to
write incredibly inefficient code. For example, IIf(x=1, 5, 10) is 50 times
slower (I tested this once) than using an If statement, because the first
returns a Variant and the second just deals in ints. VB6 is full of such
traps and it's very easy to write page after page of inefficient code, so
I've become much more aware of such issues. I didn't realise until I had
this conversation just how much more efficient C# is (in some ways at
least). I could easily think of 50 examples in VB6 but struggled to find one
for C#. And the one I came up with wasn't very good :-) A good example in
VB6 was when I found out at one stage that "If Len(SomeString) = 0" was much
faster than "If SomeString = "" ", so I started using Len. The two methods
were practically identical, so why not use the faster one?
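The nearest C# analogue of the VB6 Len trick is checking Length against zero rather than comparing to ""; a sketch (and in .NET 2.0+ string.IsNullOrEmpty is the idiomatic form, since it also handles null):

```csharp
using System;

class EmptyCheck
{
    static void Main()
    {
        string s = "";
        Console.WriteLine(s.Length == 0);             // True: length check
        Console.WriteLine(s == "");                   // True: comparison against ""
        Console.WriteLine(string.IsNullOrEmpty(s));   // True: also safe when s is null
    }
}
```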
That suggests that speed is the major contributor to your design
decisions, rather than maintainability. If that's not your actual
position, then we may well have less to argue about - but I hope you
can see how statements like the one above have led me to my belief.
Perhaps I should have taken more notice of your "two methods which are
practically the same" to start with.

Don't worry, I don't go out of my way and make really bad designs to make
minor optimisations.
Not necessarily. The difference may often be that the beginner's app
will be very quick - but not actually work properly. I'd hope that an
experienced coder would concentrate on getting something working and
maintainable, only worrying about the speed of individual sections when
those sections proved to be significant.

Generally the beginner's code is slower and more buggy (and took them up to
10 times longer to write). If you find performance issues in a more
experienced coder's work, it is usually harder to optimise.
The areas of speed I take into account are ones of complexity - I will
try to avoid doing something O(n^2) instead of O(n) unless I know that
n will be very small. However, that is usually a case of design and
architecture rather than actual implementation, IME.

That makes sense, the more loops a piece of code is inside the more critical
things become.

Michael
 
Kevin Spencer said:
You're making an assumption that because Jon discusses the low-level
characteristics of resource usage and details of technology in the platform,
that he is also overly concerned about them.

I don't think Nick was saying that *I* was overly concerned - just that
there are plenty of people on the newsgroups who are :(
 
[cut]
A good developer has this innate desire to write "elegant" code. But we
must never over-indulge that desire. This is one reason I put in so many
hours that I am not paid for. I have my pride. But I don't want my company
to have to pay for it!

Interesting.

Turn this around and imagine your company publicly saying "We don't give our
programmers enough time or money to write code that they are proud of."

It doesn't sound so good if you put it like that does it?
 
