StringBuilder question

  • Thread starter: Guest
To be honest, I'm not a big fan of that article. The idea of writing
the fastest possible code all the time is not one which appeals to me -
I far prefer to write *elegant* code which is easy to read, write and
maintain, and optimise any code which actually *needs* to be as fast as
possible.
I absolutely agree... I've had many arguments over this with your average C
or C++ programmer, who is typically obsessed with writing faster code
rather than maintainable code.

Most of the time it is considerably cheaper to buy faster hardware than it is
to waste weeks or even months of developer time maintaining messy and
bug-ridden "hand-optimized" code.
Now, having said that, I'd have to look at exactly what is meant by
"constant folding" as far as the JIT is concerned. There may well be
things that the JIT can do which the compiler can't - they probably
have very different ideas of what a constant is.
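To give a concrete sketch of the difference I mean (my own made-up example, not
something from the article): the C# compiler will fold an expression built purely
from const values into a single literal in the IL, but a static readonly field only
gets its value at run time, so any folding there would have to be the JIT's doing.

class FoldingDemo
{
    // The C# compiler folds this whole expression to the literal 86400 at compile time.
    const int SecondsPerDay = 60 * 60 * 24;

    // This is only evaluated when the type is initialised, so the compiler can't
    // fold uses of it - at best the JIT could treat it as a constant.
    static readonly int CachedSecondsPerDay = 60 * 60 * 24;

    static void Main()
    {
        // Emitted as the literal 604800 in the IL.
        System.Console.WriteLine(SecondsPerDay * 7);

        // The field load and multiply survive into the IL; whether the JIT
        // folds them is exactly the question above.
        System.Console.WriteLine(CachedSecondsPerDay * 7);
    }
}

ildasm will show the compiler's folding in the IL, but what the JIT then does with
the readonly field is the open question.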
Probably the only way of telling is to ngen the code and analyze the
assembly. Or just find a blog of someone responsible for the JIT and ask
them...
 
John Wood said:
I absolutely agree... I've had many arguments over this with your average C
or C++ programmer, who is typically obsessed with writing faster code
rather than maintainable code.

Most of the time it is considerably cheaper to buy faster hardware than it is
to waste weeks or even months of developer time maintaining messy and
bug-ridden "hand-optimized" code.
Absolutely.
Probably the only way of telling is to ngen the code and analyze the
assembly. Or just find a blog of someone responsible for the JIT and ask
them...

ngen doesn't do the same optimisations, unfortunately. However, it's
slightly easier than that anyway - you can use cordbg and set it to
still optimise. That way you can see *all* the optimisations the JIT
will do.
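If anyone wants to try it, something as trivial as this (just a throwaway test of my
own) is enough: compile it with /optimize+, run it under the debugger with JIT
optimisation left on, and see whether the arithmetic in Calc survives into the native
code or just comes out as a load of 42.

using System;

class JitFoldingTest
{
    static int Calc()
    {
        // With optimisation on, the JIT is free to reduce all of this
        // to a single constant (42) in the native code.
        int a = 6;
        int b = 7;
        return a * b;
    }

    static void Main()
    {
        Console.WriteLine(Calc());
    }
}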
 
However, it's
slightly easier than that anyway - you can use cordbg and set it to
still optimise. That way you can see *all* the optimisations the JIT
will do.
That's a good idea. I'll have to try that some time.
 
I've got to switch newsreaders. Outlook Express had lost track of this
whole thread.

I have to say that I agree with you. I try not to be wasteful, but unless
there is a particular operation that requires the extra microseconds,
readable and maintainable is better. It helps reduce bugs -- both
initially, and during enhancements. Customers like fast software, but
they really like software that works.


John Wood said:
To be honest, I'm not a big fan of that article. The idea of writing
the fastest possible code all the time is not one which appeals to me -
I far prefer to write *elegant* code which is easy to read, write and
maintain, and optimise any code which actually *needs* to be as fast as
possible.
I absolutely agree... I've had many arguments over this with your average C
or C++ programmer, who is typically obsessed with writing faster code
rather than maintainable code.

Most of the time it is considerably cheaper to buy faster hardware than it is
to waste weeks or even months of developer time maintaining messy and
bug-ridden "hand-optimized" code.
Now, having said that, I'd have to look at exactly what is meant by
"constant folding" as far as the JIT is concerned. There may well be
things that the JIT can do which the compiler can't - they probably
have very different ideas of what a constant is.
Probably the only way of telling is to ngen the code and analyze the
assembly. Or just find a blog of someone responsible for the JIT and ask
them...
 
Yeah but I daresay 99.9% of the time those kinds of optimizations won't make
the application appear one iota "faster" to the user. That article was
talking of differences of a few NANOseconds. Please!

J.Marsch said:
I've got to switch newsreaders. Outlook Express had lost track of this
whole thread.

I have to say that I agree with you. I try not to be wasteful, but unless
there is a particular operation that requires the extra microseconds,
readable and maintainable is better. It helps reduce bugs -- both
initially, and during enhancements. Customers like fast software, but
they really like software that works.


John Wood said:
I absolutely agree... I've had many arguments over this with your average C
or C++ programmer, who is typically obsessed with writing faster code
rather than maintainable code.

Most of the time it is considerably cheaper to buy faster hardware than it is
to waste weeks or even months of developer time maintaining messy and
bug-ridden "hand-optimized" code.

Probably the only way of telling is to ngen the code and analyze the
assembly. Or just find a blog of someone responsible for the JIT and ask
them...
 
StringBuilder is most useful when you are in a loop, e.g.:

foreach (DictionaryEntry entry in m_hash)
{
    sb.Append( Utils.SqlClean( entry.Key ) );
    sb.Append( " = " );
    sb.Append( quoteChar );
    sb.Append( Utils.SqlClean( entry.Value ) );
    sb.Append( quoteChar );
}
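
For comparison (reusing the same m_hash, quoteChar and Utils.SqlClean from the
snippet above), the plain string version of the same loop allocates a brand new
string on every concatenation, which is exactly the case where StringBuilder pays off:

string result = "";
foreach (DictionaryEntry entry in m_hash)
{
    // Each += builds a new string and copies everything accumulated so far,
    // so the cost grows with the length of the text already built.
    result += Utils.SqlClean( entry.Key ) + " = " +
              quoteChar + Utils.SqlClean( entry.Value ) + quoteChar;
}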



 
Yes, exactly as I said in my last post. I agreed with Jon Skeet about
making code readable and maintainable over squeezing out every last bit of
performance at the cost of readability (unless there is an operation that
requires the extra time). I do find that it's pretty rare that you have
something that needs to be so tight. When I have seen situations like that,
it's usually server-side code, where the performance cost of an operation is
multiplied by the number of concurrent users. I've seen situations there
where you make some compromises to wring out extra cycles, but even in those
situations, I lean towards elegance.
 
Sorry, I wasn't meaning to sound like I was disagreeing with you, if that's
the way you took it. I would also add that in the case of server code, the
"fix it with hardware" approach is an even more reasonable solution. You can buy
some pretty big performance jumps in hardware for a week of our salaries.
 
No pain taken -- from my read, I wasn't sure whether you were interpreting
my original post differently than I had intended, so I was just trying to
clear it up. Yah, you can do a lot with hardware these days -- palm-tops
are running higher mhz than my first development machine. The only downside
is if you are writing commercial software, customers hate to hear that they
have to buy a faster server to run your latest software.

On the one hand, I see their point -- they want to have some confidence that
the code is efficient, and that we are not crutching on hardware to push
costs on to them (costs that would otherwise be incurred by us in the course
of performance tuning our software).

On the other hand, it seems perfectly reasonable to me that if you want to
do more stuff in the same amount of time, you should expect to need faster
hardware.

That's kind of a delicate balance, isn't it? Overall, though, I do find that
I strongly lean towards elegance over performance. It seems like if
you keep it like that, then in the rare situations where you have to write
some tricky code, you can encapsulate (blackbox)/comment/chart/document it
well enough that you accomplish your performance goals without sacrificing
too much clarity (due to the rarity of the issue).

Anyway, I think we're both on the same side of the line on this one.

Cheers.

-- Jeremy
 