[...]
However, if the new string that you are going to construct is going to
be considerably smaller in comparison to the old string, then I would
say to just chuck the old StringBuilder.
Why?
Surely none of the strings involved are going to be long enough for their
actual memory usage to be significant, especially for the presumably short
period of time they will be around. Starting with a whole new
StringBuilder means that you have to start shuffling memory around again
as the string gradually increases in size. On the other hand, if you
reuse the previously used StringBuilder, you already know that you needed
to allocate that much memory, and you can avoid unnecessarily doing even
more array resizing as you build the follow-up command string.
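To make the trade-off concrete, here is a minimal sketch of the reuse approach, written in Java for testability (the .NET StringBuilder behaves the same way for this purpose); the `build` helper and the "arg" segments are purely illustrative, not anything from the original code:

```java
public class BuilderReuse {
    // Illustrative helper: clears the builder and appends n short
    // segments, returning the finished string. setLength(0) empties
    // the contents but keeps the already-grown internal buffer.
    static String build(StringBuilder sb, int n) {
        sb.setLength(0);
        for (int i = 0; i < n; i++) {
            sb.append("arg").append(i).append(';');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        build(sb, 1000);            // first, long string: the buffer grows as it fills
        int grown = sb.capacity();  // capacity reached while building it
        build(sb, 3);               // follow-up, much shorter string
        // The reused builder still has the grown capacity, so building
        // the second string required no reallocation at all.
        System.out.println(sb.capacity() == grown); // prints "true"
    }
}
```

Whether keeping that grown buffer alive is a win or a waste is exactly the judgment call being discussed; the sketch only shows that the reuse path skips re-growing the buffer.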
Or look at it another way: if the capacity the StringBuilder inherited
from building the first string is a problem while it holds the second
string, then that same capacity was already a problem while it held the
first string.
Philosophically...
I have empathy for the general idea of using minimal resources. But I've
seen too many examples of when an attempt to use minimal resources was at
best pointless and overly complicated things, and at worst resulted in
poor performance or even failure when presented with the worst-case
scenario, because that worst-case scenario wasn't adequately tested
(because the code was "smart" enough to try to avoid it most of the time).
My two favorite examples:
1) The old "alloca" function, which allowed allocating
variable-sized data (usually strings) on the stack. In theory it was
especially useful in the days when stack space was at a premium. In
practice, if you didn't test with your "alloca" calls always allocating
the maximum size possible, you could find yourself with hard-to-reproduce
stack-overflow errors. And of course, if the code works fine with all of
the "alloca" calls allocating the maximum, then why not just always put a
fixed-size buffer on the stack?
2) EverQuest vs. Asheron's Call. EQ dedicated a server (or cluster
anyway, don't recall the specifics) to a given zone. AC load-balanced so
that the entire virtual world was processed by a single cluster of
servers, divided up according to where the players were in the world. In
EQ, even when no players were in a particular spot, the server kept
chugging along, while in AC the servers would simply unload parts of the
world where no players were. In theory, AC's method was "better" because
it was a more efficient use of the hardware, but in practice EQ had far
fewer problems with servers getting overloaded, because its servers had to
be sized in advance to deal with the maximum load. AC had worst-case
scenarios in which the player load couldn't be perfectly balanced, and
they'd wind up with too many players crowded onto a single server that
just wasn't powerful enough to handle the load.
I realize that we're just talking about a StringBuilder and its string
here, not nearly the kind of complicated situations described above. But
the same design principles apply, IMHO. Optimizing isn't always the
right thing to do, because it's hard to know what the right thing to
optimize is. Code needs to be *correct* first, then optimal, and it's
easier to get the code correct when one is not busy trying to optimize
it. It is not always intuitively obvious what the actual "optimal"
design is. What may seem to be the most efficient or optimal design may
in fact turn out to be less efficient in practice.
Pete