Hilton said:
Yeah, true, but not. O(N^2) can sometimes be faster than O(N log N), so
sometimes the inefficient algorithm is faster.

Differences in absolute times aside, I never said one should always use
O(N log N) over O(N^2).
And yes, often the efficient algorithm costs more to implement in
developer hours. But both of those trade-offs depend heavily on the
specific scenario. They can't be settled by a general rule; a generic
performance test isn't going to answer the question of what the best
implementation for a specific situation is.
If anything, that simply reinforces my point: the first tier of
optimization has a lot more to do with overall use of programmer time
than it does with specific performance scenarios.
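As an aside, the small-N case is real and easy to check for yourself: a
quadratic algorithm with tiny constant factors will often beat a
general-purpose O(N log N) sort on a handful of elements, and nothing
short of measuring at the actual input sizes tells you where the
crossover is. A throwaway harness along these lines (purely
illustrative, nothing from your code) is all it takes to see how it
plays out on a given machine:

using System;
using System.Diagnostics;

class SmallSortDemo
{
    // Plain O(N^2) insertion sort.
    static void InsertionSort(int[] a)
    {
        for (int i = 1; i < a.Length; i++)
        {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key)
            {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
    }

    static void Main()
    {
        Random rng = new Random(1);
        int[] data = new int[16];   // deliberately tiny input
        const int reps = 100000;    // repeat enough times to measure
        Stopwatch sw = new Stopwatch();

        sw.Start();
        for (int r = 0; r < reps; r++)
        {
            for (int i = 0; i < data.Length; i++) data[i] = rng.Next();
            InsertionSort(data);
        }
        sw.Stop();
        Console.WriteLine("InsertionSort: " + sw.ElapsedMilliseconds + " ms");

        sw.Reset();
        sw.Start();
        for (int r = 0; r < reps; r++)
        {
            for (int i = 0; i < data.Length; i++) data[i] = rng.Next();
            Array.Sort(data);       // the library's O(N log N) sort
        }
        sw.Stop();
        Console.WriteLine("Array.Sort:    " + sw.ElapsedMilliseconds + " ms");
    }
}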
Anyway, I digress. Let me give you an example: on the Pocket PC, if you
combine the Location and Size calls into a single Bounds call and change
Controls.Add to Parent = this, you get a significant performance
increase. These aren't algorithmic changes, just very trivial code
changes which provide huge increases in performance.
But in situations where the user never notices those increases, it's not
worth your time to investigate the performance differences.
In the Location/Size example, it should be immediately apparent to the
programmer who is paying attention that setting two properties, each of
which might force an update of the instance, is going to cost more than
setting a single property that encapsulates both. You don't need to
profile the code to know that it's less expensive to batch up
layout-related assignments.
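To put the two versions side by side (the control, coordinates, and
sizes here are placeholders, not your actual code; this assumes the
usual System.Drawing and System.Windows.Forms namespaces, inside a
Form-derived class):

    Button button = new Button();

    // Version 1: two property sets, each of which may force its own
    // layout/update pass, plus an explicit Controls.Add() call.
    button.Location = new Point(8, 8);
    button.Size = new Size(120, 24);
    this.Controls.Add(button);

    // Version 2: one Bounds assignment batches position and size, and
    // setting Parent attaches the control to the form.
    button.Bounds = new Rectangle(8, 8, 120, 24);
    button.Parent = this;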
In the case of the Add() method versus the Parent property, the
difference is less apparent, but again, in many cases the user will
never know the difference. In my opinion, it's a problem that two
apparently equivalent techniques produce significantly different
performance results (assuming they do...you're not specific enough for
me to comment on that). But it's not practical to go around
performance-testing every possible way you might use the framework.
The framework itself _should_ minimize these differences. But to
whatever extent it doesn't, it's not practical to write a performance
test every time you add some new use of a framework element.
See above; there was also my DateTimePicker experience. Adding a bunch
of these to a page took 45 seconds! After optimizing the code, it now
takes about 5 seconds. No algorithm change. It's just not true that only
changing algorithms gets you big changes.
I never said it was.
Furthermore, what led you to optimize the code? Did you actually
profile all of the possible implementations in a separate test harness
to determine precise performance differences between the various
techniques available to you, before even implementing the overall
behavior desired?
Or did you, as I think is more likely, write the code, identify a
performance issue, and investigate how to improve the issue?
I would almost never do the former. I've never argued against the latter.
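And that workflow doesn't require anything fancy: time the suspect
operation in the real form, try a candidate change, and compare the
numbers. Something like the sketch below, where the
SuspendLayout()/ResumeLayout() pair is just one plausible
non-algorithmic change for the many-controls case, and only a guess on
my part, not necessarily what you actually changed:

    // Assumes this lives in a Form-derived class, with the usual
    // System, System.Diagnostics, System.Drawing and
    // System.Windows.Forms namespaces available.
    void AddPickers(int count)
    {
        Stopwatch sw = Stopwatch.StartNew();

        // Candidate change: batch the additions so the form doesn't
        // re-layout after every single control is added.
        this.SuspendLayout();
        for (int i = 0; i < count; i++)
        {
            DateTimePicker picker = new DateTimePicker();
            picker.Bounds = new Rectangle(8, 8 + i * 24, 140, 20);
            picker.Parent = this;
        }
        this.ResumeLayout();

        sw.Stop();
        // Compare this number with and without the SuspendLayout() pair.
        Console.WriteLine("Added " + count + " pickers in " +
                          sw.ElapsedMilliseconds + " ms");
    }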
[...]
As Patrick and Jon have both said, once you have a complete
implementation, then it makes sense to identify and address any potential
performance problems. At that point, you will know what areas of the code
are actually affecting the user experience, and you will be able to
measure changes in the implementation in a way that takes into account the
context of those changes.
Well, this argument breaks down in many ways; here are a few:
1. This breaks the whole OOP concept; i.e. the object might work fine now,
but when I come to reuse it, its performance sucks.
That does not "break the whole OOP concept". The OOP concept works just
fine, even if you don't address every possible performance scenario in
your initial implementation.
In fact, it is impossible to anticipate every use of an object, and one
must be prepared to resolve potential issues in the future through
fixing the existing implementation. Not only does this not "break the
whole OOP concept", the "whole OOP concept" is based around
encapsulation that allows for such repairs without affecting existing
users of an object.
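A trivial, made-up illustration of what I mean: the class below exposes
only Add() and Total(), so if Total() ever shows up as a real
bottleneck, the internals can be reworked without touching a single
caller.

    using System.Collections.Generic;

    // First cut: obvious and correct, recomputes the total on every call.
    public class PriceList
    {
        private readonly List<decimal> prices = new List<decimal>();

        public void Add(decimal price) { prices.Add(price); }

        public decimal Total()
        {
            decimal sum = 0;
            foreach (decimal p in prices) sum += p;
            return sum;
        }
    }

    // Possible later rework (shown as a second class only for comparison):
    // identical public surface, so existing callers compile and behave the
    // same, but Total() is now O(1).
    public class CachedPriceList
    {
        private decimal runningTotal;

        public void Add(decimal price) { runningTotal += price; }

        public decimal Total() { return runningTotal; }
    }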
2. You might not be working on the project when the performance issues
become apparent.
So? All the more reason for the code to be maintainable first, performant
second.
3. You move the code to a different platform and it is painfully slow. You
could argue that this is a 'complete implementation' and you can now
optimize. But why not just do it right the first time?
Because you have no way to know what is "right". If there's a
performance difference that is platform dependent, there is every
possibility that different techniques are required for different
platforms. Optimizing the code on a given platform could very well
produce the least optimal results on another.
Let me summarize here because I think I'm being a little misunderstood. I
do not suggest micro-optimizing everything. I strongly encourage
readability. I try to comment every method and any non-obvious lines of
code. I even line up the "=" on different lines (a topic for another
thread). My point is that we need to build in both performance and quality,
and not leave it to suddenly rear its ugly head when the project is nearing
completion, deadlines aren't being met, QA gets thrown a huge project to
test, and then we have to start putzing around with the performance issues
as well as the bugs.
If you've "done it right the first time", performance issues are easily
addressed. High-level architecture, correct abstractions, and maintainable
implementations all lead to flexible, easily fixable code.
Obviously, there are certain aspects of performance that can be
addressed during the initial implementation. No one is suggesting
otherwise. We're talking about a specific class of performance
optimizations here, though, such as the difference between instantiating
a new collection and clearing an existing one.
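For example (the names are made up), both of these leave the list empty,
and which one is actually cheaper depends on the list's current size,
allocation behavior, and what the garbage collector happens to be doing
at that moment:

    using System.Collections.Generic;

    class Widget { }

    class WidgetCache
    {
        private List<Widget> items = new List<Widget>();

        public void ResetByAllocating()
        {
            // Throw the old list away; the GC reclaims it later.
            items = new List<Widget>();
        }

        public void ResetByClearing()
        {
            // Reuse the existing object; just drop its contents.
            items.Clear();
        }
    }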
These kinds of "optimizations" cannot even necessarily be predicted
accurately outside of the real-world application, but beyond that they
often optimize areas of code that represent a tiny fraction of the total
cost of execution. It's inefficient and in many cases
counter-productive to worry about those kinds of optimizations until
they have been demonstrated to be actual problems in the final user
experience.
Pete