Release is slower than debug... What's happening?


ThunderMusic

Hi,
I have a Windows service that only loads a CSV file and imports its data
using SqlBulkCopy into a newly created SQL Server 2005 table, using 25000-row
batches.

If I build the service in debug mode and run it, I get decent performance.
If I build it in release mode, I would expect it to be at least as fast,
and maybe faster, but I actually lose about 10% performance.

How can this happen?

Here's what could be involved:
- the SqlBulkCopy class, using an IDataReader implementation as a source whose
performance has been verified
- <SqlDbTypes>.Parse(...)
- array indexing
- boxing and unboxing of objects
- generic classes (List, Dictionary, Hashtable) searched by key

Thanks

ThunderMusic
 

ThunderMusic

oops,

- SqlBulkCopy Class a IDataReader implementation as a source which
performance has been verified.

should read

- the SqlBulkCopy class, using an IDataReader implementation as a source whose
performance has been verified.
 

Ben Voigt [C++ MVP]

ThunderMusic said:
Hi,
I have a Windows service that only loads a CSV file and imports its data
using SqlBulkCopy into a newly created SQL Server 2005 table, using
25000-row batches.

If I build the service in debug mode and run it, I get decent
performance. If I build it in release mode, I would expect it to be at least
as fast, and maybe faster, but I actually lose about 10% performance.

How can this happen?

Are you measuring first run performance, or steady state?

The first run requires that the JIT compiler be run, and in release mode the
JIT has to look for optimizations, hence taking longer. After the code is
compiled, however, the incremental cost of rerunning it is usually greatly
reduced by the optimizations.
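Ben's distinction between first-run and steady-state cost is easy to check with a quick sketch (a minimal illustration; `DoWork` is just a stand-in for the real service work):

```csharp
using System;
using System.Diagnostics;

class JitWarmup
{
    static void Main()
    {
        Stopwatch sw = new Stopwatch();
        for (int run = 1; run <= 3; run++)
        {
            sw.Reset();
            sw.Start();
            DoWork();   // the first call pays the one-time JIT compilation cost
            sw.Stop();
            Console.WriteLine("Run {0}: {1} ms", run, sw.ElapsedMilliseconds);
        }
    }

    // stand-in for the real work being measured
    static void DoWork()
    {
        long sum = 0;
        for (int i = 0; i < 10000000; i++) sum += i;
        if (sum == 0) Console.WriteLine("unreachable"); // keep the loop from being optimized away
    }
}
```

If the release build is only slower on run 1 and catches up on later runs, JIT optimization time is the explanation.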
 

ThunderMusic

Even after many runs, it's still slower.

Ben Voigt said:
Are you measuring first run performance, or steady state?

The first run requires that the JIT compiler be run, and in release mode
the JIT has to look for optimizations, hence taking longer. After the
code is compiled, however, the incremental cost of rerunning it is usually
greatly reduced by the optimizations.
 

Guest

ThunderMusic,
This simply "does not compute". Are you actually installing this windows
service after it has been built in a debug build configuration? And you are
comparing the performance against a different Windows Service installation
that was done with a release build?

What I'd do (unless you want to get into some heavy-duty profiling) is use
the Stopwatch class to take some start / finish timings around critical
operations that occur when your service runs, and write the results as
appended lines to a text file.

Examining this, if you have instrumented it properly, should give more
insight.
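A minimal sketch of that kind of instrumentation (the method name and log file name are placeholders):

```csharp
using System;
using System.Diagnostics;
using System.IO;

class ServiceTimings
{
    static void Main()
    {
        Stopwatch sw = Stopwatch.StartNew();

        ImportCsvBatch();   // placeholder for the critical operation being timed

        sw.Stop();
        // append one line per measurement to a text file for later inspection
        File.AppendAllText("timings.log",
            DateTime.Now.ToString("o") + "\tImportCsvBatch\t" +
            sw.ElapsedMilliseconds + " ms" + Environment.NewLine);
    }

    static void ImportCsvBatch()
    {
        // stand-in for the service's real work
    }
}
```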
Peter
 

Aneesh P


I was using the same kind of functionality, but in reverse: loading data
from SQL to CRM Server and then to MSMQ, and again from the queue to a CSV
file. The performance is considerably better when I build the windows
service in release mode. I found that out using the aforementioned Stopwatch
class. I'd venture to suggest the same.
 

ThunderMusic

Hi,
Thanks for the answers...

1) Yes, it's the same service I'm testing, in debug and then in release (or
release then debug, it makes no difference; I get the same results), and
always on the same computer.
2) The thing is, we are already logging when the application reaches an
important point (with timestamps), so using Stopwatch here would only add
overhead to the process. Actually, we are taking a 4000000-row CSV and
copying it to a SQL table while making sure the datatypes are right. We
optimized our process and managed to get good performance (by replacing most
of the generics with native arrays, using temporary variables instead of
always getting the current value from the file reader, et al.). Maybe we
optimized "too much" and the JIT cannot find a way to optimize
intelligently... Would that be possible?

Thanks

ThunderMusic
 

Guest

If I build the service in debug mode and run it, I get decent performance.
If I build it in release mode, I would expect it to be at least as fast,
and maybe faster, but I actually lose about 10% performance.

How can this happen?

A stab in the dark ... Do you handle many exceptions? My experience is that
executing a 'try' gives no noticeable performance hit, but executing a
'catch' is very noticeable. So... maybe the non-catch part of your code runs
slower in debug (as intuition dictates), and maybe the catch parts of your
code run much slower in release (not sure why, perhaps there is more to
untangle). As I said, a stab in the dark.
 

Jon Skeet [C# MVP]

AMercer said:
A stab in the dark ... Do you handle many exceptions? My experience is that
executing a 'try' gives no noticeable performance hit, but executing a
'catch' is very noticeable. So... maybe the non-catch part of your code runs
slower in debug (as intuition dictates), and maybe the catch parts of your
code run much slower in release (not sure why, perhaps there is more to
untangle). As I said, a stab in the dark.

Catching an exception *in the debugger* takes a vast amount of time.
Catching an exception in a release build, not running in the debugger,
takes relatively little time. Unless you're throwing and catching
thousands and thousands of exceptions, there shouldn't be any
significant performance hit.
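For what it's worth, the raw cost of throwing and catching outside a debugger is easy to measure directly (a minimal sketch; the count and exception type are arbitrary):

```csharp
using System;
using System.Diagnostics;

class ExceptionCost
{
    static void Main()
    {
        const int n = 10000;
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < n; i++)
        {
            try
            {
                throw new InvalidOperationException("test");
            }
            catch (InvalidOperationException)
            {
                // swallow; we only want the raw throw/catch cost
            }
        }
        sw.Stop();
        Console.WriteLine("{0} throw/catch pairs took {1} ms", n, sw.ElapsedMilliseconds);
    }
}
```

Run it from a console, not under the debugger, to see the difference Jon describes.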
 

ThunderMusic

No, I'm not throwing many exceptions. Actually, in the tests we ran, none
were thrown, so it must be in the non-catch part... ;)

We still experience the problem independently of whether the service runs
right after a reboot or after running for a while. We tried to measure with
Stopwatch, but nothing relevant came out of it, because when run in a tight
loop, Stopwatch adds overhead, and it almost doubled our processing time.
(In the file we are testing, we do a 4000000-iteration 'for' over rows/records
with an inner 13-iteration 'for' over columns/fields. So if we try to measure
the total time spent parsing, the total time writing into the array to send
to bulk copy, and the total time of everything else, the Stopwatch calls make
it go awry.)
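One way to keep the Stopwatch overhead out of a tight loop like this is to sample: time only one row in every thousand and extrapolate. A minimal sketch (the loop bodies are stand-ins for the real parse/write work, and sampleEvery is an arbitrary choice):

```csharp
using System;
using System.Diagnostics;

class SampledTiming
{
    static void Main()
    {
        const int rows = 4000000, cols = 13, sampleEvery = 1000;
        Stopwatch parseTime = new Stopwatch();
        Stopwatch writeTime = new Stopwatch();

        for (int r = 0; r < rows; r++)
        {
            bool sample = (r % sampleEvery == 0);   // time only one row in a thousand

            if (sample) parseTime.Start();
            for (int c = 0; c < cols; c++)
            {
                // stand-in for parsing each field
            }
            if (sample) parseTime.Stop();

            if (sample) writeTime.Start();
            // stand-in for writing the parsed values into the array for SqlBulkCopy
            if (sample) writeTime.Stop();
        }

        // multiply by sampleEvery to estimate the totals over all rows
        Console.WriteLine("Sampled parse time: {0} ms", parseTime.ElapsedMilliseconds);
        Console.WriteLine("Sampled write time: {0} ms", writeTime.ElapsedMilliseconds);
    }
}
```

The sampled figures are estimates, but they are usually enough to tell which phase got slower between the two builds without distorting the run.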

Would you have any other ideas so we can find out what is going on?

Thanks

ThunderMusic
 

Willy Denoyette [MVP]

ThunderMusic said:
No, I'm not throwing many exceptions. Actually, in the tests we ran, none
were thrown, so it must be in the non-catch part... ;)

We still experience the problem independently of whether the service runs
right after a reboot or after running for a while. We tried to measure with
Stopwatch, but nothing relevant came out of it, because when run in a tight
loop, Stopwatch adds overhead, and it almost doubled our processing time.
(In the file we are testing, we do a 4000000-iteration 'for' over
rows/records with an inner 13-iteration 'for' over columns/fields. So if we
try to measure the total time spent parsing, the total time writing into the
array to send to bulk copy, and the total time of everything else, the
Stopwatch calls make it go awry.)

Would you have any other ideas so we can find out what is going on?

Thanks

ThunderMusic


What I don't get is this: when using Stopwatch doubles the processing time,
it looks like you have your Stopwatch calls in the wrong places, so I'm
curious where you inserted them. My guess is on each iteration, which is
wrong when each iteration executes only a few statements. Mind posting some
code?
Also, did you compare the GC counters, like % Time in GC and the number of
collections done for each generational heap, when running the release and
debug versions?
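Besides the PerfMon counters Willy mentions, the collection counts can also be read in-process. A minimal sketch (RunImport is a placeholder for the CSV-to-SqlBulkCopy work):

```csharp
using System;

class GcCounters
{
    static void Main()
    {
        // snapshot the per-generation collection counts before the work
        int gen0 = GC.CollectionCount(0);
        int gen1 = GC.CollectionCount(1);
        int gen2 = GC.CollectionCount(2);

        RunImport();    // placeholder for the real import

        // the deltas show how many collections the work caused
        Console.WriteLine("Gen 0 collections: {0}", GC.CollectionCount(0) - gen0);
        Console.WriteLine("Gen 1 collections: {0}", GC.CollectionCount(1) - gen1);
        Console.WriteLine("Gen 2 collections: {0}", GC.CollectionCount(2) - gen2);
    }

    static void RunImport()
    {
        // stand-in for the real work; allocate a little so the counters can move
        for (int i = 0; i < 100000; i++)
        {
            string s = new string('x', 16);
            if (s.Length == 0) Console.WriteLine("unreachable");
        }
    }
}
```

Comparing the deltas between the debug and release runs would show whether garbage collection behaves differently in the two builds.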

Willy.
 

ThunderMusic

Willy Denoyette said:
What I don't get is this: when using Stopwatch doubles the processing time,
it looks like you have your Stopwatch calls in the wrong places, so I'm
curious where you inserted them. My guess is on each iteration, which is
wrong when each iteration executes only a few statements. Mind posting some
code?
Also, did you compare the GC counters, like % Time in GC and the number of
collections done for each generational heap, when running the release and
debug versions?

Willy.

For the Stopwatch, we tried to nail down a section of the code where the
processing is slower in release, but because of the structure of the code,
all the work is done in a loop, so we cannot place the timing anywhere other
than inside that loop. So it's really hard to know exactly where we must
look.

I cannot post code because it would violate my contract, and I can't just
simplify the code, mainly because the problem could be anywhere in it.

We didn't look at the GC counters... I'll look into that and come back with
the results.

Thanks

ThunderMusic
 
