GC causing variable run-time?

OliviuG

We have a .NET 1.1 application and are witnessing huge variance in performance;
we suspect the GC is the culprit.

The same operation can take 6 minutes initially, then 2:30, then 3:00.
The variance is bigger on slower machines. On a 4-core 2.36 GHz workstation the
variance is limited and the same operation takes around 1:40.

We are running the "workstation concurrent" GC algorithm.
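
(For reference, the concurrent workstation GC in .NET 1.1 is controlled by the
gcConcurrent element in the application's .config file; a minimal sketch, in case
anyone wants to compare runs with it switched off:)

    <configuration>
      <runtime>
        <!-- concurrent workstation GC is the default; set to "false" to compare -->
        <gcConcurrent enabled="true"/>
      </runtime>
    </configuration>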

When we induce full collections between runs, we consistently see performance
degrade afterwards, which made us think that by forcing a collection we must be
wiping the historical data the GC maintains, making it forget the previous
allocation pattern.
We have looked carefully at the app code and ruled out latency or locking.
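
(By "inducing full collections" we mean something along these lines; a generic
sketch rather than the exact code:)

    // force a full (Gen 2) collection and let finalizers run
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();
    Console.WriteLine("heap after full GC: {0} bytes", GC.GetTotalMemory(false));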

The operation in question allocates a lot of objects (hundreds of thousands).
The large majority of these survive for the duration of the operation and die
shortly after it completes.

The problem is that we are not able to correlate this variable run-time with
the performance counters. We end up with more or less the same number of Gen
0/1/2 collections, but the total time can vary by 300%.
In a typical scenario we see around 500 Gen 0, 40 Gen 1, and 1-2 Gen 2 collections.

We have tried looking at the "% Time in GC" counter, but it sometimes shows
nonsensical numbers in the tens of thousands, which makes it impossible to
compare averages between attempts.
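
(For reference, here is roughly how the counters could be sampled from inside
the process instead of from Perfmon; a rough sketch using the standard
".NET CLR Memory" category, reading NextValue() rather than the raw value, which
on its own is not a percentage and might explain the odd numbers:)

    // hypothetical sampler: poll GC counters once a second while the operation runs;
    // in the real app this loop would run on a background thread
    using System;
    using System.Diagnostics;
    using System.Threading;

    class GcSampler
    {
        static void Main()
        {
            string inst = Process.GetCurrentProcess().ProcessName;

            PerformanceCounter timeInGc =
                new PerformanceCounter(".NET CLR Memory", "% Time in GC", inst);
            PerformanceCounter gen2 =
                new PerformanceCounter(".NET CLR Memory", "# Gen 2 Collections", inst);

            for (int i = 0; i < 60; i++)   // sample for about a minute
            {
                Thread.Sleep(1000);
                Console.WriteLine("{0,6:F1} % time in GC, {1} Gen 2 collections so far",
                    timeInGc.NextValue(), gen2.RawValue);
            }
        }
    }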

Has anyone witnessed this before?
Is there any other efficient way to measure the time spent in GC that could
either confirm or refute our assumption?
Is the GC algorithm hardware-specific?
Assuming that the GC is the culprit, what can we do about it?
 
Arne Vajhøj

OliviuG said:
We have a .NET 1.1 application and are witnessing huge variance in performance;
we suspect the GC is the culprit.

The same operation can take 6 minutes initially, then 2:30, then 3:00.
The variance is bigger on slower machines. On a 4-core 2.36 GHz workstation the
variance is limited and the same operation takes around 1:40.

We are running the "workstation concurrent" GC algorithm.

I don't think GC is the problem. 3.5 out of 6 minutes spent on GC
does not sound plausible.

I would look for something being read from disk the first time and
being in memory (cache) the second time.

Arne
 
Oliviu Gavrilescu

Thanks for the reply.

We considered and analyzed this scenario as well, but it didn't seem to be the
case.
The app works with quite a large heap, about 12 million objects / 700 MB of
managed heap in this scenario, so we thought the variance might be caused by
the GC for that reason.

We get running times like this:

6 mins
4 mins
2 mins
2 mins
.... Change the allocation pattern or induce a full GC
9 mins
3 mins
3 mins

On different hardware (same build, same OS, same patches, etc.):

1:30
1:40
1:50
1:40
etc
 
Arne Vajhøj

Oliviu said:
We considered and analyzed this scenario as well, but it didn't seem to be the
case.

How have you verified that?
The app works with quite a large heap, about 12 million objects / 700 MB of
managed heap in this scenario, so we thought the variance might be caused by
the GC for that reason.

GC'ing that in a simple setup on my 3-year-old PC takes only half a
second.

I cannot imagine any circumstances where it would take 2 minutes.
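
(A rough sketch of the kind of simple test meant here; the object count matches
the ~12 million figure and the per-object size is a guess to get near 700 MB:)

    // rough, hypothetical micro-test: ~12 million small objects, then time a full GC
    using System;

    class GcTimingTest
    {
        static void Main()
        {
            object[] roots = new object[12000000];
            for (int i = 0; i < roots.Length; i++)
                roots[i] = new byte[40];            // small payload per object (size is a guess)

            int start = Environment.TickCount;
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();
            Console.WriteLine("full GC over {0} bytes took {1} ms",
                GC.GetTotalMemory(false), Environment.TickCount - start);

            GC.KeepAlive(roots);                    // keep the object graph reachable during the GC
        }
    }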

Arne
 
