Garbage collectable pinned arrays!

Atmapuri

Hi!
Frankly, when you require that much control over how memory is used,
I'd consider writing unmanaged code instead.

I thought Microsoft was willing to listen to the problems of its customers
and work together to improve both their products and customer
satisfaction.

Thanks!
Atmapuri
 
Atmapuri

Hi!
I'd very much like to know where you have this from.

Common knowledge? <g>
I tried the following code:

Object o = 10;
Debug.WriteLine(o);
GCHandle h = GCHandle.Alloc(o, GCHandleType.Pinned);
Debug.WriteLine(o);
Text = ((Int32)h.AddrOfPinnedObject()).ToString("X8");
h.Free();

// Just for security
GC.KeepAlive(o);
GC.KeepAlive(h);

I am using arrays larger than 80 bytes, up to 100 KB.
Pinning objects or arrays that are only 4 bytes large is handled vastly
differently by the GC. Time something like this:

for (int k = 0; k < GCIterCount; k++)
{
testArray = new double[testArrayLength];
testArray[2] = 2;
}

Such that GCIterCount*testArrayLength is a constant value.
When testArrayLength reaches 1024 elements (8 KB) you will see a big
jump in the cost of the allocation. All timings must be normalized by
(GCIterCount*testArrayLength). That will give the allocation cost
per element as a function of the array length.

Then add a second series to the chart:

for (int k = 0; k < GCIterCount; k++)
{
testArray = new double[testArrayLength];
GCHandle h = GCHandle.Alloc(testArray, GCHandleType.Pinned);
testArray[2] = 2;
h.Free();
}

and compare it with the first series. Beyond 1024 elements
there won't be any difference in the cost, but below an array
length of 1024 the difference will be large and fairly constant.
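
For reference, a minimal console version of the two series might look like
this (a sketch only: the array lengths, the constant total element count and
the Stopwatch timing are illustrative, not the original benchmark):

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

class PinBenchmark
{
    static void Main()
    {
        // Keep GCIterCount * testArrayLength roughly constant across sizes.
        const long totalElements = 100000000;

        for (int testArrayLength = 100; testArrayLength <= 10000; testArrayLength *= 2)
        {
            long iterCount = totalElements / testArrayLength;

            // Series 1: allocation only.
            Stopwatch sw = Stopwatch.StartNew();
            for (long k = 0; k < iterCount; k++)
            {
                double[] testArray = new double[testArrayLength];
                testArray[2] = 2;
            }
            double allocPerElement = sw.Elapsed.TotalMilliseconds / totalElements;

            // Series 2: allocation plus pin/unpin.
            sw = Stopwatch.StartNew();
            for (long k = 0; k < iterCount; k++)
            {
                double[] testArray = new double[testArrayLength];
                GCHandle h = GCHandle.Alloc(testArray, GCHandleType.Pinned);
                testArray[2] = 2;
                h.Free();
            }
            double pinnedPerElement = sw.Elapsed.TotalMilliseconds / totalElements;

            Console.WriteLine("{0,6} elements: {1:E3} ms/element plain, {2:E3} ms/element pinned",
                testArrayLength, allocPerElement, pinnedPerElement);
        }
    }
}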

Thanks!
Atmapuri
 
Willy Denoyette [MVP]

Atmapuri said:
Hi!


It does copying and it does cause "fragmentation" and a performance hit
due to fragmentation, both at once. (But that fragmentation issue is
actually the same fragmentation issue as with all unmanaged code apps,
including Windows.)

However, the copying is from the small object heap to the large object heap,
and the fragmentation is in the large object heap, because the small object
heap, which is compactable, of course cannot be fragmented.
What makes you believe that a small object is moved to the LOH when it gets
pinned? Any proof or evidence?

You can check that arrays are copied by allocating ever
larger arrays and pinning them down and measuring the time it takes
to do that for each array size.

Arrays are created anew when enlarged, and finally, when they exceed
85 KB, they get allocated on the LOH; but this has nothing to do with
pinning.
You will see that beyond a certain array size, the pinning cost becomes
zero. The timings must be normalized by the array length, however,
otherwise it is harder to see. When the pinning cost becomes zero, it means
that the array is so large that it is allocated on the large object heap
from the start and no longer needs to be copied there.
What are you talking about? What is the pinning cost? How did you measure it?
If, however, there were a language feature which allowed you
to specify that the array should be allocated on the large object
heap regardless of its size... you could save yourself a lot of
copy operations when interfacing with unmanaged code.

The LOH is meant to be used for large objects and is never compacted; that
makes it candidate number one for fragmentation issues. The generation
0, 1 and 2 heaps get compacted and are quasi free from fragmentation issues,
unless you keep objects pinned for a (too) long period of time.
Currently the GC decides where the array goes:
- into the small object heap, which is compactable
- into the large object heap, which is not compactable

Please give us a C# language feature where the programmer
decides where the arrays go, because only the programmer
knows how they will be used.


To sum up: it doesn't copy the array when pinning; it simply sets a bit in
the object header and stores the reference to the object in the "Object
Reference" table, which tells the GC that the object is pinned and should not
be moved. When the CLR unpins the object, it resets the header bit and "nulls"
the reference in the "Object Reference" table.
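
For illustration, a minimal snippet that exercises exactly that sequence
(plain GCHandle API, nothing version specific; the array length is arbitrary):

using System;
using System.Runtime.InteropServices;

class PinDemo
{
    static void Main()
    {
        double[] data = new double[512];

        // Pin: allocates a handle-table entry and marks the object as not moveable.
        GCHandle h = GCHandle.Alloc(data, GCHandleType.Pinned);
        try
        {
            // Address of the first element; valid for as long as the handle is allocated.
            IntPtr p = h.AddrOfPinnedObject();
            Console.WriteLine("Pinned at 0x{0:X}", p.ToInt64());
        }
        finally
        {
            // Unpin: clears the pinned state and frees the handle-table entry.
            h.Free();
        }
    }
}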

Willy.
 
Willy Denoyette [MVP]

Atmapuri said:
Hi!


The heap, as I understand it, is not compactable, which in this
case means the "large object heap". This is what I meant
by "heap".

Well, that is the LOH; the GC heap is the Gen 0, 1 and 2 heap.

Of course.

I want an option:

- for the arrays to have a fixed address (!!)
- and for them to still be collectable (!!)

Even if the arrays are not pinned, they will still not be collected as long
as the references to the array have not all been invalidated.

You don't want your objects to go away while someone is still holding
references to them, do you?
If the array has a fixed address and its only reference is passed
to unmanaged code, you can ensure that it will not be collected
by putting GC.KeepAlive(array) after the unmanaged code call.
Which is exactly what the pinning action does; what makes you think
this is cheaper?

By removing the need to pin the array for every call to unmanaged
code, you gain speed.

No, you don't, because somehow you must pin the object to prevent premature
collection. The fact that an object is not pinned does not mean it is not
referenced by the unmanaged code.

Consider the following sample:

int[] ia = new int[200000]; // large object, stored on the LOH at a fixed location (not moveable)
UnmanagedFunctionThatTakesAnArrayOfInts(ia);

If the interop layer did not pin the array, the GC would be free to
collect the object and as such invalidate the pointer you have passed to
unmanaged code. The reason for this is that the JIT doesn't know about
unmanaged code; as a result it *marks* the reference "ia" as a candidate for
collection (it signals the GC that the object "ia" refers to may be collected
at the moment of the call).
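
A sketch of that scenario (the DLL and function name are made up; the point
is that the P/Invoke layer itself keeps "ia" alive and pinned for the duration
of the call, so no explicit GCHandle is needed as long as the native side does
not keep the pointer after returning):

using System;
using System.Runtime.InteropServices;

class InteropSample
{
    // Hypothetical native function: void sum_ints(int* data, int count);
    [DllImport("native.dll", CallingConvention = CallingConvention.Cdecl)]
    static extern void sum_ints(int[] data, int count);

    static void Main()
    {
        int[] ia = new int[200000];   // > 85 KB, so it lives on the LOH

        // The P/Invoke marshaller keeps ia alive and pinned for the duration
        // of this call; the pointer the native code sees is the address of ia[0].
        sum_ints(ia, ia.Length);
    }
}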

Willy.
 
Willy Denoyette [MVP]

Atmapuri said:
Hi!
I'd very much like to know where you have this from.

Common knowledge? <g>
I tried the following code:

Object o = 10;
Debug.WriteLine(o);
GCHandle h = GCHandle.Alloc(o, GCHandleType.Pinned);
Debug.WriteLine(o);
Text = ((Int32)h.AddrOfPinnedObject()).ToString("X8");
h.Free();

// Just for security
GC.KeepAlive(o);
GC.KeepAlive(h);

I am using arrays larger than 80 bytes, up to 100 KB.
Pinning objects or arrays that are only 4 bytes large is handled vastly
differently by the GC. Time something like this:

for (int k = 0; k < GCIterCount; k++)
{
testArray = new double[testArrayLength];
testArray[2] = 2;
}

Such that GCIterCount*testArrayLength is a constant value.
When testArrayLength reaches 1024 elements (8 KB) you will see a big
jump in the cost of the allocation. All timings must be normalized by
(GCIterCount*testArrayLength). That will give the allocation cost
per element as a function of the array length.

Then add a second series to the chart:

for (int k = 0; k < GCIterCount; k++)
{
testArray = new double[testArrayLength];
GCHandle h = GCHandle.Alloc(testArray, GCHandleType.Pinned);
testArray[2] = 2;
h.Free();
}

and compare it with the first series. Beyond 1024 elements
there won't be any difference in the cost, but below an array
length of 1024 the difference will be large and fairly constant.

Thanks!
Atmapuri



It makes no sense to post incomplete code snips; if you want to illustrate
an issue, post the whole code, so please give us the complete sample.

Willy.
 
Atmapuri

Hi!
What makes you believe that a small object is moved to the LOH when it
gets pinned? Any proof or evidence?

It gets moved for sure. Here is the proof: a fixed pinning cost per element.

If I pin the array, the cost of pinning is linearly proportional to the
length of the array for double arrays shorter than 1024 elements.
Arrays are created anew when enlarged, and finally, when they exceed
85 KB, they get allocated on the LOH; but this has nothing to do with
pinning.

LOH or no LOH, they have to be moved somewhere. What other
algorithm would have a processing cost linearly proportional to the
length of the array?
What are you talking about? What is the pinning cost? How did you measure it?

1.) You allocate an array.
2.) Pin it down.
3.) Measure the time it takes to do #1 and #2 for arrays of doubles
with sizes from 100 to 10,000 elements.
4.) Normalize the timings by the number of array elements
allocated to get a time-per-element value.

Repeat all that but without the pin-down and compare the two series.
The LOH is meant to be used for large objects and is never compacted;
that makes it candidate number one for fragmentation issues. The
generation 0, 1 and 2 heaps get compacted and are quasi free from
fragmentation issues, unless you keep objects pinned for a (too) long
period of time.

Yes, true, but what you describe as so bad is the primary
way memory is allocated in unmanaged code, and it works great. It does
not work that badly at all; it depends on the allocation pattern. In the
case of pinning, if you copy once and then copy back, there is zero
fragmentation.

Fragmentation only occurs if the order in which objects are
allocated and deallocated is not the same.
To sum up: it doesn't copy the array when pinning; it simply sets a bit in
the object header and stores the reference to the object in the "Object
Reference" table, which tells the GC that the object is pinned and should not
be moved. When the CLR unpins the object, it resets the header bit and "nulls"
the reference in the "Object Reference" table.

That is true only for "small" objects. Not for reasonably sized arrays.

Thanks!
Atmapuri
 
Atmapuri

Hi!
No, you don't, because somehow you must pin the object to prevent premature
collection. The fact that an object is not pinned does not mean it is not
referenced by the unmanaged code.

Consider the following sample:

int[] ia = new int[200000]; // large object, stored on the LOH at a fixed location (not moveable)
UnmanagedFunctionThatTakesAnArrayOfInts(ia);

If the interop layer did not pin the array, the GC would be free to
collect the object and as such invalidate the pointer you have passed to
unmanaged code. The reason for this is that the JIT doesn't know about
unmanaged code; as a result it *marks* the reference "ia" as a candidate
for collection (it signals the GC that the object "ia" refers to may be
collected at the moment of the call).

That is not the case if you put

GC.KeepAlive(ia);

at the end of your code, which costs "nothing" (!!!)

Thanks!
Atmapuri
 
Ben Voigt [C++ MVP]

The array is automatically allocated on the heap once it exceeds
Arrays that exceed a certain size (85 KB currently) are moved to the
Large Object Heap, but they aren't pinned by this. The LOH is not
compacted, but that doesn't mean that the objects cannot get
collected.

All objects on the LOH are pinned at all times, by the nature of the heap
(non-compacting).

It would be useful to request that a particular buffer not be subject to
relocation by the GC. Probably the easiest way to do this would be to place
it in the LOH. The OLE task allocator or HGlobal allocator, both of which
are already exposed by the Marshal class in a typeless way, would be other
options. It could be as simple as adding a T[]
Marshal.AllocCoTaskMem<T>(int elementCount) overload.
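
Until something like that exists, the closest approximation is to allocate the
buffer from the unmanaged heap and copy explicitly; a sketch using the existing
Marshal methods (buffer size and element type are arbitrary):

using System;
using System.Runtime.InteropServices;

class UnmanagedBuffer
{
    static void Main()
    {
        int count = 1024;
        IntPtr buffer = Marshal.AllocHGlobal(count * sizeof(double));
        try
        {
            double[] managed = new double[count];
            managed[2] = 2.0;

            // Copy managed -> unmanaged before the native call ...
            Marshal.Copy(managed, 0, buffer, count);

            // ... call unmanaged code with 'buffer' here; the address never moves ...

            // ... and copy the results back afterwards.
            Marshal.Copy(buffer, managed, 0, count);
        }
        finally
        {
            Marshal.FreeHGlobal(buffer);
        }
    }
}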
 
Ben Voigt [C++ MVP]

The LOH is meant to be used for large objects and is never
compacted; that makes it candidate number one for fragmentation
issues. The generation 0, 1 and 2 heaps get compacted and are quasi
free from fragmentation issues, unless you keep objects pinned for a
(too) long period of time.

And for interop buffers with a long lifetime, the cost of pinning is very
high, yet the fragmentation impact wouldn't be a problem at all.

For example, I have an application which uses glVertexPointer. I need to
pass that buffer to unmanaged code every single frame for the life of my
program; there's no reason it shouldn't sit in the LOH and avoid the overhead
of pinning (both the direct cost and the extra fragmentation of the Gen0 heap:
because the buffer is pinned, it can never move to Gen2).
 
Lasse Vågsæther Karlsen

Atmapuri said:
Hi!


Common knowledge? <g>

Since we're several people here arguing that pinning does not, to our
knowledge, copy data, I would say that it doesn't sound to me like it's
common knowledge.
I tried the following code:

Object o = 10;
Debug.WriteLine(o);
GCHandle h = GCHandle.Alloc(o, GCHandleType.Pinned);
Debug.WriteLine(o);
Text = ((Int32)h.AddrOfPinnedObject()).ToString("X8");
h.Free();

// Just for security
GC.KeepAlive(o);
GC.KeepAlive(h);

I am using arrays larger than 80 bytes, up to 100 KB.
Pinning objects or arrays that are only 4 bytes large is handled vastly
differently by the GC. Time something like this:

for (int k = 0; k < GCIterCount; k++)
{
testArray = new double[testArrayLength];
testArray[2] = 2;
}

Such that GCIterCount*testArrayLength is a constant value.

I don't exactly understand what you're timing here. It looks to me as if
you're timing things up to the point where the GC kicks in. If you keep
testArrayLength constant, you allocate an array of that size "GCIterCount"
times.

Please write complete code so that you give us something I can just copy
and paste into a compiler and try.
When testArrayLength reaches 1024 elements (8 KB) you will see a big
jump in the cost of the allocation. All timings must be normalized by
(GCIterCount*testArrayLength). That will give the allocation cost
per element as a function of the array length.

I tried this:

for (Int32 index = 100; index < 1000000; index += 1000)
{
DateTime dt1 = DateTime.Now;
Int32[] values = new Int32[index];
DateTime dt2 = DateTime.Now;
for (Int32 j = 0; j < 1000000; j++)
{
GCHandle h = GCHandle.Alloc(values, GCHandleType.Pinned);
h.Free();
}
DateTime dt3 = DateTime.Now;
Console.WriteLine(String.Format("{0}: {1}", index, (dt3 - dt1).TotalSeconds));
}

This gave me fairly constant times for all the pinning involved,
regardless of the size of the array.

Please poke holes in the code and tell me what I'm doing wrong.
 
Lasse Vågsæther Karlsen

Lasse said:
Atmapuri said:
Hi!


Common knowledge? <g>

Since we're several people here arguing that pinning does not, to our
knowledge, copy data, I would say that it doesn't sound to me like it's
common knowledge.
I tried the following code:

Object o = 10;
Debug.WriteLine(o);
GCHandle h = GCHandle.Alloc(o, GCHandleType.Pinned);
Debug.WriteLine(o);
Text = ((Int32)h.AddrOfPinnedObject()).ToString("X8");
h.Free();

// Just for security
GC.KeepAlive(o);
GC.KeepAlive(h);

I am using arrays larger than 80 bytes, up to 100 KB.
Pinning objects or arrays that are only 4 bytes large is handled vastly
differently by the GC. Time something like this:

for (int k = 0; k < GCIterCount; k++)
{
testArray = new double[testArrayLength];
testArray[2] = 2;
}

Such that GCIterCount*testArrayLength is a constant value.

I don't exactly understand what you're timing here. It looks to me as if
you're timing things up to the point where the GC kicks in. If you keep
testArrayLength constant, you allocate an array of that size "GCIterCount"
times.

Please write complete code so that you give us something I can just copy
and paste into a compiler and try.
When testArrayLength reaches 1024 elements (8 KB) you will see a
big jump in the cost of the allocation. All timings must be normalized
by (GCIterCount*testArrayLength). That will give the allocation cost
per element as a function of the array length.

I tried this:

for (Int32 index = 100; index < 1000000; index += 1000)
{
DateTime dt1 = DateTime.Now;
Int32[] values = new Int32[index];
DateTime dt2 = DateTime.Now;
for (Int32 j = 0; j < 1000000; j++)
{
GCHandle h = GCHandle.Alloc(values, GCHandleType.Pinned);
h.Free();
}
DateTime dt3 = DateTime.Now;
Console.WriteLine(String.Format("{0}: {1}", index, (dt3 - dt1).TotalSeconds));
}

This gave me fairly constant times for all the pinning involved,
regardless of the size of the array.

Please poke holes in the code and tell me what I'm doing wrong.

I realize that the date calculation here should be dt3 - dt2, not - dt1,
however changing this does not change the end result.

Here's a snippet of my results (with dt3-dt2):

122100: 0,297616
123100: 0,31328
124100: 0,297616
125100: 0,297616
126100: 0,297616
127100: 0,281952
128100: 0,31328
129100: 0,297616
130100: 0,31328
131100: 0,297616
132100: 0,31328
133100: 0,297616
134100: 0,297616
135100: 0,31328
136100: 0,297616
137100: 0,297616
138100: 0,297616
139100: 0,328944
140100: 0,297616
141100: 0,297616
142100: 0,31328
143100: 0,31328
144100: 0,297616
145100: 0,31328

The values are around 0.3 regardless of how large or small the array is,
so clearly (to me), pinning seems to have a reasonable constant cost.
 
Atmapuri

Hi!
It makes no sense to post incomplete code snips; if you want to illustrate
an issue, post the whole code, so please give us the complete sample.

I used the free Lite version of TeeChart (www.steema.com) for charting.
Complete source below.

I would like to eradicate the difference between the green and the blue
line for arrays smaller than 1000 elements.

This would be possible when using (wishful thinking):

double[] a = new fixed double[Length];

where "fixed" means the address does not change.
It is there because the array will be passed to unmanaged
code multiple times.
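
What can be done today, short of such a language feature, is to pin once and
keep the handle for the lifetime of the buffer instead of re-pinning on every
call. A sketch (the wrapper class is made up; whether a long-lived pinned
handle is acceptable depends on how much fragmentation it causes):

using System;
using System.Runtime.InteropServices;

// Wraps an array that is pinned once and reused across many unmanaged calls.
sealed class PinnedBuffer : IDisposable
{
    private readonly double[] data;
    private GCHandle handle;

    public PinnedBuffer(int length)
    {
        data = new double[length];
        // Pay the pinning cost once, up front.
        handle = GCHandle.Alloc(data, GCHandleType.Pinned);
    }

    public double[] Data { get { return data; } }

    // Fixed address of the first element, valid until Dispose is called.
    public IntPtr Address { get { return handle.AddrOfPinnedObject(); } }

    public void Dispose()
    {
        // Once the handle is freed the array becomes moveable and collectable again.
        if (handle.IsAllocated) handle.Free();
    }
}

Every call into unmanaged code can then use Address directly; the pinned
handle itself keeps the array alive, so no per-call pinning or GC.KeepAlive
is needed.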

Thanks!
Atmapuri

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;
using System.Runtime.InteropServices;
namespace WindowsApplication6
{
public partial class Form1 : Form
{
private int xLength = 0;
private int resLength = 0;
private int GCIterCount = 5;
public Form1()
{
InitializeComponent();
}
private double aFunction(int Iterations)
{
double a = 1;
double[] testArray;
int counter = Environment.TickCount;
if (Iterations == 0) throw new Exception("Iterations == 0");
for (int i=0;i<Iterations;i++)
{
for (int k = 0; k < GCIterCount; k++)
{
testArray = new double[resLength];
testArray[2] = 1;
}
}
int result = Environment.TickCount - counter;
return result;
}
private double aFunction1(int Iterations) {
double a = 1;
double[] testArray;
GCHandle aH;
int counter = Environment.TickCount;
for (int i=0;i<Iterations;i++) {
for (int k = 0; k < GCIterCount; k++) //GC Loop
{
testArray = new double[resLength];
aH = GCHandle.Alloc(testArray,GCHandleType.Pinned);
testArray[1] = 2;
aH.Free();
}
}
return Environment.TickCount - counter;
}
private double FindMax(double[] a) {
double result = -1000;
foreach(double val in a) {
if (val > result) result = val;
}
return result;
}
private void button1_Click(object sender, System.EventArgs e) {
this.Cursor = Cursors.WaitCursor;
try
{
int InitialSize = 10;
resLength = InitialSize;
xLength = resLength;
int IterStep = 2;
int InitialIters = 1000000/(InitialSize/10);
int iters = InitialIters;
int Range = 18;
double[] a1 = new double[Range];
double[] a2 = new double[Range];
double[] a3 = new double[Range];
double[] a4 = new double[Range];
for (int i = 0; i < Range; i++)
{
GCIterCount = 0;
a1[i] = aFunction(iters);
label1.Text = "Progress : " + ((int)(((double)i + 0.2) / Range * 100)).ToString() + " %";
Refresh();
GCIterCount = System.Int32.Parse(EditBox.Text);
a2[i] = aFunction(iters);
label1.Text = "Progress : " + ((int)(((double)i + 0.4) / Range * 100)).ToString() + " %";
Refresh();
GCIterCount = 0;
a3[i] = aFunction1(iters);
label1.Text = "Progress : " + ((int)(((double)i + 0.6) / Range * 100)).ToString() + " %";
Refresh();
GCIterCount = System.Int32.Parse(EditBox.Text);
a4[i] = aFunction1(iters);
label1.Text = "Progress : " + ((int)(((double)i + 0.8) / Range * 100)).ToString() + " %";
Refresh();
iters /= IterStep;
xLength *= IterStep;
resLength = xLength;
}
label1.Text = "Progress : 100 %";
tChart1.Axes.Left.Automatic = false;
foreach (Steema.TeeChart.Styles.Series s in tChart1.Series) s.Clear();
for (int i = 0; i < a1.Length; i++)
{
if (a1[i] == 0) tChart1.Series[0].Add();
else tChart1.Series[0].Add(a1[i], (10 * Math.Pow(IterStep, i)).ToString());
}
for (int i = 0; i < a2.Length; i++)
{
if (a2[i] == 0) tChart1.Series[1].Add();
else tChart1.Series[1].Add(a2[i], (10 * Math.Pow(IterStep, i)).ToString());
}
for (int i = 0; i < a3.Length; i++)
{
if (a3[i] == 0) tChart1.Series[2].Add();
else tChart1.Series[2].Add(a3[i], (10 * Math.Pow(IterStep, i)).ToString());
}
for (int i = 0; i < a4.Length; i++)
{
if (a4[i] == 0) tChart1.Series[3].Add();
else tChart1.Series[3].Add(a4[i], (10 * Math.Pow(IterStep, i)).ToString());
}
tChart1.Axes.Left.SetMinMax(0,1.1*Math.Max(Math.Max(Math.Max(FindMax(a1),FindMax(a2)),FindMax(a3)),FindMax(a4)));
} finally {
this.Cursor = Cursors.Default;
}
}
private void EditBox_TextChanged(object sender, EventArgs e)
{
GCIterCount = System.Int32.Parse(EditBox.Text);
}
}
}
 
Jon Skeet [C# MVP]

Atmapuri said:
I thought Microsoft was willing to listen to the problems of its customers
and work together to improve both their products and customer
satisfaction.

a) I'm not Microsoft, or a representative of them in any way
b) I would hope that Microsoft would have enough sense to tell their
customers when they're using technologies in a way they weren't
designed for

You're taking a *managed* environment and saying "Actually, I want to
manage this myself." That doesn't strike me as a sensible scenario to
provide extra support for.
 
Jon Skeet [C# MVP]

Atmapuri said:
I used the free Lite version of TeeChart (www.steema.com) for charting.
Complete source below.

Did you try compiling it? It's not at all complete.

It would also be helpful to do it in a console app. We don't need
charts etc - raw numbers are fine. Console apps are generally
considerably shorter.
 
Jon Skeet [C# MVP]

Atmapuri said:
It gets moved for sure. Here is the proof: a fixed pinning cost per element.

You have an interesting concept of proof. You have surmised one
possible cause for the cost, but given no evidence for it.

If I don't turn up for work one day, is that "proof" that I've been hit
by a bus? That's one explanation, but not the only one.

If you're going to claim that the cost is due to copying, you should
produce some actual evidence that copying is involved. Do you have a
single piece of documentation to suggest that there's any copying going
on?
 
Willy Denoyette [MVP]

Ben Voigt said:
And for interop buffers with a long lifetime, the cost of pinning is very
high, yet the fragmentation impact wouldn't be a problem at all.

Why do you think the cost of pinning is higher when passing buffers with a
long lifetime? And how did you measure the cost of pinning? What GC mode
are you using?
For example, I have an application which uses glVertexPointer. I need to
interop that buffer every single frame for the life of my program, there's
no reason it shouldn't sit in the LOH and avoid the overhead of pinning
(both direct cost and extra fragmentation of Gen0 heap, because the buffer
is pinned it can never move to Gen2).
Are you talking about PInvoke interop here?


Willy.
 
Willy Denoyette [MVP]

Atmapuri said:
Hi!


It gets moved for sure. Here is the proof: a fixed pinning cost per element.

How does this prove that the object gets moved to the LOH or whatever?
Pass an array to unmanaged code and look at its address in the debugger;
you'll see that its value is the address of the first element of the array.

If I pin the array, the cost of pinning is linearly proportional to the
length of the array for double arrays shorter than 1024 elements.

Pinning comes with a cost, we know this, but why do you keep saying that
the object gets moved (or copied)?
LOH or no LOH, they have to be moved somewhere. What other
algorithm would have a processing cost linearly proportional to the
length of the array?

I don't know; that's why you need to measure the exact cost of the pinning,
but it is not because the object gets moved somewhere!
1.) You allocate an array.
2.) Pin it down.
3.) Measure the time it takes to do #1 and #2 for arrays of doubles
with sizes from 100 to 10,000 elements.
4.) Normalize the timings by the number of array elements
allocated to get a time-per-element value.

Repeat all that but without the pin-down and compare the two series.
What you are measuring here is the cost of allocating a pinned Object
Handle, which is quite expensive. However, this is not what happens when you
are passing an argument to unmanaged code using PInvoke: the arguments
passed are only pinned when the GC kicks in; the GC (and the JIT)
cooperates with the interop layer for this. So you only pay for this
when the GC runs, and the cost of pinning is low compared to the actual GC
cost.

Again, write a small program that calls a C function that takes an array,
and look at the native code when calling that function; you won't see any
"pinning" at all.
The array will only get pinned (by the CLR's interop layer) when you force a
GC run on another thread while you are actually executing your function. But
again, watch out for the context, the semantics of the call, the GC mode,
and the CLR and JIT version; the interop layer may do things other than
pinning. Like I said, the JIT64 may copy the array to an internal buffer
before passing it to unmanaged code.

Yes, true, but what you describe as so bad is the primary
way memory is allocated in unmanaged code, and it works great. It does
not work that badly at all; it depends on the allocation pattern. In the
case of pinning, if you copy once and then copy back, there is zero
fragmentation.

Fragmentation only occurs if the order in which objects are
allocated and deallocated is not the same.

Not exactly; fragmentation occurs when the GC leaves gaps because it
cannot compact regions of the GC heap (the gen 0, 1 and 2 heaps). The order
of deallocation is not determined by the order of allocation; some objects
may live longer than others, and besides, the order of deallocation is
non-deterministic.

That is true only for "small" objects. Not for reasonably sized arrays.

The PInvoke layer doesn't make any distinction between large objects and
non-large objects when pinning (this is what we are talking about, isn't
it?). "Pinning" must be done to:
1) prevent the object from moving when the GC comes along, and
2) prevent premature GC; whether the object is on the LOH (at a fixed address)
is a non-issue.
Besides, just because the LOH is not currently compacted doesn't mean MS
cannot decide to implement that in a later version, or they could decide to
change the threshold to a higher value, for instance.

Willy.
 
Willy Denoyette [MVP]

Ben Voigt said:
All objects on the LOH are pinned at all times, by the nature of the heap
(non-compacting).

This is not what pinned means in this context. The objects on the LOH (in all
versions of the CLR) are at a fixed address for their lifetime, but that
doesn't mean they are pinned. Pinning is an explicit action that allocates
an Object Handle, stores it in the GCHandles table and sets the "pinned"
bit (I don't know exactly where this bit is set, as this changes from one
release to another).
If you want to see pinning in action, you'll have to run your code in a
native debugger and load SOS.
For instance, consider the following snip:

...
double[] a2 = new double[20000];
a2[a2.Length - 1] = 0.7889;
// Break here and watch the Handles table, using !sos.gchandles
GCHandle h = GCHandle.Alloc(a2, GCHandleType.Pinned);
// Break again and watch the handles table again; you will see
// 1) the "Pinned Handles:" value has been incremented
// 2) an object reference has been added to the table, pointing to the actual array object
...
The same thing happens when "PInvoking", but then the pinning is only done
when the GC kicks in. Note that you need to pin explicitly when passing a
buffer (or callback) to unmanaged code when there is a chance that your buffer
gets accessed after the call has returned. Do you know exactly what gets done
in the unmanaged function? You'd better! The function can kick off another
thread, pass it the address of the buffer, and return. The object passed in is
then no longer protected when the GC comes along, and the thread that accesses
the object is subject to AV exceptions!
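
A sketch of that situation (the native functions are made up): the buffer has
to stay pinned from before the call until the native side is known to be done
with it, which is exactly what the interop layer cannot do for you.

using System;
using System.Runtime.InteropServices;

class AsyncNativeCall
{
    // Hypothetical native API that keeps using the buffer after the call returns.
    [DllImport("native.dll")] static extern void start_capture(IntPtr buffer, int bytes);
    [DllImport("native.dll")] static extern void stop_capture();

    static void Main()
    {
        byte[] buffer = new byte[4096];

        // Pin explicitly: the interop layer only protects the buffer
        // for the duration of the call itself.
        GCHandle h = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            start_capture(h.AddrOfPinnedObject(), buffer.Length);
            // ... native code writes into the buffer on its own thread ...
            stop_capture();
        }
        finally
        {
            h.Free();   // only unpin once the native side is done with the buffer
        }
    }
}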

It would be useful to request that a particular buffer not be subject to
relocation by the GC. Probably the easiest way to do this would be to
place it in the LOH. The OLE task allocator or HGlobal allocator, both of
which are already exposed by the Marshal class in a typeless way, would be
other options. It could be as simple as adding a T[]
Marshal.AllocCoTaskMem<T>(int elementCount) overload.

But now you are allocating from the unmanaged heap (COM heap or CRT heap or
whatever), so you will incur the cost of copying back and forth. Again, this
depends on the semantics, but it might be a solution when you need to pass
large chunks of data to unmanaged land.


Willy.
 
Atmapuri

Hi!
Did you try compiling it? It's not at all complete.

Darn partial classes <g>

It is complete, except for the TeeChart component required.
Form1.Designer.cs is below. Otherwise it would be
best if I just sent you a complete project, but I don't know where
to.
It would also be helpful to do it in a console app. We don't need
charts etc - raw numbers are fine. Console apps are generally
considerably shorter.

It is rather hard to observe trends in text. You can always
print out the arrays passed to the chart series and comment
out the chart references. This is only the last part of the


Thanks!
Atmapuri

namespace WindowsApplication6
{
partial class Form1
{
/// <summary>
/// Required designer variable.
/// </summary>
private System.ComponentModel.IContainer components = null;
/// <summary>
/// Clean up any resources being used.
/// </summary>
/// <param name="disposing">true if managed resources should be disposed;
otherwise, false.</param>
protected override void Dispose(bool disposing)
{
if (disposing && (components != null))
{
components.Dispose();
}
base.Dispose(disposing);
}
#region Windows Form Designer generated code
/// <summary>
/// Required method for Designer support - do not modify
/// the contents of this method with the code editor.
/// </summary>
private void InitializeComponent()
{
this.StartButton = new System.Windows.Forms.Button();
this.tChart1 = new Steema.TeeChart.TChart();
this.fastLine1 = new Steema.TeeChart.Styles.FastLine();
this.fastLine2 = new Steema.TeeChart.Styles.FastLine();
this.fastLine3 = new Steema.TeeChart.Styles.FastLine();
this.fastLine4 = new Steema.TeeChart.Styles.FastLine();
this.label1 = new System.Windows.Forms.Label();
this.EditBox = new System.Windows.Forms.TextBox();
this.label2 = new System.Windows.Forms.Label();
this.SuspendLayout();
//
// StartButton
//
this.StartButton.Location = new System.Drawing.Point(476, 364);
this.StartButton.Name = "StartButton";
this.StartButton.Size = new System.Drawing.Size(75, 23);
this.StartButton.TabIndex = 0;
this.StartButton.Text = "Start";
this.StartButton.UseVisualStyleBackColor = true;
this.StartButton.Click += new System.EventHandler(this.button1_Click);
//
// tChart1
//
//
//
//
this.tChart1.Aspect.ElevationFloat = 345;
this.tChart1.Aspect.RotationFloat = 345;
this.tChart1.Aspect.View3D = false;
//
//
//
//
//
//
this.tChart1.Axes.Bottom.Automatic = true;
//
//
//
this.tChart1.Axes.Bottom.Grid.Style =
System.Drawing.Drawing2D.DashStyle.Dash;
this.tChart1.Axes.Bottom.Grid.ZPosition = 0;
//
//
//
this.tChart1.Axes.Depth.Automatic = true;
//
//
//
this.tChart1.Axes.Depth.Grid.Style =
System.Drawing.Drawing2D.DashStyle.Dash;
this.tChart1.Axes.Depth.Grid.ZPosition = 0;
//
//
//
this.tChart1.Axes.DepthTop.Automatic = true;
//
//
//
this.tChart1.Axes.DepthTop.Grid.Style =
System.Drawing.Drawing2D.DashStyle.Dash;
this.tChart1.Axes.DepthTop.Grid.ZPosition = 0;
//
//
//
this.tChart1.Axes.Left.Automatic = true;
//
//
//
this.tChart1.Axes.Left.Grid.Style = System.Drawing.Drawing2D.DashStyle.Dash;
this.tChart1.Axes.Left.Grid.ZPosition = 0;
//
//
//
this.tChart1.Axes.Right.Automatic = true;
//
//
//
this.tChart1.Axes.Right.Grid.Style =
System.Drawing.Drawing2D.DashStyle.Dash;
this.tChart1.Axes.Right.Grid.ZPosition = 0;
//
//
//
this.tChart1.Axes.Top.Automatic = true;
//
//
//
this.tChart1.Axes.Top.Grid.Style = System.Drawing.Drawing2D.DashStyle.Dash;
this.tChart1.Axes.Top.Grid.ZPosition = 0;
//
//
//
this.tChart1.Header.Lines = new string[] {
"TeeChart"};
//
//
//
//
//
//
this.tChart1.Legend.Shadow.Visible = true;
//
//
//
//
//
//
this.tChart1.Legend.Title.Font.Bold = true;
//
//
//
this.tChart1.Legend.Title.Pen.Visible = false;
this.tChart1.Location = new System.Drawing.Point(17, 27);
this.tChart1.Name = "tChart1";
this.tChart1.Series.Add(this.fastLine1);
this.tChart1.Series.Add(this.fastLine2);
this.tChart1.Series.Add(this.fastLine3);
this.tChart1.Series.Add(this.fastLine4);
this.tChart1.Size = new System.Drawing.Size(533, 324);
this.tChart1.TabIndex = 1;
//
//
//
//
//
//
this.tChart1.Walls.Back.AutoHide = false;
//
//
//
this.tChart1.Walls.Bottom.AutoHide = false;
//
//
//
this.tChart1.Walls.Left.AutoHide = false;
//
//
//
this.tChart1.Walls.Right.AutoHide = false;
//
// fastLine1
//
//
//
//
this.fastLine1.LinePen.Color = System.Drawing.Color.Red;
//
//
//
//
//
//
this.fastLine1.Marks.Callout.ArrowHead =
Steema.TeeChart.Styles.ArrowHeadStyles.None;
this.fastLine1.Marks.Callout.ArrowHeadSize = 8;
//
//
//
this.fastLine1.Marks.Callout.Brush.Color = System.Drawing.Color.Black;
this.fastLine1.Marks.Callout.Distance = 0;
this.fastLine1.Marks.Callout.Draw3D = false;
this.fastLine1.Marks.Callout.Length = 10;
this.fastLine1.Marks.Callout.Style =
Steema.TeeChart.Styles.PointerStyles.Rectangle;
//
//
//
this.fastLine1.Marks.Shadow.Visible = true;
//
//
//
//
//
//
this.fastLine1.Marks.Symbol.Shadow.Visible = true;
this.fastLine1.Title = "Empty";
//
//
//
this.fastLine1.XValues.DataMember = "X";
this.fastLine1.XValues.Order =
Steema.TeeChart.Styles.ValueListOrder.Ascending;
//
//
//
this.fastLine1.YValues.DataMember = "Y";
//
// fastLine2
//
//
//
//
this.fastLine2.LinePen.Color = System.Drawing.Color.Green;
//
//
//
//
//
//
this.fastLine2.Marks.Callout.ArrowHead =
Steema.TeeChart.Styles.ArrowHeadStyles.None;
this.fastLine2.Marks.Callout.ArrowHeadSize = 8;
//
//
//
this.fastLine2.Marks.Callout.Brush.Color = System.Drawing.Color.Black;
this.fastLine2.Marks.Callout.Distance = 0;
this.fastLine2.Marks.Callout.Draw3D = false;
this.fastLine2.Marks.Callout.Length = 10;
this.fastLine2.Marks.Callout.Style =
Steema.TeeChart.Styles.PointerStyles.Rectangle;
//
//
//
this.fastLine2.Marks.Shadow.Visible = true;
//
//
//
//
//
//
this.fastLine2.Marks.Symbol.Shadow.Visible = true;
this.fastLine2.Title = "GC Only";
//
//
//
this.fastLine2.XValues.DataMember = "X";
this.fastLine2.XValues.Order =
Steema.TeeChart.Styles.ValueListOrder.Ascending;
//
//
//
this.fastLine2.YValues.DataMember = "Y";
//
// fastLine3
//
//
//
//
this.fastLine3.LinePen.Color = System.Drawing.Color.Yellow;
//
//
//
//
//
//
this.fastLine3.Marks.Callout.ArrowHead =
Steema.TeeChart.Styles.ArrowHeadStyles.None;
this.fastLine3.Marks.Callout.ArrowHeadSize = 8;
//
//
//
this.fastLine3.Marks.Callout.Brush.Color = System.Drawing.Color.Black;
this.fastLine3.Marks.Callout.Distance = 0;
this.fastLine3.Marks.Callout.Draw3D = false;
this.fastLine3.Marks.Callout.Length = 10;
this.fastLine3.Marks.Callout.Style =
Steema.TeeChart.Styles.PointerStyles.Rectangle;
//
//
//
this.fastLine3.Marks.Shadow.Visible = true;
//
//
//
//
//
//
this.fastLine3.Marks.Symbol.Shadow.Visible = true;
this.fastLine3.Title = "Empty";
//
//
//
this.fastLine3.XValues.DataMember = "X";
this.fastLine3.XValues.Order =
Steema.TeeChart.Styles.ValueListOrder.Ascending;
//
//
//
this.fastLine3.YValues.DataMember = "Y";
//
// fastLine4
//
//
//
//
this.fastLine4.LinePen.Color = System.Drawing.Color.Blue;
//
//
//
//
//
//
this.fastLine4.Marks.Callout.ArrowHead =
Steema.TeeChart.Styles.ArrowHeadStyles.None;
this.fastLine4.Marks.Callout.ArrowHeadSize = 8;
//
//
//
this.fastLine4.Marks.Callout.Brush.Color = System.Drawing.Color.Black;
this.fastLine4.Marks.Callout.Distance = 0;
this.fastLine4.Marks.Callout.Draw3D = false;
this.fastLine4.Marks.Callout.Length = 10;
this.fastLine4.Marks.Callout.Style =
Steema.TeeChart.Styles.PointerStyles.Rectangle;
//
//
//
this.fastLine4.Marks.Shadow.Visible = true;
//
//
//
//
//
//
this.fastLine4.Marks.Symbol.Shadow.Visible = true;
this.fastLine4.Title = "GC with pinning";
//
//
//
this.fastLine4.XValues.DataMember = "X";
this.fastLine4.XValues.Order =
Steema.TeeChart.Styles.ValueListOrder.Ascending;
//
//
//
this.fastLine4.YValues.DataMember = "Y";
//
// label1
//
this.label1.AutoSize = true;
this.label1.Location = new System.Drawing.Point(20, 374);
this.label1.Name = "label1";
this.label1.Size = new System.Drawing.Size(48, 13);
this.label1.TabIndex = 2;
this.label1.Text = "Progress";
//
// EditBox
//
this.EditBox.Location = new System.Drawing.Point(252, 370);
this.EditBox.Name = "EditBox";
this.EditBox.Size = new System.Drawing.Size(100, 20);
this.EditBox.TabIndex = 3;
this.EditBox.Text = "5";
this.EditBox.TextChanged += new
System.EventHandler(this.EditBox_TextChanged);
//
// label2
//
this.label2.AutoSize = true;
this.label2.Location = new System.Drawing.Point(198, 374);
this.label2.Name = "label2";
this.label2.Size = new System.Drawing.Size(45, 13);
this.label2.TabIndex = 4;
this.label2.Text = "GC Iters";
//
// Form1
//
this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F);
this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
this.ClientSize = new System.Drawing.Size(570, 402);
this.Controls.Add(this.label2);
this.Controls.Add(this.EditBox);
this.Controls.Add(this.label1);
this.Controls.Add(this.tChart1);
this.Controls.Add(this.StartButton);
this.Name = "Form1";
this.Text = "Form1";
this.ResumeLayout(false);
this.PerformLayout();
}
#endregion
private System.Windows.Forms.Button StartButton;
private Steema.TeeChart.TChart tChart1;
private Steema.TeeChart.Styles.FastLine fastLine1;
private Steema.TeeChart.Styles.FastLine fastLine2;
private Steema.TeeChart.Styles.FastLine fastLine3;
private Steema.TeeChart.Styles.FastLine fastLine4;
private System.Windows.Forms.Label label1;
private System.Windows.Forms.TextBox EditBox;
private System.Windows.Forms.Label label2;
}
}
 
Jon Skeet [C# MVP]

Atmapuri said:
Darn partial classes <g>

It is complete, except for the TeeChart component required.
Form1.Designer.cs is below. Otherwise it would be
best if I just sent you a complete project, but I don't know where
to.

Well, my email address is in every post.
It is rather hard to observe trends in text. You can always
print out the arrays passed to the chart series and comment
out the chart references. This is only the last part of the

Well, I certainly find it easy to observe trends so long as we're only
talking about 10 data points - which should easily be enough to
demonstrate your point.

Lasse produced a set of figures in one of his posts, and the trend
there was very obvious. Furthermore, the code he produced was nice and
short (not quite complete, but very close).

There really is no need for graphs here - but there *is* a need for a
short but complete program to examine.
 
