Ben Voigt [C++ MVP]
> Interop does all you need to pin the buffer when the GC kicks in;
> this way, the buffer is protected for the duration of the call. But
> that doesn't mean it is pinned the whole time, it doesn't have to be,
> it only needs to get pinned when the GC runs!
The "duration of the call" isn't sufficient. The data has to stay at the
same address from glVertexPointer, when OpenGL stores the address, until
glDrawElements, when it reads from the arrays (and I have glColorPointer in
between, and there are about six other arrays that could be configured in an
exponential number of combinations, so "OpenGL should have combined all
access to the array into one single call" just doesn't cut it). So I *need*
a pinning pointer if I'm to use an array allocated using .NET. Furthermore,
since the Gen0/LOH distinction is an implementation detail, simply combining
all my buffers into a single large object would be asking for severe
breakage in a future version of .NET, not to mention failing code reviews
left and right.
> This won't change a bit; interop between managed and unmanaged is the
> same whether you use C++/CLI or another managed language. All you
> have is somewhat greater control (and responsibility) when using
> C++/CLI, but whenever you need to pass managed "buffers" to unmanaged
> code you need to watch for the GC.
In C++/CLI, I can request an immobile buffer by using "new" instead of
"gcnew".
What's under discussion is having the same ability with other .NET
languages.
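For illustration, a C++/CLI sketch of the distinction (not from the original post; this requires MSVC with /clr and is not standard ISO C++). `new` yields a native-heap allocation that never moves, while a `gcnew` array lives on the managed heap and is only address-stable while pinned:

```cpp
// C++/CLI sketch -- compile with MSVC /clr.
// Native allocation: the buffer never moves, so its address can be
// handed to OpenGL for as long as the buffer lives.
float* native_buf = new float[1024];

// Managed allocation: the compacting GC may relocate this array at any
// time, so its address is only stable while a pin_ptr (or a pinned
// GCHandle) holds it.
array<float>^ managed_buf = gcnew array<float>(1024);
{
    pin_ptr<float> p = &managed_buf[0];    // pinned for this scope only
    // glVertexPointer(3, GL_FLOAT, 0, p); // address valid while p lives
}   // pin released here; the array may move again

delete[] native_buf;
```

Other .NET languages can pin with `GCHandleType.Pinned`, but have no equivalent of the native `new` allocation above, which is the ability under discussion.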
> Horrible fragmentation? Ever looked at the native heap fragmentation
> when using OVERLAPPED in unmanaged code?
Zilch. You allocate your buffers once and use them, reuse them, reuse them
again. At least that's what I do.
But if the runtime pinning (or fixing, or pointer tabling) is so efficient,
why does AllocateNativeOverlapped exist? Why not use the standard mechanism?
The very existence of that function as an internal CLR implementation is
proof of the OP's requirement for a new feature (though it doesn't prove
whether things are being copied or not; actually, copying might be better
than allowing Gen0 to fragment).