Async Sockets, Buffer Pinning

  • Thread starter: Greg Young

Greg Young
Ok, so I think everyone can agree that creating buffers on the fly in
an async socket server is bad ... there is a lot of literature
available on the problems this causes with the heap (each pending async
call pins its buffer, and pinned buffers scattered through the
generations block compaction and fragment the heap). I am looking at a
few options to get around this.

1) Have a BufferPool class that hands out ArraySegment<byte> portions
of a larger array (large enough that it lands in the Large Object
Heap, which is never compacted, so pinning there costs nothing). If
all of the array is in use, create another big segment.

2) Create a bunch of smaller arrays for the BufferPool class to hand
out and take back.

In both #1 and #2 I would probably have each connection keep its
buffer for the duration of the connection. I would internally hold a
list of the free blocks. When a connection was done with its buffer it
would have to release it back to the pool. My thought is that #2 might
be better for cases where I want to shrink the number of allocated
buffers back down from a previous maximum.

In general I lean towards #1 ... but figured I would check if I might
be missing something.
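Option #1 might look something like the following minimal sketch (the class and
method names are mine, not from any library; locking and pool growth are left
out to keep it short):

```csharp
using System;
using System.Collections.Generic;

// Option #1: one large backing array (big enough to land in the LOH,
// i.e. >= ~85,000 bytes), carved into fixed-size ArraySegment<byte>
// blocks that are handed out to connections and returned when done.
public class BufferPool
{
    private readonly byte[] _buffer;
    private readonly int _blockSize;
    private readonly Stack<int> _freeOffsets = new Stack<int>();

    public BufferPool(int blockSize, int blockCount)
    {
        _blockSize = blockSize;
        // A single large allocation: one object for the GC to track,
        // and pinning pieces of it never fragments the small-object heap.
        _buffer = new byte[blockSize * blockCount];
        for (int i = blockCount - 1; i >= 0; i--)
            _freeOffsets.Push(i * blockSize);
    }

    public ArraySegment<byte> CheckOut()
    {
        if (_freeOffsets.Count == 0)
            throw new InvalidOperationException(
                "Pool exhausted; a real pool would allocate another large array here.");
        return new ArraySegment<byte>(_buffer, _freeOffsets.Pop(), _blockSize);
    }

    public void CheckIn(ArraySegment<byte> segment)
    {
        if (segment.Array != _buffer)
            throw new ArgumentException("Segment does not belong to this pool.");
        _freeOffsets.Push(segment.Offset);
    }
}
```

A connection would call CheckOut once when accepted, reuse the segment for
every receive, and CheckIn when it closes — matching the "buffer for the
duration of the connection" idea above.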

Thanks in advance,

Greg Young
 
As I understand it, we have two issues with async buffers: the .NET
buffers and the system buffers used by the hardware for the I/O. IIRC,
they did some work in .NET 2.0 to make the CLR side of things much
better - are you still running into an issue here? However, we still
have the issue of the hardware using system read/write buffers and
using up non-paged memory, which limits the number of working sockets
and also means a P/Invoke buffer copy, I think (is this right?). It
would be cool if they could somehow use the actual .NET buffer and
write directly into that from the hardware. Or are they doing that
already?

--
William Stacey [MVP]

 
That makes sense. Thanks. So the hardware is using buffers out of the
non-paged pool regardless. When we post a buffer (from the heap), is
there still a copy from the hardware buffer to the pinned buffer?

--
William Stacey [MVP]

| Hello, William!
|
| WSM> As I understand it, we have two issues with async buffers: the .NET
| WSM> buffers and the system buffers used by the hardware for the I/O.
| WSM> IIRC, they did some work in .NET 2.0 to make the CLR side of things
| WSM> much better - are you still running into an issue here? However,
| WSM> we still have the issue of the hardware using system read/write
| WSM> buffers and using up non-paged memory, which limits the number of
| WSM> working sockets and also means a P/Invoke buffer copy, I think (is
| WSM> this right?). It would be cool if they could somehow use the actual
| WSM> .NET buffer and write directly into that from the hardware. Or are
| WSM> they doing that already?
|
| AFAIK hardware is using its own buffers for network I/O.
|
| In unmanaged sockets we specify a user buffer in the socket call (e.g.
| send or WSASend). Then, when the winsock call transitions to kernel
| mode, data is copied from that user buffer to an internal driver
| buffer.
|
| The same is true for recv (WSARecv): when data is received, the driver
| copies it to the user-supplied buffer.
|
| In the managed world, using Reflector we can see that Socket.Send has:
|
| fixed (byte* numRef1 = buffer)
| {
|     num1 = UnsafeNclNativeMethods.OSSOCK.send(
|         this.m_Handle.DangerousGetHandle(), numRef1 + offset, size,
|         socketFlags);
| }
|
| That is, the unmanaged winsock function obtains a direct pointer to our
| managed buffer, which is pinned until the call ends. The same is valid
| for async calls; however, pinning occurs in a more complex way.
|
| --
| Regards, Vadym Stetsyak
| www: http://vadmyst.blogspot.com
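For the async case the runtime cannot use fixed (the operation outlives the
stack frame), so the buffer is pinned with a handle instead. Here is a minimal
illustration of that kind of pinning; GCHandle stands in for the Overlapped
machinery the framework actually uses, and the Marshal.WriteByte call stands in
for the native side writing into the buffer:

```csharp
using System;
using System.Runtime.InteropServices;

class PinningDemo
{
    static void Main()
    {
        byte[] buffer = new byte[4096];

        // Pin the managed array so the GC cannot move it while native
        // code holds a raw pointer to it.
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            IntPtr p = handle.AddrOfPinnedObject();

            // Stand-in for the driver/winsock layer writing received
            // bytes straight into our pinned managed buffer.
            Marshal.WriteByte(p, 0, 0x42);

            Console.WriteLine(buffer[0]); // 66
        }
        finally
        {
            // Unpin as soon as possible; long-lived pins in the normal
            // generations are exactly what fragments the heap.
            handle.Free();
        }
    }
}
```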
 
Hello, William!
You wrote on Fri, 4 Aug 2006 12:53:44 -0400:

WSM> That makes sense. Thanks. So the hardware is using buffers
WSM> regardless out of non-paged pool. When we post a buffer (from the
WSM> heap), there is still a copy from hw buffer to pinned buffer?

When data is received, the underlying driver copies it to the
user-supplied buffer; in the case of .NET that will be our pinned
buffer.

There are drivers out there that can use user mode buffers, but
generally drivers do not trust user allocated buffers :8-)
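To see that end to end, here is a small loopback sketch: a byte array (standing
in for a pool-provided segment) is handed to BeginRead, and the read completes
once the stack has copied the incoming bytes into it. The loopback setup is my
own scaffolding; only the Begin/EndRead pattern comes from the framework:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class ReceiveDemo
{
    static void Main()
    {
        // Loopback server on an OS-assigned port.
        TcpListener listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        TcpClient client = new TcpClient();
        client.Connect(IPAddress.Loopback, port);
        TcpClient server = listener.AcceptTcpClient();

        byte[] pooled = new byte[4096]; // pretend this came from the pool
        byte[] msg = Encoding.ASCII.GetBytes("hello");
        client.GetStream().Write(msg, 0, msg.Length);

        // The async read pins 'pooled' for its duration; the stack copies
        // the received bytes into it before the operation completes.
        NetworkStream stream = server.GetStream();
        IAsyncResult ar = stream.BeginRead(pooled, 0, pooled.Length, null, null);
        int n = stream.EndRead(ar);

        Console.WriteLine(Encoding.ASCII.GetString(pooled, 0, n)); // hello

        server.Close();
        client.Close();
        listener.Stop();
    }
}
```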
 
Basically you are correct: you have "user mode" buffers and "kernel mode"
buffers, and somehow you need to copy the data across the user-kernel
boundary (the exceptions are SAN network stacks, which use "Winsock
Direct"). However, the layered network stack (kernel) is much more
complex than that: each component in the stack can have its own buffers
allocated to store its data, or some components can use shared cached
memory buffers. Anyway, the kernel mode buffers are not shared with the
user mode (Winsock) buffers, so at least one copy is needed to pass data
across the boundary.

Willy.

 