Asynchronous server implementation using either byte array or memory stream.


Roger Down

Hi all... :)

I'm in the process of coding up some "high-speed" asynchronous server. The
server must handle many simultaneous connections. The amount of data to be
transported on each connection is not very big, typically 1000 bytes, but
the server must process the data using the fastest techniques available.


I've had a look at the "Asynchronous Server Socket Example" located here:
http://msdn2.microsoft.com/en-us/library/fx6588te(VS.80).aspx


1) Is this a good starting point for implementing a good async TCP
server? Is there anything you would have changed in the connection-handling parts?


Since the data to be transported is a byte array of varying length, I have
changed the StateObject to something like this:

------------------------------------
public class StateObject
{
// Client socket.
public Socket workSocket = null;

// Size of receive buffer.
public const int BufferSize = 1024;

// Receive buffer.
public byte[] buffer = new byte[BufferSize];

// Total buffer.
public MemoryStream ms = new MemoryStream();
}
------------------------------------

So in the receive part of the code, I plan to write the bytes read from the
buffer to the memory stream. Something like this:

------------------------------------
state.ms.Write(state.buffer, 0, bytesRead);
state.workSocket.BeginReceive(state.buffer, 0, state.buffer.Length, SocketFlags.None,
    receiveCallback, state);
------------------------------------
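
To make that concrete, here is a rough sketch of how the whole receive
callback might look with the MemoryStream in place. Detecting when a message
is complete depends on the protocol; in this sketch I've simply assumed the
peer closes the connection when it is done sending, which may not match the
real protocol:

------------------------------------
private void receiveCallback(IAsyncResult ar)
{
    StateObject state = (StateObject)ar.AsyncState;
    Socket socket = state.workSocket;

    int bytesRead = socket.EndReceive(ar);
    if (bytesRead > 0)
    {
        // Append this chunk to the accumulated stream and keep reading;
        // TCP gives no guarantee that one receive delivers one whole message.
        state.ms.Write(state.buffer, 0, bytesRead);
        socket.BeginReceive(state.buffer, 0, state.buffer.Length,
            SocketFlags.None, receiveCallback, state);
    }
    else
    {
        // Peer closed the connection; the stream now holds everything sent.
        byte[] data = state.ms.ToArray();
        // ... process data ...
        socket.Shutdown(SocketShutdown.Both);
        socket.Close();
    }
}
------------------------------------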


2) Is this a good approach for dynamically adding buffer data?


The total number of bytes to be received will be made available in the first
part of the byte array, so another possibility would be to allocate a second
buffer. This would make the state object look like this:

------------------------------------
* * *

// Receive buffer.
public byte[] buffer = new byte[BufferSize];

// Total buffer. Unknown upon class creation.
public byte[] totalbuffer;
}
------------------------------------

So at some point during the first receive, I allocate the total buffer to
the correct size, and use Buffer.BlockCopy to move data between those two
arrays.
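
A rough sketch of what I have in mind for that first-receive logic, assuming
a 4-byte length prefix at the start of the stream, a single message per
connection, an extra totalReceived counter in the state object, and
(perhaps optimistically) that the first receive delivers at least those
4 prefix bytes:

------------------------------------
// inside receiveCallback(IAsyncResult ar):
int bytesRead = state.workSocket.EndReceive(ar);

if (state.totalbuffer == null)
{
    // First receive: the total message length is in the first 4 bytes.
    int totalLength = BitConverter.ToInt32(state.buffer, 0);
    state.totalbuffer = new byte[totalLength];

    // Copy everything after the length prefix into the total buffer.
    Buffer.BlockCopy(state.buffer, 4, state.totalbuffer, 0, bytesRead - 4);
    state.totalReceived = bytesRead - 4;
}
else
{
    // Later receives: append at the current offset.
    Buffer.BlockCopy(state.buffer, 0, state.totalbuffer,
        state.totalReceived, bytesRead);
    state.totalReceived += bytesRead;
}

if (state.totalReceived < state.totalbuffer.Length)
{
    // Still missing data for this message; keep receiving.
    state.workSocket.BeginReceive(state.buffer, 0, state.buffer.Length,
        SocketFlags.None, receiveCallback, state);
}
else
{
    // state.totalbuffer now holds the complete message.
}
------------------------------------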


3) Could this be an alternative?


At some point the received data is going to be converted to a byte array
anyway, so perhaps the last suggestion is the best?
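
For reference, with the MemoryStream approach that final conversion is a
single call; the real trade-off is whether the extra copy it makes matters
at roughly 1000 bytes per message:

------------------------------------
// ToArray() returns a new array sized exactly to the bytes written;
// GetBuffer() returns the underlying buffer without copying, but it is
// usually larger than the actual data, so you must track the length yourself.
byte[] data = state.ms.ToArray();
------------------------------------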


Best of regards... :)
 

Carl Daniel [VC++ MVP]

Roger said:
Hi all... :)

I'm in the process of coding up some "high-speed" asynchronous
server. The server must handle many simultaneous connections. The
amount of data to be transported on each connection is not very big,
typically 1000 bytes, but the server must process the data using the
fastest techniques available.

I've had a look at the "Asynchronous Server Socket Example" located
here: http://msdn2.microsoft.com/en-us/library/fx6588te(VS.80).aspx

I'd recommend taking a look at this blog posting by Chris Mullins (C# MVP):

http://www.coversant.com/Coversant/Blogs/tabid/88/EntryID/10/Default.aspx

-cd
 

Chris Mullins [MVP]

[Building a big socket app]

I see Carl already pointed you to one of my blog entries that goes into a lot
of detail on this topic. I would recommend reading a few of the other posts
I've written, as most of them cover exactly this topic.

You might also want to look through JD Conley's blog, as he covers
this too. Specifically, the entries to look at are:

Heap Fragmentation and how to prevent it:
http://www.coversant.com/Coversant/Blogs/tabid/88/EntryID/9/Default.aspx
http://www.coversant.com/Coversant/Blogs/tabid/88/EntryID/36/Default.aspx

Debugging these beasts:
http://www.coversant.com/Coversant/Blogs/tabid/88/EntryID/28/Default.aspx

Don't use the threadpool:
http://www.coversant.com/Coversant/Blogs/tabid/88/EntryID/8/Default.aspx

Deploying your app on x64, x86 and IA64:
http://www.coversant.com/Coversant/Blogs/tabid/88/EntryID/16/Default.aspx
 

Peter Duniho

I've had a look at the "Asynchronous Server Socket Example" located here:
http://msdn2.microsoft.com/en-us/library/fx6588te(VS.80).aspx

1) Is this a good starting point for implementing a good async
TCP server? Is there anything you would have changed in the connection-handling parts?

It seems like a fine starting point to me. At least, I didn't see
anything obviously wrong in the sample. :)
[...]
So in the receive part of the code, I plan to write the bytes read from
the buffer to the memory stream. Something like this:

------------------------------------
state.ms.Write(state.buffer, 0, bytesRead);
state.workSocket.BeginReceive(state.buffer, 0, state.buffer.Length, SocketFlags.None,
    receiveCallback, state);
------------------------------------

Do you really mean that in your code, the call to the Write() method will
be followed immediately by the call to the Read() method?

I'm just speculating, but usually this is a symptom of poor design. That
is, there's not a one-to-one correlation between writes and reads with
TCP, and so code that puts those together generally is making an
assumption it shouldn't be making (usually that assumption is that you'll
get all of the data you're expecting in a single read, or at least in
reads that match the writes from the other end...this is an incorrect
assumption).

Without seeing the whole design, I can't say for sure, but you should at
least consider the possibility, if that's really how your code is
structured. If you've really based your code on the sample you mentioned,
this shouldn't be an issue, but if you've changed the sample somehow in
the above way, that would be an issue.
2) Is this a good approach for dynamically adding buffer data?

The total number of bytes to be received will be made available in the
first part of the byte array, so another possibility would be to
allocate a second buffer. This would make the state object look like
this:

------------------------------------
* * *

// Receive buffer.
public byte[] buffer = new byte[BufferSize];

// Total buffer. Unknown upon class creation.
public byte[] totalbuffer;
}
------------------------------------

So at some point during the first receive, I allocate the total buffer to
the correct size, and use Buffer.BlockCopy to move data between those
two arrays.

That's fine, but then you also need logic to reset the output buffer once
it's been processed, *and* you need logic to deal with the possibility
that in the same read in which you complete one processing unit (that
is, a single group of data with which you can actually do some work), you
may also receive the beginning of the next processing unit.

In fact, depending on how large a processing unit is and how large your
receive buffer is, you could find yourself having to deal with some
relatively large number of processing units in a single read. The larger
the receive buffer is, and the smaller a processing unit is, the more
likely this would be. For example, if you are sending groups of data that
are only on the order of 100 bytes or so, but your receive buffer is 1K,
you could conceivably find yourself holding 10 processing units in a
single read.

There are a variety of ways to deal with this. For low-bandwidth
applications where efficiency isn't important, you might find it useful to
only read as much data at a time as you can deal with for a single
processing unit. So even if your buffer is 1K, only ask for as much as
you know will be in the next processing unit. Initially, this would only
be whatever you have to read to learn the size of the processing unit, and
after that it would be the remaining bytes of the current processing unit.

This is a terrible design for efficiency, but it's relatively simple to
implement and allows you to deal with just one or two buffers (depending
on how you implement it).
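
A sketch of that simple-but-slow variant, assuming a 4-byte length prefix
(the totalbuffer/totalReceived fields and the ProcessUnit method are
borrowed or invented names, and the header callback glosses over the prefix
itself arriving in pieces):

------------------------------------
// Stage 1: ask only for the 4-byte length prefix of the next unit.
private void BeginReadHeader(StateObject state)
{
    state.workSocket.BeginReceive(state.buffer, 0, 4,
        SocketFlags.None, HeaderCallback, state);
}

private void HeaderCallback(IAsyncResult ar)
{
    StateObject state = (StateObject)ar.AsyncState;
    int bytesRead = state.workSocket.EndReceive(ar);
    // A real implementation must loop here until all 4 header bytes are in.

    int expected = BitConverter.ToInt32(state.buffer, 0);
    state.totalbuffer = new byte[expected];
    state.totalReceived = 0;

    // Stage 2: ask for exactly the bytes belonging to this unit, no more.
    state.workSocket.BeginReceive(state.totalbuffer, 0, expected,
        SocketFlags.None, BodyCallback, state);
}

private void BodyCallback(IAsyncResult ar)
{
    StateObject state = (StateObject)ar.AsyncState;
    state.totalReceived += state.workSocket.EndReceive(ar);

    if (state.totalReceived < state.totalbuffer.Length)
    {
        // Still short: ask only for what is left of the current unit.
        state.workSocket.BeginReceive(state.totalbuffer, state.totalReceived,
            state.totalbuffer.Length - state.totalReceived,
            SocketFlags.None, BodyCallback, state);
    }
    else
    {
        // One complete processing unit; handle it, then start the next header.
        ProcessUnit(state.totalbuffer);
        BeginReadHeader(state);
    }
}
------------------------------------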

A better (more efficient, that is) solution would be to maintain a list of
processing units. As you receive data, you add new data to the list. At
the start of any receive, the list should have at most a single unit,
partially filled. During a receive, you'll add data to the partially
filled unit, and if you complete a processing unit, you'll start a new
processing unit. At the end of a receive, you'll dequeue all of the
completed processing units to a thread that will handle actually dealing
with the data, leaving again at most one partially filled processing unit
in the list.
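
To illustrate the bookkeeping, a helper along these lines (assuming a 4-byte
length prefix per processing unit and the MemoryStream from your state object
as the accumulation buffer; the names are made up) would pull every complete
unit out of a read and leave any trailing partial unit behind for the next one:

------------------------------------
// Assumes: using System; using System.Collections.Generic; using System.IO;
// Called after the newly received bytes have been appended to the stream.
// Extracts every complete length-prefixed unit and leaves any trailing
// partial unit at the front of the stream.
private static List<byte[]> ExtractCompleteUnits(MemoryStream accumulated)
{
    List<byte[]> units = new List<byte[]>();
    byte[] data = accumulated.GetBuffer();    // underlying buffer, no copy
    int length = (int)accumulated.Length;
    int offset = 0;

    // A complete unit is a 4-byte length prefix followed by that many bytes.
    while (length - offset >= 4)
    {
        int unitLength = BitConverter.ToInt32(data, offset);
        if (length - offset - 4 < unitLength)
            break;                            // only a partial unit remains

        byte[] unit = new byte[unitLength];
        Buffer.BlockCopy(data, offset + 4, unit, 0, unitLength);
        units.Add(unit);
        offset += 4 + unitLength;
    }

    // Keep the leftover (possibly zero bytes) for the next receive.
    int remaining = length - offset;
    byte[] leftover = new byte[remaining];
    Buffer.BlockCopy(data, offset, leftover, 0, remaining);
    accumulated.SetLength(0);
    accumulated.Write(leftover, 0, remaining);

    return units;
}
------------------------------------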

You'll note that in this design, the list doesn't need to be contained in
the StateObject class, since between receives you only ever need to
maintain a single partially-filled processing unit. So the StateObject
can hold that, and the receive callback can build and manipulate the list
locally. The dequeuing to another thread could involve a less-ephemeral
list that's given to that thread or, if the order of processing doesn't
actually matter, you could even just enqueue each processing unit to the
thread pool and let the thread pool management deal with managing the list
(such as it is).
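
Purely as an illustration of that hand-off (ExtractCompleteUnits is the
sketch above, ProcessUnit is a made-up worker method, and this assumes the
order of processing doesn't matter):

------------------------------------
// Assumes: using System.Collections.Generic; using System.Threading;
// Called from the receive callback after the new bytes have been appended
// to state.ms; hands every completed unit to the thread pool.
private static void DispatchCompletedUnits(StateObject state)
{
    List<byte[]> completed = ExtractCompleteUnits(state.ms);

    foreach (byte[] unit in completed)
    {
        // Each finished unit goes to a worker thread; at most one partially
        // filled unit stays behind in the state object's stream.
        ThreadPool.QueueUserWorkItem(ProcessUnit, unit);
    }
}

// Worker-side signature expected by QueueUserWorkItem.
private static void ProcessUnit(object unitObj)
{
    byte[] unit = (byte[])unitObj;
    // do the actual work on the completed processing unit here
}
------------------------------------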

A third possibility would be to interrupt transfer of data in the receive
callback every time you complete a processing unit, and process the data
right then, in that thread. I wouldn't advise doing that, since it would
block the network i/o in a similar (but not exactly the same) way as the
first method mentioned above. But as a low-bandwidth, low-efficiency
compromise, it would at least be better than the first method.
3) Could this be an alternative?

At some point the received data is going to be converted to a byte array
anyway, so perhaps the last suggestion is the best?

Could what be an alternative? What's the last suggestion?

Pete
 
