TCP buffer splitting question?


Guest

On the TCP server side I'm using:

///////////////////////////////////////////////////////////////////////////////
IPAddress DEFAULT_SERVER = IPAddress.Parse("127.0.0.1");
IPEndPoint ipNport = new IPEndPoint(DEFAULT_SERVER, 31001);
TcpListener m_server = new TcpListener(ipNport);
m_server.Start(); // Required before AcceptSocket().

while (true) // Forever, for this example.
{
    Socket clientSocket = m_server.AcceptSocket();
    clientSocket.SetSocketOption(SocketOptionLevel.Socket,
        SocketOptionName.ReceiveBuffer, 65535);
    // Pass the socket to the thread instead of sharing it through a field;
    // otherwise a second accept would overwrite the first client's socket.
    Thread clientListenerThread = new Thread(SocketListenerThreadStart);
    clientListenerThread.Start(clientSocket);
}

private void SocketListenerThreadStart(object state)
{
    Socket clientSocket = (Socket)state;
    byte[] byteBuffer = new byte[131072]; // 128 KB.

    while (true) // Forever, for this example.
    {
        int size = clientSocket.Receive(byteBuffer);
        if (size == 0) break; // Peer closed the connection gracefully.
    }
}
///////////////////////////////////////////////////////////////////////////////

And on the client side:

///////////////////////////////////////////////////////////////////////////////
Thread m_clientThread = new Thread(new ThreadStart(ClientThreadStart));
m_clientThread.Start();

private void ClientThreadStart()
{
    int blockSize = 65535; // 64 KB - 1 byte.

    IPEndPoint ep = new IPEndPoint(IPAddress.Parse("127.0.0.1"), 31001);
    Socket clientSocket = new Socket(AddressFamily.InterNetwork,
        SocketType.Stream, ProtocolType.Tcp);
    clientSocket.SetSocketOption(SocketOptionLevel.Socket,
        SocketOptionName.SendBuffer, 65535);

    clientSocket.Connect(ep);

    byte[] rawData = new byte[1024 * 1024]; // 1 MB of payload.
    int blockCount = rawData.Length / blockSize
        + ((rawData.Length % blockSize == 0) ? 0 : 1);

    // Transmit the buffer in blocks.
    for (int i = 0; i < blockCount; ++i)
    {
        int length = (i < blockCount - 1)
            ? blockSize
            : rawData.Length - (i * blockSize); // Last (possibly short) block.
        int sentSize = clientSocket.Send(rawData, i * blockSize, length,
            SocketFlags.None);

        if (sentSize < length)
        {
            // ERROR ?!
        }
    }
    clientSocket.Close();
}
///////////////////////////////////////////////////////////////////////////////

Send on the client side always returns the size I asked for, BUT Receive
on the server side almost always returns a smaller size.
Is that how it should work? Why doesn't the receiving socket buffer as much
as I asked for (65535)?
Can I improve it?
 

Willy Denoyette [MVP]

Sharon said:
[original post quoted in full; snipped]

No, you can't control this; this is how TCP/IP works. TCP is a streaming
protocol: your messages are split into segments (from a minimum of 88 bytes
up to roughly 1500 bytes each), and segments are sent over the wire until a
whole window has been sent, after which the sender waits for an ACK. The
receiving end packs segments and pushes them up the stack to your
application. When your application reads from the socket at a faster pace
than the data arrives, the TCP stack delivers whatever data is actually
present in its internal buffers.

This is why I told you before that you might be wasting memory by allocating
large application buffers: you won't be able to receive, say, 1 MB of data
in one Receive unless you spend a lot of time between receives. What you
should do is read the data as fast as you can and leave the rest to the
TCP/IP stack (Winsock, the AFD kernel driver, the NDIS driver, etc.). If you
need to handle the data in between receives, you could opt for an
asynchronous design, so you can optimize your resources by reading from a
socket while another thread processes the data.

Some more remarks:
- The size of sent bytes at the API level is the amount accepted by the
socket layer (the top of the stack); it is in no way the amount physically
transmitted to the other end.
- When I refer to the other end, you should also think of routers, switches,
etc. These intermediate hops play an important role in message delivery:
they can fragment the data even further, or restrict the bandwidth
consumption.
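[Editor's note: the first remark above implies the sender should loop until the whole buffer has been handed to the socket layer. A minimal sketch, not from the post: the `send` delegate stands in for `Socket.Send(buffer, offset, size, SocketFlags.None)` so the loop can be shown without a live socket.]

```csharp
using System;

static class Sender
{
    // Drain 'data' through a send call that may accept fewer bytes than
    // requested. send(buffer, offset, count) returns how many bytes were
    // accepted, mirroring Socket.Send(buffer, offset, size, SocketFlags.None).
    public static void SendAll(Func<byte[], int, int, int> send, byte[] data)
    {
        int offset = 0;
        while (offset < data.Length)
        {
            int n = send(data, offset, data.Length - offset);
            if (n <= 0)
                throw new InvalidOperationException("send accepted no bytes");
            offset += n; // Advance past whatever was accepted.
        }
    }
}
```

With a real socket, `Sender.SendAll((b, o, c) => clientSocket.Send(b, o, c, SocketFlags.None), data)` would replace the fixed-block loop from the original post.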

Willy.
 

Guest

Thanks, I'm calmer thanks to your info.

Now I have encountered a new and most disturbing problem. After implementing
all the advice and knowledge I found here, I moved on and ran into the
following problem:

I have cases (too many of them) that result in data loss.
Some information on my test configuration:
I have two identical PCs (Dell Dimension 2400).
PC1 is the TCP server and PC2 is the TCP client; both are configured with a
65535-byte TCP buffer/window and work in a synchronous manner, with the
server starting a dedicated thread for each client.
The client sends the data size in the first block; this way the server knows
when to stop processing data from this client, by accumulating the values
returned from Socket.Receive(..) until it reaches the size received in the
first block.
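[Editor's note: the length-prefix scheme described above can be sketched as follows. The method names (`SendFrame`, `ReceiveFrame`, `ReadExactly`) are illustrative, not from the post, and the demo runs over a MemoryStream; with a live connection the same code works over a NetworkStream. The key point is that the reader must loop, because Read/Receive may return fewer bytes than requested.]

```csharp
using System;
using System.IO;

static class Framing
{
    public static void SendFrame(Stream s, byte[] payload)
    {
        // 4-byte little-endian length prefix, then the payload.
        byte[] header = BitConverter.GetBytes(payload.Length);
        s.Write(header, 0, 4);
        s.Write(payload, 0, payload.Length);
    }

    public static byte[] ReceiveFrame(Stream s)
    {
        byte[] header = ReadExactly(s, 4);
        int length = BitConverter.ToInt32(header, 0);
        return ReadExactly(s, length);
    }

    // Loop until exactly 'count' bytes have arrived; a single Read (like a
    // single Socket.Receive) may return fewer bytes than requested.
    static byte[] ReadExactly(Stream s, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int n = s.Read(buffer, offset, count - offset);
            if (n == 0) throw new EndOfStreamException("peer closed early");
            offset += n;
        }
        return buffer;
    }
}
```

A short read that is not looped over in this way would look exactly like lost data, which may be related to the symptom described below.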

When the server is handling a single client, no data is lost. But when I
switch the PCs, so PC2 is the server and PC1 is the client, data is lost,
causing the server to keep waiting on Socket.Receive(..) while the client is
done.

When the server (on PC1) is handling several clients (multiple threads
started on PC2), again data is lost and some of the clients' data is missing
on the server side, again causing the server to keep waiting on
Socket.Receive(..) while all the clients are done sending their data.

I really don't know how to fix this or what I'm doing wrong.

Any advice will be more than welcome.
 
