TcpClient buffer size limit?


Guest

I'm using TcpClient and getting the Stream by:
TcpClient tcpclnt = new TcpClient();
. . .
Stream stm = tcpclnt.GetStream();

Now, when I try to send a big buffer via stm.Write(...), I get an
exception saying that the buffer is too big.

I tried setting tcpclnt.ReceiveBufferSize to the size of the buffer I'm
trying to send, but the exception is still thrown; there is no transmit
buffer I can set.

How can I solve this problem, besides splitting my buffer and transmitting
it in chunks?
What is the limit of the buffer size?
 

Ignacio Machin \( .NET/ C# MVP \)

Hi Sharon,

Weird error. Could you post the code section?

Write does not need to create a new buffer; it's already created.

cheers,
 

Guest

OK, here is the code:

//////////////////
// The client:
//////////////////
TcpClient tcpclnt = new TcpClient();
tcpclnt.Connect("ip address", 8001);
byte [] rowData = new byte[104857600];// 100 MByte
tcpclnt.ReceiveBufferSize = rowData.Length;
Stream stm = tcpclnt.GetStream();
stm.Write(rowData, 0, rowData.Length); // throwing an EXCEPTION !!!
// Receiving Ack.
ack = new byte[100];
k = stm.Read(ack, 0, ack.Length);
tcpclnt.Close();

//////////////////
// The Server:
//////////////////
ASCIIEncoding encoding = new ASCIIEncoding();
IPAddress ipAd = IPAddress.Parse("ip address"); // Same IP address as the client uses.
TcpListener myList = new TcpListener(ipAd, 8001);
myList.Start();
Socket sock = myList.AcceptSocket();
byte [] rowData = new byte[104857600];// 100 MByte
recLen = sock.Receive(rowData); // throwing an EXCEPTION !!!
sock.Close();
myList.Stop();
///////////////////////////

Any idea ?
 

Ignacio Machin \( .NET/ C# MVP \)

Hi Sharon,

I got the same error; this is the first time I've seen it. How much memory do
you have? I only have 512 MB, and around 800 MB was reported when I created
both buffers, before sending. Maybe the buffer is simply too big. I think you
should do a Google search to see if anybody has hit this error before.


Sorry I'm not able to help you further :(

cheers,
 

Guest

Hi,

I also have 512 MB of memory.
What do you mean by "I got reported around 800 when I create both buffers
but before send it"? Only the receive buffer can be set. 800 what? Did you
try to set the buffer to 800 MB?

I get the error not when I set ReceiveBufferSize, but when I call the
stream's Read/Write with my large buffer, which is over 100 MB.

I'd appreciate any clue toward solving this issue.
 

Helge Jensen

Sharon wrote:

> stm.Write(rowData, 0, rowData.Length); // throwing an EXCEPTION !!!

Are you *really* sending 100 MB, or have you just made the buffer
"big enough"?

Apparently stm.Write invokes the OS on the buffer directly, and the OS
doesn't support a block that big. I have not seen this before, and
would expect .NET to handle it inside Stream.Write.

// Warning: untested code
int blockSize = 8192;
int blockCount = rowData.Length / blockSize
    + (rowData.Length % blockSize == 0 ? 0 : 1);
for (int i = 0; i < blockCount; ++i) {
    int length;
    if (i < blockCount - 1)
        length = blockSize;
    else
        length = rowData.Length - i * blockSize;
    stm.Write(rowData, i * blockSize, length);
}

The above code could easily have been in .Write instead.

> // Receiving Ack.
> ack = new byte[100];
> k = stm.Read(ack, 0, ack.Length);

I assume you just cut some code that checks the ack, for readability?

BTW: nobody promised you that a read into a 100-byte buffer will return
100 bytes, even if they have arrived. It might return just 1, and then
return the remaining bytes later.

> byte [] rowData = new byte[104857600]; // 100 MByte
> recLen = sock.Receive(rowData); // throwing an EXCEPTION !!!

Same problem as above, although I would probably just have limited the
count if I were the implementer of .Receive; after all, Receive is allowed
to return any number of available bytes, or 0 iff the stream is closed.

Here, there is *no* *way* that the OS will return the entire 100 MB of data
in one read. You need to loop, as when writing.

> sock.Close();

Missing ACK? :)
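Helge's advice to loop on the receive side can be sketched as a small helper (untested against the OP's setup; it assumes the total size is known up front, e.g. sent ahead of the payload):

```csharp
using System;
using System.IO;

// Read exactly `total` bytes from a stream, looping because Read may
// return fewer bytes than requested.
static byte[] ReadExactly(Stream stm, int total)
{
    byte[] buffer = new byte[total];
    int received = 0;
    while (received < total)
    {
        int n = stm.Read(buffer, received, total - received);
        if (n == 0)
            throw new IOException("connection closed before all data arrived");
        received += n;
    }
    return buffer;
}
```

The same loop works over a NetworkStream, or with Socket.Receive using an offset.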
 

Ignacio Machin \( .NET/ C# MVP \)

Hi,

800 MB, I meant; that is what the PF Usage reported (in Windows Task Manager,
Performance tab).

You create both buffers before the transmission (where you get the error); I
put a breakpoint there and checked the memory usage.

cheers,
 

Ignacio Machin \( .NET/ C# MVP \)

Hi,

> Apparently stm.Write invokes the OS on the buffer directly, and the OS
> doesn't support a block that big. I have not seen this before, and
> would expect .NET to handle it inside Stream.Write.

That's my concern as well. I have never used a buffer this big, but I would
assume that Stream would take care of that for me; hopefully somebody from
MS can clarify this issue.



cheers,
 

Willy Denoyette [MVP]

Ignacio Machin ( .NET/ C# MVP ) said:
> That's my concern as well. I have never used a buffer this big, but I would
> assume that Stream would take care of that for me; hopefully somebody
> from MS can clarify this issue.

Ignacio,

The error has nothing to do with the framework; the buffer size is limited
by the underlying Winsock protocol stack. The maximum size of the buffer is
determined by the provider using an (undocumented) complex algorithm based on
things like available RAM, socket type, bandwidth, number of active
connections, and so on. Too large a size returns the WSAENOBUFS (10055)
Winsock error, with "An operation on a socket could not be performed because
the system lacked sufficient buffer space or because a queue was full." as
the error message.

I don't understand the reasoning behind the OP's decision to reserve 100 MB
for this; in any case, 100 MB is just way too high.

Willy.
 

Ignacio Machin \( .NET/ C# MVP \)

Hi,

Yes, I understand where and why the error is showing. OTOH, I would have
expected the framework to take this into consideration, either by capturing
the exception and then doing a loop (what the OP will have to do) or by
providing a method stating what the maximum buffer can be (tentatively, of
course).
As things stand, it's unpredictable how big the buffer can be.

Of course, I don't understand the need for such a HUGE buffer on the OP's
side either, but scale it back to a PPC with 64 MB of RAM and the
constraints become real.

cheers,

--
Ignacio Machin,
ignacio.machin AT dot.state.fl.us
Florida Department Of Transportation
 

Willy Denoyette [MVP]

Ignacio Machin ( .NET/ C# MVP ) said:
> OTOH, I would have expected the framework to take this into consideration,
> either by capturing the exception and then doing a loop, or by providing a
> method stating what the maximum buffer can be.

There is nothing the Framework can do that you couldn't do yourself; it has
no way to determine the maximum buffer size other than trying to send and
reducing the size until it succeeds, which is not exactly what I'd call a
neat solution.
Optimum buffer sizes for synchronous transfers are something less than the
SO_SNDBUF and SO_RCVBUF values (returned by a call to getsockopt); these
default to 8192, but values of 16384 or 32768 are sometimes used for bulk
transfers, as done by network back-up programs. But as you can see, these
are far below what the OP is trying to use.

Willy.
 

Guest

OK, I did some tests and here are the results:
The socket ReceiveBuffer and SendBuffer can be set by:

int blockSize = 8192; // or 65536, 131072, 1048576, etc.
rcvSocket.SetSocketOption(SocketOptionLevel.Socket,
    SocketOptionName.ReceiveBuffer, blockSize);
and
sndSocket.SetSocketOption(SocketOptionLevel.Socket,
    SocketOptionName.SendBuffer, blockSize);

byte[] buffer = new byte[8192]; // or 65536, 131072, 1048576, etc.

Socket.Send(buffer) does split the buffer and does multiple sends to ensure
the complete buffer is transmitted.
Nevertheless, if the buffer is too large (like 100 MB in my case), the
sending and receiving will throw an exception.

I tried to find the size limit that causes the exception, but with no success.
And this is a very bad thing for me. Why?
Because if, at run time, I catch the exception and split the buffer over and
over until no exception is thrown, I will lose too much time that I do not
have. So in order to save time I need to do it only once, and use the buffer
size I found for every image frame I'm sending. The trouble with this
solution is that a single (downward) peak will result in small-block
transmissions for all future frames, and that is no good either. So maybe
I'll combine the two methods, recalculating the buffer size by catching the
too-large-buffer exception at some time interval that also needs to be
figured out.

So that's what I found.

Any other idea will be welcome.
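The run-time fallback described above could look roughly like this (a sketch only; the 64 KB chunk size is an arbitrary example, and 10055 is the WSAENOBUFS code mentioned earlier in the thread):

```csharp
using System;
using System.Net.Sockets;

// Sketch: try a single Send; if the stack rejects the buffer with
// WSAENOBUFS (10055), fall back to sending fixed-size chunks.
static void SendWithFallback(Socket sock, byte[] data)
{
    try
    {
        sock.Send(data);
    }
    catch (SocketException ex) when (ex.ErrorCode == 10055) // WSAENOBUFS
    {
        const int chunk = 64 * 1024; // arbitrary fallback block size
        for (int offset = 0; offset < data.Length; offset += chunk)
            sock.Send(data, offset, Math.Min(chunk, data.Length - offset),
                SocketFlags.None);
    }
}
```

Whether one-shot sends of the found size beat always sending 64 KB chunks is exactly the kind of thing that has to be measured, as discussed below in the thread.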
 

Willy Denoyette [MVP]

Sharon said:
> Socket.Send(buffer) does split the buffer and does multiple sends to
> ensure the complete buffer is transmitted. Nevertheless, if the buffer is
> too large (like 100 MB in my case), the sending and receiving will throw
> an exception.

Setting the buffer size the way you do, by SetSocketOption, to a value
larger than 64 KB is only possible when running on XP or W2K3, only when
connecting to other systems that support RFC 1323 window scaling, and only
when the other system accepts this value during the initial connect.
Note that setting a value larger than 64 KB will silently be accepted
without an error indication; however, the effective size will be lower than
requested.
I'm still not clear on what you want to achieve. You are telling us how you
are doing things, but not why you are doing them, what issues you have, what
context you are running in, or what kind of connection you are using
(speed, latency...). I'm also unclear about your code; it looks like you are
using simple synchronous socket I/O instead of applying an asynchronous
application design. Maybe that's why you are so eager to extend the
application buffer.
As I told you before, using the default per-interface-type TCP registry
settings and an application buffer size of 64 KB, you can easily saturate a
single Gigabit network interface.

Willy.
 

Guest

Hi Willy,

OK, I'll try to make my case clearer:
I'm developing an application that uses a frame grabber that produces
100 MB frames.
The application needs to distribute these frames to several computers:
frame1 to pc1, frame2 to pc2, frame3 to pc3, and so on.
The frames are generated at very high speed, so the frame transmission must
also be very fast.
So I'm looking for the optimal TCP connection and configuration to achieve
the fastest speed.
We will also be using (in the near future) a GLAN and dual-Xeon PCs with
Windows XP.

From the tests I did, I found that a bigger TCP buffer makes the
transmission faster (of course there is a limit to that).

Yes, I'm using simple synchronous socket I/O and not asynchronous. Why
should I prefer asynchronous transmission? Will it make the frame
transmission faster?

I'm not sure I understand what you mean when you say "using the default
per-interface-type TCP registry settings and an application buffer size of
64K you can easily saturate a single Gigabit network interface."
Can you please elaborate on that?

Any suggestion will be welcome.
 

Willy Denoyette [MVP]

Sharon said:
> Yes, I'm using simple synchronous socket I/O and not asynchronous. Why
> should I prefer asynchronous transmission? Will it make the frame
> transmission faster?

You should use an asynchronous pattern if you need to prepare further data
while the current data is being transmitted, or when you have to process the
received data while receiving the next buffer.
That doesn't mean one is faster than the other; it's just a matter of
finding the right balance between CPU occupation and network throughput,
without wasting precious resources like user/kernel memory.

> I'm not sure I understand what you mean when you say "using the default
> per-interface-type TCP registry settings and an application buffer size of
> 64K you can easily saturate a single Gigabit network interface."
> Can you please elaborate on that?

Yes, the TCP stack in XP and higher is self-tuning; that means a number of
parameters are adjusted automatically depending on the interface type and
characteristics and the available RAM.
For instance, a Gigabit interface will have a default TcpWindowSize of 64 KB
at the Winsock API level, while a 10-100 Mbit interface has a window size of
17 KB (note that this is a maximum; Windows will adapt this value depending
on how routers are configured, or to the receiving side's TcpWindowSize if
it's smaller).
Now, without adapting the defaults (by adding the parameters to the
registry), you can saturate a Gigabit LAN interface when transferring large
chunks of data between two TCP/IP endpoints. For instance, I achieved a
transfer rate of ~112 MB/sec (95% of the aggregate bandwidth) between two
PCs running XP connected through a GB switch.

Willy.
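Measuring the kind of rate Willy quotes can be as simple as timing a bulk send of 64 KB chunks over an already connected socket. A rough sketch (the chunk size and the rate formula are just one way to do it):

```csharp
using System;
using System.Diagnostics;
using System.Net.Sockets;

// Sketch: push `totalBytes` through a connected socket in 64 KB chunks
// and report the application-level throughput in MB/sec.
static double MeasureSendRate(Socket sock, long totalBytes)
{
    byte[] chunk = new byte[64 * 1024];
    var sw = Stopwatch.StartNew();
    long sent = 0;
    while (sent < totalBytes)
    {
        int count = (int)Math.Min(chunk.Length, totalBytes - sent);
        sent += sock.Send(chunk, 0, count, SocketFlags.None);
    }
    sw.Stop();
    return sent / (1024.0 * 1024.0) / sw.Elapsed.TotalSeconds;
}
```

Run against a peer that drains the data, this gives a number to compare before and after changing any buffer size or registry setting.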
 

Guest

Hello again Willy,

Thanks for the info.

To finalize this issue:
I do not need the asynchronous pattern, because the send and receive are
done in a dedicated thread, and it's simpler to use.
P.S.: The computers that will run my application will use a dedicated GLAN,
meaning the GLAN will contain only my application's PCs.

BUT, I very much care about speed.
You referred to RFC 1323, so I read it, and now I wish to know the following:
(1) Does Windows XP support window scaling and timestamps by default, or
should I set the registry value
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters\Tcp1323Opts = 3 (DWORD)?
(2) Are there any other registry settings I can set to make the TCP data
transfer faster?
(3) What socket parameters should I set to make it run faster (like using
Socket.SetSocketOption(...))?

I'll very much appreciate as much detail as you can give in your reply.

Any other info that you think I should know will be more than welcome.
 

Willy Denoyette [MVP]

See inline ***

Willy.

Sharon said:
> (1) Does Windows XP support window scaling and timestamps by default, or
> should I set the registry value
> HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters\Tcp1323Opts = 3 (DWORD)?

*** Yes, XP supports this, but this option is only meant to be used on
high-speed, high-latency networks (satellite, some ATM and cable networks).
When used on a LAN, the transfer rate can be a little lower because of the
slightly higher payload when using the timestamps.

> (2) Are there any other registry settings I can set to make the TCP data
> transfer faster?

*** Faster than what? Did you run any benchmarks yet? As I told you before,
you should be able to achieve 95-99% aggregate throughput without changing
any of the defaults, using an application buffer size of 16 KB - 64 KB. If
you don't achieve this, you must have a problem elsewhere, and that must be
solved before you start changing any registry parameter, socket option, or
buffer size. Take care of your adapter settings and pay attention to the
speed settings: select Full Duplex, since Auto speed detect sometimes
doesn't select the optimum speed, and please MEASURE before you change
anything. And don't assume that a data buffer larger than 64 KB at the
application level will help you increase the speed.

> (3) What socket parameters should I set to make it run faster (like using
> Socket.SetSocketOption(...))?

*** Again, faster than what? The only socket options that have some
influence on a LAN are SO_SNDBUF and SO_RCVBUF; the defaults are 8 KB for
both. You can set these up to 64 KB (on both ends!!) and measure the rate
change. Any value larger than this probably won't be honoured. Using a
network monitor, you can inspect the frames and packet window sizes
(layers 2 and 3).

More info about registry TCP/IP parameters:
http://www.microsoft.com/technet/pr...003/technologies/networking/tcpip03.mspx#ECAA
 

Guest

Hi Willy,

I did do some tests (see my previous reply), and on the company LAN I
achieved about 7 MB per second.
When I say faster, I mean as fast as possible.

About the settings you mentioned, Full Duplex and that Auto speed detect
sometimes doesn't select the optimum speed:
Are these parameters also set via Socket.SetSocketOption(...), as I set the
buffer size?
rcvSocket.SetSocketOption(SocketOptionLevel.Socket,
    SocketOptionName.ReceiveBuffer, 65535);
sndSocket.SetSocketOption(SocketOptionLevel.Socket,
    SocketOptionName.ReceiveBuffer, 65535);
(I don't know the SO_SNDBUF and SO_RCVBUF...)
 

Willy Denoyette [MVP]

Inline

Willy.

Sharon said:
> I did do some tests (see my previous reply), and on the company LAN I
> achieved about 7 MB per second.

7 MB at the application level on a company 100 Mb LAN (I guess); one could
do better (up to ~10-11 MB/sec). Sure, it depends on the topology: are
there any routers/switches involved? How are they configured? Were you the
only user? IMO it's not a good idea to run such tests on a company LAN.

> Are these parameters (Full Duplex, Auto speed detect) also set via
> Socket.SetSocketOption(...), as I set the buffer size?

No, you can't use SetSocketOption to change these; you can set them through
the network properties configuration utility, though. You can also set them
using System.Management and WMI, or the Win32 APIs.

> (I don't know the SO_SNDBUF and SO_RCVBUF...)

SocketOptionName.ReceiveBuffer sets SO_RCVBUF.
SocketOptionName.SendBuffer sets SO_SNDBUF.
Note that you should set both on both ends, else the protocol will select
the lowest (8 KB) after negotiation.
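That mapping can be sketched in code (a minimal illustration; 64 KB is just the example value discussed in this thread, and the effective sizes read back may differ from what was requested):

```csharp
using System;
using System.Net.Sockets;

// Set SO_SNDBUF and SO_RCVBUF to 64 KB on one socket; as noted above,
// the peer should do the same, or the smaller value wins.
var sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
const int bufSize = 64 * 1024;
sock.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.SendBuffer, bufSize);    // SO_SNDBUF
sock.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReceiveBuffer, bufSize); // SO_RCVBUF

// Read back the effective sizes; the stack may clamp or adjust them.
int effectiveSnd = (int)sock.GetSocketOption(SocketOptionLevel.Socket, SocketOptionName.SendBuffer);
int effectiveRcv = (int)sock.GetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReceiveBuffer);
Console.WriteLine($"SO_SNDBUF = {effectiveSnd}, SO_RCVBUF = {effectiveRcv}");
sock.Close();
```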
 

Guest

Hi Willy,

OK, I'm advancing... (-;

I don't know the company LAN configuration, but all tests are done under the
same conditions, so I'll simply select the best/fastest option.
Of course, the final test will be done in a clean and dedicated
environment/network.
At this stage I haven't even started writing the real application; I'm at
the stage of writing test applications to learn about the .NET TCP
capabilities (it's my first try).

Can you point me to the properties configuration utility and the
System.Management and WMI you mentioned?

I'm setting the socket buffer size as follows:
* The client, which sends the data, sets the socket send buffer to 65535
bytes and sends this info, together with the overall frame size, to the
receiver in a small block of 8 bytes (2 integers).
* The server (receiver) inspects this block and then sets its socket
receive buffer size.

Is this OK, or is the server/receiver setting the socket buffer too late?
Should the receiver set the socket buffer size before the communication
starts (immediately after TcpListener.AcceptSocket())?

Final question: Should I use TcpListener.AcceptSocket() or
TcpListener.AcceptTcpClient()? I don't see the real difference.
 
