Socket weirdness

  • Thread starter William Stacey [MVP]

Dave Sexton

Hi William,

Thanks for your response.
| 1. How often will a blocking Send block, and for how long?
| socket.SendTimeout (also paired ReceiveTimeout)

That doesn't answer "How often will a blocking Send block?", although it does answer, "for how long?".
| 2. I understand this depends on the size of the buffer, so how big is the
| kernel buffer?
| socket.SendBufferSize (and ReceiveBufferSize)

I was looking for an actual value but I found it in the docs for TcpClient.SendBufferSize on MSDN: "The default value is 8192
bytes".
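As a quick check (a sketch; the 8192-byte figure comes from the TcpClient docs, and actual defaults vary by OS and runtime), the buffer sizes can be read and adjusted directly on the Socket:

```csharp
using System;
using System.Net.Sockets;

// Read and adjust the kernel buffer sizes for a socket. These properties
// map to the SO_SNDBUF / SO_RCVBUF socket options; the value you set is a
// hint, and the OS may round or scale it.
Socket s = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

Console.WriteLine("Default SendBufferSize: {0}", s.SendBufferSize);
Console.WriteLine("Default ReceiveBufferSize: {0}", s.ReceiveBufferSize);

// Ask for a larger send buffer.
s.SendBufferSize = 64 * 1024;
Console.WriteLine("New SendBufferSize: {0}", s.SendBufferSize);

s.Close();
```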

You said in another branch of this thread (a few times) that Send will block for an ACK if it hasn't received one after a certain
number of packets have been sent without an ACK and that you believe that number to be 2. Did I understand that correctly?

Given that the IP header + TCP header add at least 40 bytes (20 + 20) [http://en.wikipedia.org/wiki/Transmission_Control_Protocol], + the 10 bytes of
data being sent by Send from the example in the OP means that the example was sending about 50 bytes per packet, with one packet each
iteration. Is that correct? (Does the send buffer size include the size of the headers as well?)

Therefore, the first Send was obviously not filling up the 8192 byte send buffer. The first Send returned immediately because it
did not wait for ACK, even if it was the "Blocking" type. The second Send failed because the RST was already in the stack by the
time it was executed. Good so far?

I revised your example yet again (I added the code to the end of my post):

1. I set Socket.Blocking to false.
2. Number of bytes was changed to 1, for each iteration calling Send.
3. Hard-coded "1" as the length of bytes being sent in the Send method.
4. Disregarded the return value from Send.
5. I removed the 100 millisecond wait completely, and even removed the call to WriteLine, replacing it with code that compares the
SocketError to ConnectionReset and simply breaks when equal.
6. Increased the number of iterations to 100.

Second Send still fails with ConnectionReset. I thought for sure that I could get the example, somehow, to Send more than once
before failing, even if it would be only twice due to your wait-for-ACK explanation, but I could not. I assume having the server
and client run on the same machine (and within the same process, even) creates no latency in the server's response with RST, so the
second Send will always fail no matter what I do. I wonder if testing this code on separate machines would produce the expected
results: at least two Sends complete before a subsequent Send fails with ConnectionReset. Goran's illustration, along with
everyone's explanations, seems to indicate that it's possible and even likely in a truly distributed application.
| 3. Is the size of the buffer affected by the Nagle algorithm in any way?
| Don't think so, but not sure.

Upon further reading it seems that Nagle affects the size of the packets. I guess if the buffer size is not affected then the Nagle
algorithm has to work within that constraint. I thought the Nagle algorithm might have been playing a part in the behavior of your
example. I see now that Nagle might prevent Send from sending immediately, and that is contradictory to the result that the second
Send failed, so I no longer believe that Nagle is playing any part.
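For what it's worth, Nagle can be toggled per socket; a minimal sketch (nothing here beyond the standard Socket API):

```csharp
using System;
using System.Net.Sockets;

// Nagle coalesces small writes into fewer, larger segments before they
// leave the machine. It does not change the send buffer size; it only
// affects how buffered bytes are cut into packets. Setting NoDelay = true
// (TCP_NODELAY) disables the algorithm so small Sends go out promptly.
Socket s = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

Console.WriteLine("Nagle enabled by default: {0}", !s.NoDelay);

s.NoDelay = true;
Console.WriteLine("Nagle disabled: {0}", s.NoDelay);

s.Close();
```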
| 4. Does the size of the buffer fluctuate, or can it be changed
| programmatically?
| See above.

Got it, thanks.
| 5. If a blocking Send isn't waiting for a response from the server why not
| just write the buffer directly into unmanaged memory (or
| pin a copy) and return immediately to the caller? i.e., why block at all?

| It does not block if buffer space is available. If space is available,
| it copies the user buf and returns N. Non-blocking socket mode gets a
| little more complex. It will copy up to the point it has space for and
| return N or something < N, then your code needs to send the rest of the buf.

So blocking Send will always return the number of bytes sent as long as SocketError is Success, otherwise it seems to return zero.
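William's point that "your code needs to send the rest of the buf" on a non-blocking socket can be sketched as a loop (SendAll is a hypothetical helper, not a framework method):

```csharp
using System;
using System.Net;
using System.Net.Sockets;

// Hypothetical SendAll helper: keeps calling Send until the kernel has
// accepted the whole user buffer. A blocking socket normally does this
// for you; a non-blocking socket may accept fewer bytes than offered
// (or report WouldBlock), leaving the remainder to the caller.
void SendAll(Socket socket, byte[] buffer)
{
    int offset = 0;
    while (offset < buffer.Length)
    {
        SocketError se;
        int n = socket.Send(buffer, offset, buffer.Length - offset, SocketFlags.None, out se);

        if (se == SocketError.WouldBlock)
        {
            socket.Poll(-1, SelectMode.SelectWrite); // wait for buffer space to drain
            continue;
        }
        if (se != SocketError.Success)
            throw new SocketException((int)se);

        offset += n;
    }
}

// Demo over loopback.
TcpListener listener = new TcpListener(IPAddress.Loopback, 0);
listener.Start();
int port = ((IPEndPoint)listener.LocalEndpoint).Port;

Socket client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
client.Connect(IPAddress.Loopback, port);

int total = 0;
using (Socket server = listener.AcceptSocket())
{
    SendAll(client, new byte[] { 1, 2, 3, 4, 5 });

    byte[] received = new byte[5];
    while (total < 5)
        total += server.Receive(received, total, 5 - total, SocketFlags.None);
}
Console.WriteLine("Server received {0} bytes", total);

client.Close();
listener.Stop();
```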
| I take it that this is what BeginSend does?

| BeginSend does not copy user buffer, but keeps it pinned and the driver
| uses the user buffer directly. Another reason why BeginSend can be more
| efficient, as there is no buffer-copy overhead. In a busy system, this can be a
| drain. Not sure if there is ever a case where it does a copy and releases
| the user's buffer.

Interesting. So the buffer is volatile and must be synchronized. This means that either a copy of the buffer has to be made before
calling BeginSend, a different buffer must be used for each call, or write access to the buffer must be synchronized with all calls
to BeginSend, although that defeats the purpose of an asynchronous method.
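A sketch of the "copy before calling BeginSend" option (BeginSendCopy is a hypothetical helper):

```csharp
using System;
using System.Net;
using System.Net.Sockets;

// Because BeginSend pins the caller's array and the driver reads from it
// directly, overwriting that array before EndSend completes can corrupt
// the data in flight. Handing BeginSend a private copy per call avoids
// the hazard, at the cost of the copy BeginSend itself skips.
void BeginSendCopy(Socket socket, byte[] buffer, int count)
{
    byte[] copy = new byte[count];
    Buffer.BlockCopy(buffer, 0, copy, 0, count);

    socket.BeginSend(copy, 0, count, SocketFlags.None, delegate(IAsyncResult ar)
    {
        int sent = socket.EndSend(ar);
        Console.WriteLine("Async send completed: {0} bytes", sent);
    }, null);
}

// Demo over loopback: mutate the original buffer right after BeginSendCopy
// returns; the private copy keeps the bytes on the wire intact.
TcpListener listener = new TcpListener(IPAddress.Loopback, 0);
listener.Start();
int port = ((IPEndPoint)listener.LocalEndpoint).Port;

Socket client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
client.Connect(IPAddress.Loopback, port);

byte[] data = { 10, 20, 30 };
byte[] received = new byte[3];
int total = 0;
using (Socket server = listener.AcceptSocket())
{
    BeginSendCopy(client, data, data.Length);
    data[0] = 99; // safe: the in-flight copy still holds 10

    while (total < 3)
        total += server.Receive(received, total, 3 - total, SocketFlags.None);
}
Console.WriteLine("Server got: {0},{1},{2}", received[0], received[1], received[2]);

client.Close();
listener.Stop();
```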

So BeginSend returns immediately to the caller and never waits on ACK. A non-blocking Send only buffers as much as it can at the
time it's called (and never waits on ACK?), and a blocking Send will buffer everything before returning to the caller but sometimes
waits on ACK first before returning to the caller.

Does EndSend behave like a blocking or non-blocking Send with respect to the return value and whether it waits on ACK?
| 6. The example in the OP attempts to send 10 bytes a few times,
| synchronously, and it seems that the second Send always failed after
| RST in my testing. Will increasing or decreasing the number of bytes sent
| in the first Send cause this behavior to change? In
| other words, if the first send no longer blocks (if it currently is
| blocking, depending on the size of the buffer and the number of
| bytes sent), is it possible that the second Send will not always fail
| because the time it has taken to normally Send has decreased
| even if the time it takes to receive the RST has remained the same?

| Interestingly, if you set SendBufferSize to 0 in the code we are talking
| about, on my tests, the *first* send does throw the error. So it would seem
| it is blocking for the ACK because of this zero buffer. Try it out and see
| if you see the same.

I verified your results, but I also tried setting the SendBufferSize to 10 and the byte array to 11, and the second Send failed, as
usual. This means that overflowing the buffer doesn't necessarily cause Send to wait for an ACK. Maybe a zero buffer does, but
that really doesn't prove anything.
| I only ask the last question because it seems to me that this behavior is
| really unpredictable and that no real example can be
| written that will function identically on each individual computer. In
| other words, it's impossible to understand this behavior
| only through testing.

| But I think we are talking about an error in "our" protocol so not sure this
| matters. The connection is implicitly shut down half-way by the server. Server
| can send and client can receive - all good. Client should not be sending
| anyway since it should "know" the state of the protocol - hence the error in
| our protocol. Client only knows explicitly after it tries a send and gets
| the ACK with the RST set.

Yes, somebody (I think Alan) mentioned that as well in a previous post. I just assumed that it would be valid, even if it would be
a poor design, to have the client of one's protocol figure out if the server had shut down receiving for any reason by trying to
send data. I guess it would be better in that case to send a few bits to the client warning it of such an event. Even so, it's
nice to understand this aspect of TCP just to be well-rounded and this topic still applies to the OP, if I understood it correctly
(Socket weirdness)!

My reasoning for that last statement was to say, simply that it's difficult to infer the mechanisms used by TCP that cause the
behavior of your example without the help of some shared knowledge.
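On the "warn the client" idea: TCP already has a graceful version of this in the other half of Shutdown. If the server half-closes its *send* side, the client's Receive returns 0 (a FIN, end of stream) instead of the client discovering the state via an RST on a later Send. A minimal loopback sketch:

```csharp
using System;
using System.Net;
using System.Net.Sockets;

TcpListener listener = new TcpListener(IPAddress.Loopback, 0);
listener.Start();
int port = ((IPEndPoint)listener.LocalEndpoint).Port;

Socket client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
client.Connect(IPAddress.Loopback, port);

int read;
using (Socket server = listener.AcceptSocket())
{
    // Server announces "no more data from me" with a FIN, not an RST.
    server.Shutdown(SocketShutdown.Send);

    // Client's blocking Receive completes with 0 bytes: graceful end of stream.
    read = client.Receive(new byte[16]);
    Console.WriteLine("Client Receive returned: {0}", read);

    // The connection is only half-closed: the client may still send,
    // and the server may still receive.
    client.Send(new byte[] { 42 });
}

client.Close();
listener.Stop();
```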
| RTFM is acceptable ;) Just, where is the manual exactly? I'll check out
| TCP I as William recommended, but if there is a genuine
| manual that describes the protocol on the web somewhere I'd like to know.

| You can also read the RFCs (i.e. 793, 3168)
| ftp://ftp.rfc-editor.org/in-notes/rfc793.txt

Perfect! Thanks. (I have to admit that I didn't even think to look for an RFC. I feel ashamed.)


Revised code sample where the second call to Send always fails (at least when the client and server are executed within the same
process):

private static readonly EventWaitHandle waitForAsyncSend = new EventWaitHandle(false, EventResetMode.ManualReset);

private static void SocketTest()
{
    // Server.
    TcpListener l = new TcpListener(IPAddress.Any, 9001);
    l.Start();
    new Thread(delegate()
    {
        using (Socket socket = l.AcceptSocket())
        {
            socket.Shutdown(SocketShutdown.Receive);

            WriteLine("Server shutdown receive.");

            waitForAsyncSend.WaitOne();

            // expecting blocks of 1 byte each
            WriteLine("Server about to poll for data");

            // examine first batch
            if (socket.Poll(8000000, SelectMode.SelectRead))
            {
                byte[] buffer = new byte[1];

                try
                {
                    int read = socket.Receive(buffer);

                    WriteLine("Server read bytes: " + read);
                }
                catch (SocketException ex)
                {
                    if (ex.ErrorCode == 10053)
                    {
                        WriteLine("Server read error: " + ex.SocketErrorCode.ToString());
                    }
                    else
                        throw; // rethrow without resetting the stack trace
                }
            }

            WriteLine("Closing client connection");
        }

        WriteLine("Server stopping");
        l.Stop();
    }).Start();

    // Client.
    byte[] buf = new byte[1];

    using (Socket s = new Socket(AddressFamily.InterNetwork,
        SocketType.Stream, ProtocolType.Tcp))
    {
        s.Blocking = false;

        WriteLine("Blocking mode:{0}", s.Blocking);
        s.BeginConnect(IPAddress.Loopback, 9001, delegate(IAsyncResult result)
        {
            s.EndConnect(result);

            Thread.Sleep(1000);

            SocketError se = SocketError.Success;
            int i = 0;

            for (; i < 100; i++)
            {
                s.Send(buf, 0, 1, SocketFlags.None, out se);

                if (se == SocketError.ConnectionReset)
                    break;
            }

            WriteLine("Failed iteration: " + i);

            /*
            // Note the different results with async send.
            int read = 0;
            IAsyncResult ar = s.BeginSend(buf, 0, buf.Length, SocketFlags.None, out se, null, null);
            WriteLine("Non-blocking SocketError: " + se.ToString());

            if (ar != null)
                read = s.EndSend(ar); // ar is null.

            WriteLine("Non-blocking bytes written to kernel:{0}", read);
            */

            waitForAsyncSend.Set();
        }, null);

        waitForAsyncSend.WaitOne();

        Thread.Sleep(500);

        Console.WriteLine("Press 'Enter' to exit");
        Console.ReadLine();
    }
}

private static readonly object sync = new object();

private static void WriteLine(string message)
{
    lock (sync)
    {
        Console.WriteLine(message);
        Console.WriteLine();
    }
}

private static void WriteLine(string format, params object[] args)
{
    lock (sync)
    {
        Console.WriteLine(format, args);
        Console.WriteLine();
    }
}
 

William Stacey [MVP]

| That doesn't answer "How often will a blocking Send block?", although it
| does answer, "for how long?".

It depends on what is going on (i.e. how full the send buffer is). If it has
space, it will not block.

| You said in another branch of this thread (a few times) that Send will
| block for an ACK if it hasn't received one after a certain
| number of packets have been sent without an ACK and that you believe that
| number to be 2. Did I understand that correctly?

Yes. Not sure where I read that. I will have to look again. Even if it is
not 2, it is some small number.

| Given that the IP header + TCP header add at least 40 bytes (20 + 20)
| [http://en.wikipedia.org/wiki/Transmission_Control_Protocol], + the 10 bytes
| of data being sent by Send from the example in the OP means that the example
| was sending about 50 bytes per packet, with one packet each
| iteration. Is that correct? (Does the send buffer size include the size
| of the headers as well?)

The buffer is "user" data buffer only. The headers are added as the data flows
down the stack. Your writes do not translate one-to-one into packets; there could
be one packet or more covering multiple sends.

| Therefore, the first Send was obviously not filling up the 8192 byte send
| buffer. The first Send returned immediately because it
| did not wait for ACK, even if it was the "Blocking" type. The second Send
| failed because the RST was already in the stack by the
| time it was executed. Good so far?

Yes.

| Second Send still fails with ConnectionReset. I thought for sure that I
| could get the example, somehow, to Send more than once
| before failing, even if it would be only twice due to your wait-for-ACK
| explanation, but I could not. I assume having the server

I can send multiple times (it varies) with the code below. The results are not
consistent and depend on all the variables (i.e. what the system is doing). Many
times, I could get all 20 sends.

private void button8_Click(object sender, EventArgs e)
{
    // Server.
    TcpListener l = new TcpListener(IPAddress.Any, 9001);
    l.Start();
    new Thread(
        delegate()
        {
            using (Socket ss = l.AcceptSocket())
            {
                ss.Shutdown(SocketShutdown.Receive);
            }
            Thread.Sleep(2000);
            l.Stop();
        }).Start();

    // Client.
    using (Socket s = new Socket(AddressFamily.InterNetwork,
        SocketType.Stream, ProtocolType.Tcp))
    {
        int sent = 0;
        try
        {
            byte[] buf = new byte[1];
            //s.SendBufferSize = 0;
            s.Connect(IPAddress.Parse("192.168.1.3"), 9001);
            for (int i = 0; i < 20; i++)
            {
                sent += s.Send(buf);
            }
            Console.WriteLine("Sent:{0}", sent);
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
            Console.WriteLine("Sent before error:{0}", sent);
        }
    }
}

| So blocking Send will always return the number of bytes sent as long as
| SocketError is Success, otherwise it seems to return zero.

So will non-blocking (so blocking mode is not a factor here). Either mode
will return the number of bytes written to the internal buffer - that is all.
The difference is that blocking mode will wait for all bytes to be copied to the
internal buffer; non-blocking will not. I would not use non-blocking myself.
Naturally, none of this has anything to do with how many bytes are actually
sent on the wire. That can/will vary depending on what the stack is doing.

| Interesting. So the buffer is volatile and must be synchronized.

Big yes. This is one error people make (including myself) with async I/O all
the time. Overwriting your own buffer in this way can be a very hard bug
to find - and it appears at random because of the scheduler, etc.

| This means that either a copy of the buffer has to be made before
| calling BeginSend, a different buffer must be used each time or that write
| access to the buffer must be synchronized with all calls
| to BeginSend, although that does defeat the purpose of an asynchronous
| method.

Right.

| So BeginSend returns immediately to the caller and never waits on ACK.
| A non-blocking Send only buffers as much as it can at the
| time it's called (and never waits on ACK?), and a blocking Send will
| buffer everything before returning to the caller but sometimes
| waits on ACK first before returning to the caller.

Neither waits. Non-blocking/blocking, in terms of sending, only has to do with
writing to the buffer.

| Does EndSend behave like a blocking or non-blocking Send with respect to
| the return value and whether it waits on ACK?

AFAICT, neither waits on ACK at all. They either wait on buffer space or for the
stack to send our user buffer. Underneath, TCP may be waiting on ACKs, and
that can get indirectly realized as a wait in your user call, but really we
are just waiting on buffers to get processed. So it all revolves around
buffers at this level.

--wjs
 

Dave Sexton

Hi William,

Great response. You've answered my questions clearly.

I have a much better understanding now of blocking/non-blocking Sends, BeginSend, ACK and RST, and TCP/IP in general.

I had to modify your example a bit to get it to work on my computer. The accepted socket was being shut down and then disposed too
quickly, so I placed a call to Thread.Sleep(50) before Shutdown(SocketShutdown.Receive) and used an EventWaitHandle afterwards to
synchronize its disposal. I also increased the number of iterations to 1000 in the client code, and on a really slow computer it
makes it to ~300 iterations before failing with ConnectionReset.

I can produce the same results in my example by removing the call to Thread.Sleep(1000) in the client code before the loop starts
and by adding a call to Thread.Sleep(50) before Shutdown(SocketShutdown.Receive) in the server code. (I failed to heed the notion
of simplistic testing in my example. :( )

Thanks to everyone for your answers to my sometimes repetitive line of questioning :)

--
Dave Sexton

 

Goran Sliskovic

William Stacey [MVP] wrote:
...
| So will non-blocking (so blocking mode is not a factor here). Either mode
| will return the number of bytes written to the internal buffer - that is
| all.
| The difference is, blocking, will wait, for all bytes to be copied to the
| internal buffer, non-blocking will not. I would not use non-blocking
| myself.
...

I would check this, since in the straight Win32 API, send can return less than the buffer
length supplied. It's up to the programmer to issue send again with the rest of
the buffer. The .NET Framework could do this automatically, but the
documentation does not say much, so just to be on the safe side (this is
unpleasant to debug, especially when deployed)...

Regards,
Goran
 

William Stacey [MVP]

TMK the only time a blocking socket will return < count is when you use the
SocketError out overload and there is an error; otherwise it will return
count or throw an exception. If you see different, that would be good to know.

--
William Stacey [MVP]

 

Goran Sliskovic

William said:
| TMK the only time a blocking socket will return < count is when you use the
| SocketError out overload and there is an error, otherwise it will return
| count or exception. If you see different, that would be good to know.

The problem is that the documentation (MSDN) is not very clear. I checked the SSCLI
2.0 implementation; it's just a passthrough to Win32 send. MSDN says:

"Return Values
If no error occurs, send returns the total number of bytes sent, which
can be less than the number indicated by len. Otherwise, a value of
SOCKET_ERROR is returned, and a specific error code can be retrieved by
calling WSAGetLastError."

And later:

"If no buffer space is available within the transport system to hold the
data to be transmitted, send will block unless the socket has been
placed in nonblocking mode. On nonblocking stream oriented sockets, the
number of bytes written can be between 1 and the requested length,
depending on buffer availability on both client and server computers."

The first paragraph suggests it could return < length; the second seems to
imply this is only true for non-blocking mode. Hm...

So I'm still confused :)

Regards,
Goran
 

Dave Sexton

Hi Goran,

It seems to me that if a blocking Send were to return less than the number of bytes specified, it would be exhibiting
non-blocking behavior, since it should always block until it can copy the full buffer.

If there's a chance that a blocking Send won't copy the entire buffer, I assume the reason would be different than that of a
non-blocking Send not copying the entire buffer; otherwise these two modes would have identical functions, in terms of a synchronous
Send at least.

So assuming that a blocking Send might not always copy the full buffer before returning to the caller, do you know of any reason
that would prevent it from doing so, where the reason differs from that of a non-blocking Send that returns less than the length of
the specified buffer?
 

William Stacey [MVP]

Maybe the first paragraph is being too general. It is wrapping both
blocking and non-blocking cases in a sentence. The later paragraph seems to
bring out the truth. The question is: does this *always* happen, even in all
edge cases? I have not found a case yet, but would be interested.

--
William Stacey [MVP]

 

Peter Duniho

William Stacey said:
Maybe the first paragraph is being too general. It is wrapping both
blocking and non-blocking cases in a sentence. The later paragraph seems
to
bring out the truth. The question is does this *always happen even in all
edge cases? I have not found a case yet, but would be interested.

A plain-vanilla Winsock blocking socket will block until the entire
requested buffer has been sent, or an error occurs. It will never return a
sent byte count less than asked for.

So as long as .NET is really just thinly wrapping this behavior, a completed
send on a blocking socket will always have buffered the same number of bytes
that the caller specified.

That said, it's trivial to verify this by comparing the sent byte count to
the requested count. If one doesn't want to write the code to handle
partial sends (not really that hard, but let's assume that's true) it would
be easy enough to include an extra line of code to treat a different byte
count as an error.

While this should not be necessary, Winsock supports third-party "layered
service providers" which may or may not be written correctly. Defensively
double-checking this behavior in code can insulate your program from
badly-written LSPs.
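Pete's defensive double-check, as code (CheckedSend is a hypothetical helper; it simply refuses to trust a short count from a blocking Send):

```csharp
using System;
using System.Net;
using System.Net.Sockets;

// Treat a short count from a blocking Send as an error instead of relying
// on the documented all-or-nothing behavior; this insulates the program
// from a badly-written layered service provider.
void CheckedSend(Socket socket, byte[] buffer)
{
    int sent = socket.Send(buffer);
    if (sent != buffer.Length)
        throw new InvalidOperationException(
            string.Format("Partial send: {0} of {1} bytes", sent, buffer.Length));
}

// Demo over loopback.
TcpListener listener = new TcpListener(IPAddress.Loopback, 0);
listener.Start();
Socket client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
client.Connect(IPAddress.Loopback, ((IPEndPoint)listener.LocalEndpoint).Port);

int got = 0;
using (Socket server = listener.AcceptSocket())
{
    CheckedSend(client, new byte[] { 1, 2, 3, 4 });

    byte[] tmp = new byte[4];
    while (got < 4)
        got += server.Receive(tmp, got, 4 - got, SocketFlags.None);
}
Console.WriteLine("Received {0} bytes", got);

client.Close();
listener.Stop();
```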

Pete
 
