Socket weirdness

  • Thread starter: William Stacey [MVP]

William Stacey [MVP]

| Your explanation seems to be accurate with respect to my test results and
| previous experience with sockets, however Alan's explanation led me to
| believe that a different mechanism was in place. I am referring
| specifically to the following statement of Alan's:
|
| >>> And at some point the first send now reaches the server, who responds
| >>> with a packet with the RST flag set. That then makes its way back
| >>> across the network. Again the client application can be doing sends,
| >>> and maybe some packets are sent too.
| >>> Then the RST reaches the client device, and works its way up through
| >>> the stack.
|
| In your test case there was no concurrent sending, so I assumed that on a
| blocking send with no concurrent activity that the RST flag would "make
| its way back across the network" immediately from Alan's statements.

I think we are still talking about the same thing. The RST makes its way
back to the client on an ACK. However, you may not see that ACK until after
1 or 2 sends. I *think* TCP can send two packets without an ACK, but then
must wait (correct me here if wrong) for an ACK before sending more. But
eventually you will get the ACK with RST set and the next send/receive will
fail.

| Interesting that the RST made it back into the client stack but
| ConnectionReset is returned only on every subsequent request.

I did not follow you here.

| previous statements were referring to this particular statement of Alan's.
| Why not just fail the initial send if the RST flag has already been
| received, if in fact it is received?

It will fail if it was received. But in our case it cannot be received
until after the first send, because it is set in the ACK of that first
send. But I may already have done another Send before the first ACK is
processed, so I may see it on the second or third send. If we always do the
"proper" shutdown on both sides, then none of this matters and all should
work fine:
1.. Finish sending data.
2.. Call shutdown() with the how parameter set to 1.
3.. Loop on recv() until it returns 0.
4.. Call closesocket().
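
In .NET terms, a minimal sketch of those four steps (the socket name and
buffer size here are illustrative only):

using System.Net.Sockets;

static void GracefulClose(Socket sock)
{
    // 1. Finish sending data (application-specific; assume nothing more to send).

    // 2. shutdown() with how = 1 corresponds to SocketShutdown.Send in .NET:
    //    tell the peer we are done sending (our FIN goes out).
    sock.Shutdown(SocketShutdown.Send);

    // 3. Loop on Receive until it returns 0, i.e. until the peer has also
    //    finished sending and closed its side.
    byte[] buffer = new byte[4096];
    while (sock.Receive(buffer) > 0)
    {
        // Drain (and optionally process) whatever is still in flight.
    }

    // 4. closesocket() - both directions are now done.
    sock.Close();
}
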
Cheers.
--wjs
 

William Stacey [MVP]

Also, I am wondering about the usefulness of BeginSend. Send actually does
not block; it just puts bytes on the kernel buffer(s) and returns - the
stack then takes it from there. It seems as if BeginSend would actually be
more overhead because of the callback, context switch, IAsyncResult, etc.
So is BeginSend actually useful? If so, why? Does anyone have any perf
data? TIA.
 

Peter Duniho

William Stacey said:
Also, I am wondering about the usefulness of BeginSend. Send actually does
not block; it just puts bytes on the kernel buffer(s) and returns - the
stack then takes it from there.

Assuming .NET Send is similar to Winsock send()/WSASend, it *can* block.
The kernel buffer is only so large, and if one sends a buffer larger than
can be buffered by the network driver, the call to send will block until all
of the data has been buffered (which for large sends implies that at least
some of the data has already been actually sent, though there's no way for
the sending application to know this).
It seems as if BeginSend would actually be more overhead because of the
callback, context switch, IAsyncResult, etc. So is BeginSend actually
useful? If so, why? Does anyone have any perf data? TIA.

It seems to me that if performance is an issue, then BeginSend is likely
just fine. When performance is an issue, one is generally dealing with
large amounts of data, or large numbers of clients, or both. A large amount
of data implies that even a regular Send is likely to block (and result in a
context switch). A large number of clients implies that non-blocking
sockets are desirable as is not blocking the primary thread just to handle
one client.

I was very interested to learn, in some of the recent .NET socket threads,
that the Begin/End async versions of the Winsock calls use IOCP. IOCP
addresses the above issues gracefully and with good performance.

There may well be more overhead with BeginSend, and it may well be that
overhead is unwarranted when dealing with short transmissions to few (or
just one) connections. But in those cases, one is unlikely to notice the
overhead anyway (just as the overhead of managed code is unlikely to be
noticed in many situations). And one very nice thing is that one can take
advantage of IOCP using sockets without having to actually write all the
relatively complex code that is normally required to support IOCP using
plain Win32 Winsock.
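
For what it's worth, a minimal sketch of the Begin/End pattern being
discussed, assuming a connected Socket; the names are illustrative, not
from any posted code:

using System;
using System.Net.Sockets;

static void SendAsync(Socket sock, byte[] data)
{
    // Post the send; the callback runs on a completion-port (IOCP) thread
    // once the operation finishes.
    sock.BeginSend(data, 0, data.Length, SocketFlags.None,
                   new AsyncCallback(OnSendComplete), sock);
}

static void OnSendComplete(IAsyncResult ar)
{
    Socket sock = (Socket)ar.AsyncState;
    try
    {
        int sent = sock.EndSend(ar);   // completes the operation
        Console.WriteLine("Sent {0} bytes", sent);
    }
    catch (SocketException ex)
    {
        Console.WriteLine("Send failed: {0}", ex.ErrorCode);
    }
}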

Pete
 

Dave Sexton

Hi William,

Here are a few questions that I still cannot answer:

1. If shutting down receive on a socket blocks incoming data, does the socket respond with anything when data is actually sent from
the peer?
2. What actually is sent; ACK or RST, ACK and RST, nothing, or something else?
3. Is the response immediate?
4. Does a blocking Send wait for a response before returning to the caller?
5. If Send does wait for a response, why doesn't the initial call to Send fail when RST is received (this question is derived from
how I've understood Alan's explanation thus far)?
6. Does BeginSend wait for a response before returning to the caller (I hope not)? Does EndSend wait for a response?

I responded to your OP because I knew how that behavior could be explained
in general terms, through simple experimentation. Now that we've gone down a
more technical road I'm intrigued, yet confused.
I think we are still talking about the same thing. The RST makes its way
back to the client on an ACK. However, you may not see that ACK until after
1 or 2 sends. I *think* TCP can send two packets without an ACK, but then
must wait (correct me here if wrong) for an ACK before sending more. But
eventually you will get the ACK with RST set and the next send/receive will
fail.

That's where I'm confused. Alan's explanation does not mention ACK, except when he stated the following early on in his response:

Yours and Alan's sound like competing explanations to me. I'm trying to
understand how this works, technically speaking, but I'm confused as to
which explanation is correct. Or are they both correct, but only partial
answers?
| Interesting that the RST made it back into the client stack but
| ConnectionReset is returned only on every subsequent request.

I did not follow you here.

I assumed that the peer was aware of the reset immediately after the first
send, as per Alan's comments above. If that is true, I'm not sure why the
first Send doesn't just fail with ConnectionReset rather than only the
subsequent Sends.

I guess I'll have to do some reading on my own. If ever I can explain this
phenomenon I'll post my findings.
It will fail if it was received. But in our case it cannot be received
until after the first send, because it is set in the ACK of that first
send. But I may already have done another Send before the first ACK is
processed, so I may see it on the second or third send. If we always do the
"proper" shutdown on both sides, then none of this matters and all should
work fine:
1.. Finish sending data.
2.. Call shutdown() with the how parameter set to 1.
3.. Loop on recv() until it returns 0.
4.. Call closesocket().
Cheers.
--wjs

Understood, but I don't see how that applies to the OP or this thread. I
thought we were trying to address problems that have been identified with
the unorthodox, but possible, scenario for using sockets whereby a peer
attempts to shut down receiving without a higher-level protocol to notify
its counterpart of such an occurrence. A standardized shutdown sequence
seems a bit off-topic to me and does not address the problem at hand.

Sorry if I'm nitpicking ;)
 

Goran Sliskovic

William Stacey [MVP] wrote:
....
until the 3rd send depending on speed, etc. The thing I don't understand
yet is why RST is not set in the header of outgoing data if the server does
a send after Receive is closed? Or why the server even lets you do a send
after a Shutdown.Receive if ultimately it forces both sides of the client
down anyway?

The server cannot send RST on a send after shutting down its receive side,
because that would violate the TCP standard. After you shut down receive,
sends are still legal. Even more, this is part of the graceful close
handshake. A TCP connection can be "half-open", meaning one direction is
closed while the other is still open.

Shutdown.Receive is meaningful only to the OS; there is no data exchange
when it executes. Usually, you would call Shutdown.Receive when you receive
0 from Socket.Receive, meaning the other peer has closed its outgoing side.
You are still allowed to send until you do Shutdown.Send. Probably the
intention of Shutdown.Receive was to signal the OS that you don't expect
any more data to come in, so the OS can release resources (buffers) and
spare some memory.

The problem when you do Shutdown.Receive is: what should the TCP stack do
when it receives data on this connection anyway (which would be a symptom
of an application protocol error)? It has 3 choices:
a) ignore
b) ACK
c) RST

If it ignores the data, this will lead to retransmissions and an eventual
timeout on the other side and connection close.

If it ACKs, it will trick the application on the other side, which will
assume the data was received, leading to bad consequences (not an option).

If it sends RST, it will force the close on the other side much sooner.

So, call it only when the other end has closed its outgoing side or you
abandon the connection (followed by close).

Regards,
Goran
 

Goran Sliskovic

Dave said:
Hi William,

Here are a few questions that I still cannot answer:

1. If shutting down receive on a socket blocks incoming data, does the socket respond with anything when data is actually sent from
the peer?

Not when data is sent by the peer, but when data from the peer is received.
Once that RST segment reaches the poor sender, all subsequent operations
(both receive and send) should fail with "Connection reset".

server: Shutdown.Receive
client: send ok
client: send ok
server: receives data -> sends back RST immediately
client: send ok
client: send ok
client: TCP stack receives RST
client: send fails with "Connection reset"
client: send fails with "Connection reset"
client: send fails with "Connection reset"
....
2. What actually is sent; ACK or RST, ACK and RST, nothing, or something else?
RST.

3. Is the response immediate?

Sort of. It should RST when data is received that cannot be delivered to
the application. It may take a while to reach the other side, and that RST
packet could also be lost.
4. Does a blocking Send wait for a response before returning to the caller?

Sort of. If the OS has enough buffer space it will buffer the data and
return immediately. If not, it will wait for an ACK from the other side
that frees some buffer space.
5. If Send does wait for a response, why doesn't the initial call to Send fail when RST is received (this question is derived from
how I've understood Alan's explanation thus far)?

See 4.
6. Does BeginSend wait for a response before returning to the caller (I hope not)? Does EndSend wait for a response?

BeginSend should not. EndSend I'm not sure...

....

Regards,
Goran
 

Dave Sexton

Hi Goran,

Thanks very much for your response. As it turns out, I still have a lot of questions even though you've answered a few (but you've
created some more :).

I still have yet to answer the question, "Why doesn't the initial blocking
Send fail?". I believe now that there are many forces at work here affecting
the answer to my question, and that the behavior of some of the socket
functions, including the blocking Send itself, is simply nondeterministic
due to network latency. What, exactly, does a blocking send do that requires
it to block?

I'm gonna have to hit the books.

Something else that might be of interest to readers, which I found browsing around for more info on the subject of TCP/IP:

Nagle Algorithm on Wikipedia.org:
http://en.wikipedia.org/wiki/Nagel_algorithm
 

Alan J. McFarlane

Eeeeh. I hoped to clear things up in my posting; looks like I didn't
quite succeed. :)

Here are a few questions that I still cannot answer:

1. If shutting down receive on a socket blocks incoming data, does
the socket respond with anything when data is actually sent from the
peer?
(Just to note firstly, doing Shutdown(Receive) alone/initially is very
very very rare, Close/Shutdown(Both), or Shutdown(Send) later followed
by Shutdown(Receive) is very much more common. And if a particular
application/session layer protocol implied that one peer should do
Shutdown(Receive) then it would not be valid for it to then have the
other peer send data!)

Anyway, if the application on one device does Shutdown(Receive), then if a
packet containing data is received from the peer on that connection, that
is not a valid packet, and a packet with the RST bit set is sent, clearing
down the connection.
2. What actually is sent; ACK or RST, ACK and RST, nothing, or
something else?

Firstly, just to be absolutely clear, there is no such thing as an ACK
packet, or a RST packet, or a SYN packet, etc. In TCP there is simply *one*
type of packet. This is unlike HDLC (and its children), which carries data
in 'I' frames, has ACK frames which it calls 'RR' (Receiver Ready), and
lots more. :-( Unfortunately it's also often such terms that are used by
our teachers...

Anyway, TCP's header is always the same format. It contains some numerical
fields, e.g. one for the sequence number, has a set of flags: URG, ACK,
PSH, RST, SYN and FIN, and finally it optionally carries some data. So (to
be very correct) we can have "a packet with the ACK bit set", but no such
thing as an "ACK packet". Now, in reality we generally call a packet with
no data and the ACK bit set an "ACK packet", but it can confuse...

So in the case above, a packet with the RST bit set will be sent. I'm not
sure whether the ACK bit will be set; I'd have to go and read the
specification to be sure, and I'm not sure that it matters particularly...
3. Is the response immediate?

Err to what? :)

The server 'immediately' sends a RST when it gets any packet that is not
valid (and receiving a segment containing data is not valid where the local
application has done shutdown(receive)). Of course there is time for both
of those packets to cross the network.

And as we note below, a send is _not_ immediate on the application
calling send (or it returning etc)...
4. Does a blocking Send wait for a response before returning to the
caller?

No. It just adds data to the buffer and returns. (Ignoring here what
happens if the buffer becomes full...) The TCP protocol layer then
decides when to take a segment's worth of data and send it in a packet.
In general we should consider the two as independent.
5. If Send does wait for a response, why doesn't the initial call to
Send fail when RST is received (this question is derived from how
I've understood Alan's explanation thus far)?
It doesn't. This might be where the confusion lies. :-(
6. Does BeginSend wait for a response before returning to the caller
(I hope not)? Does EndSend wait for a response?
Neither.


I got lost following the messages in the text below; let me know what, if
anything, that doesn't cover. :)

Alan
 

Alan J. McFarlane

Goran Sliskovic wrote:
William Stacey [MVP] wrote:
...

The server cannot send RST on a send after shutting down its receive side,
because that would violate the TCP standard. After you shut down receive,
sends are still legal. Even more, this is part of the graceful close
handshake. A TCP connection can be "half-open", meaning one direction is
closed while the other is still open.
Er, it's "half-closed". :-,)

I'll quote from Stevens, TCP/IP Illustrated Volume 1. Section 18.5:
"TCP provides the ability for one end of a connection to terminate its
output, while still receiving data from the other end. This is called
_half-close_. Few applications take advantage of this capability
[...]."

And in section 18.7:
"A TCP connection is said to be _half-open_ if one end has closed or
aborted the connection without the knowledge of the other end. This
can happen any time one of the two hosts crashes. As long as there's
no attempt to transfer data across a half-open connection, the end
that's still up won't detect that the other end has crashed."

Shutdown.Receive is meaningful only to the OS; there is no data exchange
when it executes. Usually, you would call Shutdown.Receive when you receive
0 from Socket.Receive, meaning the other peer has closed its outgoing side.
You are still allowed to send until you do Shutdown.Send. Probably the
intention of Shutdown.Receive was to signal the OS that you don't expect
any more data to come in, so the OS can release resources (buffers) and
spare some memory.

The problem when you do Shutdown.Receive is: what should the TCP stack do
when it receives data on this connection anyway (which would be a symptom
of an application protocol error)? It has 3 choices:
a) ignore
b) ACK
c) RST

If it ignores the data, this will lead to retransmissions and an eventual
timeout on the other side and connection close.

If it ACKs, it will trick the application on the other side, which will
assume the data was received, leading to bad consequences (not an option).

If it sends RST, it will force the close on the other side much sooner.

So, call it only when the other end has closed its outgoing side or you
abandon the connection (followed by close).
Nice.
 

William Stacey [MVP]

Fair enough. But where does "pinning" buffers come into play? I thought
that when you sent your buffer or buffers (with the ArraySegment overload),
*those* buffers get pinned and the driver uses them directly instead of
making a costly copy of user buffers to driver buffers.
I mean, if it did just use the user buffers, then there are no buffers to
copy, just something to queue up. Any light on this?

--
William Stacey [MVP]

| | > Also. I am wondering about the usefullness of BeginSend? Send
actually
| > does not block, it just puts bytes on the kernel buffer(s) and returns -
| > the
| > stack then takes it from there.
|
| Assuming .NET Send is similar to Winsock send()/WSASend, it *can* block.
| The kernel buffer is only so large, and if one sends a buffer larger than
| can be buffered by the network driver, the call to send will block until
all
| of the data has been buffered (which for large sends implies that at least
| some of the data has already been actually sent, though there's no way for
| the sending application to know this).
|
| > It seems as if BeginSend would actually be
| > more overhead because of the callback, context switch, and IAsync, etc.
| > So
| > is BeginSend actually usefull? If so, why? Does anyone have any perf
| > data?
| > tia.
|
| It seems to me that if performance is an issue, then BeginSend is likely
| just fine. When performance is an issue, one is generally dealing with
| large amounts of data, or large numbers of clients, or both. A large
amount
| of data implies that even a regular Send is likely to block (and result in
a
| context switch). A large number of clients implies that non-blocking
| sockets are desirable as is not blocking the primary thread just to handle
| one client.
|
| I was very interested to learn, in some of the recent .NET socket threads,
| that the Begin/End async versions of the Winsock calls use IOCP. IOCP
| addresses the above issues gracefully and with good performance.
|
| There may well be more overhead with BeginSend, and it may well be that
| overhead is unwarranted when dealing with short transmissions to few (or
| just one) connections. But in those cases, one is unlikely to notice the
| overhead anyway (just as the overhead of managed code is unlikely to be
| noticed in many situations). And one very nice thing is that one can take
| advantage of IOCP using sockets without having to actually write all the
| relatively complex code that is normally required to support IOCP using
| plain Win32 Winsock.
|
| Pete
|
|
 

Peter Duniho

William Stacey said:
Fair enough. But where does "pinning" buffers come into play? I thought
that when you sent your buffer or buffers (with the ArraySegment overload),
*those* buffers get pinned and the driver uses them directly instead of
making a costly copy of user buffers to driver buffers.
I mean, if it did just use the user buffers, then there are no buffers to
copy, just something to queue up. Any light on this?

I don't know. That's a .NET thing while my knowledge comes from experience
with Winsock itself. I'm new to the whole .NET framework stuff.

That said, again borrowing from the underlying Winsock behavior...it is
common procedure when using IOCP to set the underlying network buffers to 0
length to force Winsock to send and receive from and to your own buffers,
avoiding an extra copy of the data. So perhaps .NET is doing something
similar when one uses the Begin/End paradigm.

For a normal blocking Send, however, I would expect that *some* buffering
does occur. I can't say this for sure, but an actual network send can take
a fairly long time and it would be kind of rude for .NET to cause your
thread to block until all of the data has been sent and acknowledged, at
least when the size of the sent data is small (an application shouldn't be
making large blocking sends unless it's prepared to sit and wait awhile).

For all I know, even in the buffered case, .NET has to pin your buffer to
ensure that it doesn't move while the underlying Winsock send occurs. I
have no idea how .NET handles data or threading. If it's got some kind of
maintenance thread (like, where does the garbage collector run?) that could
move memory around even while the thread using it is blocked on a system
API, I guess .NET would have to pin the buffer before calling the system
API.

Hopefully, these questions are academic and the .NET programmer using the
.NET sockets API doesn't need to worry too much about them. It should be
safe to assume that when .NET tells you a socket operation has completed,
that the buffer is yours again. For a blocking Send call, "completed"
should be as soon as control has been returned back to the calling program.
For a non-blocking method, such as BeginSend, I would expect to not be
permitted to touch the buffer until notification that the operation has
completed (either in the callback, or by blocking on the EndSend
method...the documentation isn't completely clear on this IMHO).

Pete
 

Dave Sexton

Hi Alan,

Excellent response. See inline:
Eeeeh. I hoped to clear things up in my posting; looks like I didn't quite succeed. :)

Lol. It's hard to explain all of TCP in a single post.
(Just to note firstly, doing Shutdown(Receive) alone/initially is very very very rare, Close/Shutdown(Both), or Shutdown(Send)
later followed by Shutdown(Receive) is very much more common. And if a particular application/session layer protocol implied that
one peer should do Shutdown(Receive) then it would not be valid for it to then have the other peer send data!)

Fair enough. I do understand the rarity of the operation under scrutiny.
I'd like a complete understanding of TCP, and the ability to shut down
Receive only is part of the protocol (and, not to mention, the topic of
this thread :)
Anyway, if the application on one device does Shutdown(Receive), then if a packet containing data is received from the peer on that
connection, that is not a valid packet, and a packet with the RST bit set is sent, clearing down the connection.

Understood (from Goran's post as well).
Firstly, just to be absolutely clear, there is no such thing as an ACK packet, or a RST packet, or a SYN packet, etc. In TCP
there is simply *one* type of packet, this is unlike HDLC (and its children), which carries data in 'I' frames, has ACK frames
which it calls 'RR' (Receiver Ready), and lots more. :-( Unfortunately its also often those such terms that are used by our
teachers...

I don't know anything about HDLC, and being an autodidact, anything that I
don't understand about TCP up until now is my own fault :)
Anyway, TCP's header is always the same format. It contains some numerical fields e.g one for the sequence number, has a set of
flags: URG, ACK, PSH, RST, SYN and FIN, and finally it optionally carries some data. So (to be very correct) we can have "a
packet with the ACK bit set", but no such thing as a "ACK packet". Now often in reality we generally call a packet with no data
and the ACK bit set as an "ACK packet" but it can confuse...

Thanks for clearing that up. Also, the wikipedia.org article (link in a previous thread of mine) contains a nice little chart of
the TCP header and the meaning of each section of bytes.
So in the case above, a packet with the RST bit will be sent. I'm not sure whether the ACK bit will be set, I'd have to go and
read the specification to be sure, and I'm not sure that it matters particularly...

Understood now. RST is sent; ACK doesn't matter since it would be in the same header anyway. What's important is the RST (and ACK
probably shouldn't be set, for that matter)
Err to what? :)

To the peer that sent the data! I was leading in to my next question about whether Send blocked for that response because I assumed
the answer to this question was "yes"...
The server 'immediately' sends a RST when it gets any packet that is not valid. And receiving a segment containing data is not
valid where the local application has done shutdown(receive)). Of course there is time for both of those packets to cross the
network.

And as we note below, a send is _not_ immediate on the application calling send (or it returning etc)...

Understood now (from your response below as well). There is an inherent
asynchrony in TCP due to network latency; therefore, RST is sent
immediately but might not be received by the peer before another Send,
even if the peer only calls Send in a synchronous manner.

This leads me to believe that our test code in this thread does not
accurately represent the behavior of a socket in all possible
circumstances. The console output shows that the second Send will always
raise an error since it received the RST; however, that might not be the
case in a real-world application. (Goran tried explaining this in his
illustration, but he didn't mention whether his model was synchronous or
asynchronous, and without this new knowledge of mine that Send does not
block for a response, I assumed he was talking about an asynchronous model
only.)
No. It just adds data to the buffer and returns. (Ignoring here what happens if the buffer becomes full...) The TCP protocol
layer then decides when to take a segment's worth of data and send it in a packet. In general we should consider the two as
independent.

Isn't it the Nagle algorithm, not the TCP protocol, that determines what a "segment" actually is and when it should be sent?
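
(As an aside, and assuming a Socket instance called "sock": .NET does
expose a per-socket switch for Nagle if small writes need to go out
immediately.)

sock.NoDelay = true;
// which I believe is equivalent to:
sock.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);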

So it seems that Send blocks when it has to load data into an overflowing
buffer [also: Peter Duniho's response to William's question in a branch of
this thread] and not for a response from its peer. This really confused me,
since I always assumed Send waited for a response and BeginSend didn't.

This makes me change my outlook on TCP in general. I always thought TCP
provided "Control" over the flow of information on its own connection, but
now I think that "Control" in "Transmission Control Protocol" refers to the
fact that TCP is commonly used to control the flow of information in a
higher-level protocol that involves other connections, such as a video game
that uses UDP to transfer data, or an FTP data connection that is
independent of the "Control" connection. Is that correct, or does "Control"
mean something else that I'm unaware of, such as an improvement on an
ancestor protocol perhaps?
It doesn't. This might be where the confusion lies. :-(

Yep, it was one of the sources of confusion. It's not anymore. This thread
contains a few answers to that question, but none had clearly defined
whether or not a blocking Send blocks for the RST, which I needed to know
first before I could understand anything else. Thank you for clearing that
up. I guess that those who already understand TCP on the protocol level
felt it was too obvious an answer to mention ;)

Understood now from your answer to #5 as well.
I got lost following the messages in the text below, let me know what if anything that doesn't cover. :)

Alan

You've answered all of the questions that I wrote so far. Sorry if I don't explain myself thoroughly enough. It's hard to ask
questions without fully understanding the terminology. Thanks for your help.

So it seems that the behavior I witnessed when using TCP/IP Sockets, with regards to the example in the OP, was a side effect of the
actual implementation and not a valid indication of the real behavior of TCP/IP. I can't say that I've used this mechanism in
production code but I have played around with it before and thought that I had the behavior pinned.

Again, I'm planning on reading more about TCP. RTFM is an acceptable
response to any of my inquiries, but I do appreciate the help :)
 

William Stacey [MVP]

| Understood now. RST is sent; ACK doesn't matter since it would be in the
| same header anyway. What's important is the RST (and ACK probably
| shouldn't be set, for that matter)

Back to the beginning, I think. RST is set, not sent independently, AFAICT.
There can be no RST set unless it is set in an ACK packet of the client's
first send (in this case). If the client never sent anything after the
server did the close, the client would never see the RST - which now makes
sense.
Example:

1) Server closes Receive.
2) Client does a send. It does not block on either Send or BeginSend.
3) Client may do another send, as the RST has not been "seen" yet by the
   client.
4) Client gets the ACK of the first Send, which has RST set in the header.
5) Bang. Future Sends/Receives at the client will error because the
   server's RST has been processed.
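
A rough repro of that sequence (the port, sizes, and delay are arbitrary;
exactly which send fails depends on buffering and latency):

using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class RstDemo
{
    static void Main()
    {
        TcpListener listener = new TcpListener(IPAddress.Loopback, 9050);
        listener.Start();

        Socket client = new Socket(AddressFamily.InterNetwork,
                                   SocketType.Stream, ProtocolType.Tcp);
        client.Connect(new IPEndPoint(IPAddress.Loopback, 9050));
        Socket server = listener.AcceptSocket();

        // Server closes its receive side; any data that arrives now draws a RST.
        server.Shutdown(SocketShutdown.Receive);

        byte[] data = new byte[10];
        for (int i = 1; i <= 10; i++)
        {
            try
            {
                client.Send(data);
                Console.WriteLine("Send {0} ok", i);
            }
            catch (SocketException ex)
            {
                // Fails once the RST has worked its way back and been processed.
                Console.WriteLine("Send {0} failed: {1}", i, ex.ErrorCode);
                break;
            }
            Thread.Sleep(100);   // give the RST time to come back
        }

        client.Close();
        server.Close();
        listener.Stop();
    }
}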

--wjs
 

Dave Sexton

Hi William,

Nice example. It includes all of the points I was trying to understand.
This shows how the example in the OP does not accurately illustrate the
behavior of this mechanism, since it used synchronous Sends only, making it
seem like the second Send will always fail, when in fact it might very well
be the thousandth send that fails, given enough time.
 

Dave Sexton

Hi Vadym,

So, I have to ask...

1. How often will a blocking Send block, and for how long?
2. I understand this depends on the size of the buffer, so how big is the kernel buffer?
3. Is the size of the buffer affected by the Nagle algorithm in any way?
4. Does the size of the buffer fluctuate, or can it be changed programmatically?
5. If a blocking Send isn't waiting for a response from the server why not just write the buffer directly into unmanaged memory (or
pin a copy) and return immediately to the caller? i.e., why block at all? I take it that this is what BeginSend does?
6. The example in the OP attempts to send 10 bytes a few times, synchronously, and it seems that the second Send always failed after
RST in my testing. Will increasing or decreasing the number of bytes sent in the first Send cause this behavior to change? In
other words, if the first send no longer blocks (if it currently is blocking, depending on the size of the buffer and the number of
bytes sent), is it possible that the second Send will not always fail because the time it has taken to normally Send has decreased
even if the time it takes to receive the RST has remained the same?

I only ask the last question because it seems to me that this behavior is really unpredictable and that no real example can be
written that will function identically on each individual computer. In other words, it's impossible to understand this behavior
only through testing.

(I just realized that the example in the OP blocked the thread for 100 milliseconds after each iteration. In that case my question
(6) is still valid, but please ignore the context in which it was asked.)

RTFM is acceptable ;) Just, where is the manual exactly? I'll check out TCP/IP Illustrated as William recommended, but if there is a genuine
manual that describes the protocol on the web somewhere I'd like to know.

--
Dave Sexton

Vadym Stetsyak said:
Hello, William!

WSM> Fair enouph. But where does "pinning" buffers come into play? I
WSM> thought when you sent your buffer or buffers (with ArraySegment
WSM> overload) *those buffers get pinned and the driver uses them directly
WSM> instead of making a costly copy of user buffers to driver buffers.
WSM> I mean if it did just use the user buffers, then there is no buffers
WSM> to copy, just something to queue up. Any light on this?

"Pinning" also occurs when you're using Socket.Send(byte[] buffer .... ).
So, the Send process comes like this:
- Socket.Send(byte[] buffer) - buffer is pinned and marshaled to native send(....)
- Native send then passsed that buffer down to undelying WSPSend ( LSP related stuff )
- Finnaly buffer gets copied into kernel mode buffer, if kernel buffer size is less then buffer
specified by the user, then WSPSend is blocked, thus blocking all the upward call chain.
 

William Stacey [MVP]

| Nice example. It includes all of the points I was trying to understand.
| This shows how the example in the OP does not accurately illustrate the
| behavior of this mechanism, since it used synchronous Sends only, making
| it seem like the second Send will always fail, when in fact it might very
| well be the thousandth send that fails, given enough time.

TCP will only allow so many outstanding unacknowledged packets before it
will not send any more - so we should not be able to send many before we
get push-back. I thought I read somewhere that it was 2 outstanding
packets, but I could be wrong. So I guess that means Send could also block
because of this (i.e. waiting to post a buffer because the stack is waiting
for an outstanding ACK) until the send timeout.
--wjs
 

Dave Sexton

Hi William,

So "thousandth" is completely inaccurate :)

Are you suggesting that Send will block for ACK in some cases? I have been led to believe that Send does not block for ACK in any
circumstance.
 

William Stacey [MVP]

| 1. How often will a blocking Send block, and for how long?
socket.SendTimeout (also paired ReceiveTimeout)

| 2. I understand this depends on the size of the buffer, so how big is the
kernel buffer?
socket.SendBufferSize (and ReceiveBufferSize)

| 3. Is the size of the buffer affected by the Nagle algorithm in any way?
Don't think so, but not sure.

| 4. Does the size of the buffer fluxuate, or can it be changed
programmatically?
See above.

| 5. If a blocking Send isn't waiting for a response from the server why not
just write the buffer directly into unmanaged memory (or
| pin a copy) and return immediately to the caller? i.e., why block at all?

It does not block if buffer space is available. If space is available, it
copies the user buffer and returns N. Non-blocking socket mode gets a
little more complex: it will copy up to the point it has space for and
return N or something < N, and then your code needs to send the rest of the
buffer.
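
A sketch of handling that partial-send case on a non-blocking socket
("sock" and "data" are illustrative names; a blocking socket does this
looping for you):

sock.Blocking = false;
int offset = 0;
while (offset < data.Length)
{
    try
    {
        // Send may accept only part of the buffer; advance past what was taken.
        offset += sock.Send(data, offset, data.Length - offset, SocketFlags.None);
    }
    catch (SocketException ex)
    {
        if (ex.SocketErrorCode == SocketError.WouldBlock)
        {
            // No buffer space right now; wait until the socket is writable again.
            sock.Poll(-1, SelectMode.SelectWrite);
            continue;
        }
        throw;
    }
}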

| I take it that this is what BeginSend does?

BeginSend does not copy the user buffer, but keeps it pinned and the
driver uses the user buffer directly. That is another reason why BeginSend
can be more efficient: there is no buffer-copy overhead, which in a busy
system can be a drain. Not sure if there is ever a case where it does a
copy and releases the user's buffer?

| 6. The example in the OP attempts to send 10 bytes a few times,
synchronously, and it seems that the second Send always failed after
| RST in my testing. Will increasing or decreasing the number of bytes sent
in the first Send cause this behavior to change? In
| other words, if the first send no longer blocks (if it currently is
blocking, depending on the size of the buffer and the number of
| bytes sent), is it possible that the second Send will not always fail
because the time it has taken to normally Send has decreased
| even if the time it takes to receive the RST has remained the same?

Interestingly, if you set SendBufferSize to 0 in the code we are talking
about, in my tests the *first* send does throw the error. So it would seem
it is blocking for the ACK because of the zero buffer. Try it out and see
if you see the same.
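
The tweak is just this (hypothetical variable name; behaviour may vary by
OS/stack):

client.SendBufferSize = 0;     // no kernel-side send buffering
try
{
    client.Send(new byte[10]); // now has to wait for the stack to take the data
}
catch (SocketException ex)
{
    // With a zero buffer, the RST can surface on the very first Send.
    Console.WriteLine("First send failed: {0}", ex.ErrorCode);
}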

| I only ask the last question because it seems to me that this behavior is
really unpredictable and that no real example can be
| written that will function identically on each individual computer. In
other words, it's impossible to understand this behavior
| only through testing.

But I think we are talking about an error in "our" protocol, so I'm not
sure this matters. The connection is implicitly shut down half-way by the
server. The server can send and the client can receive - all good. The
client should not be sending anyway, since it should "know" the state of
the protocol - hence the error in our protocol. The client only knows
explicitly after it tries a send and gets the ACK with the RST set.


| RTFM is acceptable ;) Just, where is the manual exactly? I'll check out
| TCP/IP Illustrated as William recommended, but if there is a genuine
| manual that describes the protocol on the web somewhere I'd like to know.

You can also read the RFCs (e.g. 793, 3168):
ftp://ftp.rfc-editor.org/in-notes/rfc793.txt
 

William Stacey [MVP]

| Are you suggesting that Send will block for ACK in some cases? I have
| been led to believe that Send does not block for ACK in any circumstance.

Ultimately, the send buffer will do the push-back. If TCP is waiting for a
pending ACK, it is not sending. If it is not sending, the send buffer is
not popped and it fills up. Send will block (on a blocking socket) if it
cannot write the user buffer - so yes, it seems Send could block for up to
SendTimeout.

Setting SendBufferSize to 0 seems to also make Send block until the write
completes (and it seems like I see the ACK with RST right away also).

--wjs
 

Dave Sexton

Hi William,

Thanks, that clears it up a bit more. I also see that you've mentioned SendBufferSize, which answers a related question I posed in
another branch of this thread. I'm going to take a look at your response to that post now.
 
