Question on Socket disconnect handling


Bruce

Hi
I have a question about the cleanest way to handle disconnects in
my socket handling code.
I have an application in which a class, once started, calls
socket.BeginReceive(); in the callback I do some processing and call
BeginReceive again (so I read in a loop).
This works great until I decide to disconnect. In the disconnect code,
I close the socket and shut it down. But the BeginReceive callback may
still be running, and depending on the timing I get a SocketException
or an ObjectDisposedException in BeginReceive or EndReceive. I don't
care about this error at this stage since I have closed the socket.
But while running my app under load, I see a large number of CLR
exceptions/sec (perf counter) because of the constant, intentional
disconnects I do.
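
A minimal sketch of the kind of read loop and disconnect being described
(the class shape, member names, and the ordering inside Disconnect() are
illustrative guesses, not code from the post):

    using System;
    using System.Net.Sockets;

    class Connection
    {
        private readonly Socket _socket;
        private readonly byte[] _buffer = new byte[4096];

        public Connection(Socket connectedSocket)
        {
            _socket = connectedSocket;
        }

        public void StartReceiving()
        {
            _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None,
                                 OnReceive, null);
        }

        private void OnReceive(IAsyncResult ar)
        {
            // Throws ObjectDisposedException (or SocketException) if
            // Disconnect() has already closed the socket -- the race above.
            int bytesRead = _socket.EndReceive(ar);

            // ... process _buffer[0..bytesRead) ...

            // Post the next receive so reads continue in a loop.
            _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None,
                                 OnReceive, null);
        }

        public void Disconnect()
        {
            _socket.Shutdown(SocketShutdown.Both);
            _socket.Close();   // a pending receive may still be in flight
        }
    }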

My question is: since exceptions are expensive in C#, I should keep
them near zero most of the time, right? But in the above case, I am
getting legitimate exceptions that I handle.
1. Should I not worry about the exception rate?
2. Or should I add a flag throughout the code, check it before calling
EndReceive or BeginReceive, and set it in the disconnect code? (Race
conditions can still exist.)

I am trying to make my app scale as much as I can.

Thanks
Bruce
 

Ken Foskey

The solution I took was to simply use a dedicated thread with a blocking
receive (not async) and a timeout. Every 5 seconds it loops around, checks
a boolean variable, and exits the loop and cleans up when the flag is set.
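
Ken's post doesn't include code; a rough sketch of that kind of loop might
look like the following (the _stopRequested flag and the use of Socket.Poll
as the 5-second timed wait are assumptions, not necessarily what Ken used):

    using System.Net.Sockets;

    class ReceiverThread
    {
        private readonly Socket _socket;
        private readonly byte[] _buffer = new byte[4096];
        private volatile bool _stopRequested;

        public ReceiverThread(Socket connectedSocket)
        {
            _socket = connectedSocket;
        }

        // Called from the disconnect code.
        public void RequestStop()
        {
            _stopRequested = true;
        }

        // Run this on its own dedicated thread.
        public void Run()
        {
            while (!_stopRequested)
            {
                // Wait up to 5 seconds for data (Poll takes microseconds).
                if (!_socket.Poll(5000000, SelectMode.SelectRead))
                    continue;          // timed out: loop and re-check the flag

                int bytesRead = _socket.Receive(_buffer);
                if (bytesRead == 0)
                    break;             // remote endpoint closed the connection

                // ... process _buffer[0..bytesRead) ...
            }

            _socket.Close();           // clean up on the way out of the loop
        }
    }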
 

Bruce

Hi
I have a question about the cleanest way to handle disconnects in
my socket handling code.
I have an application in which a class, once started, calls
socket.BeginReceive(); in the callback I do some processing and call
BeginReceive again (so I read in a loop).
This works great until I decide to disconnect. In the disconnect code,
I close the socket and shut it down. But the BeginReceive callback may
still be running, and depending on the timing I get a SocketException
or an ObjectDisposedException in BeginReceive or EndReceive.  [...]

It sounds as though you are doing the disconnect incorrectly.  One  
endpoint or the other will initiate the disconnect by doing a graceful  
closure.  In .NET, this means (for the Socket class) calling the  
Shutdown() method.  Once you've called that, you must continue to receive  
data until you get a 0-byte receive, indicating the end of the stream.

Only at that point should you be calling Close() on the Socket instance.  
And of course at that point, you would not try to receive any more data.

If you do it correctly, you should get no exceptions thrown.
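
A minimal sketch of that sequence on the side that initiates the
disconnect, assuming a BeginReceive loop like the one in the question is
already running (class and member names are illustrative):

    using System;
    using System.Net.Sockets;

    class Connection
    {
        private readonly Socket _socket;
        private readonly byte[] _buffer = new byte[4096];

        public Connection(Socket connectedSocket)
        {
            _socket = connectedSocket;
            _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None,
                                 OnReceive, null);
        }

        // Initiate the graceful closure: stop sending, keep receiving.
        public void BeginDisconnect()
        {
            _socket.Shutdown(SocketShutdown.Send);
            // Do NOT call Close() here; wait for the 0-byte receive below.
        }

        private void OnReceive(IAsyncResult ar)
        {
            int bytesRead = _socket.EndReceive(ar);
            if (bytesRead == 0)
            {
                // The peer has finished sending too; closure is complete.
                _socket.Close();
                return;
            }

            // ... process _buffer[0..bytesRead) ...

            _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None,
                                 OnReceive, null);
        }
    }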

Pete

Thanks for the reply, Pete. Yes, I was doing the wrong thing; I have
now modified my code to first call Shutdown and wait for the zero-byte
receive before calling socket.Close().
I have a related question: say sending a certain message makes the
server disconnect my client, and I call
socket.BeginSend("badMessage",.....,callback..)

In the callback method, I call socket.EndSend(ar). Now, I observe that
sometimes I get an ObjectDisposedException here. I assume there could
be a legitimate race between the callback being invoked / EndSend being
called and the socket being disconnected by the server?
And I have to either absorb the exception or choose not to call EndSend?

Thanks
Bruce
 

Bruce

[...]
In the callback method, I call socket.EndSend(ar). Now, I observe that
sometimes I get an ObjectDisposedException here. I assume there could
be a legitimate race between the callback being invoked / EndSend being
called and the socket being disconnected by the server?
And I have to either absorb the exception or choose not to call EndSend?

Yes.  Well, actually...there's at least one other option: handle all  
outstanding EndSend() calls when you get the 0-byte receive, before you  
call Socket.Close(), and then ignore them in their own callbacks.

To me, that's the "cleanest" way to deal with the issue, since it avoids  
any exceptions.  But it's a bit of extra work for no real apparent gain as
far as I can see.

You should definitely still call EndSend() though.  So just ignore the  
exception that occurs there if you know the socket should have already  
been closed.
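
A sketch of a send callback along those lines (the _closed flag is an
illustrative way of recording that the socket is expected to be gone by
the time the callback runs):

    using System;
    using System.Net.Sockets;

    class Sender
    {
        private readonly Socket _socket;
        private volatile bool _closed;   // set once we have closed the socket

        public Sender(Socket connectedSocket)
        {
            _socket = connectedSocket;
        }

        public void Send(byte[] data)
        {
            _socket.BeginSend(data, 0, data.Length, SocketFlags.None,
                              OnSendDone, null);
        }

        // Call this once the disconnect/Close has happened.
        public void NoteClosed()
        {
            _closed = true;
        }

        private void OnSendDone(IAsyncResult ar)
        {
            try
            {
                _socket.EndSend(ar);    // always balance BeginSend with EndSend
            }
            catch (ObjectDisposedException)
            {
                if (!_closed) throw;    // unexpected: socket shouldn't be gone yet
                // expected after Close(); ignore
            }
            catch (SocketException)
            {
                if (!_closed) throw;    // a send failure while still connected
                // expected if the peer dropped the connection; ignore
            }
        }
    }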

Pete


Thanks Pete.
But it's a bit of extra work for no real apparent gain as far as I can see.
Under high load, I do not want to see a large number of CLR exceptions
and that is why I am trying to resolve this.
Since I am throwing away the connection anyway, what is actually wrong
with not calling EndSend at all?
 

Bruce

Unless you have measured a real performance issue due to the exceptions, I  
think you should not worry about it.  Exceptions can be extremely  
expensive when the debugger is running.  But without the debugger  
attached, they are only "sort of expensive".  As long as you're not seeing  
a huge number of them over and over, I doubt it will affect your  
throughput at all.


The short answer is: because there's nothing in the documentation that  
says you are allowed to not call EndSend().  The general async pattern in  
.NET requires balanced Begin...() and End...() method calls.

As far as more specific reasons go, .NET is required to keep the data  
structures around that allow you to successfully call EndSend() until you 
do.  If you don't call EndSend(), that can lead to resource allocation  
issues.  In some sense, EndSend() is an alternate pattern to IDisposable  
for operations.  If you haven't called EndSend(), you haven't properly  
disposed of the operation that you started.

If you just lose track of the IAsyncResult that was returned by  
Begin...(), then it's possible a finalizer on the operation will  
eventually run and clean things up for you.  But that's not something you  
should rely on.  Code that depends on the finalizer is code that's broken.

Pete

Thanks Pete. I think I will just take the hit of the exceptions in the
send case. I should not be relying on the exceptions counter to
determine whether something "bad" happened on my client.
To clarify the earlier point: a disconnect is only truly complete when
EndReceive returns 0 bytes, upon which I call socket.Close().
So, a client disconnect is an asynchronous operation, and I should wait
for it to 'complete' (by, say, raising an event after socket.Close())
and only then reuse my socket read and write buffers.

Is that right?

Bruce
 

Bruce

I'm not sure what you mean by "disconnect is an asynchronous operation".
Network i/o isn't inherently synchronous or asynchronous; it's just i/o.
You can process it either way.

If you are processing network i/o with asynchronous techniques, then the
disconnect is asynchronous as well.  But, you could easily make it
synchronous simply by using blocking methods to do the disconnect (i.e.
call Shutdown(), then Receive() until 0 bytes, then Close(), in sequence,
all in the same thread).
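
That synchronous version, as a minimal sketch (buffer size and error
handling are left out for brevity):

    using System.Net.Sockets;

    static class SyncDisconnect
    {
        public static void Disconnect(Socket socket)
        {
            byte[] buffer = new byte[4096];

            socket.Shutdown(SocketShutdown.Send);    // 1. stop sending

            int bytesRead;
            do
            {
                bytesRead = socket.Receive(buffer);  // 2. drain until end-of-stream
                // ... optionally process any final data here ...
            }
            while (bytesRead > 0);

            socket.Close();                          // 3. now it is safe to close
        }
    }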

But, other than that, yes...it sounds like you've got the idea.

Pete

Hi Pete
Thanks for patiently answering all my questions. I have one final
question on this topic. (Sorry, the MSDN documentation just isn't
enough to work out the exact cleanup pattern.)

1. Should I call socket.Shutdown with Send alone, since I am waiting to
receive 0 bytes? Or should I call socket.Shutdown(Both)?
2. Can I reuse my read/write buffer for another socket after Shutdown,
instead of waiting for the 0-byte receive and calling socket.Close()?
(Even though the buffer could be accessed concurrently, since I have
disabled both send and receive, nothing should be written to the buffer
by the shut-down socket.)

Thanks
Bruce
 

Bruce

The .NET docs are sparse, to be sure.  But, a lot of the .NET model  
follows the standard Winsock model, which is much better documented.  So,  
reading through the Winsock docs on MSDN would probably be helpful to  
you.  As would reading through the Winsock FAQ  
(http://tangentsoft.net/wskfaq/).


The initiator of the disconnect should use Send.  The other endpoint will
then get a 0-byte receive once all the data the initiator had sent has
arrived at the remote endpoint; once the remote endpoint is done sending
all the data it needs to send, it can call Shutdown() with Both.  After
calling Shutdown() with Send, the initiator will continue to receive data
until _it_ gets a 0-byte receive.

In this way, both endpoints can reliably read from the socket until the
end-of-stream indication (a 0-byte receive), with confidence that by the
time they close the socket, both ends really are done with the connection.
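
A sketch of the non-initiating endpoint's side of that handshake (blocking
calls for brevity; SendRemainingData is a hypothetical placeholder for
whatever the application still needs to send):

    using System.Net.Sockets;

    static class PassiveClose
    {
        public static void ReceiveLoop(Socket socket)
        {
            byte[] buffer = new byte[4096];

            int bytesRead;
            while ((bytesRead = socket.Receive(buffer)) > 0)
            {
                // ... process buffer[0..bytesRead) as usual ...
            }

            // 0-byte receive: the peer has shut down its send direction.
            SendRemainingData(socket);               // finish anything still queued
            socket.Shutdown(SocketShutdown.Both);    // done in both directions
            socket.Close();
        }

        private static void SendRemainingData(Socket socket)
        {
            // Hypothetical: flush whatever the application still has to send.
        }
    }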


You should not shut down receives on a socket until you've actually reached
the end-of-stream (i.e. had a receive operation complete with 0 bytes).
Otherwise, you may (and probably will) lose data.

That said, if you don't care about losing data, then sure...call
Shutdown() with Both instead of Send.  This _should_ (I haven't tested it
myself) cause any outstanding read operation on the socket to complete
(probably with an exception).  Once you've completed those read operations
(e.g. by calling Socket.EndReceive()), you can reuse the buffers that were
used for those operations.

Don't reuse a buffer until the operation that was using it has completed
somehow.  Otherwise you don't have any guarantee that it won't be written
to by the Socket you originally used it with.

Pete

Great! Thanks for all the help. I will definitely read up on the Winsock
documentation to gain a better understanding.

Bruce
 
