C# Async Socket timeouts

ColoradoGeiger

I have a fairly standard server application that uses asynchronous
socket calls to make connections, send and receive data, and all of
that jazz. No blocking methods was my goal in writing this and it's
gone very well so far. But coming from the blocking world, I want to
have some timeouts on my receive routines, and after reading a lot on
MSDN and here, I get the impression that once I set one of these
Begin... methods running and nothing connects or happens, there is
nothing I can do to cancel it. Is this really true?

For example, I make a connection, I send a packet of data, and then I
go and run .BeginRead(...). Let's say I know that my client will
respond in less than 4 seconds. If there is no response, then I can
be sure that I need to resend my packet - maybe it didn't get there,
maybe there were collisions on the technology on the other end, who
knows. All I know is that I need to retry. But I don't see any way
to detect a timeout using async.

The best method I have for this now: I use a class that contains the
socket and some other supporting data I need, all built after the
connection is made. This object is what I pass into the .Begin...
async calls and it is returned to me when the call completes. If I
set a timer in that class, I can manually close the socket while the
async call is still pending. I think this throws a SocketException or
something in my callback for the .Begin... call, and all is well.
Except it really isn't, because I don't have my connection anymore;
it's closed. I can't retry. I got out of my async call, but I don't
have an open socket to retry with. Since the communications are
initiated by the clients, I have no direct control over when this
connection will be retried.
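The workaround described above might be sketched roughly like this (the class and member names are illustrative, not from the actual code; tested behavior may vary by framework version):

```csharp
using System;
using System.Net.Sockets;
using System.Threading;

// Hypothetical state object passed through the Begin.../End... calls.
class ReceiveState
{
    public Socket Socket;
    public byte[] Buffer = new byte[4096];
    public Timer TimeoutTimer;   // fires if no data arrives in time
}

class Server
{
    const int TimeoutMs = 4000;

    void StartReceive(Socket socket)
    {
        var state = new ReceiveState { Socket = socket };

        // Arm the timeout before issuing the read.
        state.TimeoutTimer = new Timer(_ => state.Socket.Close(),
                                       null, TimeoutMs, Timeout.Infinite);

        socket.BeginReceive(state.Buffer, 0, state.Buffer.Length,
                            SocketFlags.None, OnReceive, state);
    }

    void OnReceive(IAsyncResult ar)
    {
        var state = (ReceiveState)ar.AsyncState;
        state.TimeoutTimer.Dispose();   // data arrived: cancel the timeout
        try
        {
            int read = state.Socket.EndReceive(ar);
            // ... process 'read' bytes ...
        }
        catch (ObjectDisposedException)
        {
            // The timer fired and closed the socket. The pending read
            // is aborted, but the connection is gone - exactly the
            // drawback described above.
        }
    }
}
```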

Is there really no better way?

I have not included code, this is a more theory based question. If
anyone has an idea and needs to see some code, I can provide that no
problem.

Anyone have any ideas? Or should I start from scratch and use
blocking methods, which HAVE a timeout feature?

Jason
 
ColoradoGeiger

I did just find this:

http://groups.google.com/group/micr...st&q=c#+async+timeout+socket#a58c632b18ab4672

According to that post, I make my .BeginReceive call, then I sit and
wait for some flags before leaving the method. I could use a timer in
that class that I am passing around, and I can check it from the
thread that called .BeginReceive, since I am just sitting there
waiting. If the received-data flag is never set (by the callback
delegate) AND my timer sets its own timed-out flag, I know that I
timed out.
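One way to implement that sit-and-wait idea without hand-rolled flags is the wait handle that every IAsyncResult already exposes; WaitOne() accepts a timeout. A minimal sketch, assuming a connected 'socket' and a 'buffer' already exist:

```csharp
using System;
using System.Net.Sockets;

IAsyncResult ar = socket.BeginReceive(buffer, 0, buffer.Length,
                                      SocketFlags.None, null, null);

if (ar.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(4)))
{
    int read = socket.EndReceive(ar);   // data arrived in time
    // ... process 'read' bytes ...
}
else
{
    // Timed out. Note that the BeginReceive is still pending, though -
    // this only *detects* the timeout; it does not cancel the
    // outstanding read, which is exactly the problem raised below.
}
```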

But then what? Is there a way to end that original BeginReceive
call? I don't know how to do that. I can, from there, initiate a new
BeginReceive call - that should not be hard - but I need to end the
old one.

This async stuff is new to me. It's going well but it's all a bit hard
to wrap my head around sometimes.

Ideas? Comments?

Jason
 

Jeroen Mostert

> I have a fairly standard server application that uses asynchronous
> socket calls to make connections, send and receive data, and all of
> that jazz. No blocking methods was my goal in writing this and it's
> gone very well so far. But coming from the blocking world, I want to
> have some timeouts on my receive routines, and after reading a lot on
> MSDN and here, I get the impression that once I set one of these
> Begin... methods running and nothing connects or happens, there is
> nothing I can do to cancel it. Is this really true?

Yes. The only way to cancel them is to close the socket and thereby destroy
the connection.

Well, technically, you can also call CancelIoEx(), but you'd need to
P/Invoke, it's only supported from Vista onwards, and I'm pretty sure the
.NET classes are not prepared for having their I/O aborted behind their
backs, so I wouldn't recommend that.

> For example, I make a connection, I send a packet of data, and then I
> go and run .BeginRead(...). Let's say I know that my client will
> respond in less than 4 seconds. If there is no response, then I can
> be sure that I need to resend my packet - maybe it didn't get there,
> maybe there were collisions on the technology on the other end, who
> knows. All I know is that I need to retry. But I don't see any way
> to detect a timeout using async.

Set a separate timer (you *can* stop those) and do not issue .BeginRead()
calls in response to sending. Instead, always have a .BeginRead() in the air
and be ready to process whenever the other side sends something. In other
words, your .BeginRead() should be stateless and your .EndRead() should
check what it should be doing with received data depending on your current
state and issue a new .BeginRead() if it should continue receiving. Do not
close the connection until you're really done with it.

If you follow this pattern, you do not need to abort anything when a timeout
occurs, since you're *always* waiting for the other side to send something.
It's just your reaction when they do send something that'll be different.
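That always-reading pattern could be sketched like this - one BeginReceive permanently pending, plus a retry timer that resends the packet instead of ever touching the read. All names here are illustrative, and the retry handling is simplified:

```csharp
using System;
using System.Net.Sockets;
using System.Threading;

// Illustrative per-connection state.
class Connection
{
    public Socket Socket;
    public byte[] Buffer = new byte[4096];
    public byte[] LastPacket;
    public Timer RetryTimer;   // resends LastPacket if no reply in time
}

class Server
{
    void StartReading(Connection conn)
    {
        // Keep exactly one receive pending at all times.
        conn.Socket.BeginReceive(conn.Buffer, 0, conn.Buffer.Length,
                                 SocketFlags.None, OnData, conn);
    }

    void SendWithRetry(Connection conn, byte[] packet)
    {
        conn.LastPacket = packet;
        conn.Socket.Send(packet);
        // If no reply arrives within 4 s, resend. The pending
        // BeginReceive is left completely alone.
        if (conn.RetryTimer != null) conn.RetryTimer.Dispose();
        conn.RetryTimer = new Timer(_ => SendWithRetry(conn, conn.LastPacket),
                                    null, 4000, Timeout.Infinite);
    }

    void OnData(IAsyncResult ar)
    {
        var conn = (Connection)ar.AsyncState;
        int read = conn.Socket.EndReceive(ar);
        if (read == 0) { conn.Socket.Close(); return; }  // peer closed

        if (conn.RetryTimer != null) conn.RetryTimer.Dispose(); // got a reply: stop retrying
        // ... interpret the 'read' bytes according to current state ...

        StartReading(conn);   // immediately re-arm the receive
    }
}
```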

> Anyone have any ideas? Or should I start from scratch and use
> blocking methods, which HAVE a timeout feature?

This is actually an option, as long as you don't mind that it's impossible
to scale if you do that. If your server will never make more than a few
dozen connections, the classic "separate blocking threads" model will work.
It's just not very future-proof.
 

Jeroen Mostert

Peter said:

> [...]
> >> Anyone have any ideas? Or should I start from scratch and start
> >> using blocking methods, which HAVE a timeout feature?
> >
> > This is actually an option, as long as you don't mind that it's
> > impossible to scale if you do that. If your server will never make
> > more than a few dozen connections, the classic "separate blocking
> > threads" model will work. It's just not very future-proof.
>
> And in addition, even with blocking i/o, you cannot continue using the
> socket after it's timed out. Using blocking i/o doesn't change the
> fundamental issues; it just is a different implementation.

Right, I somehow had the idea that there were blocking overloads that
allowed you to specify timeouts for individual requests, but I see that's
not the case (just goes to show how long it's been since I've used
blocking methods). You have to use the .Timeout properties, and those are
useless, because the connection is broken after they trigger. (This is a
neat gotcha if you're used to BSD sockets.)
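For reference, the blocking-side behavior being described looks roughly like this - a sketch, assuming a connected 'socket' and a 'buffer':

```csharp
using System.Net.Sockets;

socket.ReceiveTimeout = 4000;   // milliseconds
try
{
    int read = socket.Receive(buffer);
    // ... process 'read' bytes ...
}
catch (SocketException ex)
{
    if (ex.SocketErrorCode == SocketError.TimedOut)
    {
        // As noted above: once this fires, the connection is generally
        // unusable - you cannot simply retry the Receive and carry on.
    }
}
```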
 

ColoradoGeiger

Good stuff, thanks to everyone. I will read over all of this and see
if I have any followup.

Jason
 

ColoradoGeiger

Jeroen,

I think I know what you mean. Since that BeginRead is already
floating out there when I detect a timeout with my own timer, I should
be able to just send data again, and if the client does respond this
time, it will still land in that original delegate that the first
.BeginRead() call set up.

That makes sense to me. Is that right?

Also the note about the socket timeout causing a closed connection - I
didn't know that and that explains some oddball things that I
sometimes see in another older server app that I wrote using blocking
methods. Good deal!

I owe you a beer,

Jason
 

ColoradoGeiger

I have many reasons for not getting a response, and TCP issues are not
the most likely. I am talking to embedded systems over cellular, and
the sad truth is that cellular is not very reliable; past that, the
embedded device sits behind a radio-modem network. There are just a
lot of levels of comms where things can get lost, and TCP is the least
of my worries.

But I do know what you both mean I think.

Thank you!

Jason
 
