GetStream.Read behavior changed in .NET 2.0 with respect to ReceiveTimeout


Keith Langer

Hi,

I've noticed that the behavior of ReceiveTimeout has changed
between .NET 1.1 and .NET 2.0 when it comes to the GetStream.Read
method. In .NET 1.1, if the timeout period elapsed, I would get a
System.IO.IOException, but the connection stayed open. I could then
check the InnerException for ErrorCode 10060 to verify that it was a
timeout. In .NET 2.0, I still get an IOException when there's a
timeout, but this time the socket closes. This means I would have to
reestablish the connection when I didn't need to before (or worse yet,
if I'm not controlling the connection, then I have to wait for the
other end to reconnect). Is there a way to avoid the disconnects?

Below is some sample code:


m_tcpClient.ReceiveTimeout = 5000
Try
    intCount = m_tcpClient.GetStream.Read(bytData, 0, DATA_SIZE)
    If intCount > 0 Then
        'Got data
    Else
        'intCount = 0 means the connection was closed at the other end
        ShowMessage("Lost connection")
        blnReset = True
    End If
Catch ex As System.IO.IOException
    If TypeOf ex.InnerException Is SocketException AndAlso _
       CType(ex.InnerException, SocketException).ErrorCode = 10060 Then
        'This should mean that we had a timeout.
        'For some reason, in .NET 2.0 I am getting a disconnect after a
        'timeout. For now, reset the connection.
        blnReset = True
    End If
End Try

thanks,
Keith Langer
 

Peter Duniho

Keith said:
Hi,

I've noticed that the behavior of ReceiveTimeout has changed
between .NET 1.1 and .NET 2.0 when it comes to the GetStream.Read
method. In .NET 1.1, if the timeout period elapsed, I would get a
System.IO.IOException, but the connection stayed open. I could then
check the InnerException for ErrorCode 10060 to verify that it was a
timeout. In .NET 2.0, I still get an IOException when there's a
timeout, but this time the socket closes. This means I would have to
reestablish the connection when I didn't need to before

Actually, you should have been reestablishing the connection before as
well. It's just that .NET didn't force you to, even though doing so was
important to maintain data integrity.

See this previous thread, discussing this exact issue:
http://groups.google.com/group/micr...read/thread/8cb606b8d5e49756/864b862266b59310

Pete
 

Keith Langer


Pete,

Can you explain this indeterminate state? I have always been able to
reuse the connection without any problems in 1.1. Now if I have a
series of serial devices I'm polling, for every serial timeout, I have
to add a few more seconds to reestablish the TCP connection. This is
going to slow things down quite a bit. Plus, in cases where someone
else makes the connection, I'm forcing them to reconnect.

Keith
 

Peter Duniho

Keith said:
Can you explain this indeterminate state?

From the Winsock documentation for SO_RCVTIMEO (which I believe is the
underlying mechanism for the ReceiveTimeout property):

If a send or receive operation times out on a
socket, the socket state is indeterminate, and
should not be used; TCP sockets in this state
have a potential for data loss, since the
operation could be canceled at the same moment
the operation was to be completed.

I have always been able to
reuse the connection without any problems in 1.1.

This is the sort of thing that could work an arbitrarily large number of
times before you actually see a problem. But that doesn't mean that the
problem isn't worth worrying about.

Now if I have a
series of serial devices I'm polling, for every serial timeout, I have
to add a few more seconds to reestablish the TCP connection. This is
going to slow things down quite a bit. Plus, in cases where someone
else makes the connection, I'm forcing them to reconnect.

If disconnecting is not suitable, then you need a different mechanism
than using the ReceiveTimeout property. It is fairly easy to implement
a timeout explicitly, rather than having the socket manage the timeout
for you, and doing so not only addresses the above issue, it gives you
much more control over how the timeout works (including how to handle
the case of having data arrive on the socket immediately after the
timeout was signaled).

That said, IMHO if having the socket disconnected isn't a reasonable
response to the timeout, I have to question whether what you really want
is a timeout in the first place. A timeout is essentially an error
condition. If you are going to continue to try to wait for data (that
is, the timeout doesn't represent an error), then you don't really want
a timeout...you just want some sort of notification that a certain
amount of time has gone by without any data being received on the
socket. That can be easily done without involving the socket at all,
and IMHO it _should_ be done instead of getting the socket involved.

This latter point is more philosophical than anything, but it is IMHO a
design point worth considering.

Pete
 

Keith Langer


Pete,

Could you show me some code which uses these alternatives for checking
if data has arrived? The only other way I know to check for data on
the socket is to keep checking the DataAvailable property and sleeping
for a small period of time, then exit out when my timeout period has
elapsed without receiving data.

thanks,
Keith
 

Peter Duniho

Keith said:
Could you show me some code which uses these alternatives for checking
if data has arrived?

For the record, I only mentioned alternatives to implementing a timeout.
Alternatives for "checking if data has arrived" are a somewhat
different topic.

The only other way I know to check for data on
the socket is to keep checking the DataAvailable property and sleeping
for a small period of time, then exit out when my timeout period has
elapsed without receiving data.

I guess at this point it may be helpful to take a step back and look at
the bigger picture. From the sounds of it, you are using a timeout for
something other than an actual timeout. It might be helpful to be more
clear about why it is you are using a timeout in the first place. Most
socket i/o code does not bother with a timeout at all.

For simple socket-based applications, in .NET or otherwise, generally
the design either assigns a dedicated thread to each socket for
receiving, or uses the function/method named "select" or "Select"
(Winsock and .NET Socket, respectively). The thread-per-socket model
works fine for very small numbers of sockets, while the select model
works fine for slightly larger numbers of sockets, but less than 64 (in
Winsock, there's a limit of 64 sockets that can be passed to
select...I'm not sure if the same limit exists in the .NET version, but
it may).
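
Just to sketch what the "select" model looks like (untested, and
assuming "sockets" is some list of connected Socket instances you
maintain elsewhere):

Dim checkRead As New ArrayList(sockets)

' Select() prunes the list, leaving behind only the sockets that
' are readable; the last argument is a timeout in microseconds.
Socket.Select(checkRead, Nothing, Nothing, 5000 * 1000)

Dim buffer(4095) As Byte
For Each s As Socket In checkRead
    ' A readable socket either has data or a pending close;
    ' Receive() returns 0 in the close case.
    Dim count As Integer = s.Receive(buffer)
Next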

In .NET specifically, one would normally use the async paradigm. That
is, using the BeginXXX/EndXXX methods for i/o. These are the most
efficient, and for very large numbers of sockets they are the only
practical way to produce scalable network i/o code. (In Winsock, there
are other alternatives to deal with larger numbers of sockets, one of
them being to use i/o completion ports, which is what the .NET async
paradigm is built on when run on platforms that support it).

IMHO, even if you have just one socket, the async paradigm makes a lot
of sense. It is easily extended if you wind up needing to do so in the
future, it has no additional issues beyond that which would normally be
present in a dedicated-thread implementation, and personally I just like
the way it works. Via a BeginXXX method you tell .NET to call a
specific method when the i/o completes, and when it completes, your
method is called. There you do whatever you need to process the i/o
(such as calling Socket.EndReceive() for a receive operation), and then
you call the BeginXXX method again to repeat the sequence.
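
In rough outline it looks like this (just a sketch, error handling
omitted, with "sock" and "buffer" assumed to be fields of your class):

Private Sub StartReceive()
    sock.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, _
                      AddressOf OnReceive, Nothing)
End Sub

Private Sub OnReceive(ByVal ar As IAsyncResult)
    Dim count As Integer = sock.EndReceive(ar)
    If count > 0 Then
        '...process buffer(0 To count - 1) here...
        StartReceive()   'post the next receive to repeat the sequence
    Else
        'count = 0 means the remote end closed the connection
    End If
End Sub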

In _any_ of these mechanisms, there is no need for a timeout. You don't
do your i/o on a thread that needs to do something else. Using
Socket.Select(), you can in fact provide a timeout value, but that
shouldn't be used just so that you can mix your i/o code in the same
thread with code that does something else.

Anyway, that's all a long way of saying that, from the limited amount of
information you've posted so far, it sounds to me as though even though
you are asking how to get the timeout to do what you want, the more
basic issue is that using a timeout isn't the appropriate solution in
the first place.

So, if after reading the above you still think it is, maybe the next
step here is for you to explain in more detail why you need the timeout.
That would help with respect to answering the question regarding how
to implement a timeout.

If instead the question remains "Could you show me some code which uses
these alternatives for checking if data has arrived", then hopefully the
above at least begins to answer that question (the code is on MSDN, as
sample code for the various methods and other members of the Socket class).

Finally, yes...I realize you asked about GetStream.Read() (which is a
little confusing itself...I know you really mean Stream.Read(), but
since there's not actually any GetStream class, saying
"GetStream.Read()" seems a little odd out of context), but clearly
what's really going on is related to the socket itself, not the stream
wrapped around it, so I think it's more useful to discuss the socket
specifically. You can implement the above either by manipulating the
socket directly, or in the case of the BeginXXX/EndXXX method, the
Stream class does have the async i/o methods in it, so you can operate
on the stream directly in that case.

Pete
 

Keith Langer



Pete,

This application runs a number of signs (serial devices) connected to
several device servers. Each device server has a dedicated thread and
TcpClient associated with it in my application. There could be
anywhere from 1 to 150 of these device servers, and each device server
could have 1 to 100 signs (a typical site might have anywhere from 10
to 500 signs in total). Some of the signs respond to messages sent,
but there is no guarantee that all signs are functional. So for each
device server, I must send a message to sign 1, wait for a response or
for a timeout period of X seconds to elapse, then send the next
message to that sign or the next sign. Hence my need for
some sort of timeout. I either need to keep checking the
DataAvailable property for several seconds, or implement a blocking
operation for several seconds until data is available.

Keith
 

Peter Duniho

Keith said:
This application runs a number of signs (serial devices) connected to
several device servers. Each device server has a dedicated thread and
TcpClient associated with it in my application. There could be
anywhere from 1 to 150 of these device servers,

You may want to reconsider the design. 150 threads is a LOT of threads
for Windows. If you can redesign the code so that a single thread can
handle all of the server tasks, you will be less likely to run into
performance problems when you approach that number.

and each device server
could have 1 to 100 signs (a typical site might have anywhere from 10
to 500 signs in total). Some of the signs respond to messages sent,
but there is no guarantee that all signs are functional.

Nothing unusual about that.

So for each
device server, I must send a message to sign 1, wait for a response or
for a timeout period of X seconds to elapse, then send the next
message to that sign or the next sign.

I don't understand where you get this requirement. It is not something
that is implied by anything you wrote so far.

What is it about your application that imposes a requirement that you do
not send a message to a given sign until you have resolved the outcome
of a message sent previously to a different sign? Does the message that
you tried to send to a given sign propagate forward to the next sign if
you don't get a response from the first sign?

Hence my need for
some sort of timeout. I either need to keep checking the
DataAvailable property for several seconds, or implement a blocking
operation for several seconds until data is available.

See above. I don't see how you have arrived at this conclusion.

Pete
 

Keith Langer


Pete,

The signs are on an RS-485 loop (you can read more about RS-485 at
http://www.lammertbies.nl/comm/info/RS-485.html), which uses the same
pair of wires for send and receive. It is a "gentleman's agreement"
format which requires that the two sides never transmit at the same
time. Hence the need to wait for each response. Additionally, each
sign usually takes multiple messages, and the sign must finish
processing and acknowledge one message before it starts another.

Regarding the number of threads, I can't have operations on one TCP
connection waiting for operations on another to complete. I know 150
is a lot of threads, but the design has served me well for several
years now.

Keith
 

Peter Duniho

Keith said:
The signs are on an RS-485 loop (you can read more about RS-485 at
http://www.lammertbies.nl/comm/info/RS-485.html), which uses the same
pair of wires for send and receive. It is a "gentleman's agreement"
format which requires that the two sides never transmit at the same
time. Hence the need to wait for each response. Additionally, each
sign usually takes multiple messages, and the sign must finish
processing and acknowledge one message before it starts another.

Is it guaranteed that if a sign does not respond to a message within
some specified time (your timeout period, presumably), it will never
respond to the message?

Without seeing the code, I can't say for sure. But I suspect that if
you simply stop using the ReceiveTimeout property, and instead use the
Socket.Select() method to wait for i/o completion, your problem would be
addressed. Just specify your timeout in the call to the Select() method.
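
In sketch form, reusing the names from your sample (note that the
timeout argument is in microseconds, not milliseconds):

Dim checkRead As New ArrayList()
checkRead.Add(m_tcpClient.Client)

'Returns when data is available, the connection closes, or 5
'seconds elapse; the socket stays usable after a timeout.
Socket.Select(checkRead, Nothing, Nothing, 5000 * 1000)

If checkRead.Count > 0 Then
    intCount = m_tcpClient.GetStream.Read(bytData, 0, DATA_SIZE)
End If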

Personally, I would use the async paradigm, with BeginReceive, and a timer
(if you've got a GUI, I'd use Forms.Timer, but there are other options)
to signal when the timeout has elapsed, but that's likely a bigger
change to your existing code than you're willing to make, especially
given your otherwise highly-synchronous scenario.

Regarding the number of threads, I can't have operations on one TCP
connection waiting for operations on another to complete. I know 150
is a lot of threads, but the design has served me well for several
years now.

I don't know what you mean by "operations on one TCP connection waiting
for operations on another to complete", but I won't bother discussing
that aspect further. Suffice to say, you don't need 150 threads, and if
it were me I wouldn't have designed the code to use 150 threads. But if
you're happy with it, that's fine.

Pete
 

Keith Langer


Pete,

There are never any guarantees about what devices send if there's a
problem. It's always possible that a device might malfunction and
start sending out random data, which can bring down everything on the
loop. Thanks for the suggestion of using Socket.Select(). That
sounds like a good way to block while waiting for data without causing
the socket to disconnect after a timeout. I'll look into it further.
The 150-thread case is rare; it just depends on what configuration a
customer has. Usually it's about 10-30 threads.

Keith
 

Keith Langer


Pete,

I looked into Socket.Select, and found that there is also Socket.Poll
for just checking one socket. That method does exactly what I want.
It blocks until a timeout is reached, or returns sooner if data has
arrived or the socket has been closed. Works like a champ!
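
In case it helps anyone else, the receive now looks roughly like this
(same names as my earlier sample):

'Poll() takes microseconds, so this blocks up to 5 seconds.  It
'returns True if data arrived or the socket was closed, and False
'on a timeout, with no disconnect either way.
If m_tcpClient.Client.Poll(5000 * 1000, SelectMode.SelectRead) Then
    intCount = m_tcpClient.GetStream.Read(bytData, 0, DATA_SIZE)
    If intCount = 0 Then
        'connection was closed at the other end
        ShowMessage("Lost connection")
        blnReset = True
    End If
Else
    'timeout elapsed with no data; keep the connection open
End If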

thanks,
Keith
 
