Socket.ReceiveTimeout question

djc

I read a network programming book (based on Framework 1.1) which indicated
that you should 'never' use the ReceiveTimeout or the SendTimeout 'socket
options' on TCP sockets or you may lose data. I now see the
Socket.ReceiveTimeout 'property' in the Visual Studio 2005 help
documentation (Framework 2.0) and it has an example of it being used with a
TCP socket. This property is also listed as 'new in .NET 2.0'.

1) Is the Socket.ReceiveTimeout property just a 'new' property that exposes
the existing ReceiveTimeout socket option? Meaning, are they essentially the
same thing?

2) Should it be used with TCP sockets?

3) I'm looking for a stable way, following good programming practices, to
structure my client-server send/receive operations (all synchronous at this
point - no real volume). The example from the book I read doesn't seem to
cover the client side handling the case of a server-side problem. To
clarify, the book generally says this:

In English, to send data and then receive a response: connect to the remote
host, send the data, call Socket.Shutdown, then enter an infinite loop which
calls Receive and checks the number of bytes received. When the bytes
received == 0 you break out of the loop and then call Socket.Close.
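
In code, I take that to mean roughly this (just a sketch, with a made-up host
and port, and no real error handling):

using System;
using System.Net.Sockets;
using System.Text;

class SendThenReceive
{
    static void Main()
    {
        Socket socket = new Socket(AddressFamily.InterNetwork,
                                   SocketType.Stream, ProtocolType.Tcp);
        socket.Connect("server.example.com", 9000); // made-up endpoint

        byte[] request = Encoding.ASCII.GetBytes("hello");
        socket.Send(request);

        // shut down the send side so the server sees end-of-stream
        socket.Shutdown(SocketShutdown.Send);

        byte[] buffer = new byte[4096];
        int received;
        // Receive returns 0 once the server has closed its side
        while ((received = socket.Receive(buffer)) > 0)
        {
            Console.Write(Encoding.ASCII.GetString(buffer, 0, received));
        }

        socket.Close();
    }
}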

My question is: what if there is a problem on the server end and the
bytes received NEVER == 0? Logically I am thinking I would use
Socket.ReceiveTimeout as a contingency plan, but the book advises against it.
So what do you do? Similarly, I need a solution for send operations. If you
are not supposed to use the SendTimeout then what do you do to bail out of a
problem?

any input on this would be greatly appreciated. Thanks.
 
jeremiah johnson

djc said:
I read a network programming book (based on Framework 1.1) which indicated
that you should 'never' use the ReceiveTimeout or the SendTimeout 'socket
options' on TCP sockets or you may lose data. I now see the
Socket.ReceiveTimeout 'property' in the Visual Studio 2005 help
documentation (Framework 2.0) and it has an example of it being used with a
TCP socket. This property is also listed as 'new in .NET 2.0'.

Always use timeouts. The default is zero, or "wait forever." If you
don't specify a timeout, you will wait forever if you don't read all the
data you expect, or you don't get the null at the end. That is almost
certainly not the behavior you want.

I'm almost 100% sure that SendTimeout and ReceiveTimeout existed in .NET
1.1. They are only new in the .NET Compact Framework 2.0.
1) Is the Socket.ReceiveTimeout property just a 'new' property that exposes
the existing ReceiveTimeout socket option? Meaning, are they essentially the
same thing?

You're thinking of TcpClient timeouts. They have existed since .NET 1.0.
2) Should it be used with TCP sockets?

Absolutely. I *always* set a timeout, although the timeout that I set
differs per application.
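
For example (just a sketch; assume a connected Socket named "socket", and pick
values that make sense for your app):

socket.ReceiveTimeout = 30000; // milliseconds; 0 is the default and means wait forever
socket.SendTimeout = 30000;

// a blocking Send or Receive that exceeds the limit throws a SocketException
// with SocketError.TimedOut, which you can catch and handle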
3) I'm looking for a stable way, following good programming practices, to
structure my client-server send/receive operations (all synchronous at this
point - no real volume). The example from the book I read doesn't seem to
cover the client side handling the case of a server-side problem. To
clarify, the book generally says this:

In English, to send data and then receive a response: connect to the remote
host, send the data, call Socket.Shutdown, then enter an infinite loop which
calls Receive and checks the number of bytes received. When the bytes
received == 0 you break out of the loop and then call Socket.Close.

I think you're saying that you should send data, listen, and receive
data until you receive a null (character 0). If you don't know the
length of the data you're receiving, then this should work. If you know
the length of the data you're receiving, you should read exactly that
amount of data, no matter what it ends with.

A good server should send some sort of header that states the payload
length, so you know exactly how much data is coming. If you ever write
a server, make the protocol between server and client either stream
continuously (or until socket close) or, if you're sending chunks of
data around, make the server and client send each other payload sizes.
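
Something along these lines (just a sketch of a length-prefixed exchange; it
assumes both sides agree that every message is a 4-byte length followed by
that many bytes):

using System;
using System.IO;
using System.Net;
using System.Net.Sockets;

static class Framing
{
    public static void SendMessage(Socket socket, byte[] payload)
    {
        // 4-byte length header in network byte order, then the payload
        byte[] header = BitConverter.GetBytes(IPAddress.HostToNetworkOrder(payload.Length));
        socket.Send(header);
        socket.Send(payload);
    }

    public static byte[] ReceiveMessage(Socket socket)
    {
        byte[] header = ReceiveExactly(socket, 4);
        int length = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(header, 0));
        return ReceiveExactly(socket, length);
    }

    // keep calling Receive until exactly 'count' bytes have arrived,
    // because a single Receive may return less than you asked for
    static byte[] ReceiveExactly(Socket socket, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
            if (read == 0)
                throw new EndOfStreamException("connection closed before the full message arrived");
            offset += read;
        }
        return buffer;
    }
}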
My question is: what if there is a problem on the server end and the
bytes received NEVER == 0? Logically I am thinking I would use
Socket.ReceiveTimeout as a contingency plan, but the book advises against it.
So what do you do? Similarly, I need a solution for send operations. If you
are not supposed to use the SendTimeout then what do you do to bail out of a
problem?

I don't know who would ever advise that timeouts are a bad idea. Not
using timeouts is a worse idea. It sounds like something one of my
offshore programmers would say. My Indian friends often have odd
assumptions about things like this that they stick to for no good
reason. Everyone says that offshore programmers aren't any good, but
they ARE, if you help them learn what they need to know to stick to good
design and coding practices. Not a single one of them (okay, maybe a
few) is stupid or unable to grasp the job.
any input on this would be greatly appreciated. Thanks.

Timeouts are necessary; that's why they're part of the TCP
specification, it's why MS went to the trouble of implementing them, and
it's why everyone I know uses them. If you lose data because you waited
5 minutes and nothing came through, you were going to lose the data
anyway. Just ask for the data again. If the data is time sensitive,
and asking for the lost data again wouldn't make sense, send it over UDP
instead.

someone let me know if I'm wrong with any of this.

jeremiah();
 
djc

thanks for the reply Jeremiah. See inline:

jeremiah johnson said:
Always use timeouts. The default is zero, or "wait forever." If you
don't specify a timeout, you will wait forever if you don't read all the
data you expect, or you don't get the null at the end. That is almost
certainly not the behavior you want.

I'm almost 100% sure that SendTimeout and ReceiveTimeout existed in .NET
1.1. They are only new in the .NET Compact Framework 2.0.


You're thinking of TcpClient timeouts. They have existed since .NET 1.0.

I was referring to the property of the Socket class itself (although I just
saw the TcpClient class has the same property). I wanted to verify that this
property is equivalent to the actual 'socket option' ReceiveTimeout that
you set by using the Socket.SetSocketOption method. I believe they are the
same thing but wanted to confirm.
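
In code, the two forms I'm comparing look like this (assuming a Socket named
"socket" and an arbitrary 10-second value):

socket.ReceiveTimeout = 10000; // the property, in milliseconds

socket.SetSocketOption(SocketOptionLevel.Socket,
                       SocketOptionName.ReceiveTimeout,
                       10000); // the socket option set directly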
Absolutely. I *always* set a timeout, although the timeout that I set
differs per application.


I think you're saying that you should send data, listen, and receive data
until you receive a null (character 0).

Same effect, but I was referring to the integer return value from the
Socket.Receive method (which returns the number of bytes received), not a
null character at the end of the data received. The book said the method
will return zero, indicating there is nothing more to receive, and that is
your condition for breaking out of the loop.
If you don't know the length of the data you're receiving, then this should
work. If you know the length of the data you're receiving, you should read
exactly that amount of data, no matter what it ends with.

A good server should send some sort of header that states the payload
length, so you know exactly how much data is coming. If you ever write a
server, make the protocol between server and client either stream
continuously (or until socket close) or, if you're sending chunks of data
around, make the server and client send each other payload sizes.


I don't know who would ever advise that timeouts are a bad idea. Not
using timeouts is a worse idea. It sounds like something one of my
offshore programmers would say. My Indian friends often have odd
assumptions about things like this that they stick to for no good reason.
Everyone says that offshore programmers aren't any good, but they ARE, if
you help them learn what they need to know to stick to good design and
coding practices. Not a single one of them (okay, maybe a few) is stupid
or unable to grasp the job.

"Network Programming For the Microsoft .NET Framework" MS Press (Anthony
Jones, Jim Ohlund, Lance Olson)
page 163: after listing a table of socket options there is a 'warning' box
that says:
"The ReceiveTimeout and SendTimeout socket options should never be used on
TCP sockets because data might be lost when a timeout occurs"

Looking at their example code for receiving data, which relies on the
Socket.Receive method returning zero to break out of an infinite loop, I
immediately thought "what if it never returns zero?" You've got an infinite
loop. Naturally I thought the ReceiveTimeout property of the Socket class
was a logical choice to handle this case, but since the book warned against
using the 'socket option' ReceiveTimeout, and I assumed this 'socket option'
and the ReceiveTimeout 'property' of the Socket class were equivalent, I
didn't know what to do. And that's what prompted this post: 1) are they the
same, and 2) what to do.
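
What I had in mind was something like this (just a sketch; assume a connected
Socket named "socket" and a 30-second limit picked arbitrarily):

socket.ReceiveTimeout = 30000;

byte[] buffer = new byte[4096];
try
{
    int received;
    while ((received = socket.Receive(buffer)) > 0)
    {
        // process buffer[0..received)
    }
    // received == 0: the server closed its side normally
}
catch (SocketException ex)
{
    if (ex.SocketErrorCode == SocketError.TimedOut)
    {
        // the server never finished sending; give up on this exchange
        // and retry or report the error instead of looping forever
    }
    else
    {
        throw;
    }
}
finally
{
    socket.Close();
}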

Timeouts are necessary; that's why they're part of the TCP specification,
it's why MS went to the trouble of implementing them, and it's why everyone
I know uses them. If you lose data because you waited 5 minutes and
nothing came through, you were going to lose the data anyway. Just ask
for the data again. If the data is time sensitive, and asking for the
lost data again wouldn't make sense, send it over UDP instead.

someone let me know if I'm wrong with any of this.

jeremiah();

I think you have pretty much cleared it up for me. This shows that just
because it's in a book doesn't mean it's right! Or maybe I just took that
statement out of context somehow, but I don't think so.

Thanks for the help.
 
