implementing a time-bound wait on the socket (TCP)

P

puzzlecracker

Problem:

I send a lot of requests to the application (running on a different
box, of course), and I receive back responses from the app.


Below, socket corresponds to Socket socket = new
Socket(AddressFamily.InterNetwork, SocketType.Stream,
ProtocolType.Tcp);

In my Wait method I have the following:


public void Wait(uint milliseconds)
{
    while (/* remain idle for the passed number of "milliseconds" */)
    {
        if (socket.Poll(1, SelectMode.SelectRead))
        {
            ProcessSocket(socket); // reads info off the buffer and calls registered callbacks for the client
        }
        else
        {
            return; // returns after Poll has expired
        }
    }
}

Hence when a client calls Wait, he gets all the callbacks, and then
Wait blocks for milliseconds before assuming that nothing else is
coming from the wire. I have trouble implementing that last point.
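For reference, here is a minimal sketch of the shape I'm after, assuming a Stopwatch-based deadline and converting the remaining time into the microseconds that Poll expects (socket and ProcessSocket are the same as above; the reliability caveats about Poll raised in the replies still apply):

public void Wait(uint milliseconds)
{
    // Sketch only: keep polling until the whole window has elapsed,
    // shrinking the Poll timeout as time is used up.
    System.Diagnostics.Stopwatch elapsed = System.Diagnostics.Stopwatch.StartNew();
    while (elapsed.ElapsedMilliseconds < milliseconds)
    {
        long remainingMs = milliseconds - elapsed.ElapsedMilliseconds;
        int remainingMicroseconds = (int)System.Math.Min(remainingMs * 1000, int.MaxValue);

        if (socket.Poll(remainingMicroseconds, SelectMode.SelectRead))
        {
            ProcessSocket(socket); // drain whatever arrived and fire callbacks
        }
        else
        {
            return; // the remaining window expired with nothing readable
        }
    }
}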
 
P

puzzlecracker

Don't call Socket.Poll().  Ever.  It's just not the right way to implement  
things.  Problems include that you are unnecessarily using the CPU and the  
Poll() method isn't reliable (the Socket may become unreasonable between  
the time Poll() says it's readable and the time you get around to actually  
trying to read it).

Well, I inherited the project from the previous developer, who used Poll,
among other things. The ongoing design on the client side is not in
the best state, and I am trying to sort it out.
Correct approaches to the problem involve keeping a separate timer (for  
example, using the System.Threading.Timer class) that performs some  
appropriate action.  If you are looking for a "wait since last read any 
data", then you need to reset the timer each time you actually  
successfully read data.  An appropriate action might be to close the  
socket, but without knowing your specific goals, it's impossible to say  
for sure.
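For example, a minimal sketch of that "reset a timer each time you read data" idea, using System.Threading.Timer (the class name IdleWatchdog and the onIdle callback are illustrative, not part of the code under discussion):

using System;
using System.Threading;

// Illustrative only: invokes onIdle once no Reset() has happened for idleLimitMs.
class IdleWatchdog
{
    private readonly Timer _timer;
    private readonly int _idleLimitMs;

    public IdleWatchdog(int idleLimitMs, Action onIdle)
    {
        _idleLimitMs = idleLimitMs;
        // One-shot timer: a period of Timeout.Infinite means it fires at most
        // once until it is re-armed by Reset().
        _timer = new Timer(delegate { onIdle(); }, null, idleLimitMs, Timeout.Infinite);
    }

    // Call this every time a receive actually returns data.
    public void Reset()
    {
        _timer.Change(_idleLimitMs, Timeout.Infinite);
    }

    public void Stop()
    {
        _timer.Change(Timeout.Infinite, Timeout.Infinite);
    }
}

The receive path would call Reset() after each successful read, and onIdle might close the socket or raise a timeout event, depending on the specific goals mentioned above.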

The simplistic version of the protocol: you send a request, via
socket, to the remote application, and it 'immediately' responds with
a response in a byte buffer (there are no intentional delays on the
application side). The buffer contains various headers, sub-headers,
and the message. After parsing it, I forward a response to clients via
subscribed events. So the client sends many requests, then waits for
responses. I want the client to be able to specify how much time to
allocate for the [post] responses before sending an additional set of
requests or closing the connection.
Also, that assumes that "assuming that nothing else is coming from the  
wire" is a valid approach.  If you have no control over the application 
protocol, maybe that's correct.  But generally speaking, it's a poor way  
to deal with network i/o.  Communications should have a well-defined end,  
so that timeouts aren't required at all.

Any good references you may recommend for designing this sort of
application?

Thanks...
 
P

puzzlecracker

Don't call Socket.Poll().  Ever. [...]
Well, I inherited the project from the previous developer, who used Poll,
among other things. The ongoing design on the client side is not in
the best state, and I am trying to sort it out.

Whether you wrote it or you inherited it, calling Poll() is still bad.  
You don't need to defend the code to me; I'm just sharing what I know  
about network programming.
The simplistic version of the protocol: you send a request, via
socket, to the remote application, and it 'immediately' responds with
a response in a byte buffer (there are no intentional delays on the
application side). The buffer contains various headers, sub-headers,
and the message. After parsing it, I forward a response to clients via
subscribed events. So the client sends many requests, then waits for
responses. I want the client to be able to specify how much time to
allocate for the [post] responses before sending an additional set of
requests or closing the connection.

I am skeptical of that design.  Use it at your own risk.

I would like to steer clear of WinSock and use the native C# stuff.

First of all, why would I have these problems -- "Problems include that
you are unnecessarily using the CPU and the Poll() method isn't reliable
(the Socket may become unreasonable between the time Poll() says it's
readable and the time you get around to actually trying to read it)" --
if I begin reading right after I call Poll()? I just don't see why/how
the socket can become unreasonable.

However, if this is the case, why not use Socket.Receive with
SocketFlags.Peek and not have to resort to WinSock altogether?

thanks
 
P

puzzlecracker

I'm not suggesting you do otherwise. But, the .NET (not C#) stuff is
built on top of Winsock, and all the same caveats that apply to Winsock
apply to the .NET stuff too. So if you want to learn how to write .NET
network code, you should start by becoming familiar with Winsock, at least
to some degree.

The caveats that apply to Winsock _should_ be documented in the .NET API,
but they aren't. This is unfortunate, but it simply means that people
writing code for .NET Sockets need to be familiar with caveats that apply
to Winsock, and to networking in general.



First, let me correct my previous statement: I meant to write
"unreadable", not "unreasonable". Sorry about the typo.

Second, one reason the readability state can change is that the network
driver is not required to hang on to data that it's buffered. As long as
it hasn't already acknowledged the packet, it's allowed to toss the data
away, and it might not acknowledge the packet until you retrieve the data.

I see; in this case, the reliable (or rather, more reliable) way to get
data from the socket is to call Receive. This poses an issue, as I
described before, whereby I want to be able to calculate the timeout
from the last packet (or byte[65535] buffer) and a user's maximum
inactive wait time. In other words, I would have to start a timer
simultaneously with Socket.Receive. I can already envision the
vagaries of that code.

Then, what's the better alternative for more reliable handling of
network data: using socket.Poll, or Receive coupled with a Timer?


Thanks...
 
O

ozbear

Well, as I mentioned before, the basic approach you're trying to implement
is really not the best way to do this in the first place. Having a
timeout as part of your communications protocol is almost certainly
unnecessary and so introduces unneeded complexity into your code.


The most reliable handling of network data processing would not involve a
timeout on the connection at all.
<snip>
I don't understand why anyone would say that. We live in a world of
timeouts, all the way from watchdog timers in operating systems
to alarm clocks next to our beds. In a /perfect/ world with /perfect/
networks what you say might be true, but we don't. Packets get lost,
routers malfunction, network traffic gets delayed if rerouted thru
congested links. All of these things contribute to an environment
where some timeout logic is required in all but the most naive
applications. That is why we have keep-alive probes even built into
the logic of the TCP/IP protocol.
If one needs to be absolutely certain of defined behaviour in a network
application, you are always going to find some form of timeout
processing.

Oz
 
P

puzzlecracker

Secondly, I'm not talking about eschewing timeouts altogether.  I'm  
talking about implementing a timeout as an integral part of an application  
protocol.  If you've been following this thread, you should understand  
that the timeout being used here is part of the protocol, not part of  
error detection and recovery.

The application protocol should have a better-defined demarcation of  
end-of-transmission than simply waiting some arbitrary period of time and 
calling that good.


Some application domains do require timeouts. Take market data
servers, where information is pushed to a client at indeterminate
intervals, among other domains. I wonder if there is a common
approach to this architecture... there has to be....
 
P

puzzlecracker

Why does that require timeouts _as part of the application protocol_?

Pete


It doesn't require a timeout per se -- Receive would be sufficient.
However, timeouts handle the case where the client would like to
unsubscribe when data hasn't arrived within a specified timeout
window.

On a side note, you've mentioned the race condition between the timeout
and Receive. Is there a way to avoid the race without making the
assumption you mentioned?

thanks
 
P

puzzlecracker

Why would the client need to do that? Can't you just provide the user
with a way to explicitly disconnect? For example, a button they can push
if they don't want to remain connected? What downside is there to
remaining connected?
In any case, yes...it's true that using a timeout since last
communications as a guide for when to disconnect is sometimes used. But
it's done as a single-sided feature only based on specific needs of one
endpoint or the other, not as part of the application protocol itself.
Your previous messages seemed to imply you were doing the latter, not the
former.

I provide an API to the client, so that he doesn't have to use the GUI
application. I want the client to be able to specify that he wants to
stop receiving the server's callbacks after some inactivity time....

Here is a pseudo-client API example:

APIConnector client = new APIConnector(); // perhaps this should be implemented as a factory???

client.OnLogin += /*delegate*/;
client./*Other events*/ += /*other delegates*/;

client.Login();
client.DoAction1();
client.DoAction2();

/* As the client makes these calls, he receives callbacks as they arrive
   from the server asynchronously */

Now after the client makes the last call, he wants to wait some time and
then disconnect, hence:

uint timeoutInMilliseconds = /**/;
client.WaitUntilInactive(timeoutInMilliseconds);
client.Destroy();



No. The race condition is inherent in having two different code paths
both using the same resource.

That's where Monitor comes in.
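For what it's worth, here is a sketch of how a Monitor-based WaitUntilInactive might look, with a sliding deadline that the receive path extends on every successful read (InactivityGate and NotifyActivity are illustrative names, not the actual APIConnector code):

using System;
using System.Threading;

class InactivityGate
{
    private readonly object _sync = new object();
    private DateTime _lastActivity = DateTime.UtcNow;

    // The receive path calls this whenever data actually arrives.
    public void NotifyActivity()
    {
        lock (_sync)
        {
            _lastActivity = DateTime.UtcNow;
            Monitor.PulseAll(_sync);   // wake any waiter so it can extend its deadline
        }
    }

    // Blocks until no activity has been seen for idleLimit, then returns.
    public void WaitUntilInactive(TimeSpan idleLimit)
    {
        lock (_sync)
        {
            while (true)
            {
                TimeSpan remaining = _lastActivity + idleLimit - DateTime.UtcNow;
                if (remaining <= TimeSpan.Zero)
                    return;            // idle long enough
                Monitor.Wait(_sync, remaining);
            }
        }
    }
}

Because both the timeout check and the "data arrived" update run under the same lock, the race between the timeout and Receive is resolved inside the Monitor.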


Btw, I want to handle callbacks asynchronously, simply by calling
Socket.BeginReceive after the socket connects to the server.

byte[] buffer = new byte[65535]; // receive buffer

socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None,
    new AsyncCallback(this.ReadCallback), socket);

and in ReadCallback I will process the events and dispatch to
delegates.

BTW, will the AsyncCallback continuously, assuming no errors on the link,
read data into the buffer and invoke callbacks?

Also, can the AsyncCallback calls starve the main thread? Perhaps I
should change the priority?



Thanks
 
O

ozbear

[...]
The most reliable handling of network data processing would not involve
a
timeout on the connection at all.
<snip>
I don't understand why anyone would say that.

Because it's true.
We live in a world of
timeouts, all the way from watchdog timers in operating systems
to alarm clocks next to our beds. In a /perfect/ world with /perfect/
networks what you say might be true, but we don't. Packets get lost,
routers malfunction, network traffic gets delayed if rerouted thru
congested links. All of these things contribute to an environment
where some timeout logic is required in all but the most naive
applications. That is why we have keep-alive probes even built into
the logic of the TCP/IP protocol.

First, keep-alive in TCP has little to do with time-outs (TCP/IP itself,
which is not the same as TCP, doesn't have an inherent keep-alive feature,
nor would it have any reason to). The fact that the default interval is
two hours should be proof enough of that (who wants to wait two hours for
a time-out?).
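As an aside, the two-hour default can also be overridden per socket on Windows, without touching the registry. A rough sketch using SIO_KEEPALIVE_VALS through Socket.IOControl, with the millisecond values chosen purely for illustration:

using System;
using System.Net.Sockets;

static class KeepAlive
{
    // Windows-specific sketch: packs the tcp_keepalive struct
    // { onoff, keepalivetime, keepaliveinterval } as three 32-bit values.
    public static void Enable(Socket socket, uint idleMs, uint intervalMs)
    {
        byte[] inValue = new byte[12];
        BitConverter.GetBytes(1u).CopyTo(inValue, 0);         // turn keep-alive on
        BitConverter.GetBytes(idleMs).CopyTo(inValue, 4);     // idle time before the first probe
        BitConverter.GetBytes(intervalMs).CopyTo(inValue, 8); // interval between probes
        socket.IOControl(IOControlCode.KeepAliveValues, inValue, null);
    }
}

For example, KeepAlive.Enable(socket, 30000, 1000) would send the first probe after 30 seconds of silence instead of two hours.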
Whether keepalives are /defined/ at Layer 4 or Layer 4/3 has little
to do with the point I was making, which was that timeouts are a
fact of life, and that they describe a time period which may expire.
That has more than "a little" to do with timeouts. Two hours is a
Microsoft default, can be changed via the registry, and other
platforms may define different values. At any rate...
Secondly, I'm not talking about eschewing timeouts altogether. I'm
talking about implementing a timeout as an integral part of an application
protocol. If you've been following this thread, you should understand
that the timeout being used here is part of the protocol, not part of
error detection and recovery.

That sounds far more sensible.
The application protocol should have a better-defined demarcation of
end-of-transmission than simply waiting some arbitrary period of time and
calling that good.

Pete
On that point I think we can agree, in general.
Oz
 
P

puzzlecracker

Well, you can use that design if you like. I wouldn't. It creates a
situation where the APIConnector class can become invalid without any
interaction by the client code. It would be better to have the client
code manage the timeout itself, and provide a
Disconnect()/Logout()/whatever method that the client code can call to do
the disconnect. That allows, and even requires, that the client code
itself manage _all_ conditions that might lead to a forced disconnect.

At the very least, I hope that your APIConnector class has an event that
is raised when the APIConnector instance times out.

The client cannot manage timeouts without the help of the APIConnector,
since it doesn't know when packets arrive.

No. You have to call BeginReceive() again each time the previous
BeginReceive() completes.

Then my option is to call BeginReceive again inside ReadCallback,
recursively. I feel this could cause a problem, but I'm not sure. Or is
there a different way I can implement that sort of design?


Is it better to let the client itself ask for callbacks? In other words,
should the client make calls to the server, then call Wait() and get
all the callbacks, and proceed with other work, with an eventual call to
Disconnect/Destroy?... Then again, Wait should be constrained by a
timeout, so we're back to square one.

Thanks
 
S

Steve

puzzlecracker said:
Problem:

I send a lot of requests to the application (running on a different
box, of course), and I receive back responses from the app.


[...]

Hence when a client calls Wait, he gets all the callbacks, and then
Wait blocks for milliseconds before assuming that nothing else is
coming from the wire. I have trouble implementing that last point.

Take a look at Socket.Select. Set up the socket you want to read from in
the checkRead list and call it with the timeout you want. The call will
return when data is available or there is a timeout.

I think that is the building block you need.
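For example, a minimal sketch (the surrounding loop and the timeout value are up to you):

using System.Collections;
using System.Net.Sockets;

static class SocketWait
{
    // Returns true if the socket became readable before the timeout expired.
    public static bool WaitForReadable(Socket socket, int timeoutMicroseconds)
    {
        ArrayList checkRead = new ArrayList();
        checkRead.Add(socket);
        // Select blocks until a socket in the list is readable or the timeout
        // expires; sockets that are not ready are removed from the list.
        Socket.Select(checkRead, null, null, timeoutMicroseconds);
        return checkRead.Count > 0;
    }
}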

Regards,
Steve
 
P

puzzlecracker

Take a look at Socket.Select.  Set up the socket you want to read from in
the checkRead list and call it with the timeout you want.  The call will
return when data is available or there is a timeout.

Steve, I am quite aware of Select; however, I think it will have
issues similar to the Poll method. In addition, I only anticipate one
socket connection, to the main (remote) application. What can
Select do differently than Poll, other than taking a list of
connections instead of just one?

Thanks
 
P

puzzlecracker

Exactly, we need to define what a timeout means for the APIConnector. Is it
configurable by the client, when he/she creates an instance of the
APIConnector class? My design idea was for the client to spell it out to
the APIConnector: "at this point in time, I want you to tell me when
you have been idle long enough", via the WaitUntilInactive call. Curious
what you think of it.

But the client shouldn't be executing a timeout on the basis of packets.
Just to be clear: in this thread, you've used the term "packets" even  
though you seem to be describing a TCP connection.  In reality, at the  
Socket level you don't see packets with TCP, you see bytes.

Sorry, yes, I was referring to packets as TCP-level data, which is just
an array of bytes... umm, I do see packets with Ethereal :)))


Now, if you mean the latter, then the client surely does see those, since 
your APIConnector is providing those to the client as they arrive.  If you  
mean the former, then all you can accomplish by implementing a timeout  
between these arbitrary series of bytes is to artificially introduce  
errors into your network i/o when you otherwise wouldn't have had any.

Since you say the client doesn't know when the "packets" arrive, I can  
only assume you're talking of the latter, and IMHO causing an error to  
occur when one otherwise wouldn't have is simply not a good design.
It's not recursive, unless the operation is able to complete immediately, 
which is rare and not a problem at all.  Calling BeginReceive() from your  
callback method is in fact the standard way to use BeginReceive() and the 
other asynchronous methods.
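For illustration, a sketch of that standard pattern: EndReceive() in the callback, then the next BeginReceive() issued from the same place (the class name, buffer size, and HandleBytes are placeholders, not the actual APIConnector code):

using System;
using System.Net.Sockets;

class Receiver
{
    private readonly Socket _socket;
    private readonly byte[] _buffer = new byte[65535];

    public Receiver(Socket connectedSocket)
    {
        _socket = connectedSocket;
    }

    public void Start()
    {
        _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, ReadCallback, null);
    }

    private void ReadCallback(IAsyncResult ar)
    {
        int read;
        try
        {
            read = _socket.EndReceive(ar);
        }
        catch (SocketException)
        {
            return; // connection failed: clean up / raise an error event here
        }

        if (read == 0)
            return; // remote side closed the connection gracefully

        HandleBytes(_buffer, read);  // parse headers/messages, dispatch events

        // Re-issue the receive. This is not unbounded recursion: the callback
        // normally runs on a thread-pool thread after the previous call returned.
        _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, ReadCallback, null);
    }

    private void HandleBytes(byte[] data, int count)
    {
        // placeholder for the parsing/dispatch logic
    }
}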

Thanks
 
S

Steve

puzzlecracker said:
[snip]
Take a look at Socket.Select. Set up the socket you want to read from in
the checkRead list and call it with the timeout you want. The call will
return when data is available or there is a timeout.

Steve, I am quite aware of Select; however, I think it will have
issues similar to the Poll method. In addition, I only anticipate one
socket connection, to the main (remote) application. What can
Select do differently than Poll, other than taking a list of
connections instead of just one?

Thanks

Sorry, on closer examination it appears that Poll is basically Select for
one socket. I've done a fair amount of TCP/IP programming in other languages
using BSD sockets and Winsock, and the standard answer there is to use
select.

The way I have handled timeouts with messages using TCP/IP sockets in a
quasi-realtime environment is to create my own layer on top of the socket
stream.

I divide the stream into messages. Each message is preceded by a header that
includes the length and a numeric command code. This allows the receiver to
identify when an entire message has been received and to deal with the
message based on the command code. One command code I define is "are you
there", another is "acknowledge". I have a separate thread that
periodically sends an "are you there" message. If it doesn't receive an
"acknowledge" in a reasonable amount of time, it triggers recovery action.

I have found that this recovery code seldom gets executed.
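For illustration, a rough sketch of that kind of framing layer, assuming a 4-byte length prefix and a 2-byte command code (the field sizes, the little-endian encoding that BinaryWriter uses, and the AreYouThere/Acknowledge codes are all illustrative choices, not a fixed format):

using System.IO;

static class Framing
{
    public const ushort AreYouThere = 1;
    public const ushort Acknowledge = 2;

    // Writes: [4-byte length][2-byte command][body]
    public static void WriteMessage(Stream stream, ushort command, byte[] body)
    {
        BinaryWriter writer = new BinaryWriter(stream);
        writer.Write(body.Length);
        writer.Write(command);
        writer.Write(body);
        writer.Flush();
    }

    // Blocks until a complete header and body have been read from the stream
    // (for a socket, pass a NetworkStream wrapped around it).
    public static byte[] ReadMessage(Stream stream, out ushort command)
    {
        BinaryReader reader = new BinaryReader(stream);
        int length = reader.ReadInt32();
        command = reader.ReadUInt16();
        return reader.ReadBytes(length);
    }
}

Because ReadMessage only returns whole messages, the receiver never has to guess where one response ends and the next begins, which is the "well-defined end" discussed earlier in the thread.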

In reading about TCP/IP I have found that it is kind of a two-edged sword.
On the one hand, it is really nice in that it creates the abstraction of a
continuous stream of bytes across a network that is very convenient to use.
On the other hand, it was set up for systems where delays of a few seconds or
even a few minutes are acceptable when recovering from communication errors.
It makes sense if you think about TCP/IP as being a protocol for sending
files from coast to coast. But when you're trying to send data across a
room and want delivery in less than 100 msec, it makes things difficult.

Regards,
Steve
 
P

puzzlecracker

[...]
At the very least, I hope that your APIConnector class has an event
that is raised when the APIConnector instance times out.

Exactly, we need to define what a timeout means for the APIConnector. Is it
configurable by the client, when he/she creates an instance of the
APIConnector class? My design idea was for the client to spell it out to
the APIConnector: "at this point in time, I want you to tell me when
you have been idle long enough", via the WaitUntilInactive call. Curious
what you think of it.

As I think I've mentioned, it's not how I'd likely implement it.  But  
depending on how much control you delegate to the client, it could be okay.

For example, as long as the client is reliably notified when the  
connection is being shut down, that's a start in the right direction.  
Even better is if the event that's raised is a cancellable event,  
providing the client with a way to be notified that the APIConnector  
_wants_ to shut down the connection, but allowing the client to perform  
some kind of logic making the ultimate decision (for example, presenting a  
dialog to the user, or counting the number of timeouts, or checking some  
state that's relevant, etc.)
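For example, a sketch of that cancellable-event idea using System.ComponentModel.CancelEventArgs (APIConnectorBase, InactivityTimeout, and OnInactivityTimeout are illustrative names, not the actual API):

using System;
using System.ComponentModel;

class APIConnectorBase
{
    public event EventHandler<CancelEventArgs> InactivityTimeout;

    // Called by the connector's internal idle timer when the limit is reached.
    protected void OnInactivityTimeout()
    {
        CancelEventArgs args = new CancelEventArgs();
        EventHandler<CancelEventArgs> handler = InactivityTimeout;
        if (handler != null)
            handler(this, args);

        if (!args.Cancel)
        {
            Disconnect();   // nobody objected, so shut the connection down
        }
        // if args.Cancel is true, the client chose to keep the connection open
    }

    protected virtual void Disconnect()
    {
        // close the socket, raise a disconnected event, etc.
    }
}

The client subscribes to InactivityTimeout and sets e.Cancel = true when it wants to keep the connection open; otherwise the connector proceeds with the disconnect.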
Sorry, yes, I was referring to packets as TCP-level data, which is just
an array of bytes... umm, I do see packets with Ethereal :)))

Right.  Just be careful, because that "array of bytes" from the TCP socket  
can be of arbitrary length.  Using the word "packets" has the subtle  
implication that there's some orderly arrangement of the bytes, that they 
are presented in well-defined segments.  Of course, that's just not how 
TCP is.  TCP only guarantees that the bytes arrive in exactly the order 
they were sent.  The divisions between groups of bytes that are received  
can occur anywhere, up to the total size of the buffer used for a single  
receive operation.

As long as you understand that (and I assume you do), then the erroneous  
terminology isn't that big of a deal.  But words have more power than  
people tend to recognize, and it's worth being careful about that sort of 
thing.  Use the wrong word often enough, and the underlying design might  
start to suffer.  :)

Pete

Is it a good idea to use Poll and then check whether the socket has data
available, e.g. with socket.Available?

Thanks
 
