TCPClient

Anders Eriksson

Hello,

I'm trying to create a client for a Client/Server application. I need some
ideas, since mine are getting very complicated!

A short description of the system:

Once the connection is made, both server and client may send requests.
Once a request has been sent, the other party has to wait until this request
is processed before it can send a new request.
When a request is received, an ACK must be sent to acknowledge that the
request has been received. If no ACK is received by the sending party within 10
seconds, the request is resent.
Every 14 seconds a KeepAlive message is sent by both server and client. If
this message has not arrived after 18 seconds, the connection is reset.

A request may look like this:
<STX><request>;<Id>;[<param>;[<param>;[...]]]<ETX>
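Such a frame can be pulled apart by splitting on the separators; here is a minimal Python sketch (the field names and sample values are illustrative, not from the specification):

```python
STX = "\x02"  # start-of-text framing character
ETX = "\x03"  # end-of-text framing character

def parse_request(frame):
    """Split one <STX>...<ETX> frame into (request, id, params)."""
    if not (frame.startswith(STX) and frame.endswith(ETX)):
        raise ValueError("frame is not STX/ETX delimited")
    # Fields are separated by ';'; a trailing ';' before ETX yields an
    # empty final field, which we drop.
    fields = [f for f in frame[1:-1].split(";") if f]
    request, req_id, params = fields[0], fields[1], fields[2:]
    return request, req_id, params
```

For example, `parse_request("\x02LOGIN;42;user;secret\x03")` yields `("LOGIN", "42", ["user", "secret"])`.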

My idea was to have two threads, one for listening and one for writing, but
then I realized that when reading I might not get a whole request, which makes
it more complicated.
I could solve it by adding a new thread, but I feel that I'm missing something
and creating a much more complicated program than needed.

Please advise!

// Anders

--
English is not my first language,
so any errors, insults or strangeness
have happened during the translation.
Please correct my English so that I
may become better at it!
 
Peter Duniho

Anders said:
Hello,

I'm trying to create a client for a Client/Server application. I need
some ideas, since mine are getting very complicated!

A short description of the system:

Once the connection is made, both server and client may send requests.

That's fine.
Once a request has been sent, the other party has to wait until this
request is processed before it can send a new request.

You need to be less ambiguous here. When you write "other party", which
end of the connection do you mean? The end that has just sent a
request, or the end to whom the request has been sent?

The sentence you wrote could be interpreted either way, and only one
interpretation is going to work. In particular, it's fine if for a
given endpoint of the connection, _that_ endpoint must not send another
request until it's received the results from a previous request it's
sent. But there is no way to enforce that if one endpoint sends a
request, that the _other_ endpoint to whom that request was sent must
not itself send a request until it has finished replying to the request
that was sent to it.

Network latency means that one endpoint could send a request, and then
the other endpoint could send a request after the first endpoint sent
one, but before it actually receives the request that the first endpoint
sent.

If either endpoint is permitted to send a request, then your design must
take into account an endpoint having to process a request from the other
endpoint while it's still waiting for the other endpoint to respond to a
request it itself has sent.
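One way to accommodate this is a single receive path that classifies each incoming frame as either the response to our one outstanding request or a fresh request from the peer. A hedged Python sketch (the tuple shape and helper names are assumptions, not part of the protocol):

```python
def dispatch(frames, pending_id, handle_request):
    """Route incoming frames: the frame answering our pending request
    is returned; every other frame is treated as a request from the
    peer and passed to handle_request. 'frames' is any iterable of
    parsed (name, id, params) tuples."""
    response = None
    for name, msg_id, params in frames:
        if response is None and msg_id == pending_id:
            # The reply we were waiting for.
            response = (name, msg_id, params)
        else:
            # A request the peer sent while we were waiting; we must
            # service it even though our own request is outstanding.
            handle_request(name, msg_id, params)
    return response
```

The key point is that `handle_request` can fire while our own request is still outstanding.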
When a request is received, an ACK must be sent to acknowledge that the
request has been received. If no ACK is received by the sending party within
10 seconds, the request is resent.

Using TCP, this is a very bad idea. TCP will already generate an error
if the data was not successfully sent to the other endpoint. The only
scenario where you'd fail to get an ACK but not get an error from the
network API would be caused by some kind of latency on the network
delaying, but not preventing, communication.

So, if you don't get an error on the network connection itself, but you
fail to get the ACK, then resending the request guarantees that the
remote endpoint will receive multiple, duplicated requests. And absent
an actual error on the network, the endpoint sending the requests will
eventually get multiple, duplicated responses.

If you want to reimplement the reliability that TCP already provides,
use UDP instead. It will give you lots of annoying hazards to worry
about and deal with.
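If a specification forces the resend-after-timeout rule anyway, the receiving side at least needs to detect those duplicates, for instance by remembering request Ids it has already processed. A sketch (this assumes the <Id> field is unique per logical request):

```python
class DuplicateFilter:
    """Remember request Ids already seen, so that a resent request can
    be re-ACKed without being processed a second time."""

    def __init__(self):
        # A real implementation would expire old Ids rather than let
        # this set grow without bound.
        self.seen = set()

    def is_new(self, req_id):
        if req_id in self.seen:
            return False  # duplicate: caller ACKs again, skips processing
        self.seen.add(req_id)
        return True
```

A resent request then gets its ACK again without the work being done twice.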
Every 14 seconds a KeepAlive message is sent by both server and client.
If this message has not arrived after 18 seconds, the connection is reset.

There is no point in having a timeout for keep-alive messages. If
there's a problem with the connection, simply attempting to send the
message will result in an error from the network API. If either
endpoint did in fact fail to receive a keep-alive from the other
endpoint, the most likely reason for that would be latency on the other
end (i.e. the endpoint is itself not keeping up for some reason), and
resetting the connection is guaranteed not to help with that.

If you must use keep-alives, it's sufficient simply to send them and
note whether an error occurred during the send or not.

In reality, generally it's pointless to use keep-alives at all. TCP is
designed to tolerate intermittent interruptions in the end-to-end
connectivity. Including keep-alives in your application protocol has as
its main effect disabling this failure-tolerance. I.e. it prevents TCP
from being as effective as it is designed to be.

At the very least, you should state clearly and precisely what specific
problem you intend to solve by using keep-alive messages. Then you can
make an informed decision as to whether keep-alive messages are really
the best way to achieve that goal, or if there's some other, less
intrusive way to do it.
A request may look like this:
<STX><request>;<Id>;[<param>;[<param>;[...]]]<ETX>

My idea was to have two threads, one for listening and one for writing,
but then I realized that when reading I might not get a whole request,
which makes it more complicated.
I could solve it by adding a new thread, but I feel that I'm missing
something and creating a much more complicated program than needed.

The question of threading is orthogonal to the question of dealing with
the application protocol design. But, unless you expect there to be a
very large amount of data to be sent at one time, or very large latency
in responding to requests, you definitely do not need separate threads
for receiving and sending. In general, for most simple programs it
should be sufficient for the thread that receives a request to go ahead
and process that request and send the response.
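That single-threaded approach also deals naturally with the partial-read problem from the original post: keep a byte buffer, append each chunk received from the socket, and pull out only the complete <STX>...<ETX> frames. A Python sketch (buffering only; the socket calls around it are omitted):

```python
STX, ETX = b"\x02", b"\x03"

def extract_frames(buffer):
    """Pull every complete <STX>...<ETX> frame out of 'buffer',
    returning (frames, leftover). A partial read simply leaves its
    bytes in 'leftover' for the next call."""
    frames = []
    while True:
        start = buffer.find(STX)
        if start < 0:
            return frames, b""             # no frame started yet
        end = buffer.find(ETX, start)
        if end < 0:
            return frames, buffer[start:]  # incomplete frame: keep it
        frames.append(buffer[start:end + 1])
        buffer = buffer[end + 1:]
```

A frame split across two reads stays in the leftover until the next chunk completes it, so no extra thread is needed just for reassembly.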

If your program does meet one of the conditions where it would be
helpful to have multiple threads, then I would recommend using the
asynchronous socket API (i.e. Socket.BeginReceive(),
Socket.BeginSend()), where you have a single thread dedicated to
processing requests as they come in. The async API provides additional
threads on your behalf for the actual i/o.
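The same pattern exists outside .NET, too; for instance, Python's asyncio plays roughly the role described here for Socket.BeginReceive()/Socket.BeginSend(), with the event loop supplying the I/O concurrency while one coroutine per connection does the processing. A rough analogue (not the .NET code itself; the framing bytes follow the protocol above):

```python
import asyncio

async def serve_frames(reader, writer, handle):
    """One coroutine per connection: read chunks, reassemble
    <STX>...<ETX> frames, process each with 'handle', and send the
    framed reply. The event loop multiplexes many such coroutines on
    one thread, much as BeginReceive/BeginSend hand off the raw I/O."""
    buffer = b""
    while True:
        chunk = await reader.read(4096)
        if not chunk:
            break                      # peer closed the connection
        buffer += chunk
        while b"\x03" in buffer:
            frame, _, buffer = buffer.partition(b"\x03")
            reply = handle(frame.lstrip(b"\x02"))
            writer.write(b"\x02" + reply + b"\x03")
            await writer.drain()
```

With `asyncio.start_server`, one such coroutine would run per accepted connection.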

Pete
 
Anders Eriksson

Hello,

Peter Duniho said:
You need to be less ambiguous here. When you write "other party", which
end of the connection do you mean? The end that has just sent a request,
or the end to whom the request has been sent?

The sentence you wrote could be interpreted either way, and only one
interpretation is going to work. In particular, it's fine if for a given
endpoint of the connection, _that_ endpoint must not send another request
until it's received the results from a previous request it's sent. But
there is no way to enforce that if one endpoint sends a request, that the
_other_ endpoint to whom that request was sent must not itself send a
request until it has finished replying to the request that was sent to it.
I had to reread the specification, and you are correct: the endpoint that has
SENT a request can't send a new request before it has received an answer.
Using TCP, this is a very bad idea. TCP will already generate an error if
the data was not successfully sent to the other endpoint. The only
scenario where you'd fail to get an ACK but not get an error from the
network API would be caused by some kind of latency on the network
delaying, but not preventing, communication.
There is also a NAK message, though I fail to see when I should use it ;-)
Since the specification demands this message and its handling, I have no
choice...
At the very least, you should state clearly and precisely what specific
problem you intend to solve by using keep-alive messages. Then you can
make an informed decision as to whether keep-alive messages are really the
best way to achieve that goal, or if there's some other, less intrusive way
to do it.
This is in the specification and I can't change it! If I remember
correctly, it is for the event of a hardware failure, e.g. someone pulling the
network cable.
I do think that 18 seconds is a very short time, but ...
The question of threading is orthogonal to the question of dealing with
the application protocol design. But, unless you expect there to be a
very large amount of data to be sent at one time, or very large latency in
responding to requests, you definitely do not need separate threads for
receiving and sending. In general, for most simple programs it should be
sufficient for the thread that receives a request to go ahead and process
that request and send the response.
As I said at the beginning, I think I have overcomplicated things.
After reading your very helpful mail, I will create a new class and just do
everything in a single thread.

Thank you very much for your help!

// Anders
 
