Strange server socket behaviour


Massimo

I'm facing quite a strange problem with a network server application.

This is quite a complex project, involving embedded roaming clients that
send data to a central server over GPRS modems, and only the server side
is being developed in .NET: the clients run C-language firmware on
microcontrollers.

I'm not quite sure the clients' TCP/IP libraries really follow every
existing standard, so I can't rule out the problem being there (in fact, I
rather suspect it is)... but they can connect without trouble to just about
any other TCP/IP server in the world (ok, maybe that's exaggerating a bit,
but you get the idea), so I think there must be some problem in the server
code.

The server is quite simple: it just sits there, waits for client
connections, accepts them and starts reading from the sockets until a server
shutdown is requested or the connection breaks. It never sends any data,
because the client-server protocol is unidirectional, and it never closes a
connection unless preliminary authentication fails or an error is detected.

This is (roughly) the server's code:


----------
using System;
using System.Net;
using System.Net.Sockets;

int firstpacketsize = 10;
int port = 42;
Socket serversocket = null;

void Start()
{
    // Create the listening socket and start the first asynchronous accept
    serversocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    serversocket.Bind(new IPEndPoint(IPAddress.Any, port));
    serversocket.Listen(10);
    serversocket.BeginAccept(AcceptCallback, null);
}

void AcceptCallback(IAsyncResult ar)
{
    Socket clientsocket = serversocket.EndAccept(ar);

    string ip = ((IPEndPoint)clientsocket.RemoteEndPoint).Address.ToString();
    Console.WriteLine("Connection started from {0}", ip);

    byte[] packet = new byte[firstpacketsize];

    Console.WriteLine("Waiting for first packet");

    // Synchronous (blocking) read of the fixed-size authentication packet
    int r = clientsocket.Receive(packet, firstpacketsize, SocketFlags.None);

    // ...
    // Some error checking and client authentication code
    // ...

    // Now a new ClientConnection object is created and starts its own
    // asynchronous data reading and processing
    // ...

    // Re-prime the accept for the next client
    serversocket.BeginAccept(AcceptCallback, null);
}
----------


This code works perfectly when the client is a .NET program, but crashes
when a connection is made by the "real" client. A SocketException is thrown
during the first clientsocket.Receive(), complaining about the connection
being forcibly closed by the remote host. But the worst is still to come:
after this error (and the subsequent clientsocket.Close()),
serversocket.BeginAccept() throws an exception too!

The client, of course, didn't close anything... or at least didn't mean to.
I think there must be some nasty bug in the client's TCP/IP libraries,
because the server works flawlessly with a .NET client... but, as I said,
the clients seem to work fine when connecting to other network servers, so
maybe the problem is here and I'm not correctly handling some error that
usually doesn't happen but sometimes does.

What can I do to understand what's really happening here?

Do you see any flaw in my server code which could account for this behaviour
upon clientsocket.Receive()?

And how can an error in clientsocket.Receive() cause another error in
serversocket.BeginAccept()?!?

I'm really puzzled here...


Massimo
 

Kevin Spencer

The remote client will certainly close the connection if your client socket
does not. Take a look at the following 2 articles on using an asynchronous
server socket:

http://www.codeguru.com/csharp/csharp/cs_misc/sampleprograms/article.php/c7695/#more
http://www.codeguru.com/Csharp/Csharp/cs_network/sockets/article.php/c8781/

The following tutorial may also be of help:
http://www.devhood.com/tutorials/tutorial_details.aspx?tutorial_id=709

--
HTH,

Kevin Spencer
Microsoft MVP
Software Composer
http://unclechutney.blogspot.com

The shortest distance between 2 points is a curve.

Massimo said:
[original post quoted in full; snipped]
 

Massimo

Kevin said:
The remote client will certainly close the connection if your client
socket does not. Take a look at the following 2 articles on using an
asynchronous server socket:

http://www.codeguru.com/csharp/csharp/cs_misc/sampleprograms/article.php/c7695/#more
http://www.codeguru.com/Csharp/Csharp/cs_network/sockets/article.php/c8781/

The following tutorial may also be of help:
http://www.devhood.com/tutorials/tutorial_details.aspx?tutorial_id=709

I'm sorry?
What exactly do you mean?

I already have a perfectly working server that uses asynchronous sockets :)

The problem is, when this particular device connects to it, the behaviour I
described surfaces. When I connect from a .NET client program, it works
flawlessly.

Why should the remote client close the connection? Its C code isn't telling
it to do that; the code says "open a connection to this server on this port
and keep it open". But when the server tries to Receive() from the newly
accepted connection, it gets that SocketException saying the connection was
closed by the remote host. It should just block waiting for data, shouldn't
it? It does, when a .NET client connects. This is the reason I think there
are some bugs in the device's TCP/IP libraries. But the device can open a
connection to an Internet web server, so maybe its TCP/IP stack works...
and there's something wrong in my server.

It's a strange problem, I agree... but I don't think a socket tutorial is
what I'm in need of :)


Massimo
 

Kevin Spencer

I think it has to do with making a blocking call to Receive when listening
asynchronously. The .NET Framework SDK says the following in the Receive
documentation:

"If no data is available for reading, the Receive method will block until
data is available. If you are in non-blocking mode, and there is no data
available in the protocol stack buffer, the Receive method will complete
immediately and throw a SocketException. You can use the Available property
to determine if data is available for reading. When Available is non-zero,
retry your receive operation."

In fact, I've never heard of anyone attempting to combine asynchronous and
synchronous methods in the same application.
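
For what it's worth, the retry pattern that documentation describes would
look roughly like this (a minimal sketch, assuming a connected Socket named
"client" already in non-blocking mode; illustrative only, not taken from
Massimo's code):

----------
// Minimal sketch of the Available/retry pattern the SDK documentation
// describes; "client" is assumed to be a connected, non-blocking Socket.
byte[] buffer = new byte[10];
while (client.Available == 0)
{
    // No data in the protocol stack buffer yet; wait a bit and retry.
    System.Threading.Thread.Sleep(50);
}
int received = client.Receive(buffer, buffer.Length, SocketFlags.None);
----------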

--
HTH,

Kevin Spencer
Microsoft MVP
Software Composer
http://unclechutney.blogspot.com

The shortest distance between 2 points is a curve.
 

Barry Kelly

Massimo said:
Why should the remote client close the connection? Its C code isn't telling
him to do this, its code says "open a connection to this server on this port
and keep it open". But when the server tries to Receive() from the newly
accepted connection, it gets that SocketException saying the connection was
closed from the remote host. It should just blocks waiting for data,
shouldn't it? It does, when a .NET client connects. This is the reason I
think there are some bugs in the device's TCP/IP libraries. But the device
can open a connection to an Internet web server, so maybe its TCP/IP
works... and there's something wrong in my server.

I think you need to break out a packet analyser like Wireshark or
similar, and see what's going on over the wires.

-- Barry
 

Barry Kelly

Kevin said:
"If no data is available for reading, the Receive method will block until
data is available. If you are in non-blocking mode, and there is no data
available in the protocol stack buffer, the Receive method will complete
immediately and throw a SocketException. You can use the Available property
to determine if data is available for reading. When Available is non-zero,
retry your receive operation."

Note that non-blocking mode is not referring to asynchronous code; the
two are quite different. Non-blocking mode is based on polling.

-- Barry
 

Goran Sliskovic

Barry said:
I think you need to break out a packet analyser like Wireshark or
similar, and see what's going on over the wires.

-- Barry

Yes, a packet analyzer should do the trick (Ethereal is also good). I
suspect the client is doing a send() followed by an immediate close() on a
socket in non-linger mode, which leads to the data plus an RST segment
being sent.

Just a hunch :)
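
To illustrate what that looks like from managed code: an abortive (hard)
close with a zero linger timeout resets the connection the same way. A
minimal sketch of such a client (hypothetical host and port; Massimo's real
client is C firmware, not .NET):

----------
// Minimal sketch of an abortive close from a .NET client (hypothetical
// host/port). The zero-timeout linger makes Close() send an RST instead
// of the normal FIN handshake, which the server sees as WSAECONNRESET
// ("connection forcibly closed by the remote host").
Socket s = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
s.Connect("server.example.com", 42);
s.Send(new byte[10]);                       // send the data...
s.LingerState = new LingerOption(true, 0);  // ...then arm a hard close
s.Close();                                  // sends RST, not FIN
----------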

Regards,
Goran
 

Massimo

Kevin said:
I think it has to do with making a blocking call to Receive when listening
asynchronously. The .Net Framework SDK says the following in the Receive
documentation:

"If no data is available for reading, the Receive method will block until
data is available. If you are in non-blocking mode, and there is no data
available in the protocol stack buffer, the Receive method will complete
immediately and throw a SocketException. You can use the Available
property to determine if data is available for reading. When Available is
non-zero, retry your receive operation."

In fact, I've never heard of anyone attempting to combine asynchronous and
synchronous methods in the same application.

Ok, maybe I'm mixing things that really shouldn't be mixed, but why does it
work when a .NET client connects then?

Also, I never activated non-blocking mode... and I don't think the framework
set it for me when I called an asynchronous socket method. By the way, I
never called async methods on the socket returned from
serversocket.Accept()... Receive() is my first operation on it.

Besides, even if the socket were in non-blocking mode, it shouldn't throw
*that* exception... the exception should carry a WSAEWOULDBLOCK error code,
not WSAECONNRESET!
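
(For reference, the two cases can be told apart in the catch handler; a
minimal sketch against my posted code, using the .NET 2.0 SocketError enum:)

----------
// Minimal sketch: distinguishing the two Winsock error codes in the
// catch handler (uses the .NET 2.0 SocketError enum and the variables
// from the posted server code).
try
{
    int r = clientsocket.Receive(packet, firstpacketsize, SocketFlags.None);
}
catch (SocketException ex)
{
    if (ex.SocketErrorCode == SocketError.WouldBlock)
        Console.WriteLine("Non-blocking socket, no data yet (WSAEWOULDBLOCK)");
    else if (ex.SocketErrorCode == SocketError.ConnectionReset)
        Console.WriteLine("Peer reset the connection (WSAECONNRESET)");
    else
        throw;
}
----------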


Massimo
 

Massimo

Goran said:
Yes, a packet analyzer should do the trick (Ethereal is also good). I
suspect the client is doing a send() followed by an immediate close() on a
socket in non-linger mode, which leads to the data plus an RST segment
being sent.

Can you elaborate about this, or give me some links to more info?


Massimo
 

Peter Duniho

Barry said:
Note that non-blocking mode is not referring to asynchronous code; the
two are quite different. Non-blocking mode is based on polling.

Well, that's half right, anyway.

Non-blocking refers to what the socket does when an operation cannot
complete immediately. A blocking socket will block the thread until at
least some portion of the requested action can be done. A non-blocking
socket will return the error WSAEWOULDBLOCK (or a .NET equivalent when
writing .NET code, as I assume everyone here is doing :) ).

Synchronous and asynchronous refer to how code is notified that an
operation has completed. The terminology is fuzzier, depending on whether
one insists that only a blocking call can be synchronous, or whether
something like using select() qualifies (opinions vary). But generally
speaking, synchronous goes with blocking and asynchronous goes with
non-blocking. I readily admit, though, that this is not as set in stone as
the labels blocking and non-blocking, so perhaps your disagreement with
Kevin on that point is reasonable.

However, as far as "non-blocking mode is based on polling" goes... there is
absolutely nothing about non-blocking that implies polling, and in fact
most people using non-blocking sockets are NOT polling. Any asynchronous
use of a socket is non-blocking by definition, since after all, if the call
to a socket method blocked, there'd be no point in any sort of asynchronous
notification. Asynchronous does imply non-blocking, even if non-blocking
doesn't necessarily imply asynchronous.
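
(To make the distinction concrete, here is a minimal sketch of a socket
explicitly put into non-blocking mode; the helper name is illustrative:)

----------
// Minimal sketch: a socket explicitly put into non-blocking mode. With
// nothing buffered, Receive() completes immediately by throwing a
// SocketException carrying WSAEWOULDBLOCK; nothing here implies polling,
// which is just one of several ways to drive such a socket.
static int TryReceive(Socket s, byte[] buf)
{
    s.Blocking = false;
    try
    {
        return s.Receive(buf);
    }
    catch (SocketException ex)
    {
        if (ex.SocketErrorCode == SocketError.WouldBlock)
            return 0; // nothing to read right now; come back later
        throw;
    }
}
----------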

Pete
 

Peter Duniho

Massimo said:
Ok, maybe I'm mixing things that really shouldn't be mixed, but why does
it work when a .NET client connects then?

I, for one, don't know. I wish I had a good answer to the question you're
asking. However, I can at least comment on other stuff... :)

Massimo said:
Also, I never activated non-blocking mode... and I don't think the
framework set it for me when I called an asynchronous socket method.

On any OS where I/O completion ports are supported (generally speaking,
NT-based versions of Windows, including XP), asynchronous use of sockets
in .NET uses I/O completion ports. When you create a completion port
associated with a given socket, that socket is necessarily put into
non-blocking mode. Otherwise, calls to send or receive would simply block,
negating the point of using an I/O completion port in the first place.

Just because you didn't explicitly set the socket to non-blocking, that
doesn't mean it's a blocking socket. There are lots of ways a socket can
wind up being set implicitly to non-blocking.

Massimo said:
By the way, I never called async methods on the socket returned from
serversocket.Accept()... Receive() is my first operation on it.

Besides, even if the socket were in non-blocking mode, it shouldn't throw
*that* exception... the exception should carry a WSAEWOULDBLOCK error
code, not WSAECONNRESET!

Agreed. If the problem were related solely to the socket being
non-blocking, the exception would have WSAEWOULDBLOCK in it.

I would agree with the other suggestions that the best way to start
looking at this would be to use a network traffic monitor and see what's
actually getting sent and received at each end. The first step is to
determine at what point the connection is getting reset, and hopefully to
get some indication as to what data caused that end to reset the connection.

Pete
 

Mike Blake-Knox

I fear this, too :-(

What shows up on the server console when you run your test? You are
writing some extra information onto the console.

Also, it looks as if the code accepts a connection and waits for the
first data to be received before it re-primes the accept. I know (from
looking at logs from a server in an unfriendly environment) that it is
possible to get exceptions from EndAccept(). Where is the exception
caught? The sample code doesn't include the try/catch code.

Hope this helps.

Mike
 

Massimo

Mike said:
What shows up on the server console when you run your test? You are
writing some extra information onto the console.

My error-checking code catches the SocketException from Receive(), closes
the new socket and discards the client connection. Then it tries to call
serversocket.BeginAccept() again... and this, too, throws an exception!

Mike said:
Also, it looks as if the code accepts a connection and waits for the
first data to be received before it re-primes the accept. I know (from
looking at logs from a server in an unfriendly environment) that it is
possible to get exceptions from EndAccept(). Where is the exception
caught? The sample code doesn't include the try/catch code.

Because it was sample code :)
In the actual code, every socket method call is checked for exceptions, of
course.


Massimo
 

Barry Kelly

Peter said:
However, as far as "non-blocking mode is based on polling" goes... there is
absolutely nothing about non-blocking that implies polling, and in fact
most people using non-blocking sockets are NOT polling. Any asynchronous
use of a socket is non-blocking by definition

I can't agree with you because I think you are redefining the accepted
meaning of "non-blocking" in the context and history of BSD sockets.

-- Barry
 

Peter Duniho

Barry said:
I can't agree with you because I think you are redefining the accepted
meaning of "non-blocking" in the context and history of BSD sockets.

While I certainly respect your right to disagree, in this case you are
mistaken. Even using BSD sockets, one is not required to poll a
non-blocking socket, and the fact is we are talking about Windows sockets
and so BSD behavior would be irrelevant in any case.

Either way, characterizing "non-blocking" as necessarily meaning "polling"
is simply wrong.

Pete
 

Kevin Spencer

This is why I suspect the problem lies in mixing the asynchronous and
synchronous models for this class. The documentation is not entirely clear
about what goes on underneath, and one would need a network protocol
analyzer to be entirely sure, but knowing Microsoft, the two models
(synchronous and asynchronous) were each designed, as the documentation
suggests, to work within the parameters of that model. So before doing all
of that packet analysis, I would simply try the easiest thing first and
switch to a consistent usage of the object model, which Microsoft probably
tested fairly well, just to potentially save time.
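
For example, the blocking Receive() in the accept callback could be
replaced with an asynchronous read along these lines (a minimal sketch
based on the posted server code; untested, error handling omitted):

----------
// Minimal sketch: staying fully asynchronous. Instead of blocking in
// AcceptCallback, start an async read for the first packet (based on
// the posted server code; error handling omitted).
void AcceptCallback(IAsyncResult ar)
{
    Socket clientsocket = serversocket.EndAccept(ar);

    // Re-prime the accept immediately so new clients aren't kept waiting
    serversocket.BeginAccept(AcceptCallback, null);

    byte[] packet = new byte[firstpacketsize];
    clientsocket.BeginReceive(packet, 0, firstpacketsize, SocketFlags.None,
        delegate(IAsyncResult rar)
        {
            int r = clientsocket.EndReceive(rar);
            // ... error checking and client authentication code ...
        },
        null);
}
----------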

--
HTH,

Kevin Spencer
Microsoft MVP
Software Composer
http://unclechutney.blogspot.com

The shortest distance between 2 points is a curve.
 
