Possible bug in .Net 2.0 udp sockets?


Redge

Hello everyone!

A while ago, I created a client-server UDP application in .Net 1.1. A
client application connects via UDP to a server; once acknowledged, the
server sends the requested data to the client (a video stream).

The application has to work over the internet (where the data may cross
NATs, routers and firewalls on its way). That's why the client initiates
the data transfer, and the server examines the EndPoint-object to find
out where the request originates, and where to send the data.

As recommended by an MSDN article
(http://msdn.microsoft.com/msdnmag/issues/06/02/UDP/),
I use one socket to listen on a port, and call
"BeginReceiveFrom()" multiple times, each time with a separate buffer.
In .Net 1.1, this works just fine: in the callback, I call
"EndReceiveFrom()" to get the data and the endpoint, and use this
endpoint to send the requested data to.

However, in .Net 2.0, I get a wrong endpoint from time to time!
I tested this with a simple application:

- Several sockets that send data, each sending from (= bound to) a
different port
- The data that is sent contains the port it was sent from
- One socket listening for incoming udp-packets, constructed as
described above (several calls to "BeginReceiveFrom()" with separate
buffers).

The test application can run on a single computer or on several
computers within the same network. In each case, the application
compares the port saved in the EndPoint-object with the port specified
in the received data.

- In .Net 1.1, this works just fine: the port in the EndPoint-object and
the port in the received buffer are always equal, BUT:
- In .Net 2.0, I get lots of wrong ports in the beginning -- normally,
the port is always wrong when "EndReceiveFrom()" is called THE FIRST
TIME for a buffer; subsequent calls seem to work fine. (However, in
my REAL application, wrong port information is not only found in the
beginning, but also later on during runtime).

It is obvious that this error can lead to real problems in my
application, since I send data to the wrong clients. It is of course
possible that I just made a mistake in my implementation -- a mistake
that happens to have no effect in .Net 1.1, while .Net 2.0 is more
delicate in that respect.

However, I did not find anything drastically different from the msdn
samples. I made a class "SocketState", and one instance of this class is
used for every "BeginReceiveFrom()" call:

class SocketState
{
    private Socket udpSocket;
    private byte[] buffer;
    private int bufferSize;
    private EndPoint remoteEndPoint;

    public SocketState(Socket socket, int bufferSize)
    {
        this.udpSocket = socket;
        this.bufferSize = bufferSize;
        this.buffer = new byte[bufferSize];
        this.remoteEndPoint = (EndPoint)(new IPEndPoint(IPAddress.Any, 0));
    }

    public void BeginReceive(AsyncCallback callback)
    {
        this.buffer = new byte[bufferSize];
        this.remoteEndPoint = (EndPoint)(new IPEndPoint(IPAddress.Any, 0));

        this.udpSocket.BeginReceiveFrom(this.buffer, 0, this.buffer.Length,
            SocketFlags.None, ref this.remoteEndPoint,
            callback, this);
    }

    // properties omitted
}

So I create one listening socket and a number of SocketStates, which all
get a reference to this socket. Then I call "BeginReceive()" on every
SocketState instance; in the specified callback, I compare buffer and
endpoint.

In case somebody wants to take a closer look, the source code can be
downloaded here:

http://www.incognitek.com/user/stuff/MassiveSocketTest.zip

The zip file contains projects for Visual Studio 2003 and 2005.

Any help would be appreciated!
Greetings,

Daniel Sperl, Funworld AG

Mail: daniel[DOT]sperl[AT]funworld[DOT]com
 

Redge

Hello Vadym!

First of all, thanks for having a look at the code. But I am still not
sure what to make of this:

I called BeginReceiveFrom() several times on purpose, so that I do not
miss any data that arrives at the socket. Have a look at the following
code, which is my callback method:

public void OnReceive(IAsyncResult result)
{
    SocketState currSocketState = result.AsyncState as SocketState;

    try
    {
        EndPoint remoteEndPoint = new IPEndPoint(0, 0);
        int bytesRead = currSocketState.UdpSocket.EndReceiveFrom(result,
            ref remoteEndPoint);

        // ... do some stuff
    }
    finally
    {
        currSocketState.BeginReceive(new AsyncCallback(this.OnReceive));
    }
}

If I call BeginReceiveFrom() only once, my server will have a "blind
spot" in the time between entering the callback and starting
BeginReceiveFrom() again!

The msdn article I referred to in my last mail puts it the following way:

UDP will (...) drop packets upon receipt if, even momentarily,
BeginReceiveFrom is not active on the socket.

(This) problem is the one most easily solved. In my sample receiver
code, there's a short span of time between acceptance of a packet and
calling another BeginReceiveFrom. Even if this call were the first one
in MessageReceivedCallback, there's a still short period when the app
isn't listening. One improvement would be activating several instances
of BeginReceiveFrom, each with a separate buffer to hold received packets.

And that's exactly what I did. So, you are right, when I limit the
number of simultaneous listeners to 1, my application works. But I will
lose a lot of incoming packets, especially if there is a high workload.

So there is still the question: Is something wrong in my implementation
of calling BeginReceiveFrom() multiple times, or is there an error in
the .Net Framework? I suppose it's my error, but what is strange is that
it works in .Net 1.1 without any problems ...

Greetings,
Daniel

Vadym said:
Hello, Redge!

There is a subtle bug in your implementation.

You have multiple listeners and only one socket and one endpoint. There is no
purpose in creating multiple listeners. If you had multiple endpoints, then it
would be necessary to create one listener per endpoint.

I changed your code and the bug vanished.

public Receiver(int numListeners)
{
    this.inPort = 10000;
    this.numListeners = 1; //= numListeners;
    ....
}


 

Al Norman

Pardon me for butting in, but you should be able to increase the size of the
queue of UDP messages. We do this (in C++) with the following:

int receivesize = RECEIVEQUEUESIZE;
if (0 != setsockopt(mysocket,SOL_SOCKET,SO_RCVBUF,
(char *)&receivesize,sizeof(receivesize)))


where RECEIVEQUEUESIZE is a #define of 1024*1024

You won't miss any UDP packets with a buffer that large!
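A rough C# equivalent of the setsockopt call above is to set
Socket.ReceiveBufferSize (a minimal sketch, not code from this thread; the
1 MB value is only illustrative):

using System.Net.Sockets;

class ReceiveBufferSketch
{
    static void EnlargeReceiveBuffer(Socket udpSocket)
    {
        // Socket.ReceiveBufferSize maps to SO_RCVBUF on the underlying socket.
        udpSocket.ReceiveBufferSize = 1024 * 1024;

        // Equivalent call through SetSocketOption:
        // udpSocket.SetSocketOption(SocketOptionLevel.Socket,
        //     SocketOptionName.ReceiveBuffer, 1024 * 1024);
    }
}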

al


Vadym Stetsyak said:
Hello, Redge!

R> I called BeginReceiveFrom() several times on purpose, since I do not
R> miss any data that arrives at the socket. Have a look at the
R> following
R> code, which is my callback method:

R> public void OnReceive(IAsyncResult result)
R> {
R> SocketState currSocketState = result.AsyncState as SocketState;

R> try
R> {
R> EndPoint remoteEndPoint = new IPEndPoint(0, 0);
R> int bytesRead =
R> currSocketState.UdpSocket.EndReceiveFrom(result,
R> ref remoteEndPoint);

R> // ... do some stuff
R> }
R> finally
R> {
R> currSocketState.BeginReceive(new
R> AsyncCallback(this.OnReceive));
R> }
R> }

That is a correct implementation, and your app won't miss any data, because
after receiving data you issue another BeginReceiveFrom call.

If you don't do that, indeed, UDP stack can drop packets.
But this "dropping" occurs under certain circumstances.

I'll try to explain why this can happen. When the UDP stack receives a packet,
it stores it in a queue. It does so with every pending packet. This queue has
a limit, tied to the amount of memory it may occupy. When this limit is reached,
all subsequent UDP packets will be dropped by the stack.

When you call ReceiveFrom or BeginReceiveFrom, you take that packet out of the
queue (the data is copied to the buffer supplied by the caller).

R> If I call BeginReceiveFrom() only once, my server will have a "blind
R> spot" in the time between entering the callback and starting
R> BeginReceiveFrom() again!
R>
R> The msdn article I referred to in my last mail puts it the following way:
R>
R> UDP will (...) drop packets upon receipt if, even momentarily,
R> BeginReceiveFrom is not active on the socket.
R>
R> (This) problem is the one most easily solved. In my sample receiver
R> code, there's a short span of time between acceptance of a packet and
R> calling another BeginReceiveFrom. Even if this call were the first one
R> in MessageReceivedCallback, there's a still short period when the app
R> isn't listening. One improvement would be activating several instances
R> of BeginReceiveFrom, each with a separate buffer to hold received packets.


See the above.

Doing several BeginReceiveFrom calls can lead to unpredictable behavior


To minimize the time between EndReceiveFrom and the subsequent BeginReceiveFrom
you can enqueue data for processing. The algorithm can be the following:
- receive data
- enqueue it in a queue
- issue another receive

while another thread takes data from the queue for further processing.
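A minimal sketch of that pattern, assuming a single pending BeginReceiveFrom
and a plain Queue guarded by a lock (the class and member names here are
illustrative, not taken from the posted code):

using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class QueuedUdpReceiver
{
    private readonly Socket udpSocket;
    private readonly byte[] buffer = new byte[500];
    private EndPoint remoteEndPoint = new IPEndPoint(IPAddress.Any, 0);
    private readonly Queue<byte[]> pendingData = new Queue<byte[]>();

    public QueuedUdpReceiver(Socket boundUdpSocket)
    {
        this.udpSocket = boundUdpSocket;
        new Thread(this.ProcessLoop).Start(); // worker thread drains the queue
        this.BeginReceive();
    }

    private void BeginReceive()
    {
        this.remoteEndPoint = new IPEndPoint(IPAddress.Any, 0);
        this.udpSocket.BeginReceiveFrom(this.buffer, 0, this.buffer.Length,
            SocketFlags.None, ref this.remoteEndPoint,
            new AsyncCallback(this.OnReceive), null);
    }

    private void OnReceive(IAsyncResult result)
    {
        EndPoint sender = new IPEndPoint(IPAddress.Any, 0);
        int bytesRead = this.udpSocket.EndReceiveFrom(result, ref sender);

        // copy the datagram and hand it to the worker thread
        byte[] datagram = new byte[bytesRead];
        Array.Copy(this.buffer, datagram, bytesRead);
        lock (this.pendingData)
        {
            this.pendingData.Enqueue(datagram);
        }

        // re-arm the socket as quickly as possible
        this.BeginReceive();
    }

    private void ProcessLoop()
    {
        while (true)
        {
            byte[] datagram = null;
            lock (this.pendingData)
            {
                if (this.pendingData.Count > 0)
                    datagram = this.pendingData.Dequeue();
            }

            if (datagram != null)
            {
                // ... process the datagram here
            }
            else
            {
                Thread.Sleep(1);
            }
        }
    }
}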

R> And that's exactly what I did. So, you are right, when I limit the
R> number of simultaneous listeners to 1, my application works. But I will
R> lose a lot of incoming packets, especially if there is a high workload.

Did you test packet loss percentage?
Add a counter to the message and check that counter
every time you receive a message.

R> So there is still the question: Is something wrong in my implementation
R> of calling BeginReceiveFrom() multiple times, or is there an error in
R> the .Net Framework? I suppose it's my error, but what is strange is that
R> it works in .Net 1.1 without any problems ...

IMO it is not a stable solution.

While experimenting with your code, I've added two calls to
Console.WriteLine() -- one after Send and one after Receive -- and the
problem disappeared.

So, this means that the fact that the code worked under .NET 1.1
doesn't mean that the code is correct.

R> Greetings,
R> Daniel

--
Regards, Vadym Stetsyak
www: http://vadmyst.blogspot.com
 

Al Norman

Hi Vadym.

Yes, that's true, but you could tune the queue size (some multiple of the
UDP packet sizes that you are receiving, if they're known) and ensure that
you have sufficient buffering to accommodate the delays between calls to
ReceiveFrom(). If you are unable to do that, because the processing time for
an individual UDP packet is too large, you might want to consider using
another thread to process the packet and let the thread that's doing the
ReceiveFrom() calls react more quickly. Of course, you may be
just postponing the problem (i.e. you may need many 'worker threads' to
process the packets) and/or some form of internal queuing.

Anyways, that's my two cents worth ... I didn't read all of the preceding
stuff, so I may be re-hashing stuff that you've already talked about.

cya

al


Vadym Stetsyak said:
Hello, Al!

AN> Pardon me for butting in, but you should be able to increase the
AN> size of the
AN> queue of UDP messages. We do this (in C++) with the following:

AN> int receivesize = RECEIVEQUEUESIZE;
AN> if (0 != setsockopt(mysocket,SOL_SOCKET,SO_RCVBUF,
AN> (char *)&receivesize,sizeof(receivesize)))

AN> where RECEIVEQUEUESIZE is a #define of 1024*1024

AN> You won't miss any UDP packets with a buffer that large!

A large buffer will only delay the point at which packets are dropped if
the time between subsequent ReceiveFrom calls is significant.

AN> "Vadym Stetsyak said:
Hello, Redge!
R> I called BeginReceiveFrom() several times on purpose, since I do
not
R> miss any data that arrives at the socket. Have a look at the
R> following
R> code, which is my callback method:
R> public void OnReceive(IAsyncResult result)
R> {
R> SocketState currSocketState = result.AsyncState as SocketState;
R> try
R> {
R> EndPoint remoteEndPoint = new IPEndPoint(0, 0);
R> int bytesRead =
R> currSocketState.UdpSocket.EndReceiveFrom(result,
R> ref remoteEndPoint);
R> // ... do some stuff
R> }
R> finally
R> {
R> currSocketState.BeginReceive(new
R> AsyncCallback(this.OnReceive));
R> }
R> }
That is correct, implementation. And your app won't miss any data,
because
after receiving
data you issue another BeginReceiveFrom call - this is correct.
If you don't do that, indeed, UDP stack can drop packets.
But this "dropping" occurs under certain circumstances.
I'll try to explain why this can happen. When Udp stack receives a
packet,
it stores it in the queue. It does so with every pending packet. This
queue has
a limit, connected with the size of occupied memory. When this limit
is
reached
all subsequent UDP packets will be dropped by the stack.
When you call ReceiveFrom or BeginReceiveFrom you're taking
that packet out of queue ( data is copied to the buffer supplied by
the
caller ).
R> If I call BeginReceiveFrom() only once, my server will have a
"blind
R>
R> spot" in the time between entering the callback and starting
R>
R> BeginReceiveFrom() again!
R> The msdn article I referred to in my last
R> mail puts it the following
R> way:
R>
R> UDP will (...) drop packets
R> upon receipt if, even momentarily,
R> BeginReceiveFrom is not active on
R> the socket.
R> (This) problem is the one most easily solved. In my sample
R> receiver
R> code, there's a short span of time between acceptance of a
R> packet and
R> calling another BeginReceiveFrom. Even if this call were the
R> first
R> one
R> in MessageReceivedCallback, there's a still short period
R> when the app
R> isn't listening. One improvement would be activating
R> several
R> instances
R> of BeginReceiveFrom, each with a separate buffer to
R> hold received
R> packets.
R>
See the above.
Doing several BeginReceiveFrom calls can lead to unpredictable
behavior
To minimize the time between EndReceiveFrom and subsequent
BeginReceivFrom
you can enqueue data for processing. The algorithm can be the
following:
- received data
- enqueud them in the queue
- issued another receive data
While another thread is taking data from the queue for further
processing.
R> And that's exactly what I did. So,
R> you are right, when I limit the
R> number of simultaneous listeners to 1,
R> my application works. But I
R> will
R> lose a lot of incoming packets,
R> especially if there is a high
R> workload.
Did you test packet loss percentage?
Add a counter to the message and check that counter
every time you receive a message.
R> So there is still the
R> question: Is something wrong in my
R> implementation
R> of calling
R> BeginReceiveFrom() multiple times, or is there an error in
R> the .Net
R> Framework? I suppose it's my error, but what is strange is
R> that
R> it
R> works in .Net 1.1 without any problems ...
IMO it is not stable solution.
While experimenting with
your code, I've added two calls to Console.WriteLine().
One after Send and one after Receive, and problem dissapeared.
So, this means that the fact that the code worked under .NET 1.1
doesn't mean that the code is correct.
R> Greetings,
R> Daniel
R> Vadym
R> Stetsyak wrote:
Hello, Redge!
There is subtle bug in your
R> implementation.
You have multiple listeners and only one socket and
R> one endpoint.
There is no
purpose creating mutiple listeners. If you
R> had multiple endpoints then
it is necessary
to create one listener
R> per endpoint.
I changed your code and bug vanished.
public
R> Receiver(int numListeners)
{
this.inPort = 10000;
R> this.numListeners = 1;//= numListeners;
....
}
R> A while ago,
R> I created a client-server UDP application in .Net 1.1.
A
R> client
R> application connects via UDP to a server; once acknowledged,
R> the
R> R> server sends the requested data to the client (a video stream).
R>
R> The application has to work over the internet (where the data may
R>
R> cross
R> NATs, routers and firewalls on its way). That's why the
R> client
R> initiates
R> the data transfer, and the server examines the
R> EndPoint-object to
R> find
R> out where the request originates, and
R> where to send the data.
R> As recommended by an msdn-article
R>
R> (http://msdn.microsoft.com/msdnmag/issues/06/02/UDP/),
R> I use one
R> socket to listen on a port, and run the command
R> "BeginReceiveFrom()"
R> multiple times, each time with a separate
R> buffer.
R> In .Net 1.1,
R> this works just fine: in the callback, I call
R> "EndReceiveFrom()" to
R> get the data and the endpoint, and use this
R> endpoint to send the
R> requested data to.
R> However, in .Net 2.0, I get a wrong endpoint from
R> time to time!
R> I tested this with a simple application:
R> -
R> Several sockets that send data, each sending from (= binded to) a
R>
R> different port
R> - The data that is sent contains the port it was
R> sended from
R> - One socket listening for incoming udp-packets,
R> constructed as
R> described above (several calls to
R> "BeginReceiveFrom()" with
separate
R> buffers).
R> The test
R> application can run on a single computer or on several
R> computers
R> within the same network. In each case, the application
R> compares the
R> port saved in the EndPoint-object with the port
R> specified
R> in
R> the received data.
R> - In .Net 1.1, this works just fine: the port of
R> the
EndPoint-object
R> and
R> in the received buffer are always
R> equal, BUT:
R> - In .Net 2.0, I get lots of wrong ports in the
R> beginning --
R> normally,
R> the port is always false when
R> "EndReceiveFrom()" is called THE
FIRST
R> TIME for a buffer; the
R> subsequent calls seem to work fine.
(However,
R> in
R> my REAL
R> application, wrong port-information is not only found in
the
R>
R> beginning, but also later on during runtime).
R> It is obvious that
R> this error can lead to real problems in my
R> application, since I send
R> data to the wrong clients. It is of
course
R> possible that I just
R> made a mistake in my implementation -- a
mistake
R> that happens to
R> have no effect in .Net 1.1, while .Net 2.0 is more
R> delicate in that
R> respect.
R> However, I did not find anything drastically different from
R> the
msdn
R> samples. I made a class "SocketState", and one instance
R> of this
class
R> is
R> used for ever "BeginReceiveFrom()":
R>
R> class SocketState
R> {
R> private Socket udpSocket;
R>
R> private byte[] buffer;
R> private int bufferSize;
R> private
R> EndPoint remoteEndPoint;
R> public SocketState(Socket socket, int
R> bufferSize)
R> {
R> this.udpSocket = socket;
R>
R> this.bufferSize = bufferSize;
R> this.buffer = new
R> byte[bufferSize];
R> this.remoteEndPoint = (EndPoint)
R>
R> (new IPEndPoint(IPAddress.Any, 0));
R> }
R> public void
R> BeginReceive(AsyncCallback callback)
R> {
R> this.buffer
R> = new byte[bufferSize];
R> this.remoteEndPoint = (EndPoint)
R> R> (new IPEndPoint(IPAddress.Any, 0));
R>
R> this.udpSocket.BeginReceiveFrom(this.buffer, 0,
R>
R> this.buffer.Length,
R> SocketFlags.None, ref
R> this.remoteEndPoint,
R> callback, this);
R> }
R>
R> // properties omitted
R> }
R> So I create one listening socket and a
R> number of SocketStates,
which
R> all
R> get a reference to this
R> socket. Then I call "BeginReceive()" on
every
R> SocketState
R> instance; in the specified callback, I compare buffer
and
R>
R> endpoint.
R> In case somebody wants to take a closer look, the source
R> code can
be
R> downloaded here:
R>
R> http://www.incognitek.com/user/stuff/MassiveSocketTest.zip
R> The zip
R> file contains projects for Visual Studio 2003 and 2005.
R> Any help
R> would be appreciated!
R> Greetings,
R> Daniel Sperl, Funworld AG
R>
R> Mail: daniel[DOT]sperl[AT]funworld[DOT]com
--
Regards, Vadym
R> Stetsyak
www: http://vadmyst.blogspot.com
d be appreciated!
R> Greetings,
R> Daniel Sperl, Funworld AG
R> Mail: daniel[DOT]sperl[AT]funworld[DOT]com
 

Peter Duniho

Al Norman said:
Pardon me for butting in, but you should be able to increase the size of
the queue of UDP messages. We do this (in C++) with the following:

int receivesize = RECEIVEQUEUESIZE;
if (0 != setsockopt(mysocket,SOL_SOCKET,SO_RCVBUF,
(char *)&receivesize,sizeof(receivesize)))


where RECEIVEQUEUESIZE is a #define of 1024*1024

You won't miss any UDP packets with a buffer that large!

This is categorically incorrect. There is no way with UDP to guarantee that
you won't lose datagrams.

No one should ever write a program using UDP unless they implement in that
program logic for handling the vagaries of UDP. This includes handling the
following cases:

* The datagram never arrives
* The datagram arrives, but out of order (either before one sent
earlier, or after one sent later)
* The datagram arrives more than once

These are all things that can happen with UDP, and using UDP means one must
anticipate and handle those conditions.
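As a sketch of what that handling can look like on the receiving side --
assuming each datagram carries an application-level sequence number, which is
not something the posted test code does -- one might track the numbers like
this:

using System.Collections.Generic;

class DatagramSequenceTracker
{
    private uint nextExpected = 0;
    // A real implementation would bound this set; it is kept simple here.
    private readonly Dictionary<uint, bool> seen = new Dictionary<uint, bool>();

    // Returns false if the datagram is a duplicate and should be ignored.
    public bool Accept(uint sequence)
    {
        if (this.seen.ContainsKey(sequence))
            return false;                      // delivered more than once
        this.seen[sequence] = true;

        if (sequence < this.nextExpected)
        {
            // arrived out of order (after something that was sent later)
        }
        else
        {
            if (sequence > this.nextExpected)
            {
                // gap: datagrams were lost or are still in flight
            }
            this.nextExpected = sequence + 1;
        }
        return true;
    }
}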

As far as the original question goes, I can't speak for how .NET handles
things, but it is not normally true in Winsock that one's program must be
blocking on a call to receive (in Winsock, recv() or similar) in order to
catch all the UDP datagrams that do make it to your end of the network. It
is true that if you don't process the datagrams fast enough, the buffer can
fill up and some datagrams will be lost. But there's no requirement to be
ready and waiting to receive a datagram the moment it comes in.

I don't know why the MSDN article would say otherwise...whether Winsock will
drop a UDP datagram may be related to the question of whether there's an
outstanding receive on the socket, but it's not a 1-for-1 correlation. A
datagram could get dropped even with an outstanding receive, and even the
absence of an outstanding receive doesn't guarantee the datagram will be
dropped.

Note that all of the above pertains to the default usage. If one sets the
internal Winsock buffers to zero (a common enough technique when using
IOCP), things change and the lack of available buffer space due to the lack
of an outstanding queued IOCP receive could cause a UDP datagram to be
dropped. So perhaps there's something special about .NET's use of Winsock
that causes the statement to be true.

As far as whether it's actually a *problem* to enqueue multiple receives
(that is, to have more than one BeginReceiveFrom outstanding at a time),
assuming other things that have been posted here are correct with respect to
using the Begin/End constructs in .NET's socket implementation, there
shouldn't be a problem doing so. According to those other posts (and note
that I have not personally verified them), the Begin/End constructs use
IOCP, and IOCP will correctly queue the receives.

That said, someone relying on this should definitely confirm for themselves
that IOCP is what's being used. Anything else could possibly involve having
multiple plain-vanilla receives outstanding at the same time, which is NOT
supported by Winsock and could result in indeterminate results, including
something like what the original poster is reporting.

If I had to guess, I'd say that it's much more likely there's a bug in the
original poster's implementation. What that bug might be, I can't say.
Maybe not handling the EndReceiveFrom() method correctly, or using the wrong
data, or something. I don't know. I haven't looked at the full source code
itself...IMHO, if it's worth asking a question about on a newsgroup, it's
worth posting the code to the newsgroup. Providing the code in a linked
.zip download breaks the longevity benefit of using an archivable resource
like a newsgroup in the first place.

Pete
 

Redge

Hello Pete & all the others!

Thanks a lot for your information! It seems that I should be more
critical in analysing information I find on a subject, even if it is an
msdn article I'm referring to. I will do more research into whether there
is some truth in the article's recommendation to call BeginReceiveFrom()
multiple times, or if it was just an error on the author's side. In
addition, I will try to find out if .Net's socket implementation uses
IOCP internally.

I must admit that I have not written any advanced socket code in C++
(except some hello-world stuff), so I do not know many of the advanced
internals of WinSock, or their drawbacks and pitfalls. I just
thought that the .Net implementation would use appropriate default values
for those parts of the internal implementation it hides from me.

Concerning the comment that I did not post the source code directly, but
only a link to a zip file: I did not want to spam the newsgroup with all
of the code, and in addition I thought it was more convenient for
anybody if they could test my code by just opening a project and hitting
F5 ;) But of course, regarding newsgroups as an all-time archive for the
future, this approach is far from perfect. So I thank you for the
advice, Pete, and will now add the source code directly, in a separate
posting.

Again: thanks everybody! I will post any useful information I find in
this thread.

But even if it turns out that I should call BeginReceiveFrom() only once
per socket, I still haven't found the error in my code that is
responsible for the wrong endpoints -- unless, of course, this practice
is "forbidden" anyway and just happens to produce correct behaviour in
.Net 1.1, while failing in .Net 2.0.

Greetings,
Daniel
 

Redge

As requested, the complete source code of the application.

//--------------------------------------------------------------------------

class SocketState
{
    private Socket udpSocket;
    private byte[] buffer;
    private int bufferSize;
    private EndPoint remoteEndPoint;

    public SocketState(Socket socket, int bufferSize)
    {
        this.udpSocket = socket;
        this.bufferSize = bufferSize;
        this.buffer = new byte[bufferSize];
        this.remoteEndPoint = (EndPoint)(new IPEndPoint(IPAddress.Any, 0));
    }

    public void BeginReceive(AsyncCallback callback)
    {
        this.buffer = new byte[bufferSize];
        this.remoteEndPoint = (EndPoint)(new IPEndPoint(IPAddress.Any, 0));

        this.udpSocket.BeginReceiveFrom(this.buffer, 0, this.buffer.Length,
            SocketFlags.None, ref this.remoteEndPoint,
            callback, this);
    }

    // properties omitted
}

//------------------------------------------------------------------------

class Receiver
{
    private int inPort;
    private int numListeners;
    private readonly int BufferSize = 500;
    private ArrayList socketStates;

    public Receiver(int numListeners)
    {
        this.inPort = 10000;
        this.numListeners = numListeners;
        socketStates = new ArrayList();

        Socket socket = new Socket(
            AddressFamily.InterNetwork,
            SocketType.Dgram,
            ProtocolType.Udp);

        /*
        this setting avoids the error "connection reset by peer"
        info on the issue can be found here:
        http://aspalliance.com/groups/micro...3_UDP_Comms_and_Connection_Reset_Problem.aspx
        */

        const int SIO_UDP_CONNRESET = -1744830452;
        byte[] inValue = new byte[] { 0, 0, 0, 0 }; // == false
        byte[] outValue = new byte[] { 0, 0, 0, 0 }; // initialize to 0
        socket.IOControl(SIO_UDP_CONNRESET, inValue, outValue);

        EndPoint bindEndPoint = new IPEndPoint(
            IPAddress.Any,
            this.inPort);

        socket.Bind(bindEndPoint);

        for (int i = 0; i < numListeners; ++i)
        {
            SocketState newSocketState = new SocketState(socket,
                this.BufferSize);
            socketStates.Add(newSocketState);
        }
    }

    public void Start()
    {
        for (int i = 0; i < numListeners; ++i)
        {
            SocketState currSocketState = socketStates[i] as SocketState;
            currSocketState.BeginReceive(new AsyncCallback(OnReceive));
        }
    }

    public void OnReceive(IAsyncResult result)
    {
        SocketState currSocketState = result.AsyncState as SocketState;

        try
        {
            EndPoint remoteEndPoint = new IPEndPoint(0, 0);
            int bytesRead = currSocketState.UdpSocket.EndReceiveFrom(
                result, ref remoteEndPoint);

            int expectedPort = currSocketState.Buffer[0] + 20000;
            int realPort = (remoteEndPoint as IPEndPoint).Port;

            if (expectedPort != realPort)
            {
                Console.WriteLine("{0} - wrong endpoint",
                    System.DateTime.Now.ToLongTimeString());
            }
        }
        catch (SocketException e)
        {
            Console.WriteLine("Socket Error: {0} {1}",
                e.ErrorCode, e.Message);
        }
        catch (Exception ex)
        {
            Console.WriteLine("Unknown Error: {0}", ex.Message);
        }
        finally
        {
            currSocketState.BeginReceive(
                new AsyncCallback(this.OnReceive));
        }
    }
}

//--------------------------------------------------------------------------

class Sender
{
    private Socket[] sockets;
    private Thread[] threads;
    private EndPoint sendEndPoint;
    private int toPort;

    public Sender(int numSockets)
        : this(numSockets, "127.0.0.1")
    {}

    public Sender(int numSockets, string ipAddress)
    {
        this.toPort = 10000;
        this.sendEndPoint = new IPEndPoint(
            IPAddress.Parse(ipAddress), toPort);

        this.sockets = new Socket[numSockets];
        this.threads = new Thread[numSockets];
        int fromPortBase = 20000;

        for (int i = 0; i < numSockets; ++i)
        {
            sockets[i] = new Socket(AddressFamily.InterNetwork,
                SocketType.Dgram, ProtocolType.Udp);

            // this setting avoids the error "connection reset by peer"
            const int SIO_UDP_CONNRESET = -1744830452;
            byte[] inValue = new byte[] { 0, 0, 0, 0 };
            byte[] outValue = new byte[] { 0, 0, 0, 0 };
            sockets[i].IOControl(SIO_UDP_CONNRESET, inValue, outValue);

            int localPort = fromPortBase + i;
            EndPoint localEndPoint = new IPEndPoint(
                IPAddress.Any, localPort);
            sockets[i].Bind(localEndPoint);

            threads[i] = new Thread(new ThreadStart(this.Send));
        }
    }

    public void Start()
    {
        for (int i = 0; i < this.threads.Length; ++i)
        {
            threads[i].Start();
        }
        threads[0].Join();
    }

    public static Random rnd = new Random();

    public void Send()
    {
        int sleepTime = 5;
        while (true)
        {
            int socketID = rnd.Next(sockets.Length);
            byte[] data = new byte[500];
            data[0] = (byte)socketID;
            lock (sockets)
            {
                sockets[socketID].SendTo(data, sendEndPoint);
            }
            Thread.Sleep(sleepTime);
        }
    }
}

//--------------------------------------------------------------------------

class Program
{
    static void Main(string[] args)
    {
        if (args.Length > 0)
        {
            if (args[0].ToLower().StartsWith("r"))
            {
                Console.WriteLine("Starting as receiver ...");
                Receiver receiver = new Receiver(30);
                receiver.Start();
                System.Threading.Thread.Sleep(int.MaxValue);
                return;
            }
            else if (args[0].ToLower().StartsWith("s"))
            {
                Console.WriteLine("Starting as sender ...");
                Sender sender = new Sender(20, args[1]);
                sender.Start();
                System.Threading.Thread.Sleep(int.MaxValue);
                return;
            }
        }

        Console.WriteLine("Starting as sender and receiver ...");

        Receiver rec = new Receiver(20);
        Sender send = new Sender(20);

        rec.Start();
        send.Start();
    }
}
 

Peter Duniho

Thanks for posting the code.

Some comments:

1) One very big problem is that you are making the assumption that the port
used by the sending code is the one seen by the receiving code. This is
problematic, in that the port known by the sending code may or may not be
the same as the port seen by the receiving code. If the sender is behind a
proxy or NAT router or similar object, that object may well translate the
sender's port to something else before actually sending the data to the
receiver. In that case, the port encoded in your transmission isn't going
to match the port seen by the receiver.

Why it should work some times and not other times I don't know. That would
depend on the exact behavior of the router or proxy you're going through.
It's possible that your proxy or router attempts to use the same port
whenever it thinks it's possible, and so most of the time they match, only
disagreeing in certain cases where the proxy or router thinks there's a
conflict (correctly or not).

Why you should never see a problem under .NET 1.1, I also don't know. Are
you sure that you are comparing apples to apples? That is, when you test
under .NET 1.1, are you testing under the exact same configuration as when
testing under .NET 2.0?

I guess it's possible that, for now, this is a red herring. That is, the
problem you're running into right now isn't actually caused by the above.
But in any case, it's a mistake to assume that the sender knows the port
that the receiver will see. I find it ironic that your code would have this
error in it, given the statement in your original post:

"The application has to work over the internet (where the data may cross
NATs, routers
and firewalls on its way). That's why the client initiates the data
transfer, and the server
examines the EndPoint-object to find out where the request originates,
and where to send
the data"

That statement implies an understanding that the sender can't know the
actual address that the receiver sees, and yet the code you've implemented
assumes that it can.

2) An issue that I see as a deficiency in the .NET documentation is that you
pass an EndPoint both to BeginReceiveFrom and EndReceiveFrom. It's not at
all clear to me from the documentation how these two parameters are supposed
to interact, or even why the BeginReceiveFrom method even takes that
parameter. I can't say that I see anything wrong with your methodology, but
at the same time I don't see anything in your code that makes me feel
comfortable that the SocketState.remoteEndPoint is ever set correctly (the
code you posted never actually uses the value, so that's not necessarily a
problem here, but it could be in other code that borrows from this code).

3) As a minor stylistic point, I'll suggest that you not use the "as"
operator to cast objects that you know in advance should always be castable.
If you *do* use the "as" operator, you should always check for a null value
after the cast. If you don't check for null, you'll just get an exception
anyway, but it won't be the most direct, useful exception you could have
gotten (as compared to the invalid type exception you'd get casting
normally). I prefer to only use the "as" operator when getting a null
result is a normal possibility for which I'm prepared.
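To illustrate the difference with the SocketState type from the posted code (a
sketch, not part of the original posting):

class CastStyleExample
{
    void OnReceive(System.IAsyncResult result)
    {
        // Direct cast: throws an InvalidCastException immediately on a type
        // mismatch, which points straight at the real problem.
        SocketState state = (SocketState)result.AsyncState;

        // "as" cast: yields null on a mismatch; without an explicit null check
        // the failure only surfaces later as a NullReferenceException.
        SocketState maybeState = result.AsyncState as SocketState;
        if (maybeState == null)
        {
            return; // only worthwhile when null is a normal, expected outcome
        }
    }
}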

4) Finally, there is still the question of the validity of calling
BeginReceiveFrom multiple times on the same socket before handling an
EndReceiveFrom. If none of the above helps the issue you're seeing, I'd
suggest reducing your "numListeners" to one and seeing if that makes the
problem go away. If it does, that would strongly suggest to me that
multiple outstanding BeginReceiveFrom's on a UDP socket are not supported,
even though this would normally be okay using IOCP with regular Winsock. I
know so little about the underlying workings of .NET sockets, I can't really
offer any advice on why this would or would not be. But results count. :)

Hope that helps.

Pete
 

Redge

Hi Peter!

Thanks for your comments!
1) One very big problem is that you are making the assumption that the ...

You are of course right, and in my real application, I do not make the
assumption that the port which the client sees is the same as the port
that the server sees. But my real application showed the behaviour that
some clients received the WRONG data, which led me to the assumption
that something with the endpoints went wrong. That's why I made this
artificial test program that I posted here, which does compare the ports.
Of course this test program can only work within the same network -- and
the interesting thing is, the error occurs even when both client and
server run ON THE SAME MACHINE, using the loopback device. Under these
circumstances, the ports seen by client and server should indeed be the
same.
Why it should work some times and not other times I don't know. That would
depend on the exact behavior of the router or proxy you're going through.

As I said above, there is no router or proxy in between -- the loopback
device is all you need to reproduce the error.
Why you should never see a problem under .NET 1.1, I also don't know. Are
you sure that you are comparing apples to apples? That is, when you test
under .NET 1.1, are you testing under the exact same configuration as when
testing under .NET 2.0?

Yes, the program uses the same code and runs under the exact same
circumstances -- with the known result that .Net 1.1 works fine and .Net
2.0 produces the error. And, yes, I have installed both runtimes.
That statement implies an understanding that the sender can't know the
actual address that the receiver sees, and yet the code you've implemented
assumes that it can.

I think that should be clear by now: the program I posted is an
artificial one, written with the only purpose to test if .Net's sockets
get the correct endpoints under simple circumstances.
2) An issue that I see as a deficiency in the .NET documentation is that you
pass an EndPoint both to BeginReceiveFrom and EndReceiveFrom. It's not at
all clear to me from the documentation how these two parameters are supposed
to interact, or even why the BeginReceiveFrom method even takes that
parameter. I can't say that I see anything wrong with your methodology, but
at the same time I don't see anything in your code that makes me feel
comfortable that the SocketState.remoteEndPoint is ever set correctly (the
code you posted never actually uses the value, so that's not necessarily a
problem here, but it could be in other code that borrows from this code).

Yeah, I perfectly agree with you on this point. To be honest, I never
quite understood why it is necessary to pass an EndPoint as "ref" to
both BeginReceiveFrom() AND EndReceiveFrom() -- it's unclear to me
how those two interact, either.
3) As a minor stylistic point, I'll suggest that you not use the "as"
operator to cast objects that you know in advance should always be castable.
If you *do* use the "as" operator, you should always check for a null value
after the cast. If you don't check for null, you'll just get an exception
anyway, but it won't be the most direct, useful exception you could have
gotten (as compared to the invalid type exception you'd get casting
normally). I prefer to only use the "as" operator when getting a null
result is a normal possibility for which I'm prepared.

You are of course perfectly right. I omitted the error checking in the
code only to not distract too much from my real problem -- in my main
application, I do of course check for a null value after the cast.
4) Finally, there is still the question of the validity of calling
BeginReceiveFrom multiple times on the same socket before handling an
EndReceiveFrom. If none of the above helps the issue you're seeing, I'd
suggest reducing your "numListeners" to one and seeing if that makes the
problem go away. If it does, that would strongly suggest to me that
multiple outstanding BeginReceiveFrom's on a UDP socket are not supported,
even though this would normally be okay using IOCP with regular Winsock. I
know so little about the underlying workings of .NET sockets, I can't really
offer any advice on why this would or would not be. But results count. :)

Exactly, and the problem really does only occur when there is more than
one listener. As my server application uses the architecture proposed by
another posting anyway (adding the data to a queue which is then
processed in another thread, leaving the socket in a listening state for
as long as possible), I will change it to use only one listener. Using
the workloads I am able to reproduce artificially, it does not seem to
reduce the number of received packets -- I just don't know if it stays
this way when hundreds of clients are connected, but I suppose it won't
be a problem. As I already said, since this way of handling the data was
proposed by an msdn article, I thought it was a common way of doing so,
and that's why I was quite astonished that the problem with the wrong
endpoints had not attracted someone else's attention before mine.
Hope that helps.

Yeah, that helped a lot! Thanks again for your tips and comments!
Have a nice day,

Greetings,
Daniel
 

Peter Duniho

Redge said:
Thanks for your comments!

You're very welcome. Sorry I haven't been more help so far.
You are of course right, and in my real application, I do not make the
assumption that the port which the client sees is the same as the port
that the server sees. But my real application showed the behaviour that
some clients received the WRONG data, which led me to the assumption
that something with the endpoints went wrong.

Okay...thanks for explaining that.

Perhaps you can clarify on what output exactly is incorrect. I can't tell
for sure (though I can guess) from your original post or other comments, so
maybe you can answer the question:

When the ID sent by the test application does not match the port number
returned by EndReceiveFrom, which one is actually incorrect? That is, is
the data in the datagram corrupted, resulting in the wrong ID in the
datagram? Or is EndReceiveFrom returning the wrong port? And if
EndReceiveFrom is returning the wrong port, is it just the port that's
wrong, or is the IP address also incorrect?
[...]
Yeah, I perfectly agree with you in this point. To be honest, I never
quite understood why it is necessary to give an EndPoint as "ref" to
both BeginReceiveFrom() AND EndReceiveFrom() -- it's unclear to me
either how those two interact.

Do you ever look at the EndPoint that you passed to BeginReceiveFrom? If
not, you should. It would be interesting to know whether it ever gets set
to something other than the default value you gave it before calling
BeginReceiveFrom, and if so whether it looks anything like what
EndReceiveFrom returns, and/or matches the port derived from the received
datagram.
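
One way to check that, using the SocketState from the posted code (a sketch; it
assumes SocketState also exposes its stored remoteEndPoint through a
RemoteEndPoint property, which the posting omitted):

public void OnReceive(IAsyncResult result)
{
    SocketState currSocketState = (SocketState)result.AsyncState;

    EndPoint endArgument = new IPEndPoint(IPAddress.Any, 0);
    int bytesRead = currSocketState.UdpSocket.EndReceiveFrom(result,
        ref endArgument);

    // RemoteEndPoint is assumed to return the EndPoint that was passed
    // by ref to BeginReceiveFrom for this receive.
    Console.WriteLine("EndPoint given to BeginReceiveFrom: {0}",
        currSocketState.RemoteEndPoint);
    Console.WriteLine("EndPoint returned by EndReceiveFrom: {0}",
        endArgument);
    Console.WriteLine("Port encoded in the datagram:        {0}",
        currSocketState.Buffer[0] + 20000);

    currSocketState.BeginReceive(new AsyncCallback(this.OnReceive));
}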
You are of course perfectly right. I omitted the error checking in the
code only to not distract too much from my real problem -- in my main
application, I do of course check for a null value after the cast.

Well, my point was more that I personally prefer to not bother checking for
"null" unless that's a normal, expected result from my code. This is a
situation in which "null" should never be returned, so you may not be able
to really do much of anything useful if you do get "null" back.

If you get "null" back, it means something went really wrong and you'll
probably have to stop the program, or at least start again from scratch.

That sort of situation is, to me, a perfect example of one that would be
better dealt with using an exception. I used to avoid exception handling as
much as possible when writing C and C++ code (for a variety of reasons).
But there's no way to turn it off in C#, and it does have certain benefits.
One of them being that you don't have to write any code to handle these
sorts of cases. You just code for the expected case, and let an exception
occur if you've got something really bad going on.

That said, it was strictly a stylistic suggestion. There's not really a
good argument for using one over another...it's more a matter of preference
IMHO. And if you want to get *really* defensive with your coding (that is,
"code defensively"), if you think you can recover gracefully from an
incorrect type cast there (resulting in a null reference), I think it
actually makes more sense to stick with what you've got.
Exactly, and the problem really does only occur when there is more than
one listener. As my server application uses the architecture proposed by
another posting anyway (adding the data to a queue which is then
processed in another thread, leaving the socket in a listening state for
as long as possible), I will change it to use only one listener.

Sounds good. I agree with the other suggestion that it's better for your
i/o and processing to be decoupled. Even if it were possible to have
multiple receives posted to a given socket at once, this would be desirable.
And of course, given the issue you're running into, it's even more so. :)
Using
the workloads I am able to reproduce artificially, it does not seem to
reduce the number of received packets -- I just don't know if it stays
this way when hundreds of clients are connected; but I suppose it won't
be a problem.

IMHO, if the most you're going to be dealing with is hundreds of clients,
you're unlikely to ever see any real performance issues, assuming you're
running on modern x86 hardware.
As I already said, as this way of handling the data was
proposed by an msdn article, I thought it was a common way of doing so,
and that's why I was quite astonished that the problem with the wrong
endpoints had not attracted someone else's attention before mine.

Well, as you've found, this appears to be unique to .NET 2.0. Perhaps the
article you referenced was written before .NET 2.0 was released?

As I've said, I'm not aware of any reason that multiple receives should
cause a problem. With IOCP that is. As far as I know, that's not only
supported, it's considered good programming practice. But: I have no
guarantee that when you call BeginReceiveFrom using a UDP socket, that you
really wind up using IOCP for those receives. And if it's NOT using IOCP,
then having multiple receives outstanding IS a very big problem.

It all depends on what's going on "under the hood", and it may be that
something changed from 1.1 to 2.0 in what goes on "under the hood" that
makes this problematic for 2.0 when it wasn't for 1.1.

For what it's worth, I posted a question to some other folks who do use IOCP
on a regular basis, and who can probably tell me for sure whether multiple
receives for UDP sockets is valid on IOCP. I don't see any reason it
shouldn't be, but they will be able to confirm that. If so, then I'd say
it's a safe bet that .NET isn't using IOCP in your situation. Whether
that's because it's a UDP socket, or because you have something else going
on with your configuration, or because the documentation that claims the .NET
Sockets use IOCP for the async pattern is wrong, I can't say. Any of those
might be possible.

I realize it's all pretty much academic at this point, but if I come across
anything else useful I'll post it back to this thread.

Pete
 

Redge

Hi Pete!
When the ID sent by the test application does not match the port number
returned by EndReceiveFrom, which one is actually incorrect? That is, is
the data in the datagram corrupted, resulting in the wrong ID in the
datagram? Or is EndReceiveFrom returning the wrong port?

The incoming data was correct -- the port numbers were wrong. I tested
this a while ago, with more logging. I saw that the client sent some
data, and the exact same data arrived at the server, but with a
different port.
And if
EndReceiveFrom is returning the wrong port, is it just the port that's
wrong, or is the IP address also incorrect?

Hm, I just ran the test on 3 machines (one server, two clients), and the
IP addresses were always correct, while only the ports were wrong. But
in my real application, I think sometimes I got wrong IPs as well --
though I'm not absolutely sure about this.

Another funny thing is that in the test application, the wrong ports
only appear in the beginning (only the first time each listener is
used), while my real application produced wrong ports later on, as well.
I never figured out what difference produces this change of behaviour.
Do you ever look at the EndPoint that you passed to BeginReceiveFrom? If
not, you should. It would be interesting to know whether it ever gets set
to something other than the default value you gave it before calling
BeginReceiveFrom, and if so whether it looks anything like what
EndReceiveFrom returns, and/or matches the port derived from the received
datagram.

The EndPoint that is passed to BeginReceiveFrom() always stays unchanged
(--> IPAddress.Any, port 0)! Which makes me wonder even more why I have
to pass it as "ref" in the first place ...
IMHO, if the most you're going to be dealing with is hundreds of clients,
you're unlikely to ever see any real performance issues, assuming you're
running on modern x86 hardware.

OK, that's good news -- I have not much experience in this respect.
Well, as you've found, this appears to be unique to .NET 2.0. Perhaps the
article you referenced was written before .NET 2.0 was released?

The article makes no comment on the .Net version to use.
But it was written in February 2006, and even had some references to
"Windows Communication Foundation", so it seems to me that it was
already written with .Net 2.0 in mind.
It all depends on what's going on "under the hood", and it may be that
something changes from 1.1 to 2.0 in which what goes on "under the hood"
makes this problematic for 2.0 when it wasn't for 1.1.

Mhm, this is my best guess as well.
For what it's worth, I posted a question to some other folks who do use IOCP
on a regular basis, and who can probably tell me for sure whether multiple
receives for UDP sockets is valid on IOCP. I don't see any reason it
shouldn't be, but they will be able to confirm that. If so, then I'd say
it's a safe bet that .NET isn't using IOCP in your situation. Whether
that's because it's a UDP socket, or because you have something else going
on with your configuration, or because the documentation that claims the .NET
Sockets use IOCP for the async pattern is wrong, I can't say. Any of those
might be possible.

Thanks for forwarding my question!! This could definitely help.
I realize it's all pretty much academic at this point, but if I come across
anything else useful I'll post it back to this thread.

Well, it helped me go on, in any case - academic or not ;)

I won't be able to check this thread in the next 3 weeks, as I am going
on a holiday, but please post anything you find -- I will definitely
check the thread as soon as I'm back. And again: I can't thank you
enough for the efforts you've already made.

Greetings,
Daniel
 

Peter Duniho

Redge said:
[...]
For what it's worth, I posted a question to some other folks who do use IOCP
on a regular basis, and who can probably tell me for sure whether multiple
receives for UDP sockets is valid on IOCP. I don't see any reason it
shouldn't be, but they will be able to confirm that. If so, then I'd say
it's a safe bet that .NET isn't using IOCP in your situation. Whether
that's because it's a UDP socket, or because you have something else going
on with your configuration, or because the documentation that claims the .NET
Sockets use IOCP for the async pattern is wrong, I can't say. Any of those
might be possible.

Thanks for forwarding my question!! This could definitely help.

Okay, one of the IOCP mavens confirmed that it is perfectly safe to have
multiple outstanding receives posted for UDP sockets under IOCP.

So, that doesn't really answer much but it does lead me to suspect that .NET
isn't using IOCP in this particular case. If it were, IOCP itself would
obscure any bugs in .NET related to multiple outstanding receives.

It seems to me that since the .NET API spec itself doesn't prohibit multiple
BeginReceiveFrom calls outstanding at once, that this would still be a bug
in .NET. If it's not using IOCP, then it should itself queue any receives
posted as you've done, rather than calling the underlying socket API
immediately. But since I don't really know what's going on behind the
scenes of .NET, I can't do anything but speculate.

But speculating, I'd say you're running into some real problem with .NET. I
don't see anything in the .NET documentation that would suggest you're not
allowed to do what you're doing, but it does seem as though doing that
breaks things. Obviously, that's not good. :)

Hope you had a good holiday...you've pretty much reached the limits of my
knowledge on the topic, so I don't have any more to offer. At this point, I
think it would require some lower-level debugging to determine what .NET is
doing, and to figure out whether it's a .NET bug or a problem with your
code. I suspect the former at this point, but I can't rule out the latter.
Good luck!

Pete
 
