Hello everyone!
A while ago, I created a client-server UDP application in .Net 1.1. A
client application connects via UDP to a server; once acknowledged, the
server sends the requested data to the client (a video stream).
The application has to work over the internet (where the data may cross
NATs, routers and firewalls on its way). That's why the client initiates
the data transfer, and the server examines the EndPoint object to find
out where the request originates and where to send the data.
As recommended by an MSDN article
(http://msdn.microsoft.com/msdnmag/issues/06/02/UDP/),
I use one socket to listen on a port and call
"BeginReceiveFrom()" multiple times, each time with a separate buffer.
In .Net 1.1, this works just fine: in the callback, I call
"EndReceiveFrom()" to get the data and the endpoint, and use this
endpoint as the destination for the requested data.
However, in .Net 2.0, I get a wrong endpoint from time to time!
I tested this with a simple application:
- Several sockets that send data, each sending from (= bound to) a
different port
- The data that is sent contains the port it was sent from
- One socket listening for incoming UDP packets, constructed as
described above (several calls to "BeginReceiveFrom()" with separate
buffers).
The test application can run on a single computer or on several
computers within the same network. In each case, the application
compares the port saved in the EndPoint object with the port specified
in the received data.
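For illustration, the sender side of such a test could look roughly like
this (a minimal sketch; the port numbers and class name are made up and
not taken from the actual test project):

```csharp
// Hypothetical sketch of one test sender: binds a UDP socket to a
// specific local port and sends a datagram whose payload contains
// that same port number, so the receiver can cross-check it against
// the port reported in the remote EndPoint.
using System;
using System.Net;
using System.Net.Sockets;

class TestSender
{
    static void Main()
    {
        int localPort = 11000;    // illustrative port this sender binds to
        int receiverPort = 12000; // illustrative port the listener uses

        Socket sender = new Socket(AddressFamily.InterNetwork,
            SocketType.Dgram, ProtocolType.Udp);
        sender.Bind(new IPEndPoint(IPAddress.Any, localPort));

        // Payload: the sender's own port number (4 bytes).
        byte[] payload = BitConverter.GetBytes(localPort);
        EndPoint receiver = new IPEndPoint(IPAddress.Loopback, receiverPort);
        sender.SendTo(payload, receiver);
        sender.Close();
    }
}
```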
- In .Net 1.1, this works just fine: the port in the EndPoint object and
the port in the received buffer are always equal, BUT:
- In .Net 2.0, I get lots of wrong ports in the beginning -- normally,
the port is always wrong when "EndReceiveFrom()" is called THE FIRST
TIME for a buffer; the subsequent calls seem to work fine. (However, in
my REAL application, wrong port information is not only found in the
beginning, but also later on during runtime.)
It is obvious that this error can lead to real problems in my
application, since I send data to the wrong clients. It is of course
possible that I just made a mistake in my implementation -- a mistake
that happens to have no effect in .Net 1.1, while .Net 2.0 is more
sensitive in that respect.
However, I did not find anything drastically different from the MSDN
samples. I made a class "SocketState", and one instance of this class is
used for every "BeginReceiveFrom()":
class SocketState
{
    private Socket udpSocket;
    private byte[] buffer;
    private int bufferSize;
    private EndPoint remoteEndPoint;

    public SocketState(Socket socket, int bufferSize)
    {
        this.udpSocket = socket;
        this.bufferSize = bufferSize;
        this.buffer = new byte[bufferSize];
        this.remoteEndPoint = (EndPoint)(new IPEndPoint(IPAddress.Any, 0));
    }

    public void BeginReceive(AsyncCallback callback)
    {
        this.buffer = new byte[bufferSize];
        this.remoteEndPoint = (EndPoint)(new IPEndPoint(IPAddress.Any, 0));
        this.udpSocket.BeginReceiveFrom(this.buffer, 0, this.buffer.Length,
            SocketFlags.None, ref this.remoteEndPoint, callback, this);
    }

    // properties omitted
}
So I create one listening socket and a number of SocketState instances,
which all get a reference to this socket. Then I call "BeginReceive()"
on every SocketState instance; in the specified callback, I compare the
buffer contents with the endpoint.
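The callback comparing buffer and endpoint could be sketched roughly as
follows. This assumes the SocketState class above, with the omitted
properties exposing the socket and buffer (the property names here are
my guesses, not the ones from the actual project):

```csharp
// Hypothetical sketch of the receive callback: finish the pending
// BeginReceiveFrom, read the port the sender wrote into the payload,
// and compare it with the port EndReceiveFrom reports in the EndPoint.
using System;
using System.Net;
using System.Net.Sockets;

class Receiver
{
    static void OnReceive(IAsyncResult ar)
    {
        // The SocketState instance was passed as the state object.
        SocketState state = (SocketState)ar.AsyncState;

        EndPoint remote = new IPEndPoint(IPAddress.Any, 0);
        int received = state.UdpSocket.EndReceiveFrom(ar, ref remote);

        // The sender wrote its own port into the first 4 payload bytes.
        int portInPayload = BitConverter.ToInt32(state.Buffer, 0);
        int portInEndPoint = ((IPEndPoint)remote).Port;

        if (portInPayload != portInEndPoint)
            Console.WriteLine("Mismatch: payload says {0}, EndPoint says {1}",
                portInPayload, portInEndPoint);

        // Re-arm this state object for the next datagram.
        state.BeginReceive(OnReceive);
    }
}
```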
In case somebody wants to take a closer look, the source code can be
downloaded here:
http://www.incognitek.com/user/stuff/MassiveSocketTest.zip
The zip file contains projects for Visual Studio 2003 and 2005.
Any help would be appreciated!
Greetings,
Daniel Sperl, Funworld AG
Mail: daniel[DOT]sperl[AT]funworld[DOT]com