Unicast UDP Server


O.B.

I'm attempting to set up a UDP server in unicast mode, where
10.1.16.25 is the remote machine. Below is the error thrown when
binding the socket. What am I doing wrong?

System.Net.Sockets.SocketException
"The requested address is not valid in its context"


Socket socket = new Socket(AddressFamily.InterNetwork,
                           SocketType.Dgram,
                           ProtocolType.Udp);

socket.SetSocketOption(
    SocketOptionLevel.Socket,
    SocketOptionName.ReceiveBuffer,
    75000000); // 75 MB

socket.SetSocketOption(
    SocketOptionLevel.Socket,
    SocketOptionName.SendBuffer,
    1472);

EndPoint receiveEndPoint = new IPEndPoint(
    System.Net.IPAddress.Parse("10.1.16.25"),
    socketConfig.Port);

socket.Bind(receiveEndPoint);
 

O.B.

You're trying to bind your local socket to a remote address.

If you want to specify a remote address to be used as the default
destination for the UDP socket, use Connect(), not Bind().
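For UDP, Connect() does not create a connection at all; it just records
a default remote endpoint and filters inbound datagrams to that peer.
A short sketch of the two calls (the loopback address and port below
are placeholders, not your actual configuration):

```csharp
using System;
using System.Net;
using System.Net.Sockets;

class BindVsConnect
{
    static void Main()
    {
        var socket = new Socket(AddressFamily.InterNetwork,
                                SocketType.Dgram,
                                ProtocolType.Udp);

        // Bind() takes a *local* address: one that belongs to this
        // machine, or IPAddress.Any to listen on all interfaces.
        // Port 0 asks the OS to pick any free port.
        socket.Bind(new IPEndPoint(IPAddress.Any, 0));

        // Connect() on a UDP socket records the default remote peer;
        // after this, Send() needs no address, and inbound datagrams
        // from other peers are discarded by the stack.
        socket.Connect(new IPEndPoint(IPAddress.Loopback, 12345));

        Console.WriteLine(socket.LocalEndPoint);  // OS-assigned local endpoint
        Console.WriteLine(socket.RemoteEndPoint); // 127.0.0.1:12345
        socket.Close();
    }
}
```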

But that would require the remote machine to be bound to that port to
accept connections. Here is the dilemma: we have a user whose other
commercial software is only capable of running its sockets as client
UDP broadcast connections on a fixed port. On our end, the user wants
the ability to open two UDP servers on the same port, but with each
one filtering on the IP address that is sending data. I was hoping it
would be possible to do this in the connection setup rather than
having our asynchronous receive callback do the filtering.
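Absent a connection-level filter, the per-sender filtering could be
done in the callback itself. A minimal sketch under assumed names (the
port, the allowed address, and the class name are invented for
illustration):

```csharp
using System;
using System.Net;
using System.Net.Sockets;

class FilteredUdpReceiver
{
    const int Port = 5000;                      // placeholder port
    static readonly IPAddress AllowedSender =
        IPAddress.Parse("10.1.16.25");          // the one peer we accept

    static readonly byte[] buffer = new byte[1472];
    static Socket socket;

    static void Main()
    {
        socket = new Socket(AddressFamily.InterNetwork,
                            SocketType.Dgram, ProtocolType.Udp);

        // Two sockets can share the same local UDP port if both set
        // ReuseAddress (exact behavior varies by OS).
        socket.SetSocketOption(SocketOptionLevel.Socket,
                               SocketOptionName.ReuseAddress, true);
        socket.Bind(new IPEndPoint(IPAddress.Any, Port));

        BeginReceive();
        Console.ReadLine(); // keep the demo process alive
    }

    static void BeginReceive()
    {
        EndPoint sender = new IPEndPoint(IPAddress.Any, 0);
        socket.BeginReceiveFrom(buffer, 0, buffer.Length, SocketFlags.None,
                                ref sender, OnReceive, null);
    }

    static void OnReceive(IAsyncResult ar)
    {
        EndPoint sender = new IPEndPoint(IPAddress.Any, 0);
        int read = socket.EndReceiveFrom(ar, ref sender);

        // Drop datagrams from anyone other than the allowed peer.
        if (((IPEndPoint)sender).Address.Equals(AllowedSender))
        {
            // ...process the first 'read' bytes of 'buffer' here...
        }

        BeginReceive(); // re-arm for the next datagram
    }
}
```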

With that said, what is the point of binding to a specific address if
it isn't allowed?
By the way...IMHO, you shouldn't be messing with the socket buffer sizes
unless you have already gotten everything else working, you know exactly
what you're doing, _and_ you have run into some problem that requires you
to change the default buffer sizes.

We started off with defaults and lost too many packets. 75 MB ended
up being a good number. Most of our machines are running with 4 GB or
more of memory, so it isn't an issue.
 

O.B.

Wow, this is great. All this time I have been confusing bind and
connect. Thank you for the detailed explanation.

For the record, I still have to bind to a _local_ IP address for UDP
sockets in order for the receive data callback to be invoked. From what
I gathered from your post, invoking "connect" with a remote address in a
UDP socket will ensure that the receive callback is only invoked with
data received from the specified remote address, correct?

Continued below.
Okay. If you're running all of this on a LAN, over a gigabit network,
in an extremely high-volume situation, that _might_ be reasonable.
Otherwise, that's an awful lot of data to pile up without your servers
being able to process it. If one or more of those descriptions don't
apply to your situation, you could (and should IMHO) probably fix the
issue in a more appropriate way (i.e. just fixing the code so that it's
more responsive).

The code has been rewritten using unmanaged code to get a significant
speed up over the original C++ implementation. Without the 75 MB
receive buffer, we _occasionally_ find the buffer overfilling.

In running a profiler, it appears the code is processing the data as
fast as the OS is invoking the asynchronous callback. As a test, I
wrote a receive callback that only sucked data off the socket and put it
into an internal buffer for processing. We were still encountering an
overflow ... very odd. Maybe it is the actual NIC we're using? Or is
it a limitation of Windows XP and Windows 2003 Server? This is a tough
one to figure out.
 

O.B.

Peter said:
[...]
The code has been rewritten using unmanaged code to get a significant
speed up over the original C++ implementation. Without the 75 MB
receive buffer, we _occasionally_ find the buffer overfilling.

I don't understand the above comment. If you're using unmanaged code,
why are you posting in the C# newsgroup? C# is dependent on the .NET
runtime, and so is inherently tied to managed code. If the original
implementation was C++, is the above statement meant to imply that you
were using managed C++?

My apologies for the miscommunication. The socket code and most
everything else is managed code. The part that is not managed is the
code that converts the byte arrays into classes with explicit
StructLayout structures (handling DIS EntityState PDUs).
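For reference, that byte-array-to-struct conversion can also be
sketched in plain managed code with Marshal and an explicit-layout
struct. The field layout below is invented for illustration, not the
real EntityState PDU, and the byte swapping that DIS's big-endian
fields would need is omitted:

```csharp
using System;
using System.Runtime.InteropServices;

// A made-up PDU header; the real DIS EntityState layout is far larger.
[StructLayout(LayoutKind.Explicit, Pack = 1)]
struct PduHeader
{
    [FieldOffset(0)] public byte ProtocolVersion;
    [FieldOffset(1)] public byte PduType;
    [FieldOffset(2)] public ushort Length; // NOTE: no big-endian swap here
}

static class PduConverter
{
    // Reinterpret the leading bytes of a datagram as a PduHeader.
    public static PduHeader FromBytes(byte[] data)
    {
        GCHandle handle = GCHandle.Alloc(data, GCHandleType.Pinned);
        try
        {
            return (PduHeader)Marshal.PtrToStructure(
                handle.AddrOfPinnedObject(), typeof(PduHeader));
        }
        finally
        {
            handle.Free();
        }
    }
}

class Demo
{
    static void Main()
    {
        var header = PduConverter.FromBytes(new byte[] { 6, 1, 0x10, 0x00 });
        Console.WriteLine(header.PduType); // prints 1
    }
}
```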
How do you know you are encountering an overflow? Are you sure that you
aren't losing datagrams due to some other reason?

Each time the receive callback is invoked, the code checks to see how
full the buffer is. When it is close to 100%, we start noticing data
not being received.
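One way that check might look, assuming "fullness" means comparing
Socket.Available (bytes the OS has queued but we haven't read yet)
against the configured ReceiveBufferSize:

```csharp
using System;
using System.Net;
using System.Net.Sockets;

static class BufferMonitor
{
    // How full the kernel receive buffer is, as a fraction 0..1.
    public static double Fullness(Socket socket)
    {
        // Available = bytes received by the OS but not yet read by us.
        return (double)socket.Available / socket.ReceiveBufferSize;
    }
}

class Demo
{
    static void Main()
    {
        var socket = new Socket(AddressFamily.InterNetwork,
                                SocketType.Dgram, ProtocolType.Udp);
        socket.Bind(new IPEndPoint(IPAddress.Loopback, 0));
        Console.WriteLine(BufferMonitor.Fullness(socket)); // 0 when nothing is queued
        socket.Close();
    }
}
```

In the receive callback, a check like `Fullness(socket) > 0.95` would
flag the near-100% condition described above.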
The "U" in "UDP" might as well stand for "unreliable". UDP is
inherently unreliable, and you _will_ lose datagrams eventually, no
matter what you do.

Yes, it does happen. However, when more than 25% of the packets are
getting lost, we start looking at the network for issues.

Thanks again for your help. I think we're good to go for now.
 

O.B.

How do you check to see how full the buffer is?


25% is high, yes. Still, that doesn't mean that it's simply a buffer size
issue.


Somehow I suspect not. But if you're satisfied with the solution, I guess
that's your prerogative. Good luck.

Pete

Well, calling Socket.Connect() on a UDP socket causes a Socket.Bind()
to throw a SocketException. So it appears it is not possible to Bind to
a local address in UDP *and* use Connect() to specify a remote address
at the same time. Oh well ... that's life.
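For what it's worth, the order of the two calls might be the issue:
Bind() the local endpoint first, then Connect() the remote one. If
Connect() runs first, it implicitly binds the socket to an ephemeral
port, so a later explicit Bind() throws. A minimal sketch of the order
that works (the peer address and port are placeholders):

```csharp
using System;
using System.Net;
using System.Net.Sockets;

class BindThenConnect
{
    static void Main()
    {
        var socket = new Socket(AddressFamily.InterNetwork,
                                SocketType.Dgram, ProtocolType.Udp);

        // 1. Bind first: claim the local address and port.
        socket.Bind(new IPEndPoint(IPAddress.Any, 0));

        // 2. Then Connect: set the default (and only accepted) remote
        //    peer. Reversing the order makes Bind() throw, because
        //    Connect() has already bound the socket implicitly.
        socket.Connect(new IPEndPoint(IPAddress.Loopback, 12345));

        Console.WriteLine("bound to " + socket.LocalEndPoint +
                          ", peer " + socket.RemoteEndPoint);
        socket.Close();
    }
}
```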
 
