UdpClient

Julie

I am using a UdpClient object to receive UDP messages over a local
Ethernet network, all from the same client. I am noticing now that I'm
not receiving all of the messages. I suppose this makes sense, since I
know UDP messages are not guaranteed to be received.

On the other hand, I guess I foolishly came to believe that all
messages were being received because during previous debugging, I
noticed that I could be at a breakpoint while multiple messages were
being received; then up to a minute later finally step to
UdpClient.Receive(), and the first of the messages would still be
waiting for me there.

Which behavior is expected?
 

Peter Duniho

Julie said:
I am using a UdpClient object to receive UDP messages over a local
Ethernet network, all from the same client. I am noticing now that I'm
not receiving all of the messages. I suppose this makes sense, since I
know UDP messages are not guaranteed to be received.

On the other hand, I guess I foolishly came to believe that all
messages were being received because during previous debugging, I
noticed that I could be at a breakpoint while multiple messages were
being received; then up to a minute later finally step to
UdpClient.Receive(), and the first of the messages would still be
waiting for me there.

Which behavior is expected?

All of the above. You don't mention whether it's _just_ the first
message that is still present after the delay, but note that there's no
rule that says datagrams have to be discarded FIFO. But whether you
find that some datagrams were discarded while you sat at the breakpoint,
or that all of them were preserved, the behavior falls into the
"expected" category.

UDP provides for message-oriented transmission of data on a "capacity
available" basis, with very little in the way of inherent error checking
and organization. In particular, you are not guaranteed to receive a
datagram:

...at all
...only once
...in the same order relative to others as it was sent

If there is little other network traffic, or even for some reason the
network components have been configured to buffer a very large amount of
data, you could indeed find datagrams still in the buffer after a long
delay.

On the other hand, if there's a very large amount of network traffic
and/or the network buffers are relatively small, you could find
datagrams getting lost even with your code running full speed.
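
One knob on the receiving side is the socket's receive buffer, which is what fills up while your code is paused or busy. A minimal sketch, assuming a local receiver; the port and size below are example values, not recommendations:

```csharp
// Sketch: enlarging the OS receive buffer gives the application more
// headroom before the kernel starts discarding incoming datagrams.
using System;
using System.Net.Sockets;

class BufferExample
{
    static void Main()
    {
        var client = new UdpClient(9000);              // example port
        // The default is OS-dependent (often around 64 KB);
        // request 1 MB instead. The OS may cap what it grants.
        client.Client.ReceiveBufferSize = 1024 * 1024;
        Console.WriteLine(client.Client.ReceiveBufferSize);
    }
}
```

A bigger buffer only delays the inevitable under sustained overload; it buys slack for bursts, nothing more.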

Any behavior that still falls within the limited guarantee provided by
UDP is "expected". Just because UDP _can_ fail to deliver datagrams,
that doesn't mean it's unexpected for it to actually deliver a datagram.

Even so, every time you get a datagram, you should feel lucky. :) Make
sure your code can tolerate the irregularities in UDP and you'll be
fine. If you don't want to bother writing the code to tolerate those
irregularities, use TCP instead. Then all you have to worry about is
that TCP is stream-oriented and won't keep your message boundaries
intact. (In other words, whether you use UDP or TCP, there's
_something_ that's going to complicate your life :) ).
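
For the TCP route, the usual workaround for the lost message boundaries is to frame each message yourself, e.g. with a length prefix. A hypothetical sketch (the helper names are mine, not part of the framework):

```csharp
// Sketch: length-prefixed framing over a TCP stream, since TCP itself
// delivers a byte stream and does not preserve message boundaries.
using System;
using System.IO;

static class Framing
{
    // Writes a 4-byte big-endian length header followed by the payload.
    public static void WriteMessage(Stream stream, byte[] payload)
    {
        byte[] header = BitConverter.GetBytes(payload.Length);
        if (BitConverter.IsLittleEndian) Array.Reverse(header);
        stream.Write(header, 0, 4);
        stream.Write(payload, 0, payload.Length);
    }

    // Reads exactly one framed message. The inner loop matters:
    // a single Read() may return fewer bytes than requested.
    public static byte[] ReadMessage(Stream stream)
    {
        byte[] header = ReadExactly(stream, 4);
        if (BitConverter.IsLittleEndian) Array.Reverse(header);
        int length = BitConverter.ToInt32(header, 0);
        return ReadExactly(stream, length);
    }

    static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
        return buffer;
    }
}
```

With that in place, TCP gives you back the "one send, one message" model UDP had, plus the delivery guarantees UDP lacks.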

Note that if you are able to use a network protocol other than TCP/IP
(for example, ProtocolFamily.NetBios) there are in fact reliable,
message-based protocols available (for example, SocketType.Seqpacket).
These days though, TCP/IP is pretty much the one protocol that's
ubiquitous. A lot of LANs aren't even configured for anything else any
more (not even for NetBIOS over TCP/IP).

Pete
 

Julie

Awesome response! Thanks so much. I think we went with UDP because of
the speed of going connection-less.

Anyway, I'm in the process of changing my code so that all the thread
that's receiving the messages does is put them into an array, and
another thread does all of the work to process them. Of course, if this
still causes a dropped packet somewhere down the line, I may have to
change to TCP/IP.
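
For what it's worth, one common shape for that receive/process split in .NET is a producer/consumer pair around a BlockingCollection; a sketch under assumed names (the port and Process method are illustrative):

```csharp
// Sketch: one thread drains the socket as fast as possible; a second
// thread does the slow processing, so the OS receive buffer stays empty.
using System;
using System.Collections.Concurrent;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class Receiver
{
    static void Main()
    {
        var queue = new BlockingCollection<byte[]>();
        var client = new UdpClient(9000);   // example port

        var worker = new Thread(() =>
        {
            foreach (byte[] datagram in queue.GetConsumingEnumerable())
            {
                Process(datagram);          // the slow work runs here
            }
        });
        worker.IsBackground = true;
        worker.Start();

        var remote = new IPEndPoint(IPAddress.Any, 0);
        while (true)
        {
            // Keep this loop tight: every millisecond spent here is a
            // millisecond the kernel buffer has to absorb on its own.
            byte[] datagram = client.Receive(ref remote);
            queue.Add(datagram);
        }
    }

    static void Process(byte[] datagram) { /* application logic */ }
}
```

This reduces drops caused by a slow application, but (as discussed below) it cannot eliminate loss that happens on the network itself.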
 

Peter Duniho

Julie said:
Awesome response! Thanks so much. I think we went with UDP because of
the speed of going connection-less.

TCP has extra overhead to set up the connection, true. But if you need
reliable communications and don't want to re-implement everything TCP
already includes, you might as well use TCP.

If you still want to use UDP, but need reliable communications, you
might look at RDP. It's a protocol layered on top of UDP, and I would
guess there's already at least one public C# implementation of RDP
floating around somewhere (it's not built into the network API...it's an
application-level protocol). If nothing else, the RDP protocol may give
you ideas as to how to implement reliability over UDP in terms of what
features to include.
Anyway, I'm in the process of changing my code so that all the thread
that's receiving the messages does is put them into an array, and
another thread does all of the work to process them. Of course, if this
still causes a dropped packet somewhere down the line, I may have to
change to TCP/IP.

UDP is already using TCP/IP. You probably mean "change to TCP". Yes, I
know...the nomenclature can be confusing. :)

As far as "if this still causes a dropped packet"...if you are using
UDP, you _will_ eventually have a datagram fail to be delivered.
There's nothing you can do in your own code to change that fact. So,
you either need to write your code so that an undelivered datagram isn't
a problem, or you need to use a protocol that is inherently reliable
(i.e. not UDP).

Don't forget about the other two delivery "issues" with UDP: datagrams
delivered in a different order than sent; and the same datagram arriving
more than once. Any reliable use of datagrams will have to take all
three issues into account.
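
A toy illustration of watching for all three issues with a per-datagram sequence number; the class and field layout are hypothetical, not RDP:

```csharp
// Sketch: the sender stamps each datagram with an incrementing sequence
// number; the receiver classifies arrivals as in-order, gapped (loss or
// late arrival), reordered, or duplicated.
using System;
using System.Collections.Generic;

class SequenceTracker
{
    uint expected = 0;
    readonly HashSet<uint> seen = new HashSet<uint>();  // unbounded here;
                                                        // a real one would prune

    // Returns a short description of how this datagram arrived.
    public string Track(uint sequence)
    {
        if (!seen.Add(sequence))
            return "duplicate";
        if (sequence < expected)
            return "reordered (late arrival)";
        string result = sequence == expected
            ? "in order"
            : $"gap: {sequence - expected} datagram(s) missing or late";
        expected = sequence + 1;
        return result;
    }
}
```

Detecting the problems is the easy half; deciding whether to buffer, retransmit, or drop is where a reliability layer earns its keep.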

Pete