udp broadcast throughput difference: udpclient vs socket

  • Thread starter Stephan Steiner

Hi

The project I'm currently working on involves sending large UDP broadcasts.
As the .NET framework already provides an easy facility for sending and
receiving UDP packets I thought it was a good idea to use UdpClient rather
than sockets directly. A few weeks back I ended up rewriting the receiver
part to use sockets directly because I had to manipulate some low level
socket properties, and those manipulations would fail on the underlying
socket of a UdpClient. One of my subsequent problems was that the sending
part did not perform as well as I had hoped. I have two NICs in my PC, and
when making broadcasts over the secondary one, things would come almost to a
standstill: sending happened at one packet every several seconds (packet
size 750 bytes), and only if I was lucky and disabled the primary NIC
would sending performance be up to par.
However, when making large broadcasts it just took too long, so I started
measuring how long it takes to send a packet. Using both Ethereal and
HiPerfTimer
(http://www.codeproject.com/useritems/highperformancetimercshar.asp) I
found that UdpClient.Send took roughly 5 ms per 750-byte packet, which
works out to a throughput of 150'000 bytes/second - way below what a
100 Mbit card can do.

Subsequently, I rewrote the sender to use sockets directly, which had a
considerable impact on performance. Sending one packet now takes 0.64 ms, and
the throughput increased to 1'160'360 bytes/second - a considerable
improvement. It is, however, still well below what the card can do: even
assuming a NIC can only use 20% of its nominal bandwidth in a real-world
scenario, my 100 Mbit card should still manage 2.5 MB/s, not 1.16 MB/s, so
I'm still off by a factor of two.
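To double-check the arithmetic behind those figures (the class and method names below are my own, purely for illustration): throughput is simply packet size divided by the per-send latency.

```csharp
using System;

class ThroughputCheck
{
    // Throughput in bytes/second, given the packet size in bytes
    // and the measured time for one Send call in milliseconds.
    public static double BytesPerSecond(int packetBytes, double sendTimeMs)
    {
        return packetBytes / (sendTimeMs / 1000.0);
    }

    static void Main()
    {
        // UdpClient: 750 bytes every ~5 ms -> 150'000 bytes/s
        Console.WriteLine(BytesPerSecond(750, 5.0));   // prints 150000

        // Raw socket: 750 bytes every ~0.64 ms -> 1'171'875 bytes/s,
        // in line with the 1'160'360 bytes/s measured with HiPerfTimer
        Console.WriteLine(BytesPerSecond(750, 0.64));  // prints 1171875
    }
}
```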
So, I'm concluding that:
a) UdpClient is almost 8 times slower than using a socket directly;
b) .NET sockets are unable to saturate a good 100 Mbit NIC (the crappy
secondary card I have eventually starts dropping packets if the broadcast is
too large, so it does get saturated, in a way).

Here's the code I've been using for my tests:

UdpClient (the class containing this code derives from UdpClient)

// `this` is the UdpClient here, since the class derives from UdpClient
Socket s = this.Client;
s.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.Broadcast, 1);
Byte[] packet = new Byte[packetSize];
int nBytesSent = 0;
for (short i = 0; i < nbPackets; i++)
{
    // send with the same instance whose socket got the broadcast option
    nBytesSent = this.Send(packet, packet.Length, address, remotePort);
}

Sockets only:

Socket sock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram,
    ProtocolType.Udp);
sock.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.Broadcast, 1);
IPEndPoint localhost = new IPEndPoint(IPAddress.Any, localPort);
sock.Bind(localhost);
IPEndPoint remotehost = new IPEndPoint(IPAddress.Parse(address), remotePort);
EndPoint remoteEP = (EndPoint)remotehost;
byte[] packet = new Byte[packetSize];
int nBytesSent = 0;
for (short i = 0; i < nbPackets; i++)
{
    nBytesSent = sock.SendTo(packet, remoteEP);
}

Comments are welcome.

Regards
Stephan
 
