Guest
I've got a client/server app that I use to send large amounts of data via
UDP to the client. We use it in various scenarios, one of which includes
rendering a media file on the client as it is transferred via the underlying
UDP transport. In this scenario it is very important to keep CPU usage as
low as possible.
Both the client and server were originally written in C++, but I've
re-written the client in C#, partly to simplify it, but also just to see if
C# is up to the task. I can come pretty close to keeping up in terms of
throughput (within tolerable limits), but I'm finding CPU usage is an issue
at times.
I'm using asynchronous I/O, and in the receive handler I issue another
BeginReceiveFrom immediately in order to have an I/O ready to receive as
quickly as possible. This works very well, but I find that my CPU usage will
suddenly (and apparently randomly) increase dramatically (i.e., from < 30% to
about 60% on average). I've added performance counters, and identified that
the majority of the increase in CPU time is spent in the BeginReceiveFrom
call. This is surprising to me for a couple of reasons. First, this is a
non-blocking call, so I would expect it to return quickly whether data is
received or not. I also don't see any corresponding increase/decrease in
either the number of packets received per second or the number of reads I
complete per second (which tracks very closely to UDP packets/second), so I
don't believe the number of BeginReceiveFrom calls I'm making is changing
commensurately.
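For reference, my receive loop follows the usual pattern of re-issuing the receive from inside the completion callback, roughly like the sketch below (the buffer size, port handling, and HandlePacket are simplified placeholders, not my actual code):

    using System;
    using System.Net;
    using System.Net.Sockets;

    class UdpReceiver
    {
        // Sketch only: buffer size, endpoint handling, and HandlePacket are
        // placeholders, not the real client code.
        private readonly Socket _socket =
            new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        private readonly byte[] _buffer = new byte[65536];
        private EndPoint _remote = new IPEndPoint(IPAddress.Any, 0);

        public void Start(int port)
        {
            _socket.Bind(new IPEndPoint(IPAddress.Any, port));
            PostReceive();
        }

        private void PostReceive()
        {
            _socket.BeginReceiveFrom(_buffer, 0, _buffer.Length, SocketFlags.None,
                                     ref _remote, OnReceive, null);
        }

        private void OnReceive(IAsyncResult ar)
        {
            int bytes = _socket.EndReceiveFrom(ar, ref _remote);
            // Re-issue the receive immediately so another I/O is always pending.
            PostReceive();
            HandlePacket(_buffer, bytes);
        }

        private void HandlePacket(byte[] data, int length)
        {
            // Hand the datagram off to the rendering pipeline (omitted).
        }
    }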
So my question is, is there anything in the Socket implementation that might
be causing this unexpected increase in CPU time? I've looked at
BeginReceiveFrom with .NET Reflector, and I'm wondering if perhaps the call
to ThreadPool.RegisterWaitForSingleObject might be stalling. I'm issuing on
the order of about 3500 I/Os per second, but I don't see my ThreadPool
availability dropping. Any ideas? Thanks.
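(For concreteness, by "availability" I mean the numbers reported by ThreadPool.GetAvailableThreads, sampled roughly as in the snippet below; this is illustrative only, not my exact instrumentation.)

    using System;
    using System.Threading;

    static class PoolCheck
    {
        // Illustrative only: dumps current thread-pool headroom so a shortage of
        // worker or completion-port threads would be visible.
        public static void Dump()
        {
            int workerAvail, iocpAvail, workerMax, iocpMax;
            ThreadPool.GetAvailableThreads(out workerAvail, out iocpAvail);
            ThreadPool.GetMaxThreads(out workerMax, out iocpMax);
            Console.WriteLine("worker threads: {0}/{1}, completion-port threads: {2}/{3}",
                              workerAvail, workerMax, iocpAvail, iocpMax);
        }
    }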