Guest
We're developing a client/server application where the server exports well-known
services using Remoting (with the default TCP channel and formatter), and the
clients (usually there is only one) attach to these remoted classes and
dequeue data from a thread-safe queue. Functionally it works great, but the
actual throughput we can get seems CPU limited. That is, sending large
amounts of data, such as serializable classes containing 5,000 - 40,000
element arrays of ushorts, seems to bring both the client and the server to
their knees. Our experiments using the TCP/IP classes directly are a bit more
promising and seem to show that raw TCP/IP has a lot more throughput. Is Remoting
really that inefficient in its data transmission, and if so, is there
anything I can do about it (or do I have to use TCP/IP directly)?
Thanks!