Asynchronous socket programming vs. remoting

Michael Lindsey

I need to write a server app to send images to client GUIs that are outside
of the server's domain.
The client will have the file system path to the image but cannot access
the file system.
I am trying to decide whether to use remoting or to write a server that
uses NetworkStreams.

I have read that NetworkStream/TCP programming should be faster than
remoting and is a better choice for what I am doing, but that it is difficult
to code.

I'd like to get:
1. a recommendation on remoting vs. socket programming for what I'm doing,
and
2. if socket programming is recommended, the URL of a full sample of an
asynchronous server/client file-sharing application.

Thank you,
Michael Lindsey
 
Hard question, Michael, in that there's a lot to consider. I'd agree
that sockets will be faster, but I don't know by how much - it depends.
If you have a lot of users, or expect that the user base could grow big time,
then there are trade-offs to weigh in both directions. Remoting is a
LOT cleaner and a lot easier to deploy, maintain, and update, and you can
piggyback off of IIS security quite easily. On the other hand, if you need
every bit of performance you can get, then it's a trade-off.

I'd need to know a lot more about the architecture before I could
responsibly make a recommendation. One nice thing about remoting, though, is
that you can easily move things around and split up the load - we remote TONS
of stuff, and performance hasn't been an issue in more than a few large-scale
enterprise apps.

Can you tell me a little more about the scenario?
 
OK, here is the scenario:
This will not be an internet application and it will not require security -
nothing fancy, anyhow.
The client application is a key-from-image data entry application: information
is read from an image and keyed in by the data entry operators, and the data
is sent back to a SQL Server database.

The client starts up and requests a batch of images to key.
The UNC paths to the images are pulled from the database.
Currently the application is used only by people who are joined to our
network, so they have access to the UNC paths and
can download the images directly from the file system.

It has been requested that we allow our sister companies (residing all
around the US, and some overseas) to help out with the data entry.
To this point they cannot use the application, because they cannot access our
file system.

I am planning on using remoting or a TCP server to get around this issue.
Either way, I will pull down 10 to 50 images on a background thread while
the user is performing data entry on the current batch.
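
Whichever transport wins, the prefetch side of the client could look roughly
like the sketch below - DownloadImage is just a placeholder for the remoting
or socket call, and the class name is made up:

using System;
using System.Collections;
using System.Threading;

// Rough sketch of prefetching the next batch on a thread-pool thread while
// the operator keys the current one.
class BatchPrefetcher
{
    private Hashtable cache = Hashtable.Synchronized(new Hashtable());

    public void Prefetch(string[] uncPaths)
    {
        ThreadPool.QueueUserWorkItem(new WaitCallback(DownloadBatch), uncPaths);
    }

    public byte[] GetImage(string uncPath)
    {
        // Returns null if the background download hasn't reached this image yet.
        return (byte[])cache[uncPath];
    }

    private void DownloadBatch(object state)
    {
        foreach (string path in (string[])state)
        {
            cache[path] = DownloadImage(path);   // keyed by UNC path for later lookup
        }
    }

    private byte[] DownloadImage(string uncPath)
    {
        // Placeholder - fetch the bytes from the image server here.
        return new byte[0];
    }
}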

The maximum is currently 20 to 40 users. The potential maximum is probably
around 100 users.

I would like to do load balancing by having each instance of the host
service, which runs on multiple servers (~6), whether it's a remoting or TCP
server, update a DB table with its current CPU usage every five seconds.
Before the client starts downloading images, it will first see which instance
of the service has the lowest CPU usage, by calling a stored proc, and use
that server to download the images.
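
The server-side heartbeat for that could be as simple as the sketch below -
the ServerLoad table, the connection string, and the column names are
placeholders for whatever we actually set up:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Diagnostics;
using System.Timers;

// Rough sketch of the CPU-usage heartbeat: every five seconds, sample total
// CPU and write it to a shared table keyed by machine name. The client's
// stored proc would just pick the row with the lowest CpuPercent.
class CpuHeartbeat
{
    // Placeholder connection string and table name.
    private const string ConnString =
        "Server=dbserver;Database=ImageApp;Integrated Security=SSPI";

    private static PerformanceCounter cpu =
        new PerformanceCounter("Processor", "% Processor Time", "_Total");

    static void Main()
    {
        System.Timers.Timer timer = new System.Timers.Timer(5000);
        timer.Elapsed += new ElapsedEventHandler(Report);
        timer.Start();
        Console.ReadLine();   // keep the host alive for the sketch
    }

    private static void Report(object sender, ElapsedEventArgs e)
    {
        float usage = cpu.NextValue();   // percent CPU since the last sample
        using (SqlConnection conn = new SqlConnection(ConnString))
        {
            SqlCommand cmd = new SqlCommand(
                "UPDATE ServerLoad SET CpuPercent = @cpu, LastUpdate = GETDATE() " +
                "WHERE ServerName = @name", conn);
            cmd.Parameters.Add("@cpu", SqlDbType.Float).Value = usage;
            cmd.Parameters.Add("@name", SqlDbType.NVarChar, 64).Value = Environment.MachineName;
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}

The client would then call the stored proc once per batch and connect to
whichever server comes back.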

The 6 servers are dual Pentium II (650 MHz) machines with 1 GB of RAM.
There are several applications that run on these servers.
The database is a MS SQL Server cluster running dual Pentium II processors.
The clients will be Windows XP machines with Pentium III or IV processors.

Thank You,
Michael Lindsey
 
Thanks Frank.
You are the first person who has said I should use sockets.
The argument against sockets is usually "all the coding and maintenance"
it takes to get them working well.
I decided to give it a try and got a really nice solution up and running
quickly, with surprisingly little code, using asynchronous network streams.
The performance is awesome! I am pulling images from FL to GA and loading
them faster than I can load them from my hard drive through the file system.
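
The server ends up being little more than a chain of Begin/End calls. Here's
a stripped-down sketch of that kind of setup - the framing (a newline-terminated
UNC path as the request) and the port number are illustrative assumptions, not
the exact code I'm running:

using System;
using System.IO;
using System.Net;
using System.Net.Sockets;

// Bare-bones asynchronous image server: accept a connection, read the
// requested UNC path, and stream the file's bytes back with BeginWrite.
class AsyncImageServer
{
    static void Main()
    {
        Socket listener = new Socket(AddressFamily.InterNetwork,
                                     SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Any, 9500));
        listener.Listen(100);
        listener.BeginAccept(new AsyncCallback(OnAccept), listener);
        Console.ReadLine();   // keep the process alive for the sketch
    }

    static void OnAccept(IAsyncResult ar)
    {
        Socket listener = (Socket)ar.AsyncState;
        Socket client = listener.EndAccept(ar);
        listener.BeginAccept(new AsyncCallback(OnAccept), listener);   // keep accepting

        // For brevity the request is read synchronously here; a production
        // server would use BeginRead for this part as well.
        NetworkStream stream = new NetworkStream(client, true);
        StreamReader reader = new StreamReader(stream);
        string uncPath = reader.ReadLine();

        byte[] image = File.ReadAllBytes(uncPath);
        stream.BeginWrite(image, 0, image.Length, new AsyncCallback(OnWritten), stream);
    }

    static void OnWritten(IAsyncResult ar)
    {
        NetworkStream stream = (NetworkStream)ar.AsyncState;
        stream.EndWrite(ar);
        stream.Close();
    }
}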

It scales nicely, too - I tried throwing 400 requests at the server in a span
of 30 seconds and it returned all the images without a hitch.
I was pulling them at 129 kb/s, and that limitation is likely due to using DSL
over a VPN.
The processor usage during my test never went above 8% on a Pentium II 500
with 1 GB of RAM.
The RAM usage peaked around 10 MB but dropped back down to 2-3 MB after each test.
The thread count got up to around 100 during the test and dropped back down to 6
afterwards. The extra threads were due to the asynchronous processing,
and they were all handled by the system. I didn't have to create a single extra
thread manually.

Thanks for the advice.

Michael Lindsey
 
