Asynchronous socket operations and threadpool


Yifan Li

Dear All,

I have a general question regarding the best practice for using asynchronous
socket operations and threadpool.

It often happens that I need to perform lengthy operations after
asynchronous receive/send/accept. These operations themselves are often
composed of multiple atomic operations which can be completed asynchronously.
For example, upon accepting a connection I might want to lookup the remote
host using dns, and then send/receive some data. These things must be
completed in order (e.g. have to wait for dns to finish before sending).

My question is this:

I understand that the callbacks are queued and executed on a threadpool
thread, which is limited in quantity. I believe that this will become a
bottleneck of my application if the subsequent actions are done in a
synchronous fashion. Should I simply increase the number of thread pool
threads (which is easy) or should I make everything asynchronous? What will
be the complications if I simply increase the number of threadpool threads?

Many thanks in advance!

Li
 

Chris Mullins

Yifan Li said:
I have a general question regarding the best practice for using
asynchronous socket operations and threadpool.

Async sockets don't really interact with the .Net threadpool, so your
question isn't quite right. Async sockets really leverage the I/O Completion
Port Thread Pool, which is a very different beast.
It often happens that I need to perform lengthy operations after
asynchronous receive/send/accept. These operations themselves are often
composed of multiple atomic operations which can be completed
asynchronously.
For example, upon accepting a connection I might want to lookup the remote
host using dns, and then send/receive some data. These things must be
completed in order (e.g. have to wait for dns to finish before sending).

Alright. I've done very similar things in the past when building Coversant's
SoapBox server, which is a highly scalable XMPP server written all in .Net.
We're on the same page so far.
Lengthy operations generally include: ADO.NET Operations, DNS Operations,
Encryption, Policy Applications, and so on down the line.
I understand that the callbacks are queued and executed on a threadpool
thread, which is limited in quantity.

This isn't true, as I mentioned above. The threads your callbacks are on are
IOCP threads, not .NET threadpool threads. There are 1000 IOCP threads
available for use by default, not the 25 per processor that the normal
thread pool has.
I believe that this will become a bottleneck of my application if the
subsequent actions are done in a synchronous fashion. Should I simply
increase the number of thread pool threads (which is easy) or should I
make everything asynchronous?

It will, in all likelihood, be your limitation no matter what you do. On the
positive side you have 1000 threads to play with, not 25, so the limitation
is pretty high. On the downside, when you do finally hit the limit it's a
difficult thing to get past. On the very downside, stepping through 1000
threads in WinDbg using Son of Strike against a crashdump really sucks.

I can say it's easier and less bug-prone to get data from a call to an async
socket, then process all that data synchronously. Otherwise you're dealing
with quite a bit more complexity to get all the thread events to sync up the
right way, and the failure cases get... crazy.

Also, if you have, say, 600 IOCP threads processing at once, there's really no
way to do other async operations - you can't use the .Net thread pool, as it
doesn't have enough threads. You certainly don't want to spin up hundreds of
your own threads, so pragmatically it's normally best to do things
synchronously within the IOCP callback.
What will be the complications if I simply increase the number of
threadpool threads?

Long before you run into IOCP threadpool issues, you're going to run into
heap fragmentation issues. Each time you put a socket into an async mode
(BeginRead is the canonical culprit here), your receive buffer gets
pinned in the heap for the handoff to Win32 land. This pinning will create
lots of holes in the heap, and cause you to run out of memory long before
you would otherwise expect. The 2.0 GC algorithms have improved this, but
it's still likely to be the problem you run into first.

This is discussed in detail at:
https://blogs.msdn.com/yunjin/archive/2004/01/27/63642.aspx

The CLR 2.0 improvements are discussed here:
http://blogs.msdn.com/maoni/archive/2005/10/03/476750.aspx
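On the mitigation side, a common approach is to carve every receive buffer out of one large, long-lived array, so the pinned memory stays contiguous instead of punching holes all over the heap. A minimal sketch of that bookkeeping (Python stands in for the .NET code here; `BufferPool` and all the names are mine, not from the links above):

```python
class BufferPool:
    """Hands out fixed-size slices of one big preallocated buffer.

    In .NET the point is that one long-lived array gets pinned in one
    place, instead of many short-lived buffers pinning scattered holes
    in the heap; this only models the bookkeeping, not the pinning.
    """

    def __init__(self, buffer_size=4096, count=1024):
        self.buffer_size = buffer_size
        self.big_buffer = bytearray(buffer_size * count)  # one allocation
        self.free_offsets = [i * buffer_size for i in range(count)]

    def acquire(self):
        # Caller gets (offset, writable view) into the shared buffer.
        offset = self.free_offsets.pop()
        view = memoryview(self.big_buffer)[offset:offset + self.buffer_size]
        return offset, view

    def release(self, offset):
        # Return the slice for reuse instead of freeing it.
        self.free_offsets.append(offset)
```

Each socket would acquire a slice before posting its read and release it when the data has been consumed, so the number of live buffers is fixed up front.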
 

Yifan Li

Hi Chris,

Thank you very much for your prompt reply.

As far as the IOCP issue, it seems from the rotor source that the callback
from async socket operation DOES happen on a threadpool thread (done using
RegisterWaitForSingleObject, see _overlappedasyncresult.cs). I would really
have much less to worry about if it was running on the IOCP thread but...
Actually I just realized this yesterday when I finally decided to take a
better look at the rotor to see what the hell is going on there.

Again, please let me know if you know for sure that the callback happens on
an IOCP thread; then I'll just do everything synchronously. After all, it
seems weird that the threadpool doesn't really help with the most common
situation - where you have a lot of threads spending most of their time
waiting...

Cheers!

Li
 

Chris Mullins

Hi Li,

The Async Callbacks absolutely happen on IOCP threads - don't just take my
word for it though, verify it yourself:
1 - call ThreadPool.GetAvailableThreads
2 - get a few dozen sockets blocked in an operation [ Sleep(30000) just
after BeginRead ]
3 - call ThreadPool.GetAvailableThreads

Now, IIRC the .Net framework does use old-fashioned overlapped I/O on the
older platforms such as Win98 and WinMe. On all the NT based platforms
though, it's absolutely IOCP.

Looking at the Rotor Source isn't going to be of too much value for this -
the IOCP stuff is unique to Windows, whereas Rotor is intended to run on a
much broader range of platforms. I would suggest breaking out Reflector to
look at the actual .Net code, rather than looking at Rotor.

--
Chris Mullins
Coversant, Inc.


Yifan Li said:
Hi Chris,

Thank you very much for your prompt reply.

As far as the IOCP issue, it seems from the rotor source that the callback
from async socket operation DOES happen on a threadpool thread (done using
RegisterWaitForSingleObject, see _overlappedasyncresult.cs). I would
really have much less to worry about if it was running on the IOCP thread
but... Actually I just realized this yesterday when I finally decided to
take a better look at the rotor to see what the hell is going on there.

Again, please let me know if you know for sure that the callback happens
on an IOCP thread; then I'll just do everything synchronously. After all,
it seems weird that the threadpool doesn't really help with the most
common situation - where you have a lot of threads spending most of their
time waiting...

Cheers!

Li
 

Yifan Li

Chris,

Yes, you are right. Callbacks do happen on IOCP threads. I am using
.NET 2.0 so the fragmentation problem might hit me slightly later; hopefully
I'll never have to deal with it. I'm really grateful for your quick help, it
makes my life much easier. Relevant information is very scarce on the
internet.

BTW, have you ever had the need to deal with timeouts? Threading.Timer
seems to use the system timer queue pool which has 500 threads, am I
right on that?

Like you said, developing asynchronous socket applications is
really a great pain in the xxx!

Cheers!

Li

Chris Mullins said:
Hi Li,

The Async Callbacks absolutely happen on IOCP threads - don't just take my
word for it though, verify it yourself:
1 - call ThreadPool.GetAvailableThreads
2 - get a few dozen sockets blocked in an operation [ Sleep(30000) just
after BeginRead ]
3 - call ThreadPool.GetAvailableThreads

Now, IIRC the .Net framework does use old-fashioned overlapped I/O on the
older platforms such as Win98 and WinMe. On all the NT based platforms
though, it's absolutely IOCP.

Looking at the Rotor Source isn't going to be of too much value for this -
the IOCP stuff is unique to Windows, whereas Rotor is intended to run on a
much broader range of platforms. I would suggest breaking out Reflector to
look at the actual .Net code, rather than looking at Rotor.
 

Chris Mullins

Yifan Li said:
BTW, have you ever had the need to deal with timeouts? Threading.Timer
seem to use the system timer queue pool which has 500 threads, am I
right on that?

Dealing with timeouts is a pain, and needs to be done. You won't be able to
use the timer stuff, as you'll max out the thread pool, so you need to get
trickier.

Our solution was very custom to our IM server, and wouldn't really be
applicable to anyone else...
 

William Stacey [MVP]

| Also, if you have, say, 600 IOCP threads processing at once, there's
| really no way to do other async operations - you can't use the .Net
| thread pool, as it doesn't have enough threads. You certainly don't want
| to spin up hundreds of your own threads, so pragmatically it's normally
| best to do things synchronously within the IOCP callback.

But if you're returning after beginning the new async operation, then you're
releasing the thread and another IOCP thread (or the same one) will handle
the new callback. So you keep going. Am I wrong? Thanks for the links.
 

William Stacey [MVP]

What about one cleanup thread that sleeps for N timeout ms, then checks
all current client state objects for a timeout and removes them, then
sleeps again?
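A minimal sketch of that single-sweeper idea (Python stands in for the .NET code; `ClientTable` and every name here are invented for illustration):

```python
import threading
import time

class ClientTable:
    """Client state objects guarded by one lock and swept by one thread."""

    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.lock = threading.Lock()
        self.last_seen = {}  # client id -> last activity timestamp

    def touch(self, client_id):
        # Called on every read/write for the client.
        with self.lock:
            self.last_seen[client_id] = time.monotonic()

    def sweep_once(self):
        # One pass of the cleanup thread: drop every client idle too long.
        now = time.monotonic()
        with self.lock:
            dead = [c for c, t in self.last_seen.items()
                    if now - t > self.timeout_s]
            for c in dead:
                del self.last_seen[c]  # a real server would also close the socket
        return dead

def cleanup_loop(table, interval_s, stop_event):
    # The single sweeper thread: sleep N ms, sweep, repeat, until told to stop.
    while not stop_event.wait(interval_s):
        table.sweep_once()
```

Using an Event's timed wait instead of a raw sleep lets the sweeper shut down promptly when the server stops.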
 

Chris Mullins

William Stacey said:
| Also, if you have, say, 600 IOCP threads processing at once, there's
| really no way to do other async operations - you can't use the
| .Net thread pool, as it doesn't have enough threads. You certainly
| don't want to spin up hundreds of
| your own threads, so pragmatically it's normally best to do things
| synchronously within the IOCP callback.

But if you're returning after beginning the new async operation, then you're
releasing the thread and another IOCP thread (or the same one) will handle
the new callback. So you keep going. Am I wrong? Thanks for the links.

Before I get into too much detail, there is one thing I want to mention -
the use cases I'm talking about below all are geared to optimizing the
performance of the entire system, not the performance of any single user
connected to the system. Most of the "kick off an async operation" arguments
do this so that a user's request into the system is performed that much
faster (which makes sense: if you can do things in parallel, then the user
always appreciates that). You don't actually save any work on the host cpus
by doing things async, you just gain a bit of parallelism on a particular
user's request.

Optimizing this for a single user though (by using async calls) actually
degrades the overall performance of the system. The system still has to wait
for the async tasks to complete, and doing so sucks up more threads, more
memory, and more context switches. It almost always requires allocating
another Event, Waiting on it, and (potentially) having your thread put
briefly to sleep, all of which are fairly expensive operations. This means
that (in a high load case) user 1 had his operation completed slightly
faster, but user 13192 didn't even get to connect to the system. It's also
less predictable how long an operation will take with all the async calls in
there, as when you're under high load, context switching gets unpredictable.

The scenario I keep seeing is:

0 - You get the Socket.BeginRead callback, and get your data. You're now on
an IOCP thread.
1 - Perform an async operation (say, lookup MX or SRV records in the DNS).
Pass in a delegate for the callback.
2 - While that operation is under way, your IOCP thread keeps going doing
whatever it can do.
3 - Eventually the IOCP thread hits a WaitHandle and has to sync up with the
async operation you kicked off.

... But

4 - Although the async operation you kicked off seemed like it was async, it
hasn't run yet because 600 other IOCP threads are also kicking off the same
operation, and the .Net threadpool is well past the point of starvation.

5 - So now your IOCP thread is stuck hanging around for a very long time
(much longer than it needed to).

6 - If you goofed just a little bit, your IOCP thread will be deadlocked,
and eventually your whole app will stop responding.

At the end of the day, with a very high load of relatively light transactions
(which seems to be the common scenario), the async callbacks happening on
threadpool threads just kills everything. With 1000 IOCP threads and 25
threadpool threads (or 50, or 100), the operations just can't happen fast
enough. Even if you get things just right, threadpool starvation is still
too likely a candidate, as so many things in the .Net framework make use of
it.

At the end of the day, I've found it easier to just do everything
synchronously once you hit your IOCP callback. This makes for simpler code,
easier debugging, far fewer context switches, and (hopefully you won't need
to) far, far easier crashdump analysis.

Just to complicate things, here are some of the other architectures that
I've tried:
1 - As soon as I get a valid data chunk off a socket, I stick it into a
queue for later processing, then put the socket back into BeginRead mode.
Use a pool of worker threads [usually a custom threadpool, as too many other
things steal threads from the .Net Threadpool] to pull data off the Queue and
process things. This approach seemed like the best candidate for a while, but
thread context switching absolutely destroys the performance. There are all
sorts of issues that also arise, such as how many worker threads to use, how
to manage thread affinity on multi-proc systems, how to restart threads that
get hung, etc. It turns out that on a large, high-availability production
system this is all very difficult.

This case has another interesting side effect - pulling the data out of the
socket (in pure Win32 land) and into managed code where it sits until I can
process it in the queue means the heap fragments that much faster. I've
found it best to leave data in the socket until I'm actually ready to
process it.
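For reference, the shape of that first architecture (socket layer enqueues, a worker pool drains) might be sketched like this; Python stands in for the custom .NET threadpool, and all the names are invented:

```python
import queue
import threading

def start_workers(work_queue, handler, count):
    """A fixed pool of worker threads draining a shared queue of data
    chunks: the socket callbacks enqueue, the workers dequeue and process."""
    def worker():
        while True:
            item = work_queue.get()
            if item is None:       # sentinel: shut this worker down
                break
            handler(item)
            work_queue.task_done()
    threads = [threading.Thread(target=worker, daemon=True)
               for _ in range(count)]
    for t in threads:
        t.start()
    return threads

def stop_workers(work_queue, threads):
    # One sentinel per worker, then wait for them all to exit.
    for _ in threads:
        work_queue.put(None)
    for t in threads:
        t.join()
```

The sketch shows the structure only; it says nothing about the context-switch cost that made the approach a loser in practice.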


2 - Choking the number of "running" IOCP threads. As soon as data came in
off the socket, I would block in a semaphore so that only a predetermined
number of IOCP threads were actually active at any one time. This seemed to
work well for a while, but ended up having so many weird side effects that
it was abandoned. It wasn't uncommon during load tests to see 15 threads
active, and 985 blocked in the semaphore, which caused strange things to
happen.
 

Chris Mullins

William Stacey said:
What about one cleanup thread that sleeps for N timeout ms, then checks
all current client state objects for a timeout and removes them, then
sleeps again?

That works for a number of scenarios, but not all. If nothing else, you
quickly get into "how many cleanup threads?". On a single proc system,
there's an easy answer: one. On a dual proc machine with hyperthreading
enabled, where you're running the Server CLR for the Garbage Collection
algorithm it brings to the table, the answer isn't so obvious.

You also need a monitoring thread for the cleanup thread, as it may
terminate for a variety of unexpected reasons. In .Net 1.1, there are a few
cases where threads just disappear (which have been cleaned up in .Net 2.0).

There are also some details - where do you store the client state objects?
You need a locking mechanism around the collection, and this normally ends
up being a Monitor - you can't use a reader-writer lock, as there are a
number of writers. You also then need to store a reference to the thread
that owns the state object (assuming your model works like this), so you can
inject a ThreadAbortException into it and free it up.
 

William Stacey [MVP]

| That works for a number of scenarios, but not all. If nothing else, you
| quickly get into "how many cleanup threads?".

Just one.

| On a single proc system,
| there's an easy answer: one. On a dual proc machine with hyperthreading
| enabled, where you're running the Server CLR for the Garbage Collection
| algorithm it brings to the table, the answer isn't so obvious.

Why would SMP make a difference in terms of a cleanup thread?

| You also need a monitoring thread for the cleanup thread, as it may
| terminate for a variety of unexpected reasons. In .Net 1.1, there are a
| few cases where threads just disappear (which have been cleaned up in
| .Net 2.0).

I have never heard about that issue in this ng. Returning from the method
or an exception are the only ways I know of, and you can catch exceptions. Is
there a doc on this issue? I would love to know this issue a bit better.

| There are also some details - where do you store the client state objects?
| You need a locking mechanism around the collection, and this normally
| ends up being a Monitor

You may want to store client state objects in a list anyway for various
instrumentations for your server. A management tool that at least shows all
user connections would seem a reasonable thing to want in a modern server.
You need to lock the list for all changes, but that is not a big deal.

| - you can't use a reader writer lock, as there are a
| number of writers.

You could still use a RW lock if you wanted, but a monitor would probably
perform better.

| You also then need to store a reference to the thread
| that owns the state object (assuming your model works like this), so you
| can inject a ThreadAbortException into it and free it up.

If the client socket is waiting on a read, then just closing the socket
should allow the callback to fire where you will get the exception and clean
up.
 

William Stacey [MVP]

| You don't actually save any work on the host cpus
| by doing things async, you just gain a bit of parallelism on a particular
| user's request.

Agreed. If work is N, then using async will be N+x, where x is the overhead
of thread switches. Naturally, elapsed time can be lower for any given
client by using parallelism.

| The scenario I keep seeing is:
|
| 0 - You get the Socket.BeginRead callback, and get your data. You're now
| on an IOCP thread.
| 1 - Perform an async operation (say, lookup MX or SRV records in the DNS).
| Pass in a delegate for the callback.
| 2 - While that operation is under way, your IOCP thread keeps going doing
| whatever it can do.
| 3 - Eventually the IOCP thread hits a WaitHandle and has to sync up with
| the async operation you kicked off.

I thought we were talking about a pure async server? This is more like a
thread-per-connection server because you're blocking on a Wait. In a full
async server, you would not block at all (or for very short times). In
Socket.EndRead, you would get data, update state, and Begin your next async
operation, and on down the line like walking a "virtual" task list
controlled by state. Not pretty to code (I totally agree), but if you need a
huge number of connections (i.e. more than ~1500) then maybe it is the only
way. So you should not have threads building up, because nothing is blocking.
So most of the time, your system will be waiting on BeginReceives and
EndReceives to fire. AFAIK, an IOCP thread is not used to wait on the hardware
interrupt. Once the hw fills the read (if ever), that is when the IOCP
thread is invoked to handle the callback.
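That state-driven chain can be sketched abstractly (Python for brevity; the `Connection` class, the step names, and the `schedule` dispatcher are all invented stand-ins, not real socket or DNS calls):

```python
class Connection:
    """One step per callback: record the result, advance the state, and
    begin the next asynchronous operation; no thread blocks in between."""

    STEPS = ["read", "dns", "write", "done"]

    def __init__(self, schedule):
        self.schedule = schedule  # stand-in for the Begin* async dispatch
        self.state = "read"
        self.log = []

    def on_complete(self, result):
        # The End* callback: walk the "virtual task list" one step.
        self.log.append((self.state, result))
        self.state = self.STEPS[self.STEPS.index(self.state) + 1]
        if self.state != "done":
            self.schedule(self)   # begin the next op; its callback re-enters
```

Each callback returns immediately after kicking off the next operation, which is exactly why no threads pile up in this style.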

| At the end of the day, I've found it easier to just do everything
| synchronously once you hit your IOCP callback. This makes for simpler
| code, easier debugging, far fewer context switches, and (hopefully you
| won't need to) far, far easier crashdump analysis.

I might flip that around. Use a thread per connection for the client
request, then use async for things that could be done in parallel (i.e. dns,
db, etc). Things like the Concurrency and Coordination Runtime (CCR) from MS
will help here for coordination.

| Just to complicate things, here are some of the other architectures that
| I've tried:
| 1 - As soon as I get a valid data chunk off a socket, I stick it into a
| queue for later processing, then put the socket back into BeginRead mode.
| Use a pool of worker threads [usually a custom threadpool, as too many
| other things steal threads from the .Net Threadpool] to pull data off the
| Queue and process things. This approach seemed like the best candidate
| for a while, but thread context switching absolutely destroys the
| performance. There are all sorts of issues that also arise, such as how
| many worker threads to use, how to manage thread affinity on multi-proc
| systems, how to restart threads that get hung, etc. It turns out that on
| a large, high-availability production system this is all very difficult.

Why would you need to worry about thread affinity? Anytime you make an
async call, you're effectively going to pay a context switch tax as well
(unless the call is completed sync). So a threadpool blocking on queue
items would not seem to be more overhead compared to async.
This link shows a variation on this type of server (SEDA):
http://www.eecs.harvard.edu/~mdw/papers/seda-sosp01.pdf Pretty interesting
read. I am doing this kind of server now, and it seems to be working well.
Good discussion. Cheers.
 

Chris Mullins

William Stacey said:
[Threads Disappearing in .Net 1.1]
| You also need a monitoring thread for the cleanup thread, as it may
| terminate for a variety of unexpected reasons. In .Net 1.1, there are a
| few cases where threads just disappear (which have been cleaned up in
| .Net 2.0).

I have never heard about that issue in this ng. Returning from the method
or an exception are the only ways I know of, and you can catch exceptions.
Is there a doc on this issue? I would love to know this issue a bit better.

The Bugslayer to the rescue:
(Specifically, "What happened to my thread!?")

http://msdn.microsoft.com/msdnmag/issues/05/07/Bugslayer/

In my case, the timing of what I was working on, and what this article
addressed, was just about perfect or else I think I would have just lost my
mind...
| You also then need to store a reference to the thread
| that owns the state object (assuming your model works like this), so you
| can inject a ThreadAbortException into it and free it up.

If the client socket is waiting on a read, then just closing the socket
should allow the callback to fire where you will get the exception and
clean up.

What I'm going to say here is strange, and very tough to back up with facts,
but here goes:

I agree with what you say, but the number of race conditions that have
turned up in closing TCP sockets has been absolutely shocking. The reality is
that in tests, closing the socket gets the callback to fire, I get the
exception, and everything proceeds along happily. However, while this works
just great in our lab, it seems to not work in production.

Many of the crashdumps that we've had to analyze have been related to
threads hanging during socket close events. It's been a crazy trial and
error process, despite knowing the network stack quite well, and working
with other people who know it really well. Sockets just don't close cleanly
100% of the time. With 10K+ simultaneous connections, and users constantly
coming in and out, even a 0.1% failure rate quickly impacts the server.

A few of the crashdumps that we had during Alpha and Beta Testing were,
"User reports Server is totally unresponsive. Clients unable to connect.".
Loading these minidumps into WinDbg and SoS, we would see 1000 deadlocked
IOCP threads. We would then start poking at the sockets, and see that
they're in a closed state. All of these IOCP threads were deadlocked deep in
code that wasn't ours - and couldn't (so far as we were able to tell) really
be affected by what we were doing. Now, to see this, we had to really,
really, really beat on the server - and even then it was exceptionally rare.
The cases where we did see it were ones with a wide variety of connected
users, Lan, Wan, Dsl, T1, Dial-Up, etc.
 

Chris Mullins

William Stacey said:
| The scenario I keep seeing is:
|
| 0 - You get the Socket.BeginRead callback, and get your data. You're now
| on an IOCP thread.
| 1 - Perform an async operation (say, lookup MX or SRV records in the
| DNS). Pass in a delegate for the callback.
| 2 - While that operation is under way, your IOCP thread keeps going
| doing whatever it can do.
| 3 - Eventually the IOCP thread hits a WaitHandle and has to sync up with
| the async operation you kicked off.

I thought we were talking about a pure async server? This is more like a
thread-per-connection server because you're blocking on a Wait.

The wait is on a WaitHandle for an operation performed during a callback.
For example, you get a big chunk of data for the user (via the async call)
and realize you need to perform a database lookup to satisfy the request.
You kick off the DB Request async, do as much more of the user request as
you can, then wait for the DB Request to complete. Once it's done, you send
the user back his data, and put the socket back into BeginRead.

I tend to do either this, or just do everything synchronously once I'm on
the IOCP thread in my callback.
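That kick-off-then-sync-up pattern maps directly onto a future: start the slow lookup, overlap whatever per-request work you can, and block only where the result is needed. A hedged Python sketch (`handle_request`, `db_lookup`, and the parsing step are all placeholders of mine):

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(raw_data, db_lookup, executor):
    """Start the slow lookup in parallel, keep doing per-request work,
    and block only at the point the result is actually needed
    (the WaitHandle moment in the post above)."""
    future = executor.submit(db_lookup, raw_data)  # kick off the async work
    parsed = raw_data.strip().lower()              # work that can overlap it
    record = future.result()                       # the sync-up point
    return parsed, record
```

The caveat from earlier in the thread still applies: `future.result()` parks the calling thread, so under heavy load the pattern quietly degrades into a blocking server.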

At various times, I've tried doing this a number of other ways and always
come back to this approach.
In a full async server, you would not block at all (or for very short
times). In Socket.EndRead, you would get data, update state, and Begin
your next async operation, and on down the line like walking a "virtual"
task list controlled by state.

An additional complication is that your socket is back in BeginRead mode, so
you may get another request in on that socket. Now you need to decide which
request to process first - is it key that requests are processed in order? If
so, then more logic is needed (a state machine, as you allude to).

One thing I'm not clear on, given what you describe - when you get data from
a socket, and realize you have something significant to do (and you want to
do it async), how do you do it? You can't just post it to the ThreadPool, as
there aren't enough threads in there. You don't want to manage a ton of
threads manually if you can help it...
So most of the time, your system will be waiting on BeginReceives and
EndReceives to fire.

Agreed - if you have 10K connected TCP sockets, almost all of them are going
to be stuck in "BeginReceive" at any particular moment in time. This is by
design and one of the biggest strengths of the IOCP infrastructure - it
manages which threads are awake, what processors they run on, what socket
data they have, and all of the other good stuff.
 

William Stacey [MVP]

| The Bugslayer to the rescue:
| (Specifically, "What happened to my thread!?")
|
| http://msdn.microsoft.com/msdnmag/issues/05/07/Bugslayer/

Thanks for the link. AFAICT, this applies to *uncaught* exceptions. If you
wrap your thread worker in a try/catch, you should not have this issue,
unless there is some Exception type that is not caught by
"catch(Exception ex) {}".

| A few of the crashdumps that we had during Alpha and Beta Testing were,
| "User reports Server is totally unresponsive. Clients unable to connect.".
| Loading these minidumps into WinDbg and SoS, we would see 1000 deadlocked
| IOCP threads. We would then start poking at the sockets, and see that
| they're in a closed state. All of these IOCP threads were deadlocked deep
| in code that wasn't ours - and couldn't (so far as we were able to tell)
| really be affected by what we were doing. Now, to see this, we had to
| really, really, really beat on the server - and even then it was
| exceptionally rare. The cases where we did see it were ones with a wide
| variety of connected users, Lan, Wan, Dsl, T1, Dial-Up, etc.

That is most interesting. Sounds like maybe a bug deep in Winsock with some
weird lock issue (I assume you ruled out app errors and things like multiple
threads posting overlapped reads to the same socket, etc). Makes you wonder
if async is worth it after adding up all the potential issues with it. An
alternative would be a blocking Read Stage (e.g. a SEDA stage). So this Read
Stage has its own custom user thread pool with a bounded Queue in front of
it. You post reads to the queue (the listener will post here, as well as
your server after a write). The TP then has maybe a min of 1 and max of 300
threads. You can still have thousands of connected sockets, but you will do
client reads sync with timeouts. Posted reads will just build up in the
queue and allow an adjustable "back-pressure" knob (which also serves as a
nice performance stat). The queue should stay fairly empty as long as many
clients are not slow sending data. If they are too slow, the read will
time out and you just kill it. Maybe not quite as fast as async reads, but
you can more easily verify correctness overall as your logic is sync. Your
Write stage would act the same way. And potentially a more robust server all
things considered.
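The bounded-queue read stage described above might be sketched like this (Python for brevity; `ReadStage` and all names are mine, and `read_fn` stands in for the blocking read-with-timeout):

```python
import queue
import threading

class ReadStage:
    """A SEDA-style read stage: a bounded queue in front of a small pool
    of blocking readers. A full queue pushes back on whoever posts reads
    (the listener, or the code that just finished a write)."""

    def __init__(self, max_pending, workers, read_fn):
        self.pending = queue.Queue(maxsize=max_pending)  # back-pressure knob
        self.read_fn = read_fn
        for _ in range(workers):
            threading.Thread(target=self._drain, daemon=True).start()

    def post_read(self, conn, timeout=None):
        # Blocks (back-pressure) once max_pending reads are already queued;
        # raises queue.Full if the timeout expires first.
        self.pending.put(conn, timeout=timeout)

    def depth(self):
        # Queue depth doubles as the load statistic mentioned above.
        return self.pending.qsize()

    def _drain(self):
        while True:
            conn = self.pending.get()
            self.read_fn(conn)  # the synchronous read with a timeout
            self.pending.task_done()
```

The bounded `maxsize` is the whole trick: instead of threads piling up invisibly, load shows up as queue depth you can measure and tune.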
 

William Stacey [MVP]

| The wait is on a WaitHandle for an operation performed during a callback.
| For example, you get a big chunk of data for the user (via the async call)
| and realize you need to perform a database lookup to satisfy the request.
| You kick off the DB Request async, do as much more of the user request as
| you can, then wait for the DB Request to complete. Once it's done, you
| send the user back his data, and put the socket back into BeginRead.

Thanks. I see what you're doing. However, the WaitHandle really turns you
back into a blocking server instead of an async one. If db requests will be
handled by every client, this could starve the IOCP TP pretty fast, as you
said in other posts. Why not do the db lookup async and in the callback do
the BeginWrite back to the client? All state driven, and a pain, but no
blocking threads on IO. That said, you pointed out some potential issues
that could be very hard to diag with using a lot of async. So maybe the
pipe-line server deserves another look.


| One thing I'm not clear on, given what you describe - when you get data
| from a socket, and realize you have something significant to do (and you
| want to do it async), how do you do it? You can't just post it to the
| ThreadPool, as there aren't enough threads in there. You don't want to
| manage a ton of threads manually if you can help it...

Just update your state and kick off another async op like the above, then do
the next "thing" in the callback (i.e. write to client, next stage, etc).

Cheers Chris.
--wjs
 

Michael D. Ober

Is there an easy-to-follow example of a callback-based async server for .NET
2.0? In VC++/MFC 6 and VB 6/WinSock, this was easy to do, but I can't
figure out how to do this in VB 2005 using the .NET socket classes. Since I
have never had server stalls in VC++/MFC or VB 6/WinSock, I have to assume
that the deadlock issue is actually a problem in the .NET framework.

Mike Ober.
 

Chris Mullins

Michael D. Ober said:
Is there an easy-to-follow example of a callback-based async server for
.NET 2.0? In VC++/MFC 6 and VB 6/WinSock, this was easy to do, but I can't
figure out how to do this in VB 2005 using the .NET socket classes. Since
I have never had server stalls in VC++/MFC or VB 6/WinSock, I have to
assume that the deadlock issue is actually a problem in the .NET framework.

The Async socket stuff in .NET 1 and .NET 2 is practically identical. The
environment works extremely well, scales to crazy numbers, and is very
reliable. There are a few problems along the way, but I can say overall it's
the best Sockets programming environment I've ever used, and I've used quite
a few.

Do you have a small but complete (to quote Jon Skeet) code sample that's not
working for you?
 

Michael D. Ober

Chris,

Here's the complete code. I have also included the code snippet that shows
the entry into this module. Note I just found a possible route where the
ClientConnected AutoResetEvent doesn't get triggered, but that shouldn't
impact existing clients. When this module stops accepting IP connections,
it also stops accepting new messages across existing IP connections.

Mike.

Sub Main()
    ' Set up for IP connections
    Dim IPListener As New Thread(AddressOf IPMessageHandler.CreateListener)
    IPListener.IsBackground = True
    IPListener.Start()
    ' Do more work
    ' Don't terminate Sub Main until an external event triggers termination (i.e., clock)
End Sub

'================== All the socket code is in this module
Option Compare Text
Option Strict On
Option Explicit On

Imports System.Net.Sockets
Imports System.Net
Imports System.Threading
Imports System.Text.ASCIIEncoding

Module IPMessageHandler
Private ConnectionCounter As Long = 0
Private ClientConnected As New AutoResetEvent(False)

Public Sub CreateListener()
    Dim ServerAddress As IPAddress = Dns.GetHostEntry(My.Computer.Name).AddressList(0)
    Dim LocalHost As New IPEndPoint(ServerAddress, OSInterface.iniWrapper.ReadInt("Dialer", "Port", "Wakefield.ini"))
    Dim tcpServer As New TcpListener(LocalHost)
    tcpServer.Start()
    WriteLog("Ready for IP Connections")
    Do
        tcpServer.BeginAcceptSocket(AddressOf AcceptRequest, tcpServer)
        Debug.Print("Waiting for a connection")
        ClientConnected.WaitOne()
    Loop
End Sub

Private Sub AcceptRequest(ByVal ar As System.IAsyncResult)
    Dim sock As Socket = Nothing
    Dim ClientEndPoint As New IPEndPoint(0, 0)
    Dim ClientName As String = ""
    Dim MsgIn As String = ""
    Dim BytesIn(1024) As Byte
    Dim i As Integer

    Try
        Debug.Print("Incoming Connection")
        Dim Listener As TcpListener = CType(ar.AsyncState, TcpListener)
        sock = Listener.EndAcceptSocket(ar)
        ClientEndPoint = CType(sock.RemoteEndPoint, IPEndPoint)
        ClientName = ClientEndPoint.Address.ToString
        ClientName = Dns.GetHostEntry(ClientName).HostName
        ClientName &= ":" & ClientEndPoint.Port.ToString
        Interlocked.Increment(ConnectionCounter)
        UpdateCaption("Client Connection", ClientName)
        ClientConnected.Set()
        ' This could be the problem, as the Catch doesn't do this. I'll add that.

        ' If the socket remains unused for 5 minutes, error out and release
        ' server resources; Lily_Tomlin is the workstation.
        ' All our in-house clients are coded to reconnect if required.
        If Not ClientName.Contains("Lily_Tomlin") Then
            sock.ReceiveTimeout = 5 * 60 * 1000
            WriteLog(ClientName & ": Socket Timeout is set to " & sock.ReceiveTimeout.ToString("#,##0") & " milliseconds")
        End If
        Do
            Dim BytesReceived As Integer = sock.Receive(BytesIn)
            Select Case BytesReceived
                Case 0
                    WriteLog(ClientName & ": Client closed connection")
                    Exit Do

                Case Else
                    MsgIn &= ASCII.GetString(BytesIn, 0, BytesReceived)
                    i = InStr(MsgIn, BEL)
                    Do While i > 0
                        Dim msg As String = Left$(MsgIn, i - 1)
                        MsgIn = Mid$(MsgIn, i + 1)
                        ' Process msg
                        WriteLog(ClientName & " => " & msg)
                        Dim msgOut As String = ProcessMessage(msg)
                        If msgOut <> "" Then
                            Dim BytesOut() As Byte = ASCII.GetBytes(msgOut & BEL)
                            sock.Send(BytesOut)
                            WriteLog(ClientName & " <= " & msgOut)
                        End If
                        i = InStr(MsgIn, BEL)
                    Loop
            End Select
        Loop
    Catch ex As Exception
        WriteLog(ex.Message)
    Finally
        'sock.Shutdown() ' I have tried both with and without this - the app stops responding to IP faster with it.
        sock.Close()
        Interlocked.Decrement(ConnectionCounter)
        UpdateCaption("Socket Closed", ClientName)
    End Try
End Sub

Private Sub UpdateCaption(ByVal msg As String, ByVal Client As String)
    Dim MsgConnections As String
    Dim Connections As Long = Interlocked.Read(ConnectionCounter)
    Console.Title = Connections.ToString("#,##0") & ": " & AppName()
    Select Case Connections
        Case 0 : MsgConnections = "No Connections"
        Case 1 : MsgConnections = "1 Connection"
        Case Else : MsgConnections = Connections.ToString("#,##0") & " Connections"
    End Select
    WriteLog(msg & ": " & Client & ": " & MsgConnections)
End Sub

Private Function ProcessMessage(ByVal msg As String) As String
    Dim msgReturn As String = ""

    ' Application-specific processing - there are no calls into the IP interface

    Return msgReturn
End Function

End Module

Mike.
 
C

Chris Mullins

I think I see the problem:

Your algorithm is:
1 - You create a TcpListener, and call BeginAccept.
2 - Inside the BeginAccept callback, you call EndAccept, and then call
Receive in an endless loop.

The pattern should be:
1 - Create the Listener and call BeginAccept.
2 - Inside the BeginAccept callback:
2.0 - Call EndAccept(ar) to get back a new socket.
2.1 - Call BeginAccept on the TcpListener (not the NEW socket, but the old
server) and pass in the same AcceptRequest as the callback. This allows
multiple sockets to become connected: the instant you get one connection,
you begin listening for more connections.
2.2 - Now that you have a connected socket, call Socket.BeginReceive() and
pass in a callback method called ReadComplete.
2.3 - Let the method exit; don't put in a loop.

3 - Inside the ReadComplete method (which gets called when you have data):
3.1 - Call EndReceive on the socket, and pass in the IAsyncResult.
3.2 - Now you have some data; process it.
3.3 - Call BeginReceive on the socket again, and pass in ReadComplete as the
callback.
3.4 - Let the method exit; don't loop.

As a small suggestion, when you call BeginReceive on the socket, pass in the
socket itself as state to the call. That way, when the callback fires, you
can pull the actual socket off of "ar.AsyncState", cast it to a Socket, and
then call socket.EndReceive(ar) on it. This avoids all sorts of member
variables and collections. This is the same as you're already doing for the
TcpListener.
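A minimal sketch of that pattern might look like the following. The names here (StartListener, ReadState, the 4 KB buffer size) are illustrative, not taken from Mike's code:

```vbnet
Imports System
Imports System.Net
Imports System.Net.Sockets

Module AsyncServerSketch
    Private listener As TcpListener

    Public Sub StartListener(ByVal port As Integer)
        listener = New TcpListener(IPAddress.Any, port)
        listener.Start()
        ' Step 1: post the first accept; no loop, no blocking wait.
        listener.BeginAcceptSocket(AddressOf AcceptRequest, listener)
    End Sub

    Private Sub AcceptRequest(ByVal ar As IAsyncResult)
        Dim server As TcpListener = CType(ar.AsyncState, TcpListener)
        ' Step 2.0: get the newly connected socket.
        Dim sock As Socket = server.EndAcceptSocket(ar)
        ' Step 2.1: immediately post another accept on the listener so
        ' further clients can connect while this one is being serviced.
        server.BeginAcceptSocket(AddressOf AcceptRequest, server)
        ' Step 2.2: start an async read, passing the socket and its buffer
        ' along as state so ReadComplete can find them on ar.AsyncState.
        Dim state As New ReadState(sock)
        sock.BeginReceive(state.Buffer, 0, state.Buffer.Length, SocketFlags.None, AddressOf ReadComplete, state)
        ' Step 2.3: fall off the end of the method; no Do...Loop.
    End Sub

    Private Sub ReadComplete(ByVal ar As IAsyncResult)
        Dim state As ReadState = CType(ar.AsyncState, ReadState)
        ' Step 3.1: finish the read.
        Dim bytesRead As Integer = state.Sock.EndReceive(ar)
        If bytesRead = 0 Then
            state.Sock.Close() ' peer closed the connection
            Return
        End If
        ' Step 3.2: process state.Buffer(0 .. bytesRead - 1) here.
        ' Step 3.3: post the next read, then (3.4) exit without looping.
        state.Sock.BeginReceive(state.Buffer, 0, state.Buffer.Length, SocketFlags.None, AddressOf ReadComplete, state)
    End Sub

    Private Class ReadState
        Public ReadOnly Sock As Socket
        Public Buffer(4095) As Byte
        Public Sub New(ByVal s As Socket)
            Sock = s
        End Sub
    End Class
End Module
```

Because every callback returns as soon as it has posted the next operation, no thread is tied up per connection, which is what lets this pattern scale where the blocking Receive loop stalls.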
 
