Process.Start limit?


CM

I need to run many processes at the same time but I keep running into this
exception:

System.ComponentModel.Win32Exception: Not enough storage is available to process this command
   at System.Diagnostics.Process.StartWithShellExecuteEx(ProcessStartInfo startInfo)
   at System.Diagnostics.Process.Start(ProcessStartInfo startInfo)

Are there any limits to the number of processes started at the same time?
For instance, about 20 processes are being started from multiple threads
at roughly the same time.

Here is how the processes are being started:

System.Diagnostics.ProcessStartInfo psi =
    new System.Diagnostics.ProcessStartInfo("geizer.exe");
psi.WindowStyle = System.Diagnostics.ProcessWindowStyle.Hidden;
psi.CreateNoWindow = true;
//psi.UseShellExecute = false; // Tried this, didn't help.
System.Diagnostics.Process proc = System.Diagnostics.Process.Start(psi);

It's on 64-bit Windows with plenty of free memory.
Thanks
 

Peter Duniho

CM said:
I need to run many processes at the same time but I keep running into
this exception:

System.ComponentModel.Win32Exception: Not enough storage is available to process this command
   at System.Diagnostics.Process.StartWithShellExecuteEx(ProcessStartInfo startInfo)
   at System.Diagnostics.Process.Start(ProcessStartInfo startInfo)

Are there any limits to the number of processes started at the same
time? For instance, about 20 processes are being started from multiple
threads at roughly the same time.

There is a limit, but 20 processes, even started simultaneously, should
be well under it. I don't know the exact figure off the top of my head,
but even on 32-bit Windows I'd expect the limit to be on the order of
1000 or so; even hundreds of processes should not be a problem, at least
in terms of them running (see below for my observations with respect to
starting them simultaneously).

So, you've got a problem. But such a vague description doesn't suggest
any particular solution, and there's no way for anyone to advise you
without a concise-but-complete code example that reliably reproduces the
failure.

For what it's worth, I've included an actual concise-but-complete code
example (see below) demonstrating the near-simultaneous initiation of
200 (or any number you want) processes. At 200, it runs without any
trouble at all on my 32-bit Windows 7 installation, in a virtual machine
no less (configured with 768MB of RAM).

Interestingly, beyond 200 processes I start to see failures, but their
exact nature, and in fact their exact number, varies. If I increase the
count to 300, the only exception I get is the one you're seeing, and in
some runs I actually wind up starting _fewer_ than 200 processes
successfully (the exact outcome varies). It seems there is some
temporary overhead involved in initializing a process, such that
simultaneous initialization of an excessive number of processes can
reduce the total number possible.

At 500, I still see only about 100 failures. Apparently, once
throughput suffers and the number of threads that can genuinely be
fighting to start a process at once maxes out, the startup work gets
stretched out over a longer period, and more of the attempts succeed.

At much higher numbers, 750 to 1000, I also start seeing plain old
"out of memory" exceptions. I have a harder time understanding these;
the exception you're seeing, I assume, relates to some unmanaged
resource that is in short supply (non-paged pool, for example), but it's
hard to imagine what kind of managed data structure could be both
necessary for this operation and large enough that a mere 1000 of them
causes a problem.

Interestingly, even at 1000 I don't actually have a problem with the
number of _threads_ I'm creating; it's still the processes themselves
that fail. (Keep in mind, though, that some of the same kinds of
resources required for a process may also be required for a thread, so
increasing the number of threads can make a failure to start a process
more likely. That may explain the runs where, past 200 processes, the
program fails to create even the 200 it could before.)

I ran the same program on a different PC, with 64-bit Windows 7
installed, and oddly enough got almost the same behavior. So whatever
limit you're running into, it's a resource limitation that is the same
on 32-bit and 64-bit Windows. (I'd think that would rule out non-paged
pool memory: even though the allocation limits are determined the same
way on 32-bit and 64-bit Windows, the two installations I tried have
very different physical RAM configurations, 768MB vs. 12GB, so the
64-bit machine's limit should be about 16 times higher than the 32-bit
one's.)

The only difference I noticed was that on 64-bit Windows I didn't get
any "out of memory" exceptions, even at 1000 processes (I didn't try
more, but I expect that to hold even at much higher numbers). That
makes sense to me: whatever normal managed memory allocation was
failing, the 64-bit system should be able to handle vastly more of it.
On the 64-bit PC I saw only the "Not enough storage…" exception, but in
about the same numbers, and at about the same thresholds, as on 32-bit
Windows.

One thing I did notice is that the non-paged pool usage for the process
does get ridiculously high while it is trying to start up the other
processes. I saw it go as high as 500K, which for non-paged pool is
pretty large. Whether that's actually related to the error, I'm not
sure; as I note above, the limit should be much higher on 64-bit
Windows 7, and I didn't see a difference in behavior between the two
installations.
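
For reference, a process's non-paged pool usage can be sampled from
.NET itself. Below is a minimal sketch built on the real
Process.NonpagedSystemMemorySize64 property; the polling loop and the
interval are just illustrative.

using System;
using System.Diagnostics;
using System.Threading;

class PoolMonitor
{
    static void Main()
    {
        Process self = Process.GetCurrentProcess();

        // Poll the counter a few times; Refresh() forces the cached
        // performance data to be re-read before each sample.
        for (int i = 0; i < 10; i++)
        {
            self.Refresh();
            Console.WriteLine("non-paged pool: {0} bytes",
                self.NonpagedSystemMemorySize64);
            Thread.Sleep(500);
        }
    }
}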

Anyway, the bottom line is that I only start to see failures once I'm
starting a ridiculously large number of processes all at once. And even
there, the number of failures is mitigated by the inability of that many
threads to actually execute at once: the more I try to start, the more I
am actually able to start.

As long as you really are only trying to start on the order of dozens at
once, that should be no problem. If you find yourself trying to start
hundreds at once, then yes, it's entirely possible you'll start seeing
errors. But if you're trying to start that many processes at once,
there's something wrong with your design. That really is a ridiculous
number of processes to have running simultaneously, never mind _started_
simultaneously, never mind having that many threads running
simultaneously (even that many threads in total is usually excessive,
but having that many _runnable_ at the same time is definitely
excessive).

If you can provide more specific information about the exact scenario,
perhaps more specific advice can be offered. In the meantime, if it
turns out you really are trying to start hundreds of processes
simultaneously, I suggest you fix the design so that doesn't happen.

Ideally, you'd simply bring whatever processing you're trying to do into
the main process, or at the very least implement the processing in some
kind of service that your process can interact with. But if you're
dealing with an external program that you have no way of modifying, then
at the very least serialize the execution of that program so that you
only have a small number of instances running at once, preferably no
more than the number of CPU cores on the computer, since anything much
beyond that will only hurt total throughput anyway.
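
For example, one way to serialize things is to gate every launch behind
a counting semaphore sized to the core count. This is just a sketch of
the idea, not code from this thread; "worker.exe" stands in for whatever
external program is being run.

using System;
using System.Diagnostics;
using System.Threading;

class ThrottledLauncher
{
    // Gate sized to the CPU core count: no more than this many
    // instances of the external program run at once.
    static readonly Semaphore _gate = new Semaphore(
        Environment.ProcessorCount, Environment.ProcessorCount);

    static void RunOne(object state)
    {
        _gate.WaitOne();    // block until one of the slots is free
        try
        {
            // "worker.exe" is a placeholder for the external program.
            using (Process proc = Process.Start("worker.exe"))
            {
                if (proc != null)
                {
                    proc.WaitForExit();
                }
            }
        }
        finally
        {
            _gate.Release();    // free the slot for the next launch
        }
    }

    static void Main()
    {
        // Queue up many launches; the semaphore keeps the actual
        // concurrency low no matter how many are queued.
        for (int i = 0; i < 100; i++)
        {
            ThreadPool.QueueUserWorkItem(RunOne);
        }

        Console.ReadLine();
    }
}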

Pete



using System;
using System.Threading;
using System.Diagnostics;
using System.Collections.Generic;

namespace TestMultiProcess
{
    class Program
    {
        const int kcprocessMax = 200;

        static void Main(string[] args)
        {
            int cprocessMax = kcprocessMax;

            if (args.Length > 0)
            {
                int cprocess;

                if (!int.TryParse(args[0], out cprocess))
                {
                    Console.WriteLine("dummy process");
                    Thread.Sleep(2000);
                    return;
                }

                cprocessMax = cprocess;
            }

            Console.WriteLine("attempting to start {0} processes",
                cprocessMax.ToString());

            object objLock = new object();
            ManualResetEvent mre = new ManualResetEvent(false);
            int cthreadStarted = 0;

            // Create a bunch of threads, all of which will start a process
            for (int cprocess = 0; cprocess < cprocessMax; cprocess++)
            {
                new Thread(delegate()
                {
                    lock (objLock)
                    {
                        if (++cthreadStarted == cprocessMax)
                        {
                            Monitor.Pulse(objLock);
                        }
                    }

                    // Don't proceed until all threads are waiting
                    mre.WaitOne();

                    // At this point, all threads should try to start
                    // a new process, more or less simultaneously
                    StartProcess();

                    lock (objLock)
                    {
                        if (--cthreadStarted == 0)
                        {
                            Monitor.Pulse(objLock);
                        }
                    }
                }).Start();
            }

            lock (objLock)
            {
                while (cthreadStarted < cprocessMax)
                {
                    Monitor.Wait(objLock);
                }
            }

            // Once all the threads are ready to go, wake them
            // all up
            mre.Set();

            Console.WriteLine("threads released");

            lock (objLock)
            {
                while (cthreadStarted > 0)
                {
                    Monitor.Wait(objLock);
                }
            }

            Console.WriteLine("all threads done");
            Console.WriteLine("  success: " + cprocessStarted.ToString());
            foreach (KeyValuePair<ExceptionDescription, int> kvp in
                dictExceptions)
            {
                Console.WriteLine("  failure, {0}: {1}",
                    kvp.Key.Type.ToString(), kvp.Value.ToString());
                Console.WriteLine("    (\"{0}\")", kvp.Key.Message);
            }

            Console.ReadLine();
        }

        static int cprocessStarted;
        static Dictionary<ExceptionDescription, int> dictExceptions =
            new Dictionary<ExceptionDescription, int>();
        static readonly object objDict = new object();

        struct ExceptionDescription
        {
            public readonly Type Type;
            public readonly string Message;

            public ExceptionDescription(Type type, string strMessage)
            {
                Type = type;
                Message = strMessage;
            }

            public override bool Equals(object obj)
            {
                if (obj.GetType() != typeof(ExceptionDescription))
                {
                    return false;
                }

                return ((ExceptionDescription)obj).Type == Type;
            }

            public override int GetHashCode()
            {
                return Type.GetHashCode();
            }
        }

        static void StartProcess()
        {
            try
            {
                ProcessStartInfo psi =
                    new ProcessStartInfo("TestMultiProcess.exe", "foo");

                Process.Start(psi);
                Interlocked.Increment(ref cprocessStarted);
            }
            catch (Exception exc)
            {
                AddException(exc);
            }
        }

        static void AddException(Exception exc)
        {
            ExceptionDescription exd =
                new ExceptionDescription(exc.GetType(), exc.Message);

            lock (objDict)
            {
                int cexd;

                if (!dictExceptions.TryGetValue(exd, out cexd))
                {
                    cexd = 0;
                }

                cexd++;
                dictExceptions[exd] = cexd;
            }
        }
    }
}
 

Wilson, Phil

Anything in the event log? An entry with Event ID 2011 may be written,
referring to the configuration parameter IRPStackSize.

Running any AV software?

Google returns plenty of suggestions; have you looked at any of them?
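
If you'd rather check for that event programmatically than in Event
Viewer, a rough sketch with the EventLog class follows (reading the
System log may require elevation; the 2011 filter matches the event ID
mentioned above):

using System;
using System.Diagnostics;

class EventLogCheck
{
    static void Main()
    {
        // Scan the System log for Event ID 2011 entries.
        EventLog systemLog = new EventLog("System");

        foreach (EventLogEntry entry in systemLog.Entries)
        {
            if (entry.InstanceId == 2011)
            {
                Console.WriteLine("{0}: {1}",
                    entry.TimeGenerated, entry.Message);
            }
        }
    }
}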
 

Peter Duniho

CM said:
I need to run many processes at the same time but I keep running into
this exception:

System.ComponentModel.Win32Exception: Not enough storage is available to process this command
   at System.Diagnostics.Process.StartWithShellExecuteEx(ProcessStartInfo startInfo)
   at System.Diagnostics.Process.Start(ProcessStartInfo startInfo)

Are there any limits to the number of processes started at the same
time? For instance, about 20 processes are being started from multiple
threads at roughly the same time. [...]

Quick update:

I modified my program to have the option of creating each new process
serially, in a single thread. With that modification, no exceptions
whatsoever occur, and ironically (though perhaps not that surprisingly,
since the threads themselves are significant overhead) it actually runs
more smoothly and efficiently.
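
In other words, the thread-spawning loop in Main from the earlier
program gets replaced with a plain loop along these lines (a sketch of
the shape of the change, not necessarily the exact modification):

// Serial variant: start each process from the main thread, one at a
// time, instead of spawning a thread per process and releasing them
// all at once.
for (int cprocess = 0; cprocess < cprocessMax; cprocess++)
{
    StartProcess();
}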

Monitoring the handle count for the process, I can see that it does
climb dramatically, though AFAIK not nearly high enough to explain a
specific error. One thing I do notice is that with either the
asynchronous or the synchronous implementation, once the program has
completed its work, the handle count does not drop back to its
original idle value.
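
The handle count itself is easy to watch from .NET via the real
Process.HandleCount property. Here's a minimal sketch of one way to
observe the behavior described above ("worker.exe" is a placeholder,
and the batch size and delay are illustrative):

using System;
using System.Diagnostics;
using System.Threading;

class HandleCountCheck
{
    static void Main()
    {
        Process self = Process.GetCurrentProcess();

        self.Refresh();
        Console.WriteLine("idle handle count: {0}", self.HandleCount);

        // Start a batch of short-lived processes, then give them
        // time to exit before sampling again.
        for (int i = 0; i < 50; i++)
        {
            using (Process proc = Process.Start("worker.exe"))
            {
            }
        }

        Thread.Sleep(5000);

        self.Refresh();
        Console.WriteLine("handle count afterward: {0}", self.HandleCount);
    }
}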

Using the SysInternals handle.exe utility, I can see that the handles
being created in the greatest numbers (by far) are Event and Thread
handles. But even fixing the "failed to dispose the Process object" bug
in my test program (by calling Dispose() on the Process object returned
by Process.Start()) did not affect the behavior at all.
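
For reference, that fix amounts to wrapping the returned Process in a
using block. Applied to StartProcess() from the earlier program, it
looks like this:

static void StartProcess()
{
    try
    {
        ProcessStartInfo psi =
            new ProcessStartInfo("TestMultiProcess.exe", "foo");

        // Dispose the Process object that Start() returns, rather
        // than dropping it on the floor as the original version did.
        using (Process proc = Process.Start(psi))
        {
            Interlocked.Increment(ref cprocessStarted);
        }
    }
    catch (Exception exc)
    {
        AddException(exc);
    }
}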

So, two conclusions I draw from that:

– .NET may well have some kind of resource leakage bug, in which it
fails to close handles. The bug may actually be in .NET, or it may be
in the unmanaged API and merely show up in the way .NET uses that API.

– Regardless of the bug, it's clear that the main issue behind the
failure is some kind of _transient_ resource consumption, unrelated to
the more general handle leaking that seems to be going on. That is,
starting processes serially, I can push the total handle count
significantly higher than with the async implementation, and yet no
error occurs. So the primary cause of the error lies elsewhere, even if
the handle leakage somehow exacerbates the problem.

I'm sure that someone with more time and more knowledge of the
low-level handle architecture of Windows (times like these I really miss
Willy's contributions to this newsgroup) could explain exactly what's
going on here, both the apparent leakage and the "Not enough storage…"
error. But it seems clear from my tests that the error shows up only
when a process attempts to create an unreasonably large number of
processes, in an unreasonably short period of time, from an unreasonably
large number of threads.

Thus, whatever the underlying cause of the issue, it seems to me that
the correct fix is to limit thread and process creation to more
reasonable numbers. In particular, keep in mind how many threads or
processes can actually be running at once on a given computer. Even if
you can create thousands of processes, with only eight CPU cores you can
have only eight threads executing at any instant. Even if each process
has only one thread, you can only create eight processes before they
start fighting with each other over the CPU and reducing throughput.

Even if those processes are I/O bound, all you wind up doing is shifting
the contention from the CPU to the I/O resource. Unless each process
literally has a unique resource (CPU or I/O) that it can use without
interfering with any other process (which, when you get to hundreds,
never mind thousands, of processes seems very unlikely to me), an
excessively large process count will only cause your total throughput to
suffer.

Pete
 
