Ricardo M. Urbano - W2K/NT4 MVP
We have 2 main file servers and one has a network application share for
production apps, and the other has the same exact share name for test
apps.
Both servers have Gigabit NICs and are running at 1000Mb/full duplex.
They are both running Windows 2000 Server, SP3. The shares are on
~500GB hardware RAID 5 volumes.
There is a particular network application folder that is nearly 5GB in
size. On two separate occasions now, a colleague has launched an xcopy
from his workstation (100Mb/half duplex, fully switched network),
copying this folder from production over the corresponding folder on our
"test" file server. When he does, the source (production) file server
starts returning "Not enough server storage is available to process this
command" errors whenever anyone tries to access any share on it, and the
shares become inaccessible.
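For context, the copy looks roughly like this (the server and share
names here are placeholders, not our real ones):

```
rem Run from the workstation; \\PRODSRV and \\TESTSRV are hypothetical names
rem /E copies subdirectories, including empty ones; /Y suppresses overwrite prompts
xcopy \\PRODSRV\Apps\NetApp \\TESTSRV\Apps\NetApp /E /Y
```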
I've had to reboot the server both times. The last time, it had been up
only about 60 days.
When this happens, the server also records the following error in the
event log several times:
Server is unable to allocate memory from the System Paged Pool
MS KB 312362 describes the error and seems relevant, but I find it
hard to believe that a Windows 2000 server cannot handle such a simple
task with its default settings. I mean, we only have about 50
concurrent users at any given time.
Anyway, has anyone had experience with this? The article says to set
the PoolUsageMaximum value to 40% and the PagedPoolSize value to
0xFFFFFFFF.
Are there implications to hard coding these values that I should be
aware of?
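For reference, here is a sketch of the change as I read the KB article.
The key path and value names are as documented there, but please treat
this as something to double-check against KB 312362 (and back up the key)
before applying; a reboot is required for these Memory Management values
to take effect:

```
Windows Registry Editor Version 5.00

; HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management
; PoolUsageMaximum = 40 decimal (0x28), i.e. trim at 40% pool usage
; PagedPoolSize    = 0xFFFFFFFF, i.e. let the system compute the maximum
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
"PoolUsageMaximum"=dword:00000028
"PagedPoolSize"=dword:ffffffff
```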
BTW, if I telnet into either server and run the same xcopy command
server to server, no errors are generated.
TIA, everyone!