Ram Drive


Gareth Tuckwell

Way back in DOS days, it was possible to set up an area of memory that would
be assigned a drive letter and act as a disk drive (only RAM fast). I want
to do this in Windows to give me a lightning-fast 'drive' that I can use for
temporary files when building code, to improve my 10 minutes of hard disk
intensive build time!

Is it possible to set up a ramdrive in Windows XP? I have 2GB of RAM and
would like to use about 512MB for a ramdrive which I can assign a drive
letter to - help!???
 

Detlev Dreyer

Gareth Tuckwell said:
Is it possible to set up a ramdrive in Windows XP?

Yes. There is a Microsoft Ramdisk available; however, it's limited
to 32MB in size.
I have 2GB of RAM and would like to use about 512MB for a ramdrive
which I can assign a drive letter to - help!???

There are many Ramdrives available, for instance
http://www.arsoft-online.de/products/product.php?id=1

I haven't tried this one myself; however, most of these Ramdrives cause
the System Restore tool to fail (restore points go missing) under WinXP.
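
If one of these tools does give you a drive letter (say R: - purely an
example), you can sanity-check it like any other volume. A minimal Win32
sketch, assuming the drive is mounted as R:; note that not every
third-party driver reports itself as DRIVE_RAMDISK, some just appear as a
fixed disk:

```cpp
// Hypothetical check of a RAM drive mounted as R: (the letter is an
// assumption - substitute whatever letter your RAM drive tool assigns).
#include <windows.h>
#include <cstdio>

int main()
{
    const char* drive = "R:\\";

    // GetDriveType may report DRIVE_RAMDISK, but some third-party
    // drivers present the volume as a plain DRIVE_FIXED disk instead.
    UINT type = GetDriveTypeA(drive);
    printf("Drive type code: %u (6 = DRIVE_RAMDISK, 3 = DRIVE_FIXED)\n", type);

    ULARGE_INTEGER freeToCaller, total, totalFree;
    if (GetDiskFreeSpaceExA(drive, &freeToCaller, &total, &totalFree))
    {
        printf("Capacity: %.1f MB, free: %.1f MB\n",
               total.QuadPart / (1024.0 * 1024.0),
               totalFree.QuadPart / (1024.0 * 1024.0));
    }
    else
    {
        printf("GetDiskFreeSpaceEx failed (error %lu) - is R: mounted?\n",
               GetLastError());
    }
    return 0;
}
```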
 

w_tom

DOS did not do pre-emptive multi-tasking. Therefore the
program had to wait for data to be written to disk drive.
Today we have multi-tasking. Data is literally left in memory
for another program to write it to the disk. What you want to
do with a Ram Drive is now called cache. Already part of the
OS. Your Ram drive would only slow the system by adding more
work and probably creating more memory faults (virtual memory
access to hard drive).
 

Gareth Tuckwell

w_tom said:
DOS did not do pre-emptive multi-tasking. Therefore the
program had to wait for data to be written to disk drive.
Today we have multi-tasking. Data is literally left in memory
for another program to write it to the disk. What you want to
do with a Ram Drive is now called cache. Already part of the
OS. Your Ram drive would only slow the system by adding more
work and probably creating more memory faults (virtual memory
access to hard drive).

You are half right!! Hard drives do have a certain amount of cache - mine is
2MB I think, but the temporary files I am talking about 'caching' on a ram
drive total just over 600MB, so my 2MB cache on the real hard disk is waaaay
too small!!

I am talking about effectively creating a hard disk in memory, which the OS
will treat as a hard disk and store temporary build files and precompiled
header files at RAM speed not hard disk speed. This would speed up the build
of my 22 environment C++ application by a significant amount!

A Ram drive would not slow the system down!! I don't have figures to hand,
but compare access time on a hard disk with access time for PC2700 memory -
I think you'll find the memory wins!! Using one quarter of my 2GB RAM as a
RAM drive would not cause any more memory faults, as you suggest it would.
Besides, with this much RAM, I have virtual memory turned off!
 

w_tom

You are confusing cache on hard drive with other cache -
what I should have called data buffers. In a pre-emptive OS,
your program no longer waits for data to be written to drive.
Program says write to disk, then moves on. Data stays in
semiconductor memory and eventually is written to disk by
another process - without slowing the application program.
IOW pre-emptive OS should do same thing as your Ram disk
without consuming more virtual memory and other resources.
Ramdisk is, for all practical purposes, already part of a
pre-emptive OS.

And no, I was not talking about cache inside a hard drive.
Virtual memory is already a Ram Drive provided automatically
by the OS and that can be enlarged if more DRam is installed.

If you use a RAM drive, then the RAM drive program must
consume more virtual memory. IOW, the OS must swap something
else out to hard drive more often to run the Ram drive
program. OS also must transfer data to that Ram Drive program
memory buffer. IOW Ram Drive now has resources of another
program AND does same thing that OS data buffers (previously
called cache) is doing automatically.

Again, Ram Drive did provide such advantages in DOS because
DOS had no such cache (data) buffers. Pre-emptive OSes
already do the equivalent of what you want to accomplish with
a Ram Drive and should do it even better if DRAM is enlarged.
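
A rough sketch of that write-behind behaviour (the file names and sizes
are arbitrary examples): a plain buffered WriteFile normally returns as
soon as the data is in the system cache, while opening the same file with
FILE_FLAG_WRITE_THROUGH makes the call wait for the physical disk, so the
second timing is usually far larger:

```cpp
// Rough illustration of the "program says write to disk, then moves on"
// point: a buffered write usually returns as soon as the data is in the
// system cache, while FILE_FLAG_WRITE_THROUGH waits for the platters.
// File names and sizes are arbitrary examples.
#include <windows.h>
#include <cstdio>
#include <vector>

static double timeWrite(const char* path, DWORD extraFlags)
{
    std::vector<char> buf(4 * 1024 * 1024, 'x');   // 4 MB of dummy data

    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    HANDLE h = CreateFileA(path, GENERIC_WRITE, 0, NULL, CREATE_ALWAYS,
                           FILE_ATTRIBUTE_NORMAL | extraFlags, NULL);
    if (h == INVALID_HANDLE_VALUE) return -1.0;

    QueryPerformanceCounter(&t0);
    DWORD written = 0;
    WriteFile(h, &buf[0], (DWORD)buf.size(), &written, NULL);
    QueryPerformanceCounter(&t1);

    CloseHandle(h);
    return (t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart; // ms
}

int main()
{
    printf("Buffered (cache-backed) write: %.1f ms\n",
           timeWrite("C:\\buffered.tmp", 0));
    printf("Write-through write:           %.1f ms\n",
           timeWrite("C:\\writethrough.tmp", FILE_FLAG_WRITE_THROUGH));
    return 0;
}
```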
 

Alex Nichol

Gareth said:
You are half right!! Hard drives do have a certain amount of cache - mine is
2MB I think, but the temporary files I am talking about 'caching' on a ram
drive total just over 600MB, so my 2MB cache on the real hard disk is waaaay
too small!!

I am talking about effectively creating a hard disk in memory, which the OS
will treat as a hard disk and store temporary build files and precompiled
header files at RAM speed not hard disk speed. This would speed up the build
of my 22 environment C++ application by a significant amount!

He was not talking about cache on hard drives. He was talking of the
way XP's memory management system caches files. Any file such as you
are talking of will be held in RAM and your I/O will go direct to it.
The advantage of a RAM drive over that is that it will *never* be
written out to disk, saving some overhead. A RAM drive for work files, in
cases where programs have been written that way, does make some sense.
But a modern program should not be written in such a way. Use a
program-global memory area for the work and let system virtual memory
management look after it. Or the compiler and its support may include a
'memory mapped file' facility, which will do all that a RAM drive will
do, without tying up RAM at times when the program is not in use.
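
For what it's worth, here is a minimal sketch of the 'memory mapped file'
facility mentioned above, using the Win32 API (the path and size are
made-up examples). The program works on the data through an ordinary
pointer and the memory manager decides when, or whether, pages actually
reach the disk:

```cpp
// Minimal sketch of a Win32 memory-mapped file: the file contents are
// accessed through a pointer and the memory manager pages them in and
// out as needed. The path is an arbitrary example.
#include <windows.h>
#include <cstdio>
#include <cstring>

int main()
{
    const char* path = "C:\\build\\workfile.bin";   // example path
    const DWORD size = 16 * 1024 * 1024;            // 16 MB work area

    HANDLE file = CreateFileA(path, GENERIC_READ | GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    // Create a mapping object covering 'size' bytes of the file.
    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READWRITE,
                                        0, size, NULL);
    if (!mapping) { CloseHandle(file); return 1; }

    // Map a view of the whole file into the process address space.
    char* view = (char*)MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, size);
    if (view)
    {
        // Work on the data as ordinary memory; dirty pages are written
        // back lazily by the OS (or explicitly via FlushViewOfFile).
        memset(view, 0, size);
        strcpy(view, "scratch data lives here");
        UnmapViewOfFile(view);
    }

    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}
```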
 

w_tom

And the data in the cache remains in cache after copy was
written to disk. IOW if system then needs that data, it does
not reread disk. Instead it just reads from memory buffer.
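
That is easy to check with a small sketch (the file name is a made-up
example): read a large file twice, and if it still fits in the cache from
the first pass, the second read comes back from RAM rather than the disk:

```cpp
// Quick-and-dirty way to see the read cache at work: read the same file
// twice and time both passes. The second pass is normally served from
// RAM and is much faster, provided the file fits in the cache. The file
// path is an arbitrary example.
#include <windows.h>
#include <cstdio>
#include <vector>

static double timedRead(const char* path)
{
    HANDLE h = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return -1.0;

    std::vector<char> buf(1024 * 1024);             // 1 MB chunks
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);

    DWORD got = 0;
    while (ReadFile(h, &buf[0], (DWORD)buf.size(), &got, NULL) && got > 0)
        ;                                           // just pull the data through

    QueryPerformanceCounter(&t1);
    CloseHandle(h);
    return (t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart; // ms
}

int main()
{
    const char* path = "C:\\build\\some_large_file.obj";  // example file
    printf("First read : %.1f ms\n", timedRead(path));    // may hit the disk
    printf("Second read: %.1f ms\n", timedRead(path));    // usually from cache
    return 0;
}
```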

BTW, this is also why NTFS file systems run faster for
smaller files. Small files are not even saved to sectors.
Small files are saved inside disk directory meaning 1) small
files remain available in 'RAM' memory and 2) file save only
involves a single seek - not two seeks as FAT based
filesystems require.

How to take advantage of semiconductor memory access speed?
Install more DRAM. Then more of disk information will be
available with less disk access.
 

Gareth Tuckwell

w_tom said:
And the data in the cache remains in cache after copy was
written to disk. IOW if system then needs that data, it does
not reread disk. Instead it just reads from memory buffer.

BTW, this is also why NTFS file systems run faster for
smaller files. Small files are not even saved to sectors.
Small files are saved inside disk directory meaning 1) small
files remain available in 'RAM' memory and 2) file save only
involves a single seek - not two seeks as FAT based
filesystems require.

How to take advantage of semiconductor memory access speed?
Install more DRAM. Then more of disk information will be
available with less disk access.

More than the 2GB I already have?? And I am talking about 600MB of temporary
files being used over a 10-15 minute period - I know a little bit of OS
cache doesn't store that all in memory - I can hear the hard disk thrashing
away during the build!!

A Ram drive WILL be faster than a hard drive... No matter what caching and
pre-empting the OS does, it will do the same amount regardless of the final
destination of the files. The OS will see the RAM Drive as an ordinary drive
so will not treat it any differently! The only difference is that my files
will end up being written to a drive that is actually an area of memory
rather than a physical disk. Given that RAM is much faster
than HDs, this entire process, taken as a whole over 10-15 minutes, IS
significantly faster. As I said before, I have 2GB of RAM and am talking
about using maybe 650MB of that as a RAM Drive, which leaves well over 1GB
of actual RAM for the OS to 'play' with.
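
One way to move this from assertion to numbers would be a small timing
sketch, assuming a RAM drive mounted as R: and an existing C:\temp
directory (both just examples). It forces each file out with
FlushFileBuffers so the hard disk run is not flattered by the very cache
being discussed:

```cpp
// Write the same working set to the hard disk and to the RAM drive and
// time both. R: is an assumed RAM drive letter and the sizes are
// examples; real build output is a mix of many smaller files, so treat
// the result as a rough indication only.
#include <windows.h>
#include <cstdio>
#include <vector>

static double timedDump(const char* dir, int files, size_t bytesPerFile)
{
    std::vector<char> buf(bytesPerFile, 'x');
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);

    for (int i = 0; i < files; ++i)
    {
        char name[MAX_PATH];
        sprintf(name, "%s\\tmp_%03d.bin", dir, i);
        HANDLE h = CreateFileA(name, GENERIC_WRITE, 0, NULL, CREATE_ALWAYS,
                               FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) return -1.0;
        DWORD written = 0;
        WriteFile(h, &buf[0], (DWORD)buf.size(), &written, NULL);
        FlushFileBuffers(h);            // force the data out of the cache
        CloseHandle(h);
    }

    QueryPerformanceCounter(&t1);
    return (t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart; // ms
}

int main()
{
    // 50 files of 4 MB each, roughly 200 MB per run.
    // Both target directories must already exist.
    printf("Hard disk: %.0f ms\n", timedDump("C:\\temp", 50, 4 * 1024 * 1024));
    printf("RAM drive: %.0f ms\n", timedDump("R:",       50, 4 * 1024 * 1024));
    return 0;
}
```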
 

Gareth Tuckwell

w_tom said:
You are confusing cache on hard drive with other cache -
what I should have called data buffers. In a pre-emptive OS,
your program no longer waits for data to be written to drive.
Program says write to disk, then moves on. Data stays in
semiconductor memory and eventually is written to disk by
another process - without slowing the application program.
IOW pre-emptive OS should do same thing as your Ram disk
without consuming more virtual memory and other resources.
Ramdisk is, for all practical purposes, already part of a
pre-emptive OS.

And no, I was not talking about cache inside a hard drive.
Virtual memory is already a Ram Drive provided automatically
by the OS and that can be enlarged if more DRam is installed.

Virtual memory is NOT a ram disk. Virtual memory is an area of the hard
disk, set aside to be used by the operating system when RAM is low. And as I
said - I have virtual memory turned off as I have 2GB of RAM.
If you use a RAM drive, then the RAM drive program must
consume more virtual memory.

No, no, no. As I said, virtual memory is a space on the hard disk that the
OS uses when it is low on RAM, or uses to keep memory free. I have 2GB of
physical RAM, so have turned off virtual memory. Creating a RAM drive in
virtual memory would be ridiculous - what you are talking about is setting
aside an area of the hard disk to work as RAM (virtual memory), then using
that RAM as a drive!!
IOW, the OS must swap something
else out to hard drive more often to run the Ram drive
program. OS also must transfer data to that Ram Drive program
memory buffer. IOW Ram Drive now has resources of another
program AND does same thing that OS data buffers (previously
called cache) is doing automatically.

The OS data buffers do not extend as far as caching the 600MB of temporary
files used over a 10-15 minute period during the code generation sequence
that I am talking about!!
Again, Ram Drive did provide such advantages in DOS because
DOS had no such cache (data) buffers. Pre-emptive OSes
already do the equivalent of what you want to accomplish with
a Ram Drive and should do it even better if DRAM is enlarged.

Enlarge my RAM??? I already have 2GB!!

A Ram drive WILL be faster than a hard drive... No matter what caching and
pre-empting the OS does, it will do the same amount regardless of the final
destination of the files. The OS will see the RAM Drive as an ordinary drive
so will not treat it any differently! The only difference is that my files
will end up being written to a drive that is actually an area of memory
rather than a physical disk. Given that RAM is much faster
than HDs, this entire process, taken as a whole over 10-15 minutes, IS
significantly faster. As I said before, I have 2GB of RAM and am talking
about using maybe 650MB of that as a RAM Drive, which leaves well over 1GB
of actual RAM for the OS to 'play' with.
 

Alex Nichol

Gareth said:
More than the 2GB I already have?? And I am talking about 600MB of temporary
files being used over a 10-15 minute period - I know a little bit of OS
cache doesn't store that all in memory - I can hear the hard disk thrashing
away during the build!!

In XP yes it does, provided there is no other more important use around
(which is unlikely). But if you have a RAM drive it will just insert
its cache in between that and your program, thus using RAM twice over.
 

Gareth Tuckwell

Alex Nichol said:
In XP yes it does, provided there is no other more important use around
(which is unlikely). But if you have a RAM drive it will just insert
its cache in between that and your program, thus using RAM twice over.

OK. If, as you say, Windows is going to cache all 600+MB of files in memory
for me when I do a build, then can you tell me why I see over 600MB of
files being created, yet my memory usage never rises above the normal
200-250MB? If it is caching all these temporary files, then it
should push the memory usage up to 600MB + the normal 200MB = over 800MB
when I do a large code build. However, it moves no higher than the normal
200-250MB usage during a code generation, and I can hear the hard disk
rattling away as the temporary files are written out to disk, then read back
in a few minutes later! Also, if they are cached in memory by the operating
system, then how does it know I have finished my build and can therefore
discard the cache? There are 21 separate environments built in sequence and
they all refer to some of the temporary files in the other environments.
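
One way to see where that 600MB actually goes would be to watch physical
memory rather than per-process usage while the build runs; the system file
cache is not charged to any single process, so it would not show up in the
per-process figures anyway. A rough polling sketch (the 5-second interval
is arbitrary):

```cpp
// The file cache is not charged to any process, so per-process figures
// (and the commit charge) barely move even while cached file data eats
// physical memory. Polling GlobalMemoryStatusEx during the build shows
// the other side of the story.
#include <windows.h>
#include <cstdio>

int main()
{
    // Print a line every 5 seconds; stop with Ctrl+C once the build ends.
    for (;;)
    {
        MEMORYSTATUSEX ms;
        ms.dwLength = sizeof(ms);
        if (GlobalMemoryStatusEx(&ms))
        {
            printf("load %3lu%%  free physical %6.0f MB  free commit %6.0f MB\n",
                   ms.dwMemoryLoad,
                   ms.ullAvailPhys     / (1024.0 * 1024.0),
                   ms.ullAvailPageFile / (1024.0 * 1024.0));
        }
        Sleep(5000);
    }
    return 0;
}
```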

My understanding is that a cache is there to act as a buffer so that the
operating system does not have to sit around waiting for the hard disk to
save or load data. Any files stored in cache will only exist for a fraction
of a second until the disk catches up. The cache is not there to save
temporary files in RAM for 10-15 minutes so that a manual build procedure
can take advantage of RAM speeds over hard disk speeds!!

These temporary pre-compiled header files and intermediary files ARE written
to and read from the DISK (via a buffer) across 21 different build
environments and over a 10-15 minute period! If I use a RAM disk, then we
REPLACE the need for the hard disk accesses with RAM accesses instead and
the whole process is much faster. The RAM is NOT used twice and the hard
disk is not thrashed, as the temporary files are written to a disk that is
based in RAM, not on a physical drive.
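
If a RAM drive does go in, the remaining work is just redirection. A tiny
launcher sketch, with everything in it assumed rather than taken from this
thread (R: as the drive letter, the directory name, and build_all.bat as
the command that runs the 21 environments); note that precompiled header
and intermediate directories are normally set per project, so TMP/TEMP
alone may not cover them:

```cpp
// Point the temporary directories at the RAM drive before the build
// starts. Everything here is an assumption: R: as the RAM drive, the
// directory name, and "build_all.bat" as the command that kicks off the
// 21 builds. Compiler intermediate directories usually live in the
// project settings, so TMP/TEMP alone may not be enough.
#include <windows.h>
#include <cstdlib>

int main()
{
    // Create a scratch directory on the RAM drive (ignore "already exists").
    CreateDirectoryA("R:\\buildtmp", NULL);

    // Point the standard temp variables at it for this process and its
    // children (the build tools it spawns).
    SetEnvironmentVariableA("TMP",  "R:\\buildtmp");
    SetEnvironmentVariableA("TEMP", "R:\\buildtmp");

    // Hand off to the real build script; it inherits the environment.
    return system("build_all.bat");
}
```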
 
