Raid0 or Raid5 for network to disk backup (Gigabit)?


markm75

That is about 36MB/s, far lower than the stated 60MB/s benchmark.
If the slowdown on a linear, streamed write is that big, maybe the
slow backup you experience is just due to the write strategy of
the backup software. Seems to me the fileserver OS may not be
well suited to its task...

Arno

Well, I know that when I did the tests with BackupExec, at least locally,
they would start off high at around 2000 MB/min; by hour 4 it was down to 1000,
and by the end down to 385 MB/min. With BackupExec, if I did one big 300GB
file locally, it stayed around 2000; if there was a mixture of files, it
went down gradually.

Of course, Acronis imaging worked fine locally, so it must be some
network transfer issue with the software. I'll try ShadowProtect next
and see how it fares.
 

markm75

That is about 36MB/s, far lower than the stated 60MB/s benchmark.
If the slowdown on a linear, streamed write is that big, maybe the
slow backup you experience is just due to the write strategy of
the backup software. Seems to me the fileserver OS may not be
well suited to its task...

Arno

I'm currently running an image using ShadowProtect Server; it's getting
27 MB/s and stating 3-4 hours remaining after 10 minutes. We shall
see how it does in the end.
 

NB

Keep in mind that any application that writes enormous files to a
Windows network share will experience gradual but steady performance
degradation over time. This is due to a performance bug in
Windows itself, and has nothing to do with the application that is
writing the data. This can be easily reproduced by writing a simple
app that does nothing but constantly write a continuous stream of data
to a specified file.
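
For example, a minimal repro along these lines (an untested sketch; the UNC
path and 1 MB buffer size are arbitrary placeholders) writes one continuous
stream to a single file and prints per-GB throughput, so the gradual slowdown
shows up directly in the numbers:

/* Sketch: write a constant stream to one file, report throughput per GB. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "\\\\server\\share\\test.bin";
    const size_t bufsz = 1 << 20;              /* 1 MB per write */
    char *buf = malloc(bufsz);
    FILE *f = fopen(path, "wb");
    long long written = 0;
    time_t last = time(NULL);

    if (!buf || !f) { perror("setup"); return 1; }
    memset(buf, 0xAB, bufsz);

    for (;;) {
        if (fwrite(buf, 1, bufsz, f) != bufsz) { perror("fwrite"); break; }
        written += bufsz;
        if (written % (1LL << 30) == 0) {      /* report every 1 GB */
            time_t now = time(NULL);
            double secs = difftime(now, last);
            printf("%lld GB written, last GB at %.1f MB/s\n",
                   written >> 30, secs > 0 ? 1024.0 / secs : 0.0);
            fflush(stdout);
            last = now;
        }
    }
    fclose(f);
    free(buf);
    return 0;
}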

The degradation will occur more slowly if the host of the share has
more memory. Also interesting (and important) is the fact that if the
app that is writing the file closes the file and then opens a new
file, the performance will jump back up to its peak. This fact is
important because it suggests a good workaround for this issue.

If you are backing up huge (hundreds of GB, or even TB) volumes to
network shares, you should configure your backup so that it will split
the backup image into ~50GB pieces. Most backup apps support
splitting the backup image file. This way performance will stay at
reasonable levels.
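
Roughly, the split option amounts to something like this (an untested sketch;
the 50GB split size and the ".partNNN" naming are arbitrary examples, not any
particular product's format):

/* Rotate the output stream into ~50 GB pieces instead of one huge file. */
#include <stdio.h>

#define SPLIT_BYTES (50LL << 30)   /* ~50 GB per piece */

static FILE *open_part(const char *base, int part)
{
    char name[1024];
    snprintf(name, sizeof name, "%s.part%03d", base, part);
    return fopen(name, "wb");
}

/* Call instead of fwrite(); opens a new piece every SPLIT_BYTES. */
static size_t split_write(const char *base, const void *buf, size_t len,
                          FILE **f, long long *in_part, int *part)
{
    if (*f == NULL || *in_part + (long long)len > SPLIT_BYTES) {
        if (*f) fclose(*f);            /* closing the file restores peak speed */
        *f = open_part(base, (*part)++);
        *in_part = 0;
    }
    if (*f == NULL) return 0;
    *in_part += (long long)len;
    return fwrite(buf, 1, len, *f);
}

Closing each piece is what resets the write speed, per the observation above.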
 

Maxim S. Shatskih

Keep in mind that any application that writes enormous files to a
Windows network share will experience gradual but steady performance
degradation over time. This is due to a performance bug in
Windows itself, and has nothing to do with the application that is
writing the data. This can be easily reproduced by writing a simple
app that does nothing but constantly write a continuous stream of data
to a specified file.

Exactly so, we have noticed it and measured it.

This is MS's issue, and is possibly related to cache pollution - polluting the
cache faster than the lazy writer can flush it. Tweaking the cache settings in
the registry (after finding the MS KB about them) can be a good idea.

you should configure your backup so that it will split
the backup image into ~50GB pieces. Most backup apps support
splitting the backup image file.

ShadowProtect surely supports this, and I think Acronis and Norton Ghost/LSR
too.
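
The registry path in question is HKLM\SYSTEM\CurrentControlSet\Control\Session
Manager\Memory Management. The thread does not pin down which KB article or
value is meant, so the small read-only check below uses LargeSystemCache purely
as an illustration of the kind of setting involved (an assumption, not the
specific fix):

/* Illustration only: read one commonly tweaked cache value. Link advapi32. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY key;
    DWORD val = 0, size = sizeof val, type = 0;
    const char *path =
        "SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Memory Management";

    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, path, 0, KEY_READ, &key) != ERROR_SUCCESS) {
        printf("could not open %s\n", path);
        return 1;
    }
    if (RegQueryValueExA(key, "LargeSystemCache", NULL, &type,
                         (LPBYTE)&val, &size) == ERROR_SUCCESS && type == REG_DWORD)
        printf("LargeSystemCache = %lu\n", (unsigned long)val);
    else
        printf("LargeSystemCache not found\n");
    RegCloseKey(key);
    return 0;
}

Actually changing such values should only be done after reading the relevant
MS documentation.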
 

markm75

Exactly so, we have noticed it and measured it.

This is MS's issue, and is possibly related to cache pollution - polluting the
cache faster than the lazy writer can flush it. Tweaking the cache settings in
the registry (after finding the MS KB about them) can be a good idea.


ShadowProtect surely supports this, and I think Acronis and Norton Ghost/LSR
too.

Thanks for that tidbit. I'll either break them up and test again or
try to find the MS solution.

Sure enough, ShadowProtect ended up at 9 hours as well.
 

Arno Wagner

Thanks for that tidbit. I'll either break them up and test again or
try to find the MS solution.
Sure enough, ShadowProtect ended up at 9 hours as well.

Well, that would explain it. Once again, MS is using substandard
technology. I hope you find a solution to this, but I certainly
have no clue what it could be.

Arno
 

Maxim S. Shatskih

Well, that would explain it. Once again, MS is using substandard
technology.

I would not say that SMB slowdown on files >100GB is "substandard" for a mass
market commodity OS.

This is a rare corner case in fact, with image backup software being nearly
the only user of it, and it can split the image into smaller files.

Note that lots of UNIX-derived OSes still have a 4GB file size limit :)
 

Arno Wagner

I would not say that SMB slowdown on files >100GB is "substandard"
for a mass market commodity OS.

Hmm. I think that if it supports files > 100GB, then it should support
them without surprises. Of course, if you say ''commodity'' = ''not
really for mission critical stuff'', then I can agree.
This is a rare corner case in fact, with the image backup software
being nearly the only users of it, and they can split the image to
smaller files.
Note that lots of UNIX-derived OSes still have 4GB file size limit :)

I wouldn't know. Linux ext2/3 has a 2TB file size limit.

But that was actually not my point. My point is that if it is
supported, then it should be supported well. If it is not supported
that is better than if you think you can use it, but on actual usage
things start to go wrong. I believe this whole thread shows that ;-)

So ''substandard'' = ''the features are there but you should not
really use them to their limits'', a.k.a. ''we did it, but we did not
really do it right''.

Arno
 

markm75

Hmm. I think that if it supports files > 100GB, then it should support
them without surprises. Of course, if you say ''commodity'' = ''not
really for mission critical stuff'', then I can agree.


I wouldn't know. Linux ext2/3 has a 2TB file size limit.

But that was actually not my point. My point is that if it is
supported, then it should be supported well. If it is not supported
that is better than if you think you can use it, but on actual usage
things start to go wrong. I believe this whole thread shows that ;-)

So ''substandard'' = ''the features are there but you should not
really use them to their limits'', a.k.a. ''we did it, but we did not
really do it right''.

Arno

Results are in. I used ShadowProtect, set to 50GB files at a time on
the backup file side. Average throughput was 23 MB/s; it finished in
4hr 25 minutes, the same time as a local backup took (this was across
gigabit).

So I guess it's true: there is something to the cache pollution /
registry issue? Anyone have a KB article where I could find the tweak,
so I can try this again without splitting the backup files? (Not sure
what I'm searching for exactly.)

Thanks
 

Arno Wagner

Results are in. I used ShadowProtect, set to 50GB files at a time on
the backup file side. Average throughput was 23 MB/s; it finished in
4hr 25 minutes, the same time as a local backup took (this was across
gigabit).

Interesting.

So I guess it's true: there is something to the cache pollution /
registry issue? Anyone have a KB article where I could find the tweak,
so I can try this again without splitting the backup files? (Not sure
what I'm searching for exactly.)

Why not just split the backup? This seems to work, after all.
If you want this a bit better sorted, put each backup set
into one subdirectory.

Arno
 

Maxim S. Shatskih

I wouldn't know. Linux ext2/3 has a 2TB file size limit.

Sorry. See the excerpt from include/linux/ext2_fs.h below and the
"__u32 i_size;" in it.

ext2's limit is 4GB. I remember ext3 being compatible with ext2 in on-disk
structures in everything except the transaction log, so it looks like ext3 is
also limited to 4GB per file.

Moreover, if you also look at the superblock structure, you will see that ext2
is limited to 32-bit block numbers within the volume. There is a good chance
that this means a volume size limit of 2TB (if a "block" is really a disk
sector and not a group of sectors).

/*
 * Structure of an inode on the disk
 */
struct ext2_inode {
        __u16   i_mode;         /* File mode */
        __u16   i_uid;          /* Owner Uid */
        __u32   i_size;         /* Size in bytes */
        __u32   i_atime;        /* Access time */
        __u32   i_ctime;        /* Creation time */
        __u32   i_mtime;        /* Modification time */
        __u32   i_dtime;        /* Deletion Time */
        __u16   i_gid;          /* Group Id */
        __u16   i_links_count;  /* Links count */
        __u32   i_blocks;       /* Blocks count */
        __u32   i_flags;        /* File flags */
        ....
};

supported, then it should be supported well. If it is not supported
that is better than if you think you can use it, but on actual usage
things start to go wrong. I believe this whole thread shows that ;-)

Let's wait for MS's hotfixes and service packs. Such "corner case" issues
(circumstances rarely met in real life) do occur in any software.
 

Folkert Rienstra

Well, that would explain it. Once again, MS is using substandard technology.
I hope you find a solution to this, but I certainly have no clue what it could be.

Not uncommon when you have so many brainfarcts as you have. babblebot.
 

Folkert Rienstra

Arno Wagner said:
Hmm. I think that if it supports files > 100GB, then it should support
them without surprises. Of course, if you say ''commodity'' = ''not
really for mission critical stuff'', then I can agree.



I wouldn't know. Linux ext2/3 has a 2TB file size limit.
But that was actually not my point. My point is that if it is
supported, then it should be supported well. If it is not supported
that is better than if you think you can use it, but on actual usage
things start to go wrong.

Nothing goes 'wrong', you babblebot moron, it only gets slow.

I believe this whole thread shows that ;-)

What this thread shows is that you don't know anything, babblebot, that you
are just feeding on others for information to badmouth MS, you Lunix zealot.

So ''substandard'' = ''the features are there but you should not really use
them to their limits'', a.k.a. ''we did it, but we did not really do it right''.

It's the OS showing its limits, not the file system.
 

Torbjorn Lindgren

Maxim S. Shatskih said:
Sorry. See the cite from include/linux/ext2_fs.h below and
"__u32 i_size;" in it.

ext2's limit is 4GB. I remember ext3 being compatible with ext2 in
on-disk structures in everything except the transaction log, so,
looks like ext3 is also limited to 4GB per file.

Don't be silly; even a minimal amount of checking would have shown
that this has been false for a long time now. The exact file size limit for
ext2/ext3 depends on blocksize; with the default 4 kB it's 2 TB (and
it's been rare to see any other blocksize for a long time now).

IIRC at least (some?) 2.2 kernels had this, though glibc support on
32-bit platforms lagged a bit. Since I was on 32-bit platforms back
then it might well have been MUCH earlier (2.0? 1.2?).

ext2/ext3 has a system of "features" which can be added; both fully
compatible and forward-compatible flags are available, so corruption
on incompatible features is avoided (ext3 is, IIRC, a set of two
options: one which says that it has a journal, and one which is set when
mounted and removed when unmounted; this is why EXT2 only mounts
*clean* EXT3 filesystems). IIRC NTFS has something not that
dissimilar...

The feature you are looking for is "large_file"; it is set
automatically when the first >2GB file is created by a kernel which
supports it. I've not read the code, but the following line from the
same file you quoted makes me think they stash the upper bits of the
file size in i_dir_acl (which probably isn't used for regular files anyway).

From include/linux/ext2_fs.h:

struct ext2_inode {
        ....
        __u32   i_size;         /* Size in bytes */
        ....
        __u32   i_dir_acl;      /* Directory ACL */
};
#define i_size_high i_dir_acl
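
In other words, once that feature is in use, the on-disk size of a regular
file is simply the two 32-bit halves glued together. A sketch (not the actual
kernel code):

/* Sketch: assemble a >4GB file size from the two 32-bit on-disk fields,
 * with i_dir_acl reused as i_size_high for regular files. */
unsigned long long ext2_file_size(unsigned int i_size, unsigned int i_size_high)
{
    return ((unsigned long long)i_size_high << 32) | i_size;
}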

The 2 TB file size limit actually comes from i_blocks; Google found a
patch to extend this, but I don't think anyone is really that
interested at the moment. There are a LARGE number of other
filesystems for Linux that support this if someone actually needs
it!

More so, if you will also find the superblock structure, you will see
that ext2 is also limited to 32bit block numbers in the volume. There
are good chances that this means the volume size limit of 2TB (if
"block" is really the disk sector and not a group of sectors).

I have no reason to doubt the statement in Wikipedia and other places,
which for Linux 2.6 means 16 TB for ext3, assuming the standard 4kB
block size.

(It depends on block size, but unpatched 2.4 and earlier have a hard
limit at 2TB; not sure if this applies to all 2.4 distributions, as some
were heavily enhanced with features from 2.6.)

http://en.wikipedia.org/wiki/Ext2
http://en.wikipedia.org/wiki/Comparison_of_file_systems
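
For what it's worth, those limits fall straight out of the 32-bit fields; a
quick sanity check (a sketch, assuming 4 kB blocks and 512-byte units for
i_blocks):

#include <stdio.h>

int main(void)
{
    /* i_blocks is a 32-bit count of 512-byte units -> max file size */
    unsigned long long max_file = (1ULL << 32) * 512;   /* = 2 TB  */
    /* block numbers are 32-bit -> max volume size with 4 kB blocks */
    unsigned long long max_vol  = (1ULL << 32) * 4096;  /* = 16 TB */

    printf("max file size   ~ %llu TB\n", max_file >> 40);
    printf("max volume size ~ %llu TB\n", max_vol >> 40);
    return 0;
}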
 

Maxim S. Shatskih

ext2/ext3 has a system of "features" which can be added

So, >4GB files for ext2 is one of these additional features? OK, thanks,
good to know.
*clean* EXT3 filesystems). IIRC NTFS has something not that
dissimilar...

NTFS is more like ReiserFS. From what I've read on ReiserFS design, it is just
a plain remake based on the same ideas as NTFS: attribute streams, B-tree
directories, MFT, etc.

NTFS just predates ReiserFS by around 10 years, which is a clear sign of
"substandard technologies used by MS" :) The only competitors to NTFS at that
time (1993) were VMS's filesystem and Veritas's product for Solaris.

(It depends on block size but unpatched 2.4 and earlier has a hard
limit at 2TB

So I'm not that wrong; the 2TB limit was there until quite recently.
 

Bill Todd

Maxim S. Shatskih wrote:

....

From what I've read on ReiserFS design - it is just
plain remake based on the same ideas as NTFS

Then you haven't read nearly enough to have a clue.

- bill
 

Arno Wagner

Sorry. See the cite from include/linux/ext2_fs.h below and "__u32
i_size;" in it.
ext2's limit is 4GB. I remember ext3 being compatible with ext2 in on-disk
structures in everything except the transaction log, so, looks like ext3 is
also limited to 4GB per file.

Well, yes, if you use a pretty old kernel, or turn large
file support off. The standard limit is 2TB at the moment. And
you don't need to quote kernel source at me; I happen to
have files > 4GB on ext2 at this moment. The inode format
was extended some time ago.

An overview over the current limits of ext2 is, e.g., here:

http://en.wikipedia.org/wiki/Ext2

One thing you need to do in your software for it to be able
to handle large files is to define

#define _FILE_OFFSET_BITS 64

so that all the relevant types become 64 bits transparently.
Note that you need to use the functions that take ''off_t'' for position
specification.
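
Something along these lines (a minimal sketch; the filename and the 5 GB
offset are just examples):

#define _FILE_OFFSET_BITS 64   /* must come before any system header */

#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    FILE *f = fopen("bigfile.img", "wb");   /* example file name */
    off_t pos;

    if (!f) { perror("fopen"); return 1; }
    /* seek past the 4 GB mark; only works when off_t is 64 bits */
    if (fseeko(f, (off_t)5 * 1024 * 1024 * 1024, SEEK_SET) != 0) {
        perror("fseeko");
        return 1;
    }
    fputc(0, f);
    pos = ftello(f);
    printf("wrote a byte at offset %lld\n", (long long)pos);
    fclose(f);
    return 0;
}
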
More so, if you will also find the superblock structure, you will
see that ext2 is also limited to 32bit block numbers in the
volume. There are good chances that this means the volume size limit
of 2TB (if "block" is really the disk sector and not a group of
sectors).

The filesystem size limit is currently 16TB, but you need large block device
support enabled in the kernel to use that. I think that is not
yet the default.

Arno
 

Arno Wagner

So, >4GB files for ext2 is one of these additional features? OK, thanks, will
know this.
NTFS is more like ReiserFS. From what I've read on ReiserFS design - it is just
plain remake based on the same ideas as NTFS - attribute streams, B-tree
directories, MFT etc.
NTFS just predates ReiserFS by around 10 years, which is a clear sign of
"substandard technologies used by MS" :) The only competitors to NTFS that
time of 1993 were VMS's filesystem and Veritas's product for Solaris.
So, I'm not this wrong. 2TB limit was there very small time ago.

Well, 2.6.0 was published in December 2003. I would not call 4 years
''a very small time'', considering disk sizes in 2003.

There are, BTW, some more filesystems available under Linux and
they are basically all pretty compatible. For really large
filesystems you would probably not use ext2 anyway, but perhaps
XFS (which has also been available on Linux since around 2001).
XFS has a file size limit and filesystem limit of 8 exabytes.

Arno
 

John Turco

Folkert Rienstra wrote:

What this thread shows is that you don't know anything, babblebot, that you
are just feeding on others for information to badmouth MS, you Lunix zealot.

<edited>

Hello, Folkert:

"Lunix," you say? Is that a contraction of "lunatic" and "Linux," or
just a typo? <g>


Cordially,
John Turco <[email protected]>
 

markm75

Anyone have any thoughts on this PCIe card, motherboard combo with
this server rack:

Server rack device: Supermicro 2U SC825TQ-560LP (only supports low
profile cards)

with this:

Motherboard: Asus DSBV-D
http://www.atacom.com/program/print...RCH_ALL&Item_code=MBXE_ASUS_60_24&USER_ID=www
(PCIe x8)

and this for the storage controller:

Low profile 3ware PCIex4: 9650SE-8LPML
http://www.ewiz.com/detail.php?p=3W...fd8fbbc448d4d8bbdd8480a170ccad4fc57abfbe62883

I can't seem to find PCIe x8 controller boards to match the maximum speed
of the motherboard recommended for this server rack device.

Will these 3 give me the best speed for the money (keeping costs
down)? I'll be backing up data across the network, most likely using
Acronis or ShadowProtect with 50GB file splits (as this gave me 4 hour
backup times on a similar server for 332GB, vs. across-network backup
times of 8hr 20 minutes with BackupExec 11d, where Acronis over the
network with 50GB splits took 4hr 40 minutes at most!).


As a side note, the CPU will be a Woodcrest 2.0GHz 5130 with 2GB of DDR2
667MHz RAM.
The hard drives will be SATA II in a 2.2TB array (5 drives of 500GB
each): http://www.zipzoomfly.com/jsp/ProductDetail.jsp?ProductCode=101923&prodlist=froogle
Seagate ES ST3500630NS drives. (Does the ES really make a
difference for an org with about 40 users, where most won't be using the
backup server, as it will just back up 5 other servers' data, or can I use
regular Seagate 500GB drives to save a few bucks?)

Thanks
 
