FRS just stops...

Slimline

2 new Server 2003 AD servers running DFS and FRS and acting as file servers -
nothing more. I set up empty Tmp directories on each (AD1 and AD2). I set up
\Tmp on AD1 as a domain root and then \Tmp on AD2 as a target root. I set up
replication with the configure replication wizard and told it that AD1 was the
initial master. AD1 and AD2 are the only machines in the new domain, and I set
them up in different sites even though they're right next to each other on
the same subnet.

When I copy 7 GB of test data from a client on the same subnet (not
logged in to the domain but in a separate workgroup), the data starts copying
(using drag and drop) to \Tmp on AD1 and AD2 and finishes in 20 minutes.

The problem is that I have no errors or warnings in the event viewers for
AD1 or AD2, and DCdiag passes all tests, but while \Tmp on AD2 is fine, \Tmp
on AD1 has only about 700 MB of data in it. I've tried several times and the
same thing happens each time.

Anyone know what's happening here?
Scott
 
Slimline

This site says that if the staging area fills, the system will
automatically delete unused files. Besides, if this were the problem then FRS
would only be usable in very small environments. I have 300 GB to replicate,
and many of my customers replicate this much every day (out of many TB of
data). Are you saying that I should increase my staging area to a full 300 GB
in order to overcome this problem?

Pardon my directness, but this just doesn't make sense.
Scott
 
Eric Fleischman [MSFT]

Hi,

What service pack are you running? If not at least SP3, that should be
priority 1. SP4 is preferred.
NTFRS has gotten more and more reliable over the last few years. SP4 is
pretty much the latest and greatest, and sometimes you'll find that just
going to the latest resolves the issue without any further work. Note that
you should go to the latest on all servers in the FRS replica set, so in the
case of SYSVOL that would be all DCs in the domain.

~Eric

--
Eric Fleischman [MSFT]
Directory Services
This posting is provided "AS IS" with no warranties, and confers no rights.
 
Eric Fleischman [MSFT]

I just read this a bit more closely.
In addition to my last post, 660 MB is nowhere near large enough for 300 GB
in most situations.

It's tough to answer what the size should be without more details on your
usage. What are the average and largest file sizes, and what kind of change
frequency are we looking at? How many replicas? Over WAN links, or all local
replica partners?

~Eric

--
Eric Fleischman [MSFT]
Directory Services
This posting is provided "AS IS" with no warranties, and confers no rights.
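Eric's sizing question can be made concrete. Below is a rough estimator in Python, based on the commonly documented FRS guideline that the staging area should be able to hold the largest files in the replica set; the 128-file figure and the 20% headroom are assumptions for illustration, not hard rules.

```python
import os

def recommended_staging_kb(file_sizes_bytes, n_largest=128, headroom=1.2):
    """Sum the n largest file sizes, add some headroom, and return KB."""
    largest = sorted(file_sizes_bytes, reverse=True)[:n_largest]
    return int(sum(largest) * headroom / 1024)

def replica_file_sizes(root):
    """Collect file sizes under a replica tree on disk."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                sizes.append(os.path.getsize(os.path.join(dirpath, name)))
            except OSError:
                pass  # file vanished mid-scan; skip it
    return sizes

# Example: ~100K files of ~3 MB each plus a handful of ~1 GB outliers,
# similar to the replica set described in this thread.
sizes = [3 * 1024 * 1024] * 100_000 + [1024 ** 3] * 10
print(recommended_staging_kb(sizes))
```

For a data set like the one in this thread, the result lands in the low tens of GB - far above the default staging limit, which matches Eric's point.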
 
Mark Ramey

Slim,

From your original post it looks like the root directories are being
replicated. This is a no-no. Creating the root and replicating it is fine,
but the root needs to be empty. Create links below the root for the data
folders.

DFS was mainly designed for data that does not change constantly. How much
data is changing/replicating will determine the size of the staging folder.
If you want to replicate 300 GB of data daily, then DFS is most likely not
going to work for you. Even though the largest directory is 150 GB, I would
assume that there is a large number of files. What is the largest file that
you're attempting to replicate?

Also, in a previous post you mentioned "staging" the data over to the
second server. This will not save you any replication. The proper way to
pre-stage a replica can be found in the article below.

266679 Pre-staging the File Replication Service Replicated Files on SYSVOL
http://support.microsoft.com/?id=266679

Basically you need two replicating partners before you can pre-stage any data
on a third replica. There are certain steps that you have to follow so that
the MD5 checksums are created.


--
Mark Ramey [Microsoft]

This posting is provided "AS IS" with no warranties, and confers no rights.

Slimline said:
Eric,
Thanks for the responses, this may help:

I'm running 2003 so I was unaware of any SPs available.

Also, what about initial replication? My largest single directory is 150 GB,
and I was told by others here that I should not do an initial manual copy of
the data to each root target but should instead let FRS do it. However, even
when I increase the staging area to 16 GB, the replication eventually fails.
That is, I get 1-3 warnings that the staging area is full and that old files
are being deleted from it (down to 60%), and all seems well. Then, for some
reason, replication just stops at, say, 25 or 30 or 35 GB. These are clean,
unmodified installs, and FRS just seems extremely unreliable and unusable in
anything but very small environments.

My set-up is simple: 2 Server 2003 AD servers in one domain and a couple of
test machines (all updated XP installs). One network with all the systems on
it, and one workgroup on which the clients reside. The clients do not log on
to the domain; since this is a lab environment I'm trying to keep it simple.
Replication takes place between the 2 AD servers, which are a couple of
inches apart. All the systems are connected to a single basic 8-port GigE
switch. All systems have GigE ports and manual copying is very fast.

I also keep getting warnings on each AD server stating that there is no IP
address for the server for the site it's in (Event IDs 5802 and 5807,
NETLOGON). Other than that I get no errors or warnings, and all the systems
can see and talk to each other.

Scott



The FRS staging area is limited to 660 MB by default. See
http://support.microsoft.com/?kbid=264822 for how to change that.

--
Regards

Matjaz Ladava, MCSE (NT4 & 2000), Windows MVP
(e-mail address removed)
http://ladava.com
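The KB article Matjaz links changes this limit through the NtFrs service's Parameters key. A sketch of the registry value involved follows; the value name is the one documented in that article, the 16 GB figure is only an example, and the NTFRS service must be restarted for the change to take effect.

```
Windows Registry Editor Version 5.00

; "Staging Space Limit in KB" is read by NTFRS at service start.
; 0x01000000 = 16,777,216 KB (roughly 16 GB) -- an example, not a recommendation.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters]
"Staging Space Limit in KB"=dword:01000000
```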

 
Slimline

Mark,
I think you've hit on my problem, as I found a reference in an MS article
that says not to replicate domain roots. However, I only have 2 AD servers
and I only want them to provide file services. Most of the time there aren't
a lot of changes to the 300 GB, but I'll sometimes reorganize the data, and
that's when there may be a lot of changes. The largest files are just under
1 GB and the total file system has maybe 100K files.

When I create a domain root it asks me for a folder to share, so I tell it
\Tmp on AD1. Then I set up a root target and tell it \Tmp on AD2. After that
I set up replication between these two. These folders then appear as shares
to my clients, and I've been copying \Tmp\*.* from a client to \Tmp on AD1.

From what you've said and what I've read, this is wrong. I'm finding that
\Tmp on AD2 is a perfect copy of \Tmp on my client, but \Tmp on AD1 may have
anywhere from 800 MB to 1.2 GB of incomplete data in it.

So here is my question: with only 2 AD servers (running 2003), how do I set
up file shares for all my clients (for example, My Documents on John Smith's
computer is actually a replicated share in the domain) and have them
replicate in case one server goes offline for some reason?

Is this even possible with 2 AD servers? Could I use a \Tmp2 folder on AD1
as a link to \Tmp (the domain root) on AD1 to make it look like I had a root
and two targets? Do I need root links, or can I just use 2 root targets?
This level of operational detail does not seem to be documented anywhere
that I can find, so thanks for the help - I've been struggling with this for
2 weeks.

Scott

 
Mark Ramey

Share out Tmp and Tmp2 with no data in the folders. Replicating the root is
perfectly acceptable as long as there is no data being replicated. Once this
is done, create links in the DFS GUI to other shared folders.

You will have to share out one of the folders you are replicating and create
a folder on the destination server for it to accept the information. Don't
go replication-crazy; add folders little by little and let them replicate
before adding more.

So under your root name in the DFS GUI you should see links on both servers;
set up replication this way.

You can also replicate member servers if you want; the only thing you have
to do is set the FRS service to automatic startup and start the service. By
default the service is set to manual startup.


--
Mark Ramey [Microsoft]

This posting is provided "AS IS" with no warranties, and confers no rights.

 
Slimline

Mark,
Thanks for the reply, but this is very confusing.

I'm running 2 identical Server 2003 domain controllers in a single domain.
The FRS service is started automatically (according to the Services panel).
Here is what I've been doing - please tell me where I'm going wrong, because
the instructions below are confusing. I'm only sharing/testing with a single
folder - \Tmp.

1.) On server AD1 I run the DFS panel and select "New Root".
2.) I select "Domain Root".
3.) It asks for a domain name and I give it my domain name
4.) It asks for a server name to host the root and I select AD1 from the
list (of AD1 and AD2)
5.) It asks me for a root name and I give it "Tmp"
6.) It asks for a folder name and I select N:\Tmp from the folder selection
dialog
7.) Done with new root
8.) On my new root "Tmp" I select "New Root Target" and then give it server
AD2 and folder N:\Tmp on AD2 for the root target folder.
9.) On the new root I configure replication.
10.) I select AD1 as my initial master and a full mesh topology (I've tried
ring and hub/spoke as well, but they all work the same with only 2 servers).

N:\Tmp on both AD1 and AD2 is empty, and permissions on both allow Everyone
full control (for testing purposes only).

On my client XP machine I drag the contents of my test folder (roughly 10 GB
worth) to the \Tmp folder shared from AD1 in my domain and let it rip. After
about 25 minutes the copying stops, and the \Tmp folder on AD2 perfectly
matches the original test folder on my client. However, in \Tmp on AD1 (my
primary AD server) I get only 1-2 GB of partial data.

I've tried setting up some links, but this is confusing. Sorry to be dense
about this, but if you can interject in my cookbook instructions above what
I've misconfigured, I'd appreciate it. I'm trying to keep the set-up as
simple as possible for reliability.

Scott

 
Slimline

Also, before starting the DFS set-up, the AD1 and AD2 \Tmp directories are
empty and not shared at all. I've even tried the set-up (per my
instructions) with no \Tmp on AD1 and AD2 - creating them when setting up
the root and root target. Nothing has had any effect - the results are still
the same.

What I would really like is for a single share to show up on my clients.
That share would then reside on AD1, or automatically switch to AD2 if AD1
goes offline for some reason. This seems like the way DFS should work in the
first place - abstracting the share-server connection.

Scott
 
Mark Ramey

OK, I'll try to make this short and simple:

1) In the DFS GUI, create the new domain-based DFS root and point it to the
TMP folder on AD1.
2) In the DFS GUI you should now see the root listed as \\AD1\<root name>,
and on the right it should list the replica as \\AD1\TMP.
3) Right-click the root, select New Root Replica, and browse to AD2 and the
TMP2 share.
4) Now you should see \\AD1\TMP and \\AD2\TMP2 on the right.
5) On AD1 create a new share and call it data1; on AD2 create a new share
and call it data2.
6) Right-click the new root, select New DFS Link, give it a name (data), and
browse to the data1 share.
7) Now you should see the link below the DFS root. Right-click the link,
select New Replica, browse to the data2 folder on AD2, and under Replication
Policy select manual.
8) Now when you highlight the link you should see \\AD1\data1 and
\\AD2\data2 on the right as replicas.
9) Right-click the link data, select Replication Policy, select one of the
shares to be master, and enable the other to replicate. At this point it
does not matter which is master, as they are both empty.
10) Wait and monitor the FRS event logs and make sure the connections are
built.
11) Copy your data into the data1 or data2 folder and let it replicate.

This is what we mean by "do not put data in the DFS root to replicate": use
links below the root and replicate the data that way.


--
Mark Ramey [Microsoft]

This posting is provided "AS IS" with no warranties, and confers no rights.
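Mark's layout - an empty replicated root with the data carried by links below it - can be sketched as a small data model. This is purely illustrative (there is no such DFS API); the share names follow his example, and the domain name is made up.

```python
# Conceptual sketch of the namespace Mark describes: the root is replicated
# but kept empty; each link below it maps one logical path to physical
# shares on both servers. Purely illustrative -- not a real DFS API.

from dataclasses import dataclass, field

@dataclass
class DfsLink:
    name: str            # logical name clients see under the root
    targets: list[str]   # physical shares that back the link

@dataclass
class DfsRoot:
    path: str                       # domain-based root path
    root_targets: list[str]         # root shares; must stay empty of data
    links: list[DfsLink] = field(default_factory=list)

    def resolve(self, link_name: str) -> list[str]:
        """Return the physical shares a client could be referred to."""
        for link in self.links:
            if link.name == link_name:
                return link.targets
        raise KeyError(link_name)

root = DfsRoot(
    path=r"\\example.local\Temp",   # hypothetical domain name
    root_targets=[r"\\AD1\TMP", r"\\AD2\TMP2"],
)
root.links.append(DfsLink("data", [r"\\AD1\data1", r"\\AD2\data2"]))

print(root.resolve("data"))
```

The point of the model: clients resolve `\\example.local\Temp\data` through the link, so the root targets never hold data and only the link's targets participate in FRS replication.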



 
Slimline

Mark,
Thanks for the instructions; it seems to be working now. I copied 11 GB from
\Tmp on a client to \Data1 on AD1, and a few minutes after it was done
copying, it started replicating from AD1 to AD2. I increased the size of my
staging space from the default 675 MB to 6.75 GB to help stop the
staging-full warnings and delays. Just a couple of last questions:

1.) When you say "root replica", in Server 2003 you mean "root target",
correct? There is no reference to replicas in 2003.
2.) Data1, Tmp and Temp (the name I gave to my domain root) all show up as
shares on my client from both AD1 and AD2. "Temp" points to \Data1 on AD1
(or, respectively, on AD2 for the Temp on AD2). Is there any way to hide
Data1 and Tmp and present just the domain root (Temp), so that I don't need
to be concerned with users seeing anything but the domain root? The system
will allow me to copy data to any one of the three shares, and even though
Temp and Data1 are the same, I wouldn't want to mistakenly copy data to \Tmp
(the actual domain root target).
3.) I set up replication manually (custom in 2003) and told it to replicate
from AD1 to AD2 only. Should I set up replication from AD2 to AD1 for those
times when AD1 may be offline?

4.) On both AD1 and AD2 I periodically (about every 3-4 hours) get the same
warning: "None of the IP addresses (192.168.6.189) of this domain controller
map to the configured site 'DR'." 192.168.6.189 is the IP address for AD2. I
get the same warning on AD1 for its IP address. Do you know what's causing
this?

Scott
 
