Can someone please explain the difference between server model and...


marsha

I am now curious about the difference in the way two different systems work.
One is a network system where one computer shares a folder with data that
any user can change. That folder on the one computer is mapped to the other
users. I basically understand this workgroup model.

But what I don't understand is a domain model where the data computer runs
server software. I have never used such a system. First, let's say we are talking
about Windows Server 2003 or whatever is the most popular server software.

That is the operating system for that computer, isn't it?
How does the server provide the data to the users in a more reliable way than
a networked system?

Thanks for any understanding on this.
 

CJT

marsha said:
I am now curious about the difference in the way two different systems work.
[...]
How does the server provide the data to the users in a more reliable way than
a networked system?
I think you're dealing with a distinction without a difference.
 

Jim

Both systems as you described them are networks. One is peer-based, the
other client-server.

In the peer-based network, such as that which you experience w/ say, 3 or 4
Windows PCs running WinXP, Win98, etc., in a "workgroup", there is no
designated server, that is, one PC which provides services to all the other
PCs on that network, typically 24/7. In a workgroup, all PCs have "equal
standing", that is, they come and go offline/online at any time. You
typically can't count on any peer being available 24/7 to provide you with
services (e.g., Outlook Exchange server, SQL Server database, IIS Web
Server). Peers share their resources "when available". And there's no
hierarchy of access, and no centralized user control either. Peers define
THEIR OWN shares and services (if any), and who can and can't have access
(if anyone). It is a TOTALLY DECENTRALIZED network! This is comparable to,
say, using the BitTorrent network. You have no idea what peers may be
available at any time; they come and go. And who you will obtain needed
services from (in this case, file download) is totally dependent on
happenstance. In fact, no one may be available at all. Even if a peer is
down, the network continues to function normally because there is no
hierarchical structure; any peer is as good as any other.

In a client-server model, you typically have one or more domain
controllers/servers. These are designed to share distributable services
throughout the domain/network. User accounts and privileges are
centralized. So is access to services like Outlook Exchange, SQL Server,
etc. The domain controller(s) is typically running 24/7 because the domain
controller is part of a hierarchy, w/ the domain controller (or server)
being necessary for the control and smooth operation of that network. You,
as a client, expect that server to be available when you need access to its
shared resources. This is comparable to say, the old Scour file sharing
network, where if the central server goes down, a major component/feature of
the network is inaccessible! No centralized server, no network
(effectively). Of course, the advantages of a centralized server are many,
including user accounts w/ access controls, indexed databases, centralized
storage (therefore, less redundancy), reliability, etc.

Each type, peer-based or client-server, has its advantages and
disadvantages. For the home user or small office, usually peer-based
networks are simpler to construct and manage. Most ppl just want to share
some folders, printers, etc., and have only 2-3 PCs at most. Peer-based
networking is more than adequate. For large organizations, however,
peer-based simply won't cut it. Organizations have far too many PCs to make
a peer-based architecture practical; they need tight controls on user accounts
and access, they want to share large, complex services like databases, and
they need 24/7 uptime reliability, etc.
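
To make the file-sharing piece concrete: whether the shared folder lives on a
casual peer or on a dedicated server box, Windows clients reach it the same
way, over Windows file sharing. A minimal sketch from any client's command
prompt; the machine name DATAPC, the share name Data, and the drive letter
are made up for illustration:

    rem List the shares that \\DATAPC is offering
    net view \\DATAPC

    rem Map that share to a drive letter and browse it like a local disk
    net use Z: \\DATAPC\Data
    dir Z:\

The difference between the two models isn't in these commands; it's in what
stands behind \\DATAPC, a part-time peer or a machine dedicated (and expected)
to be there.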

HTH

Jim
 

marsha

Jim said:
Both systems as you described them are networks. One is peer-based, the
other client-server.

Thanks Jim, excellent info. I realize I need to describe my very real system
so that I can ask a couple of questions.

I have maintained a network of 6 users for a couple of years. It is a LAN
workgroup network made up of xp and w2k machines. We use specialized software
that needs to access a data folder. It is very simple in that the data is kept
on a separate computer. All the data is in subfolders within the data folder.
The data folder is shared, of course. The computer that holds the data folder
lists the 6 other computers as users, all with administrative privileges, since
they must all change the data. Obviously, every computer is part of the same
workgroup name. Each of those users has a password associated with their
computer name. The data folder is mapped on each of the user computers. So
every morning we turn on the server and then the user computers. Each user
computer reconnects to the mapped drive. The system works fairly well except
that mapped folders do disconnect now and then. And for whatever reason, the
passwords for each user have to be reset on occasion.
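
One small thing that sometimes helps with mapped drives that drop or keep
asking for passwords is to recreate the mapping with an explicit user name and
persistence. A minimal sketch, run once at each user PC; the server name,
share name, drive letter, and account name are made up for illustration:

    rem Drop the old mapping, then recreate it, prompting (*) for the password
    net use Z: /delete
    net use Z: \\DATAPC\Data * /user:DATAPC\office1 /persistent:yes

/persistent:yes makes the mapping itself come back after a reboot; whether the
password is remembered too depends on the client OS and how the accounts on
the sharing PC are set up, so treat this as a tidy-up, not a cure.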

The problem is that none of the 6 users know much about computers. They know
how to use the software. But over the years, all kinds of files have been put
in the wrong place, etc. You can imagine how these problems have added up, I'm
sure.

The computer that held the data folder has gotten too slow (it's old). We just
bought a new computer. Hence the questions.

The software developer said that soon we would need to go with server
software. I know nothing about server software. We want to keep the system as
similar as possible to how we do it right now. Obviously, we don't need a web
server. We just need to "serve" the data that was previously only a shared
folder.

With all of that in mind, which server software do you think we need?? And how
does the server software "serve" the data? The way it works now is a user will
click on the application, which needs to pull up some photos and a report.
They don't need access to the data folder until and UNLESS they launch the
application. And when they bring it up, they will probably change the report
some and put it back in the data folder so that other users have access to it.
The main problem is concurrent users trying to share the data folder. There is
a limit in a LAN network to the number of concurrent users. At least, that is
what the software developers have said.

How does a server make the system any better? Is it more reliable? Is it
faster? Right now, with a network situation, there is a slowdown with
concurrent users. Does that problem go away with a server?

Thanks in advance for any insights you have about what I have said!!!
 

Jim

Marsha,

As you can imagine, once you start delving into network architecture, the
conversation can get VERY complex. As a consultant, I would be asking a wide
variety of questions to determine your needs and what would work best. In a
forum such as this, I can only provide a brief overview. Even the answer to
a seemingly simple question on my part might lead me to a different
conclusion/suggestion. So it's important that you do find a GOOD
consultant. With that caveat...

Yours is a common problem. A peer-based network typically doesn't scale
well when it comes to users and data growth. For a small office, let's say
3-4 people, a few shared document repositories, shared printers, etc., it
works well enough. But as you add more users, and more data, and more
shares, etc., it just doesn't scale well. Pretty soon you find multiple
copies of files scattered across many PCs (as users become frustrated with
the lack of access to what is supposed to be the one and only source).
Pete doesn't come in one day, no one knows his password, and suddenly we
have to reach Pete who's on a business trip to London, and ..., oh well, you
get the picture. And of course, access control is so limited, and
distributed itself, you soon have 5-6 versions of the same document, and
pretty soon no one's sure which is the most current and accurate one.

Your typical Windows peer-based network doesn't scale well for another very
practical reason, which is probably what most concerns your software
developer (?) -- Microsoft doesn't want people trying to make a peer-based
infrastructure such as yours into a client-server architecture, so they
purposely CRIPPLE Windows XP Pro, for example, to only allow 10 concurrent
network connections. AND THIS IS IMPORTANT -- that 10 count includes OTHER
network connections that PC may make, such as doing a Windows Update,
updating the Anti-Virus software's signatures, providing web services, etc.
IOW, your users will find they sometimes cannot connect to the shared
resources *if* those 10 connections are exhausted! So connectivity will
prove intermittent as you add more concurrent users. That's by design.
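
If you want to see that ceiling in action, the PC that hosts the share can
list its inbound file-sharing sessions from a command prompt. A minimal
sketch, run on the sharing PC itself (no made-up names needed here):

    rem Computers currently connected to this PC's shares
    net session

    rem Shared files those sessions currently have open
    net files

When users start getting refused, counting the entries net session reports
against that 10-connection ceiling is a quick sanity check.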

What Microsoft wants you to do when you start to get BIG is move to a
client-server architecture. In that case, your peer-based network is
replaced w/ a centralized server which defines and manages user accounts,
provides shared document repositories, maybe a SQL database, mail server,
etc. And most importantly in your case, POTENTIALLY unlimited connections.
Of course, Microsoft will charge considerably more for the server software
based on # of user accounts, concurrent connections, etc. It's all part of
the $$$ game @ MS.

If you're simply interested in adding a server (e.g., W2K or W2K3) to
your network (as opposed to a domain controller w/ user accounts, access
control, etc.), then a server will add more concurrent connections, and you
could force everyone to use THAT server for shared resources. IOW,
eliminate some of these peer-based shares. That *will* eliminate that one
problem. It doesn't in and of itself address the issue of concurrent
access, i.e., prevent multiple users from trying to read/write at the same
time. That's a function of the application (and that capability varies
widely) and if applicable, the database(s) you're using. As far as being
faster, more reliable? Again, in and of itself, using a server doesn't make
this happen. A server like W2K and W2K3 is designed to handle some things
more efficiently than say, WinXP. A server, for example, will typically be
"tuned" to give background processes higher priority, as compared to
foreground apps, like your desktop. That's because a server is not
typically used for foreground apps; the UI is only accessed for
administrative functions. In contrast, your peer-based colleagues using WinXP
have higher priority given to foreground apps, since these are typically
being used as client workstations, where the UI is actively used for a variety
of tasks. So giving up CPU cycles to feed hungry peers w/ data is NOT a high
priority. Granted, on a small network, this is not usually a big
difference, but it is different. Same thing w/ reliability. A server can
be installed on a bigger, more reliable PC w/ lots of redundant hardware,
backup software, UPS, etc., if that's what you want. But at its core, the
server is just another OS that's been optimized for server operations. It's
no more intrinsically reliable than your client PCs. You only make it more
reliable by installing a server OS on better, faster, and more reliable
hardware. Nothing just MAGICALLY makes the server better or more reliable.
You can install W2K on the same hardware as your WinXP machines and have
absolutely no more reliability than before. But most people who move to a
server also upgrade the hardware to MAKE it more reliable, faster, etc. It
only makes sense since you are typically using it for critical functions,
centralized data storage, etc. IOW, while a given peer may be expendable,
your server usually isn't.
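
For the curious, that foreground/background bias is an ordinary Windows
setting rather than magic. A minimal sketch of how you could peek at it
(reg.exe is built into XP and W2K3; on W2K it comes with the Support Tools),
treated as a look rather than a tuning guide, since the value's meaning varies
by Windows version:

    reg query "HKLM\SYSTEM\CurrentControlSet\Control\PriorityControl" /v Win32PrioritySeparation

The same knob is exposed in the GUI under System Properties -> Advanced ->
Performance settings, as the choice between favoring programs/applications or
background services.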

As far as network traffic slowdowns, adding a server is probably not going
to provide any advantages. If you're having network performance issues,
there may be something else going on there. For example, if you were using
a hub (vs. a switch), a hub does not scale well either. As more users are
added, collisions on the hub (which require retransmission of ethernet
packets) eventually bog down the network. Installation of a switch would
correct that problem. Then there's the issue of 10/100BaseT vs. Gigabit
ethernet, the latter would improve network speed too, assuming your clients
had the necessary network adapters. IOW, there are a LOT of things that
affect network performance; I'd have to examine your network closely, your
working habits, how you conduct business, etc., over a period of time to
determine where problems lie and offer appropriate remedies. Based on a NG
discussion, I'd simply be throwing darts to do anything more than give a
broad spectrum of ideas to consider.

Jim
 

marsha

Jim said:
Yours is a common problem. A peer-based network typically doesn't scale
well when it comes to users and data growth.
[...]

If you're simply interested in adding a server (e.g., W2K or W2K3) to
your network (as opposed to a domain controller w/ user accounts, access
control, etc.), then a server will add more concurrent connections, and you
could force everyone to use THAT server for shared resources. IOW,
eliminate some of these peer-based shares. That *will* eliminate that one
problem.


Super information - all of it!!! You have raised another question. Is a
domain controller w/ user accounts an intermediate step? Our office truly
doesn't need all the stuff served. Only one machine gets the email via the
connection to the internet.
 

Jim

marsha said:
Is a domain controller w/ user accounts an intermediate step? Our office
truly doesn't need all the stuff served.

You don't HAVE to move to a full blown domain, i.e., w/ user accounts.
Nothing prevents you from installing W2K/W2K3 as just another workgroup peer
(you're asked upon installation whether the server should be installed as a
workgroup or domain). While you lose some of the advantages of a domain,
you do gain the advantage of more concurrent network connections, an OS
optimized for background operations, centralized repositories, etc. IOW,
while the server would be installed just like any other peer (you could
share its folders, printers, etc., just like any other PC), you "treat" it
as if it has hierarchical predominance; that isn't enforced through domain
control, but rather by simple convention. You run it (typically) 24/7 and
have everyone use ONLY it for shared documents and resources. This doesn't
preclude anyone from still sharing resources among the other peers, just as
before, but with the presence of the server, you establish a convention that
your staff must use the server for "shared" resources. If they, on their
own, continue wanting to share personal resources, they certainly can do so.
But you have to make it clear this is NOT something that should be done for
documents and resources that are intrinsically shared within the entire
network.
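
As a concrete illustration of that convention, the central repository is just
an ordinary Windows share created on the server. A minimal sketch, run on the
server; the folder path and share name are made up for illustration:

    rem Create the folder and publish it as the one "official" share
    md D:\Data
    net share Data=D:\Data /remark:"Office data folder"

    rem List everything this machine is currently sharing
    net share

Clients then map \\SERVERNAME\Data exactly as they would map a share on any
peer; only the location, and the expectation that it will always be there,
changes.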

Here's where a domain proves its worth. Suppose Pete or Susie decides they
don't want to follow convention? Or simply forgets. Instead, they continue
to share resources at the peer level. Without a domain and domain
controller, and thus user accounts, you have no means to FORCE them to use
the server. If you were using a domain, you'd have the ability to ENFORCE
rules through control of user privileges that could otherwise be violated.
That's the rub about NOT using a domain. You don't have enforcement, only
convention to work with. Granted, for a small office, this may be
unnecessary. But if you have a continuing problem of enforcing conventions,
even among a small group of users, even if only by mistake, a domain can
provide that enforcement.

If anything, I'd say installing a server w/o user accounts is the
intermediate step. IOW, installing it ONLY as a simple server gives you
some of the benefits of a server, w/o (what some might consider, such as
yourself) the burden/overkill of user account and access control.
Eventually you may wish to reinstall the OS at a later date as a domain
controller, add user accounts, etc., if you feel this would provide more
control of your network. With a domain, you do gain significant control of
your network in ways no peer based network can match. But for small
operations, limited and well known users, domains can be overkill. Using a
small office network myself, I rarely use a domain, just too much hassle for
only myself and a few family members. But I do use NT, W2K, and W2K3 from
time to time to take advantage of other server features. Unlimited
concurrent network connections is a significant one, but access to other
services can be just as important. You can't even install some MS services
on anything BUT a server. So if you want to run your own Exchange Server
(thereby centralize email repositories and control), or run a full-blown
database (e.g., SQL Server), a server would be for all practical purposes
the only route (there are some versions of MS software that can run on
simple WinXP/Win98 peers, but they are often limited in features and capacity).
Once you install a server, it only makes sense to move more and more of your
now peer-based operations to it. So, for example, you might even make it
the network router! It could provide DHCP, NAT, firewall, and other kinds
of Internet services, maybe a proxy server to allow centralized filtering,
all kinds of things. You could even make it support remote access by
setting up VPN services on it. Another example of why 24/7 w/ a server is
typical. And why "beefing up" that server machine is often an obvious step
in the process: you're placing more and more dependence on that one server.

So if you want to narrowly focus on installing a server w/o a domain and
address some of the problems, you certainly can. People do it all the time.
Then perhaps decide later to upgrade the install to a domain if the
situation dictates.

Jim
 

marsha

Jim said:
You don't HAVE to move to a full blown domain, i.e., w/ user accounts.
Nothing prevents you from installing W2K/W2K3 as just another workgroup peer
(you're asked upon installation whether the server should be installed as a
workgroup or domain).

Now I am confused. This is what we have done... I think, but with w2k. Or
is it that by w2k/w2k3 you are talking about the server software???? If so,
then I understand. I'm sorry. I am so ignorant of server stuff that I don't
even recognize that nomenclature, if that is what that is.
While you lose some of the advantages of a domain,
you do gain the advantage of more concurrent network connections, an OS
optimized for background operations, centralized repositories, etc. IOW,
while the server would be installed just like any other peer

Now that sounds interesting. Very simple, if I understand correctly. I just
install the server software as the OS and configure it basically the way we
have done. But it has advantages over xp home or w2k. Did I understand you
correctly???
(by the way, thanks again over and over for your help!!! :)
You run it (typically) 24/7 and

We don't run 24/7. It is hard enough to get people to show up at 9am. lol
have everyone use ONLY it for shared documents and resources. This doesn't
preclude anyone from still sharing resources among the other peers, just as
before, but with the presence of the server, you establish a convention that
your staff must use the server for "shared" resources. If they, on their
own, continue wanting to share personal resources, they certainly can do so.
But you have to make it clear this is NOT something that should be done for
documents and resources that are intrinsically shared within the entire
network.

So basically, it would be set up exactly as we have our system now.
Here's where a domain proves it worth. Suppose Pete or Susie decides they
don't want to follow convention? Or simply forgets. Instead, they continue
to share resources at the peer level. Without a domain and domain
controller, and thus user accounts, you have no means to FORCE them to use
the server. If you were using a domain, you'd have the ability to ENFORCE
rules through control of user privileges that could otherwise be violated.
That's the rub about NOT using a domain. You don't have enforcement, only
convention to work with. Granted, for a small office, this may be
unnecessary.

We have absolutely no problem with people NOT following the convention. They
know so little about computers that they do exactly what I say. (With some
mistakes, of course. And they sneak a great deal of use of the internet, but
that's life...:)
But if you have a continuing problem of enforcing conventions,
even among a small group of users, even if only by mistake, a domain can
provide that enforcement.

We don't need that.
If anything, I'd say installing a server w/o user accounts is the
intermediate step. IOW, installing it ONLY as a simple server gives you
some of the benefits of a server [...] But I do use NT, W2K, and W2K3 from
time to time to take advantage of other server features. Unlimited
concurrent network connections is a significant one, but access to other
services can be just as important.

I need unlimited concurrent network connections!!!!! That is one of our
biggest limitations with our current setup!!

[...]
So if you want to narrowly focus on installing a server w/o a domain and
address some of the problems, you certainly can. People do it all the time.
Then perhaps decide later to upgrade the install to a domain if the
situation dictates.

That sounds like the perfect compromise!!!
Thanks Much!!!!!
 

Jim

When I said you run the server 24/7, I was talking about the typical case.
To better understand what I'm driving at, let's consider the current
situation, w/o a server.

Imagine you have 4-5 ppl in the office, each night they turn off their PCs,
typically at different times as they leave. Yet, if Bill turns off his PC
and Marsha still needs access to Bill's resources, Marsha is out of luck.
She has to wait for Bill to return in the morning and restart his PC. OR,
Bill leaves his PC running (at Marsha's request), but Marsha promises to
shutdown Bill's machine before she leaves. Maybe Marsha does, maybe she
doesn't, she simply forgets. Now multiply this scenario by 3 or 4 more
users, and well..., you get the picture. In a "server-less" environment,
everyone is a peer, and so ppl (and their PCs) come and go. This is a
distinct disadvantage *if* you must gain access to a specific peer at any
given time. Peer-based networks don't make such promises, by definition.
Again, peers come and go. But whenever you have constructed a peer-based
network that builds in a dependence on a specific machine, or set of
machines, you have, in fact, created SERVERS! No, they are not servers in
the traditional sense of NT, W2K, or W2K3, but they are servers in the
*logical* sense that they are expected to be available whenever you need
them.

It is in this sense that I referred to a server typically running 24/7! If
what I described above sounds similar to what you experience in your own
situation, you have in fact, created servers, again, not in the
hardware/software sense, but "conceptually", "as used". Yeah, you "call it"
a peer-based network, but you're fooling yourself. What you have in fact
attempted to do (if unwittingly) is FORCE a peer-based network to work as if
it was a client-server network, but done so by relying on hardware/software
that was never intended to be used in this fashion. That's what happens as
organizations grow out of the peer-based architecture into a client-server
based architecture. They try to "stretch" the peer-based hardware/software
to such an extent that it is *operationally* a client-server network; it is
peer-based in name only. That's when the problems start. It's like trying
to continually retune your stock Ford to compete w/ the big boys at the
track. Sooner or later, if you're serious, and expect to compete, you have
to recognize that you have to move into a whole new league of equipment and
knowledge. Similarly, your peer-based network cannot address your growing
need for a client-server architecture indefinitely (assuming I have
accurately described your situation). You had your first taste of this
inevitability when you exhausted the connection limits of XP! If your
peer-based network is failing due to exhausted network connections, then you
are, by definition, trying to force a "square, peer based, peg" into a
"round, server based, hole"!

Anyway, technically, no, you wouldn't HAVE to run 24/7 either. Yes, you
could have the server booted each morning (first person in boots it), and
shut it down nightly (last person out shuts it down). That would certainly
work. It's definitely an improvement over a WinXP peer in that no one
typically takes ownership of the server as "their PC", it's truly shared.
But at some point in the future, you may wish to provide remote access (for
example), so you could work from home, send/retrieve documents on the road,
etc., in which case, you almost certainly would want to be running 24/7. At
least the use of a server, even one running only during the day, allows you
to move in that direction rather easily.
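
If the "last person out shuts it down" routine ever becomes a nuisance, the
nightly shutdown can be scheduled instead of done by hand. A minimal sketch
using the built-in at scheduler and shutdown.exe (present on XP/W2K3; on W2K
the shutdown tool comes from the Resource Kit); the 7:00 pm weekday schedule
is just an example, and the Task Scheduler service must be running:

    rem Shut the machine down at 7:00 pm on weekdays, with a 60-second warning
    at 19:00 /every:M,T,W,Th,F "shutdown -s -t 60"

    rem "at" with no arguments lists the scheduled jobs; "at <id> /delete" removes one

Someone would still boot the box each morning, but at least nobody has to
remember to turn it off.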

Jim


IOW, the term "server" has multiple meanings, based on context. If we're
discussing server "hardware", we can talk about building in redundant
hardware, RAID arrays, high end components, etc. If we're talking about
server "software", we can talk about the OS (e.g., NT, W2K, W2K3), user
accounts and access control, centralized databases and repositories, etc.
But we can also talk about servers "conceptually": any machine the rest of
the network depends on to be available whenever it's needed, whatever its
hardware or OS.
 

marsha

Jim said:
If
what I described above sounds similar to what you experience in your own
situation,

No, this is not how we work. No one user is dependent on another user's
computer except when they need a printer that is local to that other user.
Three of the users have local unique printers that other users will use. But
when that rarely happens (they need another user's local printer and that
person is not there), they just put it off for a day. They never use files
from another user's computer. What our specialty software does is access the
data folder on that one "server" computer and also put the changed data back
on that one computer. That is it in a nutshell. The problem (the whole
problem) is when one of the users cannot access that one "server" computer.
The fix is to shut down the server computer and all the user computers, then
turn on the server computer, let it come up completely, and then turn on the
user computers. On a rare day, I will have to reset the passwords for one or
more users on the "server" computer. For some reason, that seems to fix
things so the users can then reconnect as they need to get data.

That is pretty much it as far as our little office goes. Each of the users
connects to a switch via ethernet. It is a D-Link DSS superswitch. That
connects to another D-Link DSS superswitch that connects to the DSL modem and
the "server" computer. There you have it. That's it as far as our system
goes.
If your
peer-based network is failing due to exhausted network connections, then you
are, by definition, trying to force a "square, peer based, peg" into a
"round, server based, hole"!

Everyone seems to be telling me that our system should handle more than it
is. But when users can't connect, they get very aggravated. We think that the
special software that we use is at fault. We have talked to many other
companies doing what we do, and they have had similar problems with our
software and have made the switch to better-known software and are now happy.
I guess we will have to do the same, but since everyone has told me that a
peer-to-peer network working as we are working should be able to handle more
concurrent connections, I thought I would try to make it work.

Thanks again Jim for all of your help!!!
 

Jim

marsha said:
No, this is not how we work. [...] The problem (the whole problem) is when
one of the users cannot access that one "server" computer. The fix is to shut
down the server computer and all the user computers, then turn on the server
computer, let it come up completely, and then turn on the user computers.
[...]
Everyone seems to be telling me that our system should handle more than it
is. But when users can't connect, they get very aggravated. [...] Since
everyone has told me that a peer-to-peer network working as we are working
should be able to handle more concurrent connections, I thought I would try
to make it work.

Ok. Obviously it's impossible for me to evaluate your software, whether it
indeed contributes to the problems. I would have to examine it
more closely and know much more about it to even begin to give relevant
feedback.

BUT, there does seem to be one consistent and explainable problem, and
that's the connection limit. If you're telling me that rebooting the
system(s) gets ppl connected again, then it's VERY LIKELY you're running
into the XP connection limits. To repeat, MS purposely limits concurrent
connections so ppl don't try to turn a peer-based network into a (pseudo)
client-server network. When you reboot your PC, you are effectively
clearing all your open network connections. I suspect this clears up the
problem because the very first thing clients do after reboot is attempt to
connect to the application. Since little if anything else has consumed
other network connections since that reboot, the chances of success are
greatly increased. But then over time, the various peers do other work,
connecting and disconnecting, including the peer w/ the shared files.
Eventually, inevitably, some time down the road, either the client-peer or
server-peer (for lack of better terms) exhausts its connections, and you're
stuck again.

Trying to manage these network connections w/ XP is very difficult because
you can use up network connections in very subtle ways. A typical PC is
opening, using, and dropping network connections all the time, with a
variety of software applications. Just running Internet Explorer can open
FOUR (4) network connections alone! More and more applications these days
are using the network for program updates, product activation, whatever, so
it doesn't take much to reach that 10-connection limit on even a modest
system. The situation is exacerbated when you try to turn one of those
"peers" into a quasi-server, since now *it* becomes the target of MANY
concurrent connection requests. IT JUST WON'T WORK WITH XP! Yeah, you can
get away w/ it when the application is rarely accessed, or the likelihood of
concurrent access is low. But that's not what typically happens as the
environment grows. That one machine is being asked to act as a "server",
but it just doesn't have the capacity (at least in terms of network
connections, and perhaps other limitations as well, like processing power,
access control, etc.).

If you wanted to continue stretching the current peer architecture, you
could perhaps spend time meticulously trying to limit network connections,
on both client-peers and the server-peer. It's a hassle, and not always
easy to do, but you might be able to improve the situation somewhat w/
careful analysis. For example, on the client-peers, it's possible to make a
registry change to limit Internet Explorer connections. So if a user visits
their homepage, instead of Internet Explorer opening up four connections to
download the webpage, graphics, etc., you might limit it to only one or two.
Obviously this would impact the performance of Internet Explorer, but it
would save some network connections for your primary application. On the
server-peer side, I wouldn't allow anyone to use that PC for foreground
applications. Anyone using that PC as their desktop is almost assuredly
going to consume additional network connections from time to time, and thus
hinder access by client-peers. You could also shut down unnecessary services
on the server-peer to further conserve network connections. It's a little
difficult for me to recommend specific services to shut down since I don't
know which you must have, which are only niceties, and which are running
needlessly. But it's not uncommon for ppl to have quite a few unnecessary
services running.
By default, XP turns a LOT of services ON so as to make the system highly
functional. But in reality, many are not needed by all users, and running
them only consumes valuable resources (including memory, CPU cycles, network
connections) that could otherwise be made available to other processes.
There are many resources on the web that describe these services and which
are most likely to be no problem in disabling for most ppl. If interested,
I'll find a few of these sites, just let me know.
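
As a rough illustration of those two tweaks (reining in Internet Explorer's
per-site connections, and pruning services), here is a minimal sketch. The
two registry values are the standard WinInet/IE per-server connection
settings; the service named below is only an example, so check what any given
service does before disabling it:

    rem Per-user limit on IE connections per web server (takes effect when IE restarts)
    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v MaxConnectionsPerServer /t REG_DWORD /d 2 /f
    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v MaxConnectionsPer1_0Server /t REG_DWORD /d 2 /f

    rem List services, then disable and stop one you are sure is not needed (Telnet shown as an example)
    sc query state= all
    sc config TlntSvr start= disabled
    net stop TlntSvr

sc.exe is standard on XP and W2K3; on W2K the same service changes can be made
through the Services control panel. Change one service at a time so it's
obvious if something breaks.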

But at a minimum, I suspect using NT, W2K, or W2K3 to solve your connection
limits would be a good first step. If you want to save costs, or at least
determine if this is indeed the problem, you could try some of the
techniques described above to see if you can improve the situation. If you
do have success, at a minimum, you at least KNOW what one source of the
problem is. But in the long run, I believe you will eventually succumb to
needing a true server. You could save a few bucks by perhaps picking up a
cheap release of NT off eBay, if only to see if it helps (can be had for the
price of lunch). NT is no longer supported by MS, but many organizations
still use it anyway. (btw, make sure it's NT *server*, not *workstation*).
It might be worth spending a few bucks on NT as an experiment, or perhaps
it will suffice for a couple years. Who knows. But it's an avenue I
would consider for your circumstances where cost may be a big factor.

HTH

Jim
 

marsha

Jim said:
BUT, there does seem to be one consistent and explainable problem, and
that's the connection limit. If you're telling me that rebooting the
system(s) gets ppl connected again, then it's VERY LIKELY you're running
into the XP connection limits.
[...]
But then over time, the various peers do other work,
connecting and disconnecting, including the peer w/ the shared files.
Eventually, inevitably, some time down the road, either the client-peer or
server-peer (for lack of better terms) exhausts its connections, and you're
stuck again.

I think you are exactly correct, but I am trying to understand the way
concurrent connections are counted. Remember, our "server" has always been a
w2k. I only said that the new computer had xp home on it. It wouldn't work at
all. I replaced xp home with another retail version of w2k which we also own.
However, that machine has still not worked except for a few hours. But I have
just downloaded and installed windows 2003 server on it. It has a generous
trial period with it, so I figure I can find out if it will help. I installed
the file share with it on the data folder. Monday I will test it out.

BUT, I am anxious to understand what you mean above. Even though I think you
are right on target, I want to be sure I understand. Are you saying that user
computers that are xp home or xp prof. might be fouling up the network because
they have concurrent connection limitations? So even though they do not hold
any of the data that the "server" holds, they still impact the "server"????

Trying to manage these network connections w/ XP is very difficult because
you can use up network connections in very subtle ways. A typical PC is
opening, using, and dropping network connections all the time, with a
variety of software applications. Just running Internet Explorer can open
FOUR (4) network connections alone!

Exactly!! And I'm told by the developers of our software that it can open
as many as 5 connections when a user accesses the data folder ("server").
[...] That one machine is being asked to act as a "server",
but it just doesn't have the capacity (at least in terms of network
connections, and perhaps other limitations as well, like processing power,
access control, etc.).

Remember, our "server" machine has always been a w2k.
[...] But
it's not uncommon for ppl to have quite a few unnecessary services running.
By default, XP turns a LOT of services ON so as to make the system highly
functional. But in reality, many are not needed by all users, and running
them only consumes valuable resources (including memory, CPU cycles, network
connections) that could otherwise be made available to other processes.

I'm sure you are 100% correct.
There are many resources on the web that describe these services and which
are most likely to be no problem in disabling for most ppl. If interested,
I'll find a few of these sites, just let me know.

Yes, thanks, I'm very curious. You might have detected that I am full of
curiosity and am a voracious reader.

It might be worth spending a few bucks on NT as an experiment, or perhaps
it will suffice for a couple years. Who knows. But it's an avenue I
would consider for your circumstances where cost may be a big factor.

Well, as I said, I have just finished installing Windows Server 2003
Enterprise R2. If it works, they will almost certainly buy it. The boss
spends money on things that work. It is just that I have to do it all. :)
 
