Gigabit switching

Steve

I would like to install some Gb switches and NICs. Will they work with my
existing Cat 5 cable, or do I need Cat 5e to do this?

Thanks,

Steve
 
Leythos

I would like to install some Gb switches and NICs. Will they work with my
existing Cat 5 cable, or do I need Cat 5e to do this?

The 1000BASE-T gigabit specification requires Cat 5 UTP cabling or
better, properly wired across all four pairs, to operate at 1000 Mbps.
With lower-grade cabling, or with the four pairs wired incorrectly, you
may still get a link, but you may also see data loss or slow
performance. You're limited to 100 meters between any two devices.

http://support.intel.com/support/netstructure/switches/480/30618.htm
 
Ryan Hanisco

Steve,

Per Cisco you can use Cat 5, but remember: like the minimum requirements
for Windows 2000 Server, just because it'll work doesn't mean you'll get
the performance you might expect.

Make sure your patch panels and jacks are at least Cat 5, implement Gb
over copper, and upgrade in a rolling fashion with the end goal of Cat 6
everywhere.

Also, remember that most Gb switches aggregate ports in the backplane.
This means you'll get a bottleneck if you locate high-utilization Gb
servers close together on the same switch or blade. (4506 blades have
this problem, as does all Foundry equipment.)

http://www.cisco.com/en/US/products/hw/switches/ps646/products_white_paper09186a008009268a.shtml

Ryan Hanisco
MCSE, MCDBA
Flagship Integration Services
 
jas0n

Steve,

Per Cisco you can use Cat 5, but remember: like the minimum requirements
for Windows 2000 Server, just because it'll work doesn't mean you'll get
the performance you might expect.

Make sure your patch panels and jacks are at least Cat 5, implement Gb
over copper, and upgrade in a rolling fashion with the end goal of Cat 6
everywhere.

Also, remember that most Gb switches aggregate ports in the backplane.
This means you'll get a bottleneck if you locate high-utilization Gb
servers close together on the same switch or blade. (4506 blades have
this problem, as does all Foundry equipment.)

http://www.cisco.com/en/US/products/hw/switches/ps646/products_white_paper09186a008009268a.shtml

Ryan Hanisco
MCSE, MCDBA
Flagship Integration Services

I'm about to put in a Cisco gigabit switch, but only gigabit for the
server; the rest of the network devices will be running at 100.

What real-world performance should I see over my existing 10/100 3Com
switch, given that the 20 PCs will be pulling data from a 1000 port
rather than a 100 port, assuming the current switch is fully utilised?
 
Leythos

I'm about to put in a Cisco gigabit switch, but only gigabit for the
server; the rest of the network devices will be running at 100.

What real-world performance should I see over my existing 10/100 3Com
switch, given that the 20 PCs will be pulling data from a 1000 port
rather than a 100 port, assuming the current switch is fully utilised?

If the server is the only gig device, then why would you expect to see
any difference?

The only performance difference would be the backplane in the switch,
not the port speed (since the workstations are only 100Base).
 
jas0n

If the server is the only gig device, then why would you expect to see
any difference?

The only performance difference would be the backplane in the switch,
not the port speed (since the workstations are only 100Base).

Because currently I have 20 PCs all hitting the server, copying large
files and lots of small files back and forth, so all 20 PCs are sharing
the server's 100Base connection to the switch. If we replace the switch
and the server gets a 1000 connection, my thinking was that the server
can send/receive more data more quickly, so each PC will see an increase
in speed up to its 100Base maximum. Is my logic here flawed?
 
Scott

Yes. You should try multiple NICs instead.

Scott
jas0n said:
Because currently I have 20 PCs all hitting the server, copying large
files and lots of small files back and forth, so all 20 PCs are sharing
the server's 100Base connection to the switch. If we replace the switch
and the server gets a 1000 connection, my thinking was that the server
can send/receive more data more quickly, so each PC will see an increase
in speed up to its 100Base maximum. Is my logic here flawed?
 
Leythos

Because currently I have 20 PCs all hitting the server, copying large
files and lots of small files back and forth, so all 20 PCs are sharing
the server's 100Base connection to the switch. If we replace the switch
and the server gets a 1000 connection, my thinking was that the server
can send/receive more data more quickly, so each PC will see an increase
in speed up to its 100Base maximum. Is my logic here flawed?

Seriously flawed. What you need is something called port trunking: you
install a card with more than one network port (they come with 2 or 4
ports in many cases, or you can use multiple network cards from the same
vendor as long as they support the feature).

With trunking you get X simultaneous communication paths, one per port.
This means the server can talk to X devices at the same time.

If you want to make your life faster, just get some cheap $25 gig NICs
and install them in the workstations that use the most network
bandwidth.

Think about your switch like this: each port gets a slice of the time to
talk with the server, but since all the ports talk at 100 speed (except
the server's), nothing gets to the server port any faster, so it doesn't
really gain you anything. When you move everything to gig, or at least
the machines that use the network the hardest, you will see an increase
in performance, and you may also see an increase in CPU load on the
server.
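The time-slice argument can be put in rough numbers. A minimal sketch, using idealized line rates and ignoring protocol and switching overhead:

```python
def per_client_mbps(server_uplink_mbps, n_clients, client_link_mbps=100):
    """Idealized fair-share throughput per client when all n_clients
    pull from the server at once (ignores protocol overhead)."""
    fair_share = server_uplink_mbps / n_clients
    # A client can never go faster than its own link.
    return min(fair_share, client_link_mbps)

# 20 clients behind a 100 Mb/s server uplink: the uplink is the bottleneck.
print(per_client_mbps(100, 20))   # -> 5.0 Mb/s each
# The same 20 clients behind a gigabit uplink: ten times more each, and
# any group of fewer than 10 simultaneous clients hits its full 100.
print(per_client_mbps(1000, 20))  # -> 50.0 Mb/s each
```

Real transfers will land below these figures (disk speed, CPU, and protocol overhead all take a cut), but the ratio is the point: upgrading only the server's uplink does help once several clients are active at the same time.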
 
Jetro

Did you ever monitor the server's performance? Are you sure the CPU,
RAM, and disk subsystem aren't the bottleneck?
 
Ryan Hanisco

Jason,

With only one gig port on the switch, the other devices will be able to
talk to the server at their full speed, and you will see an increase in
performance. The idea that there will be a speed limit at the lowest
common denominator is wrong. You could save the cost of Gb NICs and
switches, at the sacrifice of port density, by using trunking, but this
isn't really practical given the price drop in Gb NICs.

Cisco gear will buffer packets as they come into the switch and deliver
them to the server FIFO, so there will not be any problem with speed.

Make sure you hardcode the speed on the switch port and the server, turn
spanning tree off on the port (portfast), and turn SMB signing off.
There is a registry setting on the server and workstations that turns
SMB signing off; it'll cut huge lags off of your file sharing.
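The registry setting Ryan mentions lives under the LanmanServer (server side) and LanmanWorkstation (client side) Parameters keys. A sketch of the server-side change as a .reg file for Windows 2000 follows; back up the registry first, and note that disabling signing trades away packet integrity checking for speed:

```reg
Windows Registry Editor Version 5.00

; Server side: stop offering and stop requiring SMB signing.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
"EnableSecuritySignature"=dword:00000000
"RequireSecuritySignature"=dword:00000000
```

The matching client-side values live under the same path with LanmanWorkstation in place of LanmanServer. A reboot (or restarting the Server service) is needed for the change to take effect.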

Ryan Hanisco
MCSE, MCDBA
Flagship Integration Services
 
Leythos

You will see an increase in performance.
The idea that there will be a speed limit at the lowest common
denominator is wrong.

The speed from the backplane to the server, since it's gig, will be
faster.

The speed from the workstations to the switch will be the same.

Increase in performance will be minimal.
 
Phillip Windell

If the switch has a gig port and the server connects to it at gig, that
is fine. The fact that the PCs run at 100 is also fine, and desirable in
my opinion. The bottleneck would normally occur between the server and
the switch, because that is the cable where the traffic from all the PCs
merges.

If three PCs simultaneously transfer a file with the server at 100, that
places a 300 load on the line between the server and the switch, which
the gig link would handle. Limiting the PCs to a 100 link is a good
thing; if they also ran at 1000, then just one PC could hog the entire
path and you would be right back where you started.

It is just like the road system you drive on: you don't run an
Interstate freeway right up to the end of your home's driveway. City
streets dump onto larger city streets with heavier combined traffic,
which dump onto highways with more combined traffic, which dump onto
Interstate freeways with even more. As more and more traffic combines,
you use a larger road.
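The three-PCs arithmetic can be checked directly: the server's uplink must carry the sum of the simultaneous client flows. A minimal sketch with idealized figures:

```python
def uplink_utilization(client_flows_mbps, uplink_mbps):
    """Fraction of the server uplink consumed when the given client
    flows all merge onto it at once (idealized, no overhead)."""
    return sum(client_flows_mbps) / uplink_mbps

three_pcs = [100, 100, 100]  # three clients copying at a full 100 Mb/s each
print(uplink_utilization(three_pcs, 100))   # -> 3.0: a 100 Mb uplink is 3x oversubscribed
print(uplink_utilization(three_pcs, 1000))  # -> 0.3: a gigabit uplink absorbs the merge
```

Any utilization above 1.0 means the flows queue up and share the link, which is exactly the contention the gig uplink removes.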
 
Leythos

"Phillip Windell" said:
If the switch has a gig port and the server connects to it at gig, that
is fine. The fact that the PCs run at 100 is also fine, and desirable in
my opinion. The bottleneck would normally occur between the server and
the switch, because that is the cable where the traffic from all the PCs
merges.

If three PCs simultaneously transfer a file with the server at 100, that
places a 300 load on the line between the server and the switch, which
the gig link would handle. Limiting the PCs to a 100 link is a good
thing; if they also ran at 1000, then just one PC could hog the entire
path and you would be right back where you started.

It is just like the road system you drive on: you don't run an
Interstate freeway right up to the end of your home's driveway. City
streets dump onto larger city streets with heavier combined traffic,
which dump onto highways with more combined traffic, which dump onto
Interstate freeways with even more. As more and more traffic combines,
you use a larger road.

You can think of it like that, but I have a 16-port gig switch with 13
servers sharing data on it, and a trunked pair of ports that go to a
100BT switch. The gig network performs much faster than when it was
100BT; the 100BT trunk, on a single path, performs at about the same
level as it did before the gig switch was added.

Some things work nicely in theory, but in reality it's just not the same
as on paper.
 
jas0n

If the switch has a gig port and the server connects to it at gig, that
is fine. The fact that the PCs run at 100 is also fine, and desirable in
my opinion. The bottleneck would normally occur between the server and
the switch, because that is the cable where the traffic from all the PCs
merges.

If three PCs simultaneously transfer a file with the server at 100, that
places a 300 load on the line between the server and the switch, which
the gig link would handle. Limiting the PCs to a 100 link is a good
thing; if they also ran at 1000, then just one PC could hog the entire
path and you would be right back where you started.

It is just like the road system you drive on: you don't run an
Interstate freeway right up to the end of your home's driveway. City
streets dump onto larger city streets with heavier combined traffic,
which dump onto highways with more combined traffic, which dump onto
Interstate freeways with even more. As more and more traffic combines,
you use a larger road.

Well, this is how my thinking was going, and since I have no practical
experience of gig switches, and the Cisco gig switch is coming
regardless of what we want (a policy of moving all network kit to Cisco
has been put in place), I'll just wait and see!
 
Ryan Hanisco

That is true: the workstations will still only have 100 Mb access to the
server. It's just that you'll get less contention for access and more
simultaneous access; the speed gains are in overall network performance,
not in individual machine transfers.
 
CJT

Leythos said:
Seriously flawed. What you need is something called port trunking: you
install a card with more than one network port (they come with 2 or 4
ports in many cases, or you can use multiple network cards from the same
vendor as long as they support the feature).

With trunking you get X simultaneous communication paths, one per port.
This means the server can talk to X devices at the same time.

If you want to make your life faster, just get some cheap $25 gig NICs
and install them in the workstations that use the most network
bandwidth.

Think about your switch like this: each port gets a slice of the time to
talk with the server, but since all the ports talk at 100 speed (except
the server's), nothing gets to the server port any faster, so it doesn't
really gain you anything. When you move everything to gig, or at least
the machines that use the network the hardest, you will see an increase
in performance, and you may also see an increase in CPU load on the
server.

I disagree. If the bottleneck is the server's connection to the network,
the strategy will work. Switches contain buffers.
 
