Ethernet hardware question


Vernon Schryver

If you try sending from both computers at once you get packet collisions and
nothing gets through.

That's utter drivel.


Vernon Schryver (e-mail address removed)
 

Skybuck Flying

shope said:
Try sending something easy the other way during the transfer - ping maybe?

I am not sure, but it seems that fails as well.

If the sending card can't handle inbound packets during transmit then there
may be something wrong with the driver or the IP stack. Or maybe your send
routine doesn't release any CPU?

Could be that winsock tries to send the 64 KB in packets of 1500 bytes and
forgets to receive as well.

The other possibility is that the software chokes on such a large UDP packet
(I seem to remember some issues with packets bigger than 32k - 1 bytes, to do
with 16-bit arithmetic) - try some smaller sizes if you can tune it.

If the cards are pretty old then they are probably half duplex. If you want
to force this then put an Ethernet repeater / 10M-only hub between the 2
PCs.

The sending ethernet card and driver should impose the minimum Ethernet
inter-packet gap on the transmission - around 10 uSec (the standard
interframe gap is 96 bit times, i.e. 9.6 us at 10 Mbit/s).

It isn't the gap so much as "reset time" in the software driver. Some old
cards (the 3Com 3c501) had to have the chip reset by the driver each time
they received or sent a packet - if the next one arrived during the "dead
time" then it was lost.

Indeed... I didn't know about gaps and transmission times... His remark
about the gap sent me in the right direction.

I now believe 10 Mbit Ethernet is 10,000,000 bits/sec. Each bit takes 100
nanoseconds to send.

So if one has to send 64,000 bytes... and one wants to receive 64,000 bytes
as well... and the cards are half duplex...

Then the calculation is as follows:

64,000 bytes * 8 bits * 100 nanoseconds / (1,000 * 1,000) = 51.2
milliseconds...

This is a rough estimate...

The idea is my software should no longer try to keep sending, sending,
sending... but it should wait a little bit so it can receive 64 KB.

So the idea is to wait 51.2 milliseconds or maybe even 102.4 milliseconds...
to allow the card and the stack to receive a packet.
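
As a back-of-the-envelope check, the same figure in a few lines of C
(assuming exactly 10 Mbit/s on the wire and ignoring all framing overhead):

#include <stdio.h>

int main(void)
{
    const double bits_per_sec = 10e6;     /* 10 Mbit/s Ethernet */
    const double payload_bytes = 64000.0; /* one 64 KB datagram */

    /* time on the wire, ignoring preamble/header/gap overhead */
    double seconds = payload_bytes * 8.0 / bits_per_sec;
    printf("%.1f ms\n", seconds * 1000.0); /* prints 51.2 ms */
    return 0;
}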
 

Steve Horsley

Skybuck Flying said:
Indeed... I didn't know about gaps and transmission times... His remark
about the gap sent me in the right direction.

I now believe 10 Mbit Ethernet is 10,000,000 bits/sec. Each bit takes 100
nanoseconds to send.

So if one has to send 64,000 bytes... and one wants to receive 64,000 bytes
as well... and the cards are half duplex...

Then the calculation is as follows:

64,000 bytes * 8 bits * 100 nanoseconds / (1,000 * 1,000) = 51.2
milliseconds...

This is a rough estimate...

The idea is my software should no longer try to keep sending, sending,
sending... but it should wait a little bit so it can receive 64 KB.

So the idea is to wait 51.2 milliseconds or maybe even 102.4 milliseconds...
to allow the card and the stack to receive a packet.

This is difficult to achieve in some hardware, and not required at all.
Ethernet hardware imposes a gap between frames with a slight randomising,
so that after a frame is finished, everyone has a roughly equal chance of
being able to send the next frame. You can attempt to send as fast as you
like - unless you have a faulty Ethernet adapter, you will not fully block
everyone else who is trying to send.

Steve.
 

Steve Horsley

Vernon Schryver said:
That's utter drivel.


Vernon Schryver (e-mail address removed)

Correct, but impolite.

Ethernet controllers are well able to cope with two people wanting to send
at the same time. They will not begin to send if another machine is
already sending - they wait. And if two machines start to send at exactly
the same time, they detect this, back off, and try again after a random
delay - the same way people take turns talking in a group of friends.
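
For the curious, the back-off rule is truncated binary exponential backoff;
a toy C simulation of it (slot time and attempt limit as in 802.3 at 10
Mbit/s; real adapters do this in hardware):

#include <stdio.h>
#include <stdlib.h>

/* After the n-th consecutive collision a station waits a random number
 * of slot times drawn from [0, 2^min(n,10) - 1], and gives up after 16
 * attempts. One slot time is 512 bit times: 51.2 us at 10 Mbit/s. */
static double backoff_us(int collisions)
{
    int k = collisions < 10 ? collisions : 10;
    int slots = rand() % (1 << k); /* 0 .. 2^k - 1 slots */
    return slots * 51.2;
}

int main(void)
{
    for (int n = 1; n <= 16; n++)
        printf("after collision %2d: wait %7.1f us\n", n, backoff_us(n));
    return 0;
}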

Steve
 

briggs

Steve Horsley said:
Correct, but impolite.

Ethernet controllers are well able to cope with two people wanting to send
at the same time. They will not begin to send if another machine is
already sending - they wait. And if two machines start to send at exactly
the same time, they detect this, back off, and try again after a random
delay - the same way people take turns talking in a group of friends.

I thought the context was a full-duplex 100 Mbit Ethernet between
two workstations with a UTP crossover cable.

In that environment there is no back off and try again logic. There
are no collisions. The sender just sends. If both ends want to
send at the same time they are free to do so. If one end cannot
receive at the same time that it is sending then that end is broken.

Apologies if I have failed to monitor changing context.

John Briggs
 

Vernon Schryver

Steve Horsley said:
This is difficult to achieve in some hardware, and not required at all.
Ethernet hardware imposes a gap between frames with a slight randomising,
so that after a frame is finished, everyone has a roughly equal chance of
being able to send the next frame.

That is wrong. As a painful proof of how wrong it can be, look for
"Ethernet Capture Effect" as in
http://www.google.com/search?q="Ethernet+Capture+Effect"
(in short: the station that wins a collision resets its backoff range
while the loser keeps doubling its range, so one busy sender can hold
the wire for long stretches).

You can attempt to send as fast as you
like - unless you have a faulty Ethernet adapter, you will not fully block
everyone else who is trying to send.

That's close enough to true. In practice and contrary to 100VG
and token ring salescritters and various trade rag espurts, CSMA/CD
is pretty fair.


Vernon Schryver (e-mail address removed)
 

Alex Fraser

Steve Horsley said:
Ethernet hardware imposes a gap between frames with a slight randomising,
so that after a frame is finished, everyone has a roughly equal chance of
being able to send the next frame.

AFAIK, there is a random delay after a collision, in addition to the
interframe gap (96 bit-times?). Transmission is delayed until the end of a
"passing" frame, if there is one, but then no longer than the interframe gap
requires - no random element. This way gives better performance under light
loads, because there's a good chance of there being only one transmitter.
But under heavy loads, a random delay (as you described) is preferable; a
collision is virtually guaranteed.
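
To put numbers on the light-load case, a toy C sketch (the timestamps are
made up; the only real constant is the 96-bit interframe gap, 9.6 us at 10
Mbit/s):

#include <stdio.h>

int main(void)
{
    double t_ready = 100.0;     /* us: we want to transmit (assumed)     */
    double t_wire_idle = 350.0; /* us: the passing frame ends (assumed)  */
    const double ifg = 9.6;     /* us: 96 bit times at 10 Mbit/s         */

    /* Defer until the wire is idle, then wait exactly one interframe
     * gap - no random element unless a collision occurs. */
    double t_tx = (t_ready > t_wire_idle ? t_ready : t_wire_idle) + ifg;
    printf("transmission starts at %.1f us\n", t_tx);
    return 0;
}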

Alex
 

Vernon Schryver

Alex Fraser said:
AFAIK, there is a random delay after a collision, in addition to the
interframe gap (96 bit-times?). Transmission is delayed until the end of a
"passing" frame, if there is one, but then no longer than the interframe gap
requires - no random element. This way gives better performance under light
loads, because there's a good chance of there being only one transmitter.
But under heavy loads, a random delay (as you described) is preferable; a
collision is virtually guaranteed.

No, there is not and should not be a random delay between back-to-back
transmissions except when the MAC is too slow to keep up. Decades
ago salescritters of makers of lame and broken Ethernet hardware would
claim that non-broken hardware from Sun and other vendors was broken
because it had no such mythical delay between back-to-back packets.

Such a random delay would do no good unless it were on the order of
a slot time, i.e. 64 bytes or 51.2 microseconds at 10 Mbit/s. Contrary
to decades of nonsense and blarney, profitably preventing collisions is
not as easy as that. A random delay shorter than a slot time would not
be long enough to ensure that a second station would get a chance to
start transmitting before the transmitter of the previous packet.

Standards conformant CSMA/CD systems start transmitting their next
packet immediately after their previous packet.

The venerable AMD LANCE violated the Ethernet and 802.3 standards by
delaying after deferring to another station. That obscured the Ethernet
Capture Effect for many years and gave significantly better performance.
In cocktail party conversation terms, that non-standard or excessive
politeness let a speaker run down completely and avoided frustrating
a second speaker or group of speakers so much that they give up and leave.


Vernon Schryver (e-mail address removed)
 

Skybuck Flying

Skybuck Flying said:
I am not sure, but it seems that fails as well.

Could be that winsock tries to send the 64 KB in packets of 1500 bytes and
forgets to receive as well.

Indeed... I didn't know about gaps and transmission times... His remark
about the gap sent me in the right direction.

I now believe 10 Mbit Ethernet is 10,000,000 bits/sec. Each bit takes 100
nanoseconds to send.

So if one has to send 64,000 bytes... and one wants to receive 64,000 bytes
as well... and the cards are half duplex...

Then the calculation is as follows:

64,000 bytes * 8 bits * 100 nanoseconds / (1,000 * 1,000) = 51.2
milliseconds...

This is a rough estimate...

The idea is my software should no longer try to keep sending, sending,
sending... but it should wait a little bit so it can receive 64 KB.

So the idea is to wait 51.2 milliseconds or maybe even 102.4 milliseconds...
to allow the card and the stack to receive a packet.

Well I wrote a simple program to test this idea.

It sends and receives a 64,000-byte packet with winsock...

On the Pentium III 450 MHz it takes around 3200 microseconds to send it.
On the Pentium III 450 MHz it takes around 1500 microseconds to receive it
(from the Pentium 166).

On the Pentium 166 MHz it takes around 16000 microseconds to send it.
On the Pentium 166 MHz it takes around 10343 microseconds to receive it
(from the Pentium III 450).

When I press the send button on both computers repeatedly at the same
time...

The send interval increases to almost double the time.

Still, these times do not make much sense when one calculates it.

64,000 bytes / 1500 bytes = 42.666 packets.

42.666 packets * 1526 bytes * 8 bits = 520,874.6667 bits
(1526 bytes = 1500 bytes of payload plus 26 bytes of preamble, header
and CRC)

(520,874.6667 bits * 100 nanoseconds) / 1,000 = 52,087.4 microseconds.

So in theory it would take 52 milliseconds to send a 64,000-byte packet over
a 10-Mbit Ethernet card.

Yet winsock will show 3 milliseconds.

(10 Mbit Ethernet is 10,000,000 bits per sec, right? Not 10*1024*1024 bits
per sec? Anyway, that won't matter much.)

So from this I can conclude two things:

1. Winsock returns from the sendto function faster than the packet is
sent...

2. When both sides start sending at the same time, it will take more time
to send. Why that is I am not sure... maybe because the CPU has to process
incoming packets etc.

Also, these tests were done while running the Zone Alarm Pro 4.x firewall
on both computers.
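
For reference, this kind of measurement can be reproduced with a minimal
Winsock sketch along these lines (error handling stripped; the destination
address and port are placeholders, not the actual test setup):

#include <winsock2.h>
#include <stdio.h>
#pragma comment(lib, "ws2_32.lib")

int main(void)
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(12345);                 /* placeholder port */
    dst.sin_addr.s_addr = inet_addr("10.0.0.2"); /* placeholder host */

    static char buf[64000];
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&t0);
    sendto(s, buf, sizeof buf, 0, (struct sockaddr *)&dst, sizeof dst);
    QueryPerformanceCounter(&t1);

    /* This measures how long sendto() blocks, NOT the wire time: the
     * call returns as soon as the stack has queued the datagram. */
    printf("sendto took %.0f us\n",
           (t1.QuadPart - t0.QuadPart) * 1e6 / freq.QuadPart);

    closesocket(s);
    WSACleanup();
    return 0;
}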
 

Skybuck Flying

Skybuck Flying said:
Well I wrote a simple program to test this idea.

It sends and receives a 64,000-byte packet with winsock...

On the Pentium III 450 MHz it takes around 3200 microseconds to send it.
On the Pentium III 450 MHz it takes around 1500 microseconds to receive it
(from the Pentium 166).

On the Pentium 166 MHz it takes around 16000 microseconds to send it.
On the Pentium 166 MHz it takes around 10343 microseconds to receive it
(from the Pentium III 450).

When both firewalls are down, the times are at best:

On the Pentium III 450 MHz it takes around 2171 microseconds to send it.
On the Pentium III 450 MHz it takes around 578 microseconds to receive it
(from the Pentium 166).

On the Pentium 166 MHz it takes around 8975 microseconds to send it.
On the Pentium 166 MHz it takes around 2769 microseconds to receive it
(from the Pentium III 450).
 

Skybuck Flying

Also, my UDP Full Duplex Speed Test shows the same results as my
in-development application.

If both sides try to send at 500,000 bytes per sec and the data size is
64,000:

The Pentium III 450 will send at 500,000 bytes per sec and the Pentium 166
will receive at 500,000 bytes per sec.

But

The Pentium III 450 will receive at 122 bytes per sec and the Pentium 166
will send at 122 bytes per sec
(the little stats packets).

It seems like the Pentium 166 is not able to send large packets anymore...

(This test was also with both firewalls on.)
 

Skybuck Flying

I have also tested my in-development app with somebody else.

My download speed is 350 kb/sec.
My upload speed is 16 kb/sec.

The other side is a fast computer and can easily upload and download at 350
kb/sec and 16 kb/sec.

When we sent large 64 KB packets, things started to fail as well...

I will do more testing on that... I don't think the problem is limited to my
PC equipment. :D
 

Alex

Vernon Schryver said:
AFAIK, there is a random delay after a collision, in addition to the
interframe gap (96 bit-times?). Transmission is delayed until the end of
a "passing" frame, if there is one, but then no longer than the
interframe gap requires - no random element. This way gives better
performance under light loads, because there's a good chance of there
being only one transmitter. But under heavy loads, a random delay (as
you described) is preferable; a collision is virtually guaranteed.

No, there is not and should not be a random delay between back-to-back
transmissions except when the MAC is too slow to keep up. [...]

True, but who mentioned back-to-back transmissions?
Such a random delay would do no good unless it were on the order of
a slot time or 64 bytes. [...] A random delay shorter than a slot time
would not be long enough to ensure that a second station would get a chance
to start transmitting before the transmitter of the previous packet.

Indeed, less than one slot time is surely pointless.
Standards conformant CSMA/CD systems start transmitting their next
packet immediately after their previous packet.

Yes, if you include the interframe gap as part of the frame it follows. AIUI
the interframe gap has nothing to do with arbitration (it can't, for the
reason you give above), but is there for the benefit of the receiver; a
slight "breather" to allow it to deal with the previous packet and prepare
for another.

Alex
 

Skybuck Flying

Hmmm, interesting...

I just made a new version of the UDP full duplex test... with some updated
code

and also a new feature: packet interval.

It's working a lot better now, at the moment even with the packet interval
at zero...

hmmmmm :D

Though the little stats packets seem to have trouble getting through... :D

I'll put the new version up on my website :D
 

Skybuck Flying

shope said:
I believe you are thinking of the required minimum supported packet - 512
bytes of UDP, which with overhead is 550+ bytes.

If you stick a sniffer on any ethernet running m$soft networking or NFS you
will see plenty of frames of 1500 bytes carrying UDP, and many of those will
be fragments of bigger UDP packets. 64k is a bit unusual, though.
If your application is

Indeed...

I tested 64 KB packets over the internet on different days.

Sometimes it works... and sometimes it does not.

Sometimes only 1472-byte UDP packets will work... and even that probably
might not always work...

So it seems 548-byte UDP packets are safe. (1472 bytes is what fits in a
single 1500-byte Ethernet MTU after the 20-byte IP and 8-byte UDP headers;
548 bytes fits in the 576-byte datagram every host must be able to
reassemble.)

When I use small UDP packets the transfer rate will be very slow because of
the Zone Alarm Pro 4.x firewall on my Pentium 166... which is very slow.

When I use large UDP packets the transfer rate will be very high.

I wish I could always use large UDP packets over the internet :D
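
One crude way to see which sizes survive a given path is to probe with a
few candidate payloads and watch the far end (or a sniffer) - a sketch only,
with a placeholder destination and the sizes discussed in this thread:

#include <winsock2.h>
#include <stdio.h>
#pragma comment(lib, "ws2_32.lib")

/* 1472 fits one 1500-byte Ethernet MTU (1500 - 20 IP - 8 UDP headers);
 * 548 fits the 576-byte datagram every host must be able to reassemble. */
int main(void)
{
    const int sizes[] = { 64000, 8192, 1472, 548 };
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(12345);                 /* placeholder port */
    dst.sin_addr.s_addr = inet_addr("10.0.0.2"); /* placeholder host */

    static char buf[64000];
    for (int i = 0; i < 4; i++) {
        int rc = sendto(s, buf, sizes[i], 0,
                        (struct sockaddr *)&dst, sizeof dst);
        printf("payload %5d: %s by local stack\n", sizes[i],
               rc < 0 ? "rejected" : "accepted");
    }

    closesocket(s);
    WSACleanup();
    return 0;
}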
 

Sin

1. Winsock returns from the sendto function faster than the packet is
sent...

Yep, that's a known fact. Happens with TCP as well. There can be several
packets and ACKs going around once you get out of send. Use a sniffer and
you'll see it right away.

Alex.
 

Sin

When I use large UDP packets the transfer rate will be very high.
I wish I could always use large UDP packets over the internet :D


Here are a couple of hints concerning your "problem" :

- UDP packets of 64K are not supported by all TCP/IP stacks. I work with QNX
at work and QNX is limited to 8K packets. QNX's stack is a port from another
Unix-like system if memory serves right, so I suspect other OSes might have
this limitation.

- There should be just a marginal difference as far as speed goes between
sending a 64K packet and a bunch of ~1400-byte packets. None of the
"physical" packets will be greater than the MTU, which is usually around 1500
bytes anyway.

- If you send a 64K datagram, it is segmented into several packets. If one of
these is corrupt or lost, the whole 64K transmission is compromised. By
sending small amounts at a time, you run the same loss/corruption risk, but
it gives you a chance to request only the part that is corrupt by using
checksums or other validation methods (see the sketch after this list).

- If corruption and loss are a problem, UDP is NOT a good choice to begin
with.

- Making UDP reliable AND performant is not a task you want to get into.
Been there, done that. You simply do not have enough control at the
application level to conceive a generic solution.

- UDP across the internet runs a HIGH risk of packet loss. Routers are
notorious for breaking UDP communications.
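
A minimal sketch of that chunk-and-number idea (the 8-byte sequence header
is invented here for illustration; checksum and retransmit logic are left
out):

#include <winsock2.h>
#include <string.h>
#pragma comment(lib, "ws2_32.lib")

#define CHUNK 1400 /* payload per datagram, safely under a 1500-byte MTU */

/* Split len bytes into numbered datagrams so the receiver can tell
 * which pieces are missing and ask for just those. */
static void send_chunked(SOCKET s, const struct sockaddr_in *dst,
                         const char *data, int len)
{
    char pkt[8 + CHUNK];
    unsigned long total = (len + CHUNK - 1) / CHUNK;

    for (unsigned long seq = 0; seq < total; seq++) {
        int n = len - (int)(seq * CHUNK);
        if (n > CHUNK)
            n = CHUNK;
        memcpy(pkt, &seq, 4);       /* sequence number   */
        memcpy(pkt + 4, &total, 4); /* total chunk count */
        memcpy(pkt + 8, data + seq * CHUNK, n);
        sendto(s, pkt, 8 + n, 0,
               (const struct sockaddr *)dst, sizeof *dst);
    }
}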


Perhaps if you told us about your application and what you're really trying
to achieve, we might be able to help you better? The little I know about your
project makes me think UDP is definitely not a good option.

Alex.
 

Skybuck Flying

Sin said:
Yep, that's a known fact. Happens with TCP as well. There can be several
packets and ACKs going around once you get out of send. Use a sniffer and
you'll see it right away.

That makes me wonder if overlapped I/O with a completion routine will notify
when it is really sent or if it will also return earlier.

Skybuck.
 

Skybuck Flying

Sin said:
Here are a couple of hints concerning your "problem" :

- UDP packets of 64K are not supported by all TCP/IP stacks. I work with QNX
at work and QNX is limited to 8K packets. QNX's stack is a port from another
Unix-like system if memory serves right, so I suspect other OSes might have
this limitation.

- There should be just a marginal difference as far as speed goes between
sending a 64K packet and a bunch of ~1400-byte packets. None of the
"physical" packets will be greater than the MTU, which is usually around 1500
bytes anyway.

Well, on my slow Pentium 166 there is a difference... the firewall needs to
check more headers, I guess; that slows it down quite a lot - like 10 times
slower.

- If you send a 64K datagram, it is segmented in several packets. If one of
these is corrupt or lost, the whole 64K transmission is compromised. By
sending small amounts at a time, you run the same loss/corruption risk, but
it gives you a chance to request only the part that is corrupt by using
checksums or other validation methods.

So far my testing has shown that losing a fragment does not happen often...
which is remarkable :D

- If corruption and loss are a problem, UDP is NOT a good choice to begin
with.

- Making UDP reliable AND performant is not a task you want to get into.
Been there, done that. You simply do not have enough control at the
application level to conceive a generic solution.

- UDP across the internet runs a HIGH risk of packet loss. Routers are
notorious for breaking UDP communications.

Yes indeed... I wonder what would happen if I switched to the IP layer...

Would routers still prefer IP/TCP...? :D

(By examining the IP protocol field.)

Perhaps if you told us about your application and what you're really trying
to achieve, we might be able to help you better? The little I know about your
project makes me think UDP is definitely not a good option.

Something like SCTP :D
 

SenderX

That makes me wonder if overlapped I/O with a completion routine will notify
when it is really sent or if it will also return earlier.

If you are using buffered sends, it will return earlier.

If you are using non-buffered sends, it will return when the ack has been
received.
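
For the overlapped question above, the alertable-I/O shape is roughly this
(a sketch only - exactly when the completion fires depends on buffering, as
described above; the address and port are placeholders):

#include <winsock2.h>
#include <stdio.h>
#pragma comment(lib, "ws2_32.lib")

/* Runs when the overlapped send completes; with default (buffered) sends
 * this is typically as soon as the stack has copied the data. */
static void CALLBACK on_send_done(DWORD err, DWORD bytes,
                                  LPWSAOVERLAPPED ov, DWORD flags)
{
    printf("send completed: error=%lu, bytes=%lu\n", err, bytes);
}

int main(void)
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = WSASocket(AF_INET, SOCK_DGRAM, IPPROTO_UDP,
                         NULL, 0, WSA_FLAG_OVERLAPPED);
    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(12345);                 /* placeholder port */
    dst.sin_addr.s_addr = inet_addr("10.0.0.2"); /* placeholder host */

    static char data[64000];
    WSABUF buf = { sizeof data, data };
    WSAOVERLAPPED ov = {0};

    WSASendTo(s, &buf, 1, NULL, 0,
              (struct sockaddr *)&dst, sizeof dst, &ov, on_send_done);
    SleepEx(INFINITE, TRUE); /* alertable wait so the routine can run */

    closesocket(s);
    WSACleanup();
    return 0;
}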
 
