>>-Suppose there are n players. If each player makes a move, and sends that
>>-move to *all* the other players, the amount of traffic grows at the
>>-same general rate as
>>- n * n
>>Only if you send the move to each machine. A general broadcast of the
>>move would only have 'n' messages.
>However, a potential problem that I see is with the Ether - you tend
>to get this kind of exponential slowdown as you add more players
>because of so many more collisions happening (see Binary Exponential
>Backoff).
As the guy who wrote the first interrupt-driven driver for a PC Ethernet
card (in the world), for the Advanced Technology Group of the New York Stock
Exchange, I was specifically asked to address this issue in reference to
Token Ring networks for the NYSE.
Ethernet is more deterministic than Token Ring is, and it loads cleanly,
without any Binary Exponential Backoff at the application level of the
protocol. Binary Exponential Backoff is only used in the case where an
actual PACKET collision occurs, which would require that the carrier sense
part of the protocol had actually failed (CSMA/CD). This can only happen
when the packet preamble collides with another preamble from another card,
and given the nature of signal propagation and the defined maximum length
of the wire in Ethernet, in a scenario with no packet switches or other
devices, you have a window of opportunity of only about 10 microseconds
for a carrier sense failure to actually occur.

What this means is that once a card has established the preamble on the
wire, the packet is typically guaranteed to be fully transmitted, unless
there are cards that are malfunctioning or the wire has some sort of
transmission problem. What that breaks down to is that, with the interframe
spacing, the network will actually load up to about 9.6 Mbit/s, with even
sharing of the load and even access. Collisions do not multiply as load
grows, since the cards will not attempt to transmit while a carrier signal
is on the wire. The only time the cards can actually collide is within a
single phase of the 20 MHz wave (Manchester encoding is used to carry
10 Mbit/s) propagating down the 1000-foot length of the wire. Since the
signal travels at about 60% of the speed of light, Ethernet is VERY stable.
So a collision occurs only when two cards put the preamble onto the wire at
exactly the same time (not very likely, since they see the end of the
previous packet at slightly different times anyway). Of course, when a card
fails, or isn't up to specification, or the wire is not within
specification, there are other issues, but a clean Ethernet runs very, very
well. That's why CSMA/CD is the access method used in the higher-speed
fiber optic networks.
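For the curious, the backoff rule that only kicks in after a detected collision can be sketched in a few lines. This follows the standard truncated binary exponential backoff from Ethernet, with the usual 10 Mbit/s slot time; the function name is mine:

```python
import random

SLOT_TIME_US = 51.2  # 10 Mbit/s Ethernet slot time, in microseconds

def backoff_slots(attempt):
    """Truncated binary exponential backoff.

    After the k-th consecutive collision on the same frame, wait a
    random number of slot times drawn from [0, 2**min(k, 10) - 1].
    The frame is abandoned after 16 failed attempts.
    """
    if attempt > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    k = min(attempt, 10)
    return random.randrange(2 ** k)
```

Note that this delay is counted in 51.2-microsecond slots and applies per colliding frame, not per station on the network, which is why a clean, properly terminated segment doesn't melt down as you add hosts.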
Token Ring, on the other hand, requires that the token circulate and be
free before a station can add data to it, and each card clocks the data off
the wire and then back onto the wire (at least in the standard 4 Mbit/s
style). So the latency is higher and the throughput is lower for Token
Ring. (Which has been borne out in test after test of server performance on
Token Ring and Ethernet.)
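A rough back-of-the-envelope comparison of the two latency sources, assuming the 60%-of-c propagation figure and 1000-foot length from above, and the usual minimum one-bit repeat delay that each active Token Ring station inserts while reclocking (the station count here is purely illustrative):

```python
C = 3.0e8    # speed of light in vacuum, m/s
PROP = 0.6   # signal speed on the wire as a fraction of c

def ether_propagation_us(length_ft):
    """One-way propagation delay down an Ethernet segment, microseconds."""
    return length_ft * 0.3048 / (PROP * C) * 1e6

def ring_added_latency_us(stations, bit_rate=4e6, bit_delay=1):
    """Latency added by repeating alone on a Token Ring: each active
    station reclocks the signal, inserting at least bit_delay bit times."""
    return stations * bit_delay / bit_rate * 1e6

print(ether_propagation_us(1000))      # end-to-end coax delay
print(ring_added_latency_us(50))       # 50 stations at 4 Mbit/s
```

The 1000-foot coax run costs well under 2 microseconds end to end, while 50 ring stations at 4 Mbit/s add over 12 microseconds just from reclocking, before the token even comes around free.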
Given the actual fluctuation of Ethernet latency and the actual number of
collisions on a properly wired Ethernet network, there would be no
discernible difference.
Wilbur