> > Then I realised that this could actually work - the same code that
> > allows up to 20 cars to take part in a single race could be modified to
> > allow thousands (or even millions) of people to watch a race in real
> > time, without affecting the performance of the race server itself.
> for the NROS, since it's an official NASCAR-sanctioned series... this
> has been thought about for years.
> The main problem is that even today the connections aren't fast enough.
> Just think about how much download bandwidth you would need to see all
> the cars on the track. Even in GPL when you race online, you only see
> what, 4-5 cars ahead of you on fast connections and 1 in your mirrors,
> no? Now just imagine if you had ALL the cars - and on the NROS that
> means 24 cars. That's the current limitation. Even with ADSL and cable
> modems it's barely acceptable, especially with GPL and its physics. But
> for the future there is a great market for such a thing, and incredible
> possibilities.
I really don't think there is a problem with speed - the point is that
the Amway-style* networking system I've described distributes the load
equally between all machines in the tree, and because there is no need
to transmit any information back up the tree, there are NO
synchronisation problems to worry about.
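To make that concrete, here's a minimal sketch (in Python, purely
illustrative - the port, the SUBSCRIBE message and the packet handling
are all made up, none of it is GPL or VROC code) of what one relay node
in the tree would do: take each position frame from its parent and fan
it out to its children, with nothing ever flowing back up:

    # Hypothetical relay node for the tree described above; the port and
    # the packet handling are invented for illustration, not GPL/VROC code.
    import socket

    LISTEN_PORT = 32000          # parent sends position frames here;
                                 # children also announce themselves here
    children = []                # downstream machines we forward frames to

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", LISTEN_PORT))

    while True:
        packet, sender = sock.recvfrom(2048)
        if packet == b"SUBSCRIBE":            # a new child wants the feed
            children.append(sender)
        else:                                 # one frame of car positions
            for child in children:            # fan the same frame downwards
                sock.sendto(packet, child)    # nothing ever flows back up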
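And a rough back-of-the-envelope on the bandwidth question above - the
per-car packet size and update rate are pure guesses, not GPL's real
figures, but they give the order of magnitude each node in the tree
would have to handle:

    # Rough bandwidth estimate; the per-car size and update rate are
    # guesses for illustration only, not GPL's real figures.
    cars = 24                  # NROS field size mentioned above
    bytes_per_car = 20         # assumed: car id, position, heading, speed
    updates_per_second = 10    # assumed update rate
    fanout = 4                 # children fed by each node in the tree

    down = cars * bytes_per_car * updates_per_second  # bytes/s into a node
    up = down * fanout                                # bytes/s out of a node
    print(down * 8 / 1000, "kbit/s down,", up * 8 / 1000, "kbit/s up")
    # with these guesses: ~38 kbit/s down, ~154 kbit/s up per relay node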
Additionally, since information flow is one way only, it's possible to
almost completely eliminate latency problems - all you need to do is
have each machine buffer car positions in the background, and only
display each frame (say) 10 seconds after its set of car positions was
due to arrive (by which time it should already have the next several
seconds' worth of positions buffered up as well).
This way, if there is a delay between packets of anything less than 10
seconds (or whatever the time delay is set to), the user notices
absolutely nothing.
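As a sketch of that buffering scheme (illustrative Python again - the
frame format is an assumption, and for simplicity this version delays
relative to arrival time rather than the scheduled arrival time):

    # Minimal playback buffer: hold every incoming frame for a fixed
    # delay and only release it once it's old enough to display.
    import time
    from collections import deque

    class DelayBuffer:
        def __init__(self, delay_seconds=10.0):
            self.delay = delay_seconds
            self.frames = deque()            # (arrival_time, car_positions)

        def push(self, car_positions):
            # called whenever a position frame arrives from our parent
            self.frames.append((time.time(), car_positions))

        def pop_due(self):
            # return the next frame once it's old enough to show, else
            # None; any gap in arrivals shorter than the delay is
            # invisible here
            if self.frames and time.time() - self.frames[0][0] >= self.delay:
                return self.frames.popleft()[1]
            return None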
This means that the only jitter the spectators see is exactly that which
the race server sees - ie. the latency in the drivers' machines sending
their positions to the host.
The only reason there is a problem with latency when racing is that
information flow has to be two-way, so it's not possible to buffer up
positions and display them later.
Note that if they were going to do this seriously, the race organisers
would probably set up the top-node spectator machine themselves on a LAN
connected to the race server, and possibly even have language-specific
servers at the next level as well, since the machines at the top of the
tree are the most critical to the working of the system. If any machine
goes down, all machines that were connected to it (and any others below
them) will have to wait while the machines that were directly connected
to the failed one renegotiate with VROC to get the IP addresses of
alternative hosts. Of course, if you set the display time delay to 10
minutes or so, there will nearly always be time to reconnect before the
user notices anything.
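Here's a hedged sketch of that failover step. lookup_alternative_parent()
is a made-up stand-in for "ask VROC for the address of another host" - I
don't know of any real call like that - but it shows how a spectator
machine might switch parents when its feed goes quiet:

    # Illustrative only: ports, messages and the lookup call are invented.
    import socket

    TIMEOUT = 5.0        # seconds of silence before assuming our parent died
    LISTEN_PORT = 32000  # illustrative port, matching the relay sketch above

    def lookup_alternative_parent():
        # stand-in for asking the directory for another host in the tree;
        # the address below is invented for illustration
        return ("backup-relay.example.net", LISTEN_PORT)

    def handle_frame(packet):
        pass   # placeholder: a real client would feed the playback buffer

    def spectate(parent):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(TIMEOUT)
        sock.bind(("", LISTEN_PORT))
        sock.sendto(b"SUBSCRIBE", parent)         # ask the parent to feed us
        while True:
            try:
                packet, _ = sock.recvfrom(2048)
                handle_frame(packet)              # into the delay buffer
            except socket.timeout:                # feed dried up
                parent = lookup_alternative_parent()
                sock.sendto(b"SUBSCRIBE", parent) # re-subscribe with new one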
[*] In case you haven't heard of them (!), Amway is the company that
pioneered network marketing in the States, whereby each person recruits
several other people and sells stuff to them, and they in turn sell it
on to more people that they recruit, etc. etc.