simulators. I'm primarily interested in the very top-end systems
which generate multiple channels of video for display over a large
area.
This kind of image generation requires lots of parallel processing
power. While it's fairly obvious that various systems employ varying
amounts of parallelism, it's usually very difficult to find out even
roughly how. For example, a typical image pipeline looks something
like this (a rough sketch in C follows the list):
1. Traverse the database at a high level
to determine what might need to be rendered
2. Geometrically transform the structures
to be rendered into screen space
3. Rasterize the structures into the frame buffers
4. Display from the frame buffers onto the CRTs
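
To make the stages concrete, here is a minimal single-channel sketch
in C. Every name in it (traverse_database, transform_to_screen, and so
on) is my own invention, and a real image generator would run these
stages concurrently on dedicated hardware rather than in one loop:

/* Hypothetical sketch of the four-stage pipeline for one channel.
 * All types and names are invented; real systems pipeline these
 * stages across dedicated hardware, not a single CPU. */
#include <stdio.h>

#define MAX_STRUCTS 256

typedef struct { float x, y, z; } Vertex;
typedef struct { Vertex v[3]; } Structure;    /* e.g. one polygon */

/* Stage 1: high-level traversal -- cull the scene database down to
 * the structures that might be visible from this viewpoint. */
static int traverse_database(Structure *out)
{
    (void)out;    /* ... walk database, test bounding volumes ... */
    return 0;     /* number of candidate structures found */
}

/* Stage 2: geometric transform into screen space. */
static void transform_to_screen(Structure *s, int n)
{
    (void)s; (void)n;   /* ... viewing and perspective transforms ... */
}

/* Stage 3: rasterize the transformed structures into a frame buffer. */
static void rasterize(const Structure *s, int n, unsigned *framebuf)
{
    (void)s; (void)n; (void)framebuf;   /* ... scan conversion ... */
}

/* Stage 4: display -- in hardware the video circuitry scans the
 * frame buffer out to the CRT; nothing to compute here. */
static void display(const unsigned *framebuf)
{
    (void)framebuf;
}

int main(void)
{
    static Structure candidates[MAX_STRUCTS];
    static unsigned framebuf[512 * 512];

    int n = traverse_database(candidates);    /* stage 1 */
    transform_to_screen(candidates, n);       /* stage 2 */
    rasterize(candidates, n, framebuf);       /* stage 3 */
    display(framebuf);                        /* stage 4 */
    printf("rendered %d structures\n", n);
    return 0;
}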
All systems I've seen (except those built for distributed simulation)
have a single processor at stage 1, and of course, to support multiple
image channels, all systems have several frame buffers at stage 4.
Stages 2 and 3, though, are where systems vary widely in how many
processors there are and how they are arranged.
While some systems might describe the processing units they have, and
perhaps even tell how they are connected, I've not seen any system
descriptions that really tell how multiple image channels are handled.
Does the system work on each channel in sequence, treating it like an
entirely new image? (This would seem wasteful.) Or does the system
consider the overall viewport and work on the entire composite scene
all at once? (This would seem difficult.) Or perhaps something in
between these two extremes?
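
For what it's worth, here is how I picture the two extremes in code.
This is purely hypothetical (render_channel, Viewpoint, and the rest
are invented names standing in for stages 1-3 above), but it shows
where the trade-off lies:

/* Hypothetical contrast of the two extremes for NUM_CHANNELS channels.
 * All names are invented for illustration. */
#include <stddef.h>

#define NUM_CHANNELS 3

typedef struct { float heading; } Viewpoint;   /* one channel's view */

/* Stands in for stages 1-3 (traverse, transform, rasterize). */
static void render_channel(const Viewpoint *vp, unsigned *framebuf)
{
    (void)vp; (void)framebuf;
}

/* Extreme 1: work on each channel in sequence, treating it as an
 * entirely new image.  Simple, but traversal and the transforms of
 * structures spanning channel boundaries are repeated per channel. */
static void render_sequential(const Viewpoint vps[NUM_CHANNELS],
                              unsigned *framebufs[NUM_CHANNELS])
{
    for (size_t i = 0; i < NUM_CHANNELS; i++)
        render_channel(&vps[i], framebufs[i]);
}

/* Extreme 2: treat the channels as one wide composite viewport.
 * Traverse and transform once over the union of the views, then
 * route rasterized output to whichever frame buffer each structure
 * lands in -- harder, since the routing itself must be solved. */
static void render_composite(const Viewpoint vps[NUM_CHANNELS],
                             unsigned *framebufs[NUM_CHANNELS])
{
    (void)vps; (void)framebufs;
}

int main(void)
{
    static unsigned bufs[NUM_CHANNELS][512 * 512];
    unsigned *framebufs[NUM_CHANNELS] = { bufs[0], bufs[1], bufs[2] };
    Viewpoint vps[NUM_CHANNELS] = { { -40.0f }, { 0.0f }, { 40.0f } };

    render_sequential(vps, framebufs);
    render_composite(vps, framebufs);
    return 0;
}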
If you know anything about these kinds of details, or if you know
where one might find such information, I'd really like to hear from
you. Any pointers would be much appreciated, since one rarely sees
much published about these systems.