>> True. You might have seen the bland, simple graphics in my sig. That physics
>> model does the whole suspension system except for the actual physical ARB
>> installation and attachment points, tie rods, and steering rack location (not
>> much CPU time there really anyway with a good approach). Roll centers and so
>> forth are essentially moving in real time and all that fun stuff, and it still
>> runs around 200-250 fps on my P4. (That looks worse than 60, quite frankly,
>> but that's not the point.)
>I've only seen stills on your webpage.
What exactly are you implying? ;-)
>> Stefan, lookup tables like you describe are used to speed things up in some
>> areas, but to give a sense of graphics/physics CPU usage scale: This is
>> currently running so fast the physics engine has to stop and wait a few
>> milliseconds every graphics frame because the clock resolution is only 0.001
>> second, and if the engine can't get a few cycles in during that 1 millisecond
>> (meaning it repeats all of the physics calculations several times, including
>> doing the entire set of tire force calculations several times for each tire),
>> it'll time warp. At 200-250 fps it time warps by a factor of around 4, so you
>> tell me what percentage of the CPU time is being used by the physics engine
>> every cycle :-P 0.0...%? 0.2%?
>> I.e., the physics runs so fast the CPU can't even tell the engine how much
>> time elapsed since the last time the physics calculations RAN, much less how
>> long it actually took them to run... Again, that timer's minimum duration is
>> 0.001 second.
>This depends on the time resolution of the simulation. If you only intend
>to simulate the vehicle dynamics... Fine. But if you are going into the
>engine to simulate the combustion also (wouldn't that be necessary to be
>able to adjust valve timings and such?)
No, this would not be necessary unless you wanted transient engine effects.
For valve timing and so forth you could run an engine simulation like yours
before the race starts and simply save the power curves at different throttle
positions, etc.. I've been doing this for about two or three years already
with QuickEngine Builder, although that model is pretty simplistic. Transient
engine effects could probably be done reasonably accurately through other
means. For instance, you could save curves at different engine acceleration or
jerk rates. However, most vehicle dynamics guys running comp simulations of
real cars don't bother from what I understand. A full power torque curve is
all that is needed.
Personally I want more too :-)
>you would end up with a much smaller time scale and the resulting model of the
>car would probably be stiff (big difference between different time scales of
>the model).
If you wanted a real time engine model running, then yes, of course you're
right. The picture you posted showed piston location in world space varying
with time. If you wanted that then of course you're probably going to need to
run the system on the order of 1 MHz or better (I think I used 3 MHz on an
experimental engine model awhile back..) That would of course bring any PC to
its knees as you've already seen in your own work, I'm sure.
However, you could easily get by with running the vehicle model at a lower
sampling rate as is already done (say 200-300 Hz), then running the engine
model at a higher rate as you said. I'm not sure what you mean when you say
the car model would become stiff. I run my tires at a higher frequency than
the rest of the car and it does nothing but improve performance. Going as high
as 30,000 Hz for the tires and 200 Hz for the car doesn't do anything funny to
the car that I can see.
Anyway, running a real time engine model still would have a huge impact and may
not be feasible unless you used a very simple engine model. So yes, hardware
has a long way to go for this type of thing to become possible using standard
engineering approaches.
>A smaller time scale combined with a larger model would certainly increase
>the computational load. I read somewhere that to be able to simulate a car
>responding to small variations in the surface it is necessary to increase
>the time resolution to around 8000-10000 Hz... Then add some more cars to the
>track...
Maybe.. I've run my vehicle model at 6000Hz and things were still pretty fast.
It depends on how you're handling the suspension, I suppose.
>And add the possibility to keep track of the wear of many of the parts in the
>car and/or engine. It seems as if it would be quite easy to get the simulation
>taking a lot of time...
If wear was being handled through methods used in the wear research papers I've
seen, then yes, this could kill you. However, then we get back into an area
where adding complexity doesn't necessarily improve accuracy. Are you sure the
component moduli and so forth are accurate? I don't think anybody except the
manufacturers has data on brake wear, for instance. Here's an area where you
could probably do just as well with a simplified model because of so many
unknown variables.
For instance, tire wear in my system is handled through a very basic, simple
equation and then adjusted with a coefficient. As long as the basic
relationship is right you can fine tune things. The same could easily be done
for all the other parts on the car without a noticeable hit. You could probably
get away with modelling wear for a particular part with only 4-10 math
operations per cycle (my PII-333MHz could do about 50-60 million per second).
And again, you could do that at the vehicle sampling rate or a fraction of it.
Cut the "wear sampling rate" from 300Hz down to 30Hz. Would that really reduce
accuracy? That can't be determined without test results available of course,
but suddenly you can model 10 times as many parts... My bet is that you could
model 1,000 parts this way and not even know it's running.
So again, depending on the method you might not even notice that every part on
the car was wearing over time. This may not be good enough for the engineers
at GM, but then again we don't have the data they have for input so using the
latest whiz-bang SAE approach may not improve accuracy anyway.
>Using better resolution would also require more detailed data... I.e.
>more data shuffling. :P
Not sure what you mean here.
>> Granted, if you threw in a bunch of cars running the same model and added good
>> collision detection/response code that percentage will climb, but my point is
>> that on today's PCs what we've considered to be really nasty car models over
>> the past few years will run basically for free. Collision detection/response
>> is most likely the greatest hog in a typical physics engine, but then only
>> when a few objects get close to each other.
>Agree... This is a very complicated problem if it is going to be accurate.
>One cannot test collision between all possible surfaces at the same time.
Sure you can ;-) But resolving the collision impulses between multiple
contacts is not something I've seen done even for a rigid body system, so I
have to agree.
>Hopefully these kinds of calculations will be included in future graphics
>cards... I read that in an interview with a developer at Nvidia regarding
>his vision of GPUs in the future.
That would be interesting to read and I'm sure they've given that a lot more
thought than I have, but then manufacturers might be getting into
standardizing something that may not be the best approach for all situations.
You can get away with simpler collision detection/response in a car sim than
you could in, say, an FPS where a dead character falls against a wall and needs
to bend accordingly. If the graphics card or other add-on "collision chip" is
set up to handle the complex scenario by design then we're losing a lot of
optimization potential. It's a waste. I'm sure the same thing was said about
3-D cards to begin with, so maybe this isn't really a concern.
Do you have a link to the interview?
>> As another example, on our current project we're giving the physics engine
>> 5-10% of the CPU. Really it's not using more than probably 1-2% at this point,
>> and I've got just about every bell and whistle in there I can think of
>> (well....almost.... ;-)). Adding some chassis flex or tire distortion would
>> not impact anything unless you insisted on duplicating FEM engineering
>> software line for line. What if you could get within 5% of the same results
>> but on code that runs 10,000 times faster? If your tire data might be off by
>> any more than 5% there's no point bothering with the FEM approach because the
>> model is no more accurate either way. It's cool to say "this has an FEM
>> model", but if it's not more accurate because of its inclusion, isn't it just
>> hype? (Of course you have to take your hat off to whoever wrote it anyway!)
>Depends on if you're into entertainment or engineering. In engineering it
>is very important to get it right. In entertainment it would not be as
>crucial, but then why use a complicated model at all?
My point is that even the accuracy of an FEM or Magic Formula tire model is
questionable when introduced into a real time vehicle sim, whether it's a game
or a real engineering application. There are no transient effects in Pacejka,
for example, although the steady-state stuff can of course be very close since
the force output comes from measurements. But tires do not appear to behave
the same way when a slip angle is first introduced as they do when they've been
operating at that slip angle for a few seconds, for one thing. So even if you
measure these things on a tire tester and reproduce the force data, how
accurate is the model as a whole when the steering angle changes? FEM models
are probably developed partially by looking at test data, which itself has
limitations. Regardless, this
...