ABSTRACT
Predictive capability has emerged as a key goal, and a key phrase, in much of
modeling research. It is truly important: its attainment would signify
quantitative maturity in the understanding and modeling of the science of
interest. Defining it, however, is difficult. Bona fide predictive capability
requires computational models that have been shown to be valid under widely
accepted standards (validation). This talk identifies and explores issues that
must be confronted in demonstrating the validity of computational models,
particularly models of large, complex systems such as space and fusion plasmas
and the climate. To move toward the community consensus that will ultimately
determine what validity means, we are beginning a process of establishing
guidelines and good practices for the validation of computational models.
To further this effort, this talk will also describe two new metrics we are
developing to quantify the validation process, as well as a graphical method
(Taylor diagrams) for visualizing comparisons among models and between models
and experiment or observation.
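As a minimal sketch of the statistics a Taylor diagram summarizes, assuming
the standard centered-statistics formulation (the function below is
illustrative and is not one of the two metrics mentioned above):

    import numpy as np

    def taylor_stats(model, ref):
        """Statistics plotted on a Taylor diagram: standard deviations,
        correlation, and centered RMS difference (illustrative sketch)."""
        model = np.asarray(model, dtype=float)
        ref = np.asarray(ref, dtype=float)
        m = model - model.mean()   # the diagram uses centered
        r = ref - ref.mean()       # (anomaly) statistics
        sigma_m, sigma_r = m.std(), r.std()
        corr = np.corrcoef(model, ref)[0, 1]
        rms = np.sqrt(np.mean((m - r) ** 2))  # centered RMS difference
        return sigma_m, sigma_r, corr, rms

These quantities satisfy E'^2 = sigma_m^2 + sigma_r^2 - 2*sigma_m*sigma_r*R
(the law of cosines), which is what allows all three statistics to be read
off a single two-dimensional plot.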
A theme of this work is that the entire process must be an active
collaboration among theorists, modelers, and experimentalists/observationalists.
Note: Verification is the process by which it is determined
that a numerical algorithm correctly solves a mathematical model within a set
of specified, predetermined tolerances. Validation is the process by which
it is determined that the mathematical model faithfully represents stipulated
physical processes, again within prescribed limits.
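For the verification half of this distinction, a minimal sketch, assuming a
manufactured or exact solution is available, is an order-of-accuracy test:
refine the grid and check that the error shrinks at the scheme's nominal
rate, within a prescribed tolerance (the error values and tolerance below
are hypothetical):

    import numpy as np

    def observed_order(errors, ratio=2.0):
        """Observed order of accuracy from errors on a sequence of
        grids, each refined by `ratio`."""
        errors = np.asarray(errors, dtype=float)
        return np.log(errors[:-1] / errors[1:]) / np.log(ratio)

    # Hypothetical L2 errors against an exact solution on 3 grids;
    # a second-order scheme should show errors falling ~4x per refinement.
    errors = [1.0e-2, 2.6e-3, 6.4e-4]
    p = observed_order(errors)
    assert np.all(np.abs(p - 2.0) < 0.2), "failed order-of-accuracy test"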