Physics Department Seminar University of Alaska Fairbanks


J O U R N A L    C L U B

 

Verification and Validation of Models (revisited):
It's not just for modelers (experimentalists/observationalists are needed too)

 
by
 
David Newman
Physics Dept./GI UAF

 

ABSTRACT

Predictive capability has emerged as a key goal (and phrase) in much of modeling research. It truly is important, because its attainment would signify quantitative maturity in understanding and modeling the science of interest. However, defining predictive capability is difficult. Bona fide predictive capability will require computational models that have been shown to be valid under widely accepted standards (Validation). This talk identifies and explores issues that must be confronted in demonstrating the validity of computational models, particularly models of large complex systems such as space and fusion plasmas, climate models, etc. To move toward the community consensus that will ultimately determine what validity means, we are trying to begin a process of establishing guidelines and good practices for the validation of computational models.
To further this, in this talk we will also describe two new metrics we are developing for quantifying the validation process, as well as a useful graphical method (Taylor diagrams) for visualizing comparisons between models and between models and experiment/observation (see the illustrative sketch below).
A theme of this work is that this entire process must be an active collaboration among theorists, modelers, and experimentalists/observationalists.
Note: Verification is the process by which it is determined that a numerical algorithm correctly solves a mathematical model within a set of specified, predetermined tolerances. Validation is the process by which it is determined that the mathematical model faithfully represents stipulated physical processes, again within prescribed limits.
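For readers unfamiliar with Taylor diagrams, the short sketch below (not part of the talk, and using synthetic, illustrative data and model names) shows the basic idea: each model is placed on a polar plot whose radius is its standard deviation and whose angle encodes its correlation with a reference observation, so its distance from the observation point is the centered RMS difference.

# Minimal Taylor-diagram sketch with synthetic data (illustrative only).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
obs = np.sin(np.linspace(0, 4 * np.pi, 200))          # stand-in "observation"
models = {
    "model A": obs + 0.2 * rng.standard_normal(obs.size),
    "model B": 0.8 * obs + 0.5 * rng.standard_normal(obs.size),
}

ref_std = obs.std()
fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.set_thetamin(0)
ax.set_thetamax(90)

# Reference (observation) point lies on the horizontal axis.
ax.plot(0.0, ref_std, "ko", label="observation")

for name, sim in models.items():
    corr = np.corrcoef(obs, sim)[0, 1]   # correlation -> angular coordinate
    std = sim.std()                      # standard deviation -> radial coordinate
    # Centered RMS difference follows from the law of cosines on the diagram.
    crmsd = np.sqrt(ref_std**2 + std**2 - 2 * ref_std * std * corr)
    ax.plot(np.arccos(corr), std, "o", label=f"{name} (cRMSD={crmsd:.2f})")

ax.legend(loc="upper right")
plt.show()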



 

Friday, 15 October 2021

On Zoom only: https://zoom.us/j/796501820?pwd=R2xEcXNwZGVRbG0va29iN2REU241UT09

3:45PM