Uncertainty quantification (UQ) is an emerging field that focuses on characterizing, quantifying, and potentially reducing the uncertainties associated with computer simulation models used in a wide range of applications. Although it has been successfully applied to computer simulation models in areas such as structural engineering, climate forecasting, and medical sciences, this powerful research area is still lagging behind in materials simulation models. These are broadly defined as physics-based predictive models developed to predict material behavior, i.e., processing–microstructure–property relations, and have recently received considerable interest with the advent of emerging concepts such as Integrated Computational Materials Engineering (ICME). The need for effective tools for quantifying the uncertainties associated with materials simulation models has been identified as a high-priority research area in the most recent roadmapping efforts in the field. In this paper, we present one of the first efforts in conducting systematic UQ of a physics-based materials simulation model used for predicting the evolution of precipitates in advanced nickel–titanium shape-memory alloys (SMAs) subject to heat treatment. Specifically, a Bayesian calibration approach is used to calibrate the precipitation model using a synthesis of experimental and computer simulation data. We focus on constructing a Gaussian process-based surrogate modeling approach for achieving this task, and then benchmark the predictive accuracy of the calibrated model against that of the model calibrated using traditional Markov chain Monte Carlo (MCMC) methods.
Introduction
Uncertainty quantification (UQ) is increasingly receiving attention due to advances in computational infrastructure and computer simulation models in multiple applications. These models are used to help understand the behavior of complex systems through representation of the phenomena that govern these systems using mathematical and physical formulations, and subsequently predicting specific quantities of interest (QoIs). It is widely accepted that models are imperfect since they involve modeling very complex phenomena that typically occur across different length and time scales [1]. Even if the correct values of the model inputs are used in the simulation, the output of the model will differ from the true value of the physical phenomenon being simulated due to incomplete understanding of the model parameters, incomplete knowledge of the physics, and/or intrinsic stochastic behavior of the system being simulated [2]. Consequently, the challenging task of characterizing and quantifying the uncertainties associated with these models is crucial toward leveraging their benefits and streamlining their applicability. Applications of UQ span a wide range across several science, engineering, and social disciplines such as climate and weather [3], forestry [4], nuclear engineering [5], computational fluid dynamics [6], biological phenomena [7], medicine [8], and econometrics [9].
One field in which the application of UQ is highly desired is computational materials science in general [10] and the emerging field of Integrated Computational Materials Engineering (ICME) in particular [11–13]. ICME is a new approach within the broad field of materials science and engineering that aims to integrate computational materials models to enable the optimization of materials, manufacturing processes, and component design long before components are fabricated [14]. It has evolved due to the tremendous advances in computational materials models over the past three decades. In a recent roadmapping study by The Minerals, Metals & Materials Society (TMS) [15], UQ of these computational models was identified as a fundamental issue that needs to be addressed within the ICME framework. As stated in many recent reports [10,12,13], all computational materials models have uncertainties due to the stochastic nature of materials structures in addition to the levels of uncertainty associated with simulation methods [14]. To date, the focus in ICME has been on the development of physics-based computational models, but UQ has not been a central question despite its crucial importance [10]. In this work, we conduct one of the early efforts in applying formal UQ to an important class of computational materials models known as precipitation evolution models.
Precipitation evolution models are concerned with understanding and predicting the formation, growth, and evolution of secondary-phase particles (precipitates) that form from an initially supersaturated matrix phase during prescribed thermal treatments. Precipitates can play an important role in enhancing and modifying material properties (e.g., strength and hardness), and hence part performance [16,17]. Accurate modeling of the evolution of precipitate populations requires considerable understanding not only of the thermodynamics (i.e., driving forces) and kinetics (i.e., diffusion, growth rates) of the materials of interest, but also of the coupling of sophisticated large-scale simulations with difficult-to-acquire experimental data. Consequently, uncertainties are typically associated with these interacting components, which hampers our understanding of the underlying process and our efforts to develop meaningful predictions [1]. These uncertainties originate from many different sources, some of which are related to the choice of model itself, while others relate to model inputs and parameters [6]. For example, the model may include oversimplified assumptions about some complex and intricate physics. Another possible source of uncertainty might originate from the lack of knowledge about the true value of intrinsic physical properties of the material, such as density or diffusion coefficient, that serve as inputs to the model. Kennedy and O'Hagan [18] provide a classification of the sources of uncertainty in computer simulation models.
The literature on formally characterizing uncertainties associated with ICME computational materials models is sparse. Traditionally, research efforts have utilized Monte Carlo methods to achieve this task [19–23]. Although Monte Carlo methods represent a powerful and well-studied approach, they require a large number of simulation runs, which becomes impractical and often infeasible with ICME models which are typically computationally expensive. In contrast, we present a UQ approach based on surrogate modeling to conduct Bayesian calibration of a physics-based computational model used to predict the evolution of precipitates in advanced shape-memory alloys subject to heat treatment. The calibration problem (sometimes known as the inverse UQ problem) refers to the adjustment of a set of model parameters such that the model's predictions are in agreement with experimental data [24].
Shape-memory alloys (SMAs) are a class of advanced metal alloys attractive to the industrial and academic communities due to their ability to remember their original shape. In other words, after deformation they have the ability to retain their original shape upon being heated [17]. The application scope of SMAs is broad across various sectors such as the automotive, aerospace, manufacturing, and energy exploration sectors [25]. Recently, it has been shown that control of the distribution of secondary strengthening precipitate phases in some SMAs can enable further control strategies to tune their transformation behavior to specific engineering applications [16,26].
The paper is organized as follows: Sec. 2 presents an overview of the literature. The current work represents a unique intersection of the fields of UQ, Bayesian calibration, physics-based precipitation models, and shape-memory alloys. Next, we present the precipitation model for nickel–titanium shape-memory alloys (NiTi SMAs) and its attributes in Sec. 3. The Bayesian calibration model is described in Sec. 4, and subsequently, Sec. 5 reports the application of the calibration model into the precipitation model and the obtained results. We finalize with conclusions, insights, and directions for future work in Sec. 6.
Literature Review
Modeling and simulation of precipitate evolution during material processing is of great interest to materials scientists and engineers since it can have considerable impact on mechanical properties and part performance. Research on precipitation modeling typically starts with the development of thermodynamic models—using, for example, the CALPHAD framework—in order to identify the thermodynamic conditions that yield specific phase constitution states [27–29]. One of the earliest numerical precipitation models, upon which many other studies have relied, is the model provided by Wagner et al. [30]. The existing literature on precipitation modeling focuses on microstructural features and mechanical properties. For instance, Robson et al. [31] develop a model to account for the effect of dislocations on nucleation processes, and predict precipitation behavior in aluminum [32]. A different study conducted by Deschamps et al. [33,34] presents research on microstructure evolution and modeling of precipitation kinetics by integrating nucleation, growth, and coarsening under different heat treatment schemes, and then maps them to mechanical properties. Other works focus on phase field models to assess the morphology of precipitates at different sizes [35] and to analytically predict the evolution of the precipitate volume fraction using calorimetry techniques [36].
While there exists a plethora of research efforts on the topic of computational materials models, there is a corresponding lack of efforts on uncertainty quantification of these models. Chernatynskiy et al. [10] present a review of the few existing works on UQ in computational materials models with emphasis on multiscale simulations. One example is Cai and Mahadevan [37], who conducted uncertainty propagation in a manufacturing process by leveraging Gaussian process modeling. Another example is given in Crews and Smith [38], where the authors use UQ to predict heat transfer parameters of SMA bending actuators.
One possible explanation for the lack of research on UQ in the context of computational materials models is the fact that advances in these models are recent and have focused on aspects relevant to the underlying physics responsible for the phenomena of interest. As these models continue to develop and mature, the need for UQ is receiving more attention, as noted in multiple reports [14,15]. An important note, however, is that despite the lack of this integration, UQ as an independent field is quite mature due to the broad scope of applications that it addresses. Numerous UQ frameworks and methods have been developed to solve different problems such as regression, subset selection, hypothesis testing, and basis expansions [39,40]. In this paper, we focus on Gaussian process (GP) models to conduct UQ in ICME computational materials models.
Gaussian processes have a long history of application in mining, agriculture, and forestry dating back to the 1920s [41]. Due to their versatility, they have experienced significant growth over the past three decades, especially with the advances in low-cost, high-speed computing [42]. GP models are well suited for studying and quantifying the uncertainty of functions and for making predictions given observed data. They have a very wide range of applications that include spatial modeling [43], computer model emulation [44], image analysis [45], and supervised classification and prediction in machine learning [46]. Rasmussen and Williams [46] present the theory and applications of GPs, while Cressie [47] and Gelfand et al. [42] focus on GPs applied to spatial statistics (often called geostatistics), their implementations, and generalizations to different problems. Kennedy and O'Hagan [18,48] established GPs in the UQ field with their seminal article on calibration of computer models, a study that has served as the basis for many subsequent works, including the current work; see, e.g., Refs. [49–51] for surrogate modeling and Refs. [8,52,53] for calibration.
The current work is the first to conduct systematic UQ of a precipitation evolution computational model in shape-memory alloys. By conducting this task, we improve the predictive capability of the model, which is very useful in the design and processing of materials. We focus on NiTi SMAs as a model material due to their unique features highlighted in Sec. 1. Two excellent review articles on SMAs are presented by Lagoudas [54] and Ma et al. [25], summarizing modeling, engineering, and high-temperature applications. The nickel–titanium (NiTi) alloy considered in this paper (commercially known as nitinol) is a popular SMA that has been broadly investigated for medical applications due to its biocompatibility [55,56], superelasticity, and strain control [57,58].
Precipitation Model
The focus of the current work is on conducting uncertainty quantification of a physics-based model, rather than the development of the model itself. In other words, the physics-based model is treated as a black box and the focus is on the development of the statistical UQ framework. In this section, we present a high-level overview of the physics-based model of interest but omit technical details since they are out of the paper's scope.
Material modeling tools are generally used to determine the evolution of a material over time based on the environmental conditions imposed upon it. One approach, precipitation modeling, is used to predict the nucleation and growth of a secondary phase within the matrix of a primary phase. These small precipitates (secondary-phase particles dispersed throughout the material) are compositionally and/or structurally different than the surrounding matrix phase, allowing their process-dependent morphologies to be tracked as they evolve in the presence of various environmental conditions. The sizes and shapes of these particles can drastically alter the properties of the overall material [59,60]. In the context of this research, we are interested in the nickel composition of the matrix which is calculated at any step in the process through stoichiometric considerations and a mass-balance equation involving nickel in the precipitates and the initial nickel content of the material before thermal processing. In turn, the composition of nickel in the matrix, the volume fraction of the precipitates, and their size distribution control the final shape-memory behavior in NiTi alloys [16].
MatCalc, the microstructural modeling tool used in this research, utilizes the numerical Kampmann–Wagner (NKW) approach, coupled with proprietary thermodynamic and kinetic databases, to model the phase transformation of a material [61–63]. The model is unique in that it calculates the evolution of a discrete size distribution of particles in a material system when given a set of times and temperatures representing a thermal processing schedule. This model strikes a balance between the fast but overly simplified Johnson–Mehl–Avrami–Kolmogorov (JMAK) approach [64] and the accurate but computationally intensive phase field approach [65]. The size distribution provides a more complete physical representation of the system than JMAK, and the mean-field approximation for each size class leads to much faster calculations overall when compared to more detailed microstructural evolution techniques, such as phase field modeling, which explicitly simulates the evolution of the entire microstructure, including secondary phases.
The evolution of the precipitate size distribution is tackled by applying the nucleation, growth, and coarsening equations to each bin at each time step, then updating the size distribution bins to reflect the evolution that occurred during that time step. The nucleation of each bin in the distribution is calculated by governing equations based on classical nucleation theory. These nucleated bins are then individually subjected to growth-based governing equations rooted in the thermodynamic extremal principle. Finally, the coarsening process is approximated by establishing a minimum precipitate radius through an analysis of the system as a whole. Precipitates smaller than the minimum size are then dissolved and redistributed to the nearest bins larger than the minimum radius. These three evolution calculations are repeated at every time step, leading to the overall evolution of the size distribution of particles. Each of these stages has a different set of governing equations, which are not included in this paper for the sake of brevity but can be found in the reference provided [63]. Although each set of governing equations has been derived to explain a different portion of the material's overall evolution, they predominantly rely on the same parameters and variables, meaning that all three sets of equations can be calibrated holistically.
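To make the loop structure above concrete, the following minimal Python sketch outlines an NKW-style update cycle. It is purely schematic: `nucleation_rate`, `growth_rate`, and `critical_radius` are hypothetical placeholders standing in for the classical-nucleation-theory, thermodynamic-extremal-principle, and coarsening expressions in Ref. [63]; it does not reproduce MatCalc's actual implementation.

```python
import numpy as np

def evolve_size_distribution(radii, counts, thermal_schedule,
                             nucleation_rate, growth_rate, critical_radius):
    """Schematic NKW-style update loop (not MatCalc's actual implementation).

    radii, counts       : arrays describing the discrete precipitate size bins
    thermal_schedule    : iterable of (temperature, dt) steps
    nucleation_rate(T)  : hypothetical classical-nucleation-theory rate model
    growth_rate(r, T)   : hypothetical growth law for a bin of radius r
    critical_radius(T)  : hypothetical minimum stable radius used for coarsening
    """
    for T, dt in thermal_schedule:
        # 1) Nucleation: add newly formed precipitates to the smallest bin
        counts[0] += nucleation_rate(T) * dt

        # 2) Growth: advance the radius of every bin according to its growth law
        radii = radii + growth_rate(radii, T) * dt

        # 3) Coarsening: dissolve bins below the critical radius and
        #    redistribute their contents to the nearest surviving bin
        dissolved = radii < critical_radius(T)
        if dissolved.any() and (~dissolved).any():
            target = np.flatnonzero(~dissolved)[0]
            counts[target] += counts[dissolved].sum()
            counts[dissolved] = 0.0
    return radii, counts
```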
Three terms (interfacial energy, diffusion correction prefactor, and diffusion correction exponential factor) significantly influence the kinetic behavior of the overall model and as such can be used to calibrate the model to experimental data. The interfacial energy term influences the behavior of the nucleation portion of the model, while the diffusion correction factors influence all three stages of precipitate evolution listed above by modifying the substitutional diffusivity of the matrix phase. Values for the interfacial energy are known with far less accuracy than is desirable, especially considering how sensitive the entire model is to this term. In fact, MatCalc attempts to calculate an interfacial energy value using the generalized broken bond (GBB) method if no calibrated interfacial energy is given [63,66]. This GBB approach works well for many systems but failed to accurately reproduce experimental results for precipitation. Previous parametric calibration attempts led to a specific calibrated value for the interfacial energy. The diffusion correction terms can be seen as unknown values because they are essentially fitting parameters for the diffusivity of the matrix. In the remainder of the paper, we focus on conducting statistical calibration of the three parameters described above.
Statistical Calibration of the Precipitation Model
We start by introducing some definitions and notations. We distinguish between the model presented in this section to conduct statistical calibration and the physics-based precipitation model (Sec. 3) that is to be calibrated. We will refer to the former as the statistical model and the latter as the computer model to avoid confusion. It is common practice in the literature to refer to computational models as computer models since they involve running a computer simulation code.
with mean vector $\mathbf{m}$ and covariance matrix $\mathbf{C}$ defined by a mean function $m(\cdot)$ and a covariance function $c(\cdot,\cdot)$, respectively. We will denote this as $Y(\cdot) \sim \mathcal{GP}\left(m(\cdot), c(\cdot,\cdot)\right)$, with $m_i = m(\mathbf{x}_i)$ and $C_{ij} = c(\mathbf{x}_i, \mathbf{x}_j)$. It is evident that the choice of mean and covariance functions will influence the distribution in Eq. (1); therefore, they need to be carefully selected. Useful guidelines for their selection are given in Refs. [46] and [67] for the interested reader.
In the calibration problem, we distinguish between two groups of inputs to the computer model:
Calibration parameters, denoted by the vector $\boldsymbol{\theta}$, represent physical parameters that can be specified as inputs to the computer model, and whose values are unknown or not measurable [18,49]. Examples include properties of materials (such as conductivity or interfacial energy, among many) which are not easily determined for materials models, or saturation constants in ecosystem models [68]. The goal of the calibration problem is to estimate these parameters such that the computer model simulations agree with the experimental observation of the real process being simulated.
Control inputs (or design variables) denoted by the vector x are variables that are set to known values by the user [18]. Examples of these inputs include temperature, pressure, force, or any other quantities that are known and controlled by the user.
$$z(\mathbf{x}) = \rho\,\eta(\mathbf{x}, \boldsymbol{\theta}) + \delta(\mathbf{x}) + \epsilon \qquad (2)$$

where $\rho$ is a scaling factor of the computer model $\eta(\cdot,\cdot)$, $\delta(\mathbf{x})$ is a discrepancy function or model inadequacy function whose role is to account for the missing physics in the computer model, and $\epsilon$ is the observation or measurement error.
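For illustration only, the structure of this observation model can be sketched as follows; `eta_model`, `delta`, and the noise level `sigma_eps` are hypothetical placeholders rather than the actual precipitation model, its discrepancy, or the measurement-error level estimated in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def observe(x, theta, eta_model, delta, rho=1.0, sigma_eps=0.02):
    """Toy illustration of Eq. (2): z(x) = rho * eta(x, theta) + delta(x) + eps.

    eta_model : placeholder for the computer model (the precipitation code)
    delta     : placeholder for the discrepancy (missing-physics) function
    sigma_eps : assumed measurement-error standard deviation (not from the paper)
    """
    eps = rng.normal(0.0, sigma_eps)                  # measurement error
    return rho * eta_model(x, theta) + delta(x) + eps
```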
The calibration problem involves a synthesis of computer simulations and experimental observations. More specifically, to solve the calibration problem, one needs to collect experimental observations, $\mathbf{Z} = (z_1, \ldots, z_n)$, and run a sufficiently large number of computer model simulations, $\mathbf{Y} = (y_1, \ldots, y_N)$. The main task is to use these to infer the values of the calibration parameters such that the computer model output most closely matches the real process represented by the experimental observations. An important challenge within this context is the computational burden associated with generating the large number of computer model simulations needed to effectively conduct calibration. For example, some computer models might require the numerical solution to systems of coupled nonlinear partial differential equations. In this case, even generating a few simulations can be a challenging task.
When the above holds true, it is common practice to construct a surrogate model (sometimes known as an emulator or a meta-model). This represents a computationally efficient approximation that encapsulates the behavior of the original computer model and can thus be used to generate a large number of simulations. Of course, when the computer model is not computationally expensive, it can then be directly used to conduct calibration, without the need to construct this computationally efficient surrogate description. We will refer to this as direct calibration. It is important to note that although our main focus is on calibrating the precipitation model using a surrogate model, we will also present the direct calibration approach for benchmarking and comparison purposes.
Direct Calibration.
We model the discrepancy function defined in Eq. (2) as a Gaussian process, $\delta(\cdot) \sim \mathcal{GP}\left(m(\cdot), c(\cdot,\cdot)\right)$. The mean function is $m(\mathbf{x}) = \mathbf{h}(\mathbf{x})\boldsymbol{\beta}$, where $\mathbf{h}(\mathbf{x})$ is a row vector of basis functions and $\boldsymbol{\beta}$ is a vector of regressors; in other words, the mean is a linear combination of the basis functions. The covariance function $\mathbf{C}$ will depend on an additional set of parameters (commonly known as hyperparameters), denoted $\boldsymbol{\phi}$. These will be presented in detail in Sec. 5. Finally, the measurement error terms are modeled as independent and identically distributed (iid) normal random variables, $\epsilon_i \sim N(0, \sigma_\epsilon^2)$ for all $i = 1, \ldots, n$. Hence, the statistical model is fully characterized by the set of parameters $\boldsymbol{\Phi} = \{\boldsymbol{\theta}, \rho, \boldsymbol{\beta}, \boldsymbol{\phi}, \sigma_\epsilon^2\}$.
where $L(\mathbf{Z} \mid \boldsymbol{\Phi})$ is the likelihood function defined as the probability of observing $\mathbf{Z}$ conditioned on the model parameters ($\boldsymbol{\Phi}$ in this case) [69]. The term $p(\boldsymbol{\Phi})$ is the prior distribution of $\boldsymbol{\Phi}$ that captures our prior belief or knowledge, and $p(\boldsymbol{\Phi} \mid \mathbf{Z})$ is the posterior distribution of $\boldsymbol{\Phi}$ conditioned on the experimental observations.
Selection of these prior distributions plays an important role in the parameter estimation process. This selection is driven by our prior knowledge or belief regarding the parameters. It is common practice to assign uninformative prior distributions to parameters for which little knowledge is available (for example, a uniform distribution or normal distribution with large variances). Finally, utilization of conjugate priors such that the prior and posterior distributions of the parameter belong to the same family is usually recommended and offers practical and computational advantages [41].
Several methods can be utilized to obtain this posterior distribution. We rely on Markov chain Monte Carlo (MCMC) methods, specifically Gibbs sampler and Metropolis–Hastings (MH) algorithm [70,71].
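A minimal sketch of a random-walk Metropolis–Hastings sampler is shown below; the proposal step size and the `log_posterior` argument are placeholders, and the MCMC settings actually used in this work (number of iterations, burn-in, thinning) are reported in Sec. 5.

```python
import numpy as np

def metropolis_hastings(log_posterior, theta0, n_iter=15_000, step=0.1, rng=None):
    """Random-walk Metropolis-Hastings sampler (generic sketch).

    log_posterior : callable returning the unnormalized log posterior density
    theta0        : starting point of the chain (1-D numpy array)
    step          : proposal standard deviation (a tuning choice, not from the paper)
    """
    rng = rng or np.random.default_rng()
    theta = np.asarray(theta0, dtype=float)
    logp = log_posterior(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        proposal = theta + rng.normal(0.0, step, size=theta.size)
        logp_prop = log_posterior(proposal)
        # Accept with probability min(1, posterior ratio); work in log space
        if np.log(rng.uniform()) < logp_prop - logp:
            theta, logp = proposal, logp_prop
        chain[i] = theta
    return chain
```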
The best linear unbiased predictor (BLUP) will subsequently be used in Sec. 5 to assess the prediction accuracy of the calibrated model using a test set or cross-validation procedures.
Surrogate Calibration.
for $j = 1, \ldots, N$ and $i = 1, \ldots, n$.
where the star notation for the control inputs x in the set D2 indicates that they might be of different values than the control inputs used in D1.
where $\eta(\cdot,\cdot) \sim \mathcal{GP}\left(m_1(\cdot,\cdot), c_1(\cdot,\cdot)\right)$ and $\delta(\cdot) \sim \mathcal{GP}\left(m_2(\cdot), c_2(\cdot,\cdot)\right)$. Similar to the direct calibration case, the covariance functions $c_1$ and $c_2$ are fully characterized by the sets of hyperparameters $\boldsymbol{\phi}_1$ and $\boldsymbol{\phi}_2$, respectively, and henceforth, all of the parameters of this statistical model can be grouped into two sets: $\boldsymbol{\Phi}_1 = \{\boldsymbol{\phi}_1\}$ and $\boldsymbol{\Phi}_2 = \{\rho, \boldsymbol{\phi}_2, \sigma_\epsilon^2\}$, which along with the calibration parameters $\boldsymbol{\theta}$ represent all the statistical parameters that will be estimated using the Bayesian procedure. Notice that the measurement error still follows a normal distribution, $\epsilon \sim N(0, \sigma_\epsilon^2)$.
Bayesian estimation of the parameters is conducted in two stages. The first stage is to estimate the surrogate-model hyperparameters $\boldsymbol{\Phi}_1$ by maximizing their posterior probability distribution given the simulation data. Subsequently, in the second stage, the remainder of the parameters, $\boldsymbol{\Phi}_2$, is estimated by maximizing their conditional posterior distribution given the full data and the stage-one estimates. A detailed procedure to carry out these optimizations is presented in the seminal work by Kennedy and O'Hagan [48].
Following these two stages, the remaining step is to obtain the posterior distribution of the calibration parameters $\boldsymbol{\theta}$. We employ MCMC methods in a similar fashion to the direct calibration case.
Case Study: Calibration of the Precipitation Model for NiTi SMAs
In this section, we implement both calibration approaches presented in Sec. 4 to calibrate the computer model of interest. As detailed in Sec. 3, the computer model in our case predicts the evolution of precipitates in NiTi shape-memory alloys subject to heat treatment. More specifically, the model predicts the atomic percent (at. %) of nickel in the matrix following heat treatment for a specified period of time at a preset temperature.
The control inputs and calibration parameters are given as follows:
Three control inputs $\mathbf{x}$:
- $x_1$ — Initial nickel content in the alloy (at. %)
- $x_2$ — Aging (heat treatment) temperature (°C)
- $x_3$ — Aging (heat treatment) time (s)

Three calibration parameters $\boldsymbol{\theta}$:
- $\theta_1$ — Interfacial energy
- $\theta_2$ — Diffusion correction prefactor (dimensionless)
- $\theta_3$ — Diffusion correction activation
The output of the real process, $z$, and of the computer model, $\eta$, corresponds to the final nickel content of the surrounding matrix (at. %). The posterior distribution of the unknown calibration parameters $\boldsymbol{\theta}$ will be derived using a synthesis of experimental observations and computer model simulations.
The execution time of the computer model is highly dependent on its inputs. For example, although a simulation under a heat treatment setting with a longer aging time is generally expected to take longer than one with a shorter aging time, certain values of the calibration parameters can result in a different outcome. From running multiple simulations in the initial model testing phase, execution times ranged between 3 s and 5 min. It is important to point out that although this might not represent a significant computational burden when obtaining a model prediction for a particular input combination, conducting statistical calibration requires a large number of computer simulations to perform MCMC sampling (10,000 or more). For example, with a median execution time of 2.5 min, generating a sufficient number of simulations to conduct MCMC calibration can take up to two weeks. This makes the subsequent sensitivity analyses, uncertainty analyses, and what-if scenarios typically needed by the user quite challenging. Our focus in this work is thus on conducting surrogate model calibration. We will implement the direct MCMC calibration for benchmarking purposes.
Experimental Observations.
The experimental observations consisted of 36 data points. Each data point represents the final nickel content of the matrix (at. %) corresponding to a specific heat treatment regime (aging temperature and aging time) for a NiTi alloy with a particular initial composition. These data correspond directly to the computer model output and control inputs, respectively.
To avoid oxygen contamination, the samples were individually sealed into quartz tubes under a high-purity argon atmosphere. Heat treatments were performed in a muffle furnace followed by subsequent water quenching. A technique called differential scanning calorimetry (DSC) was first used to directly measure the resulting martensitic transformation temperature of each heat treated alloy. The martensitic transformation temperature is a property of NiTi alloys that is highly sensitive to the amount of nickel in the surrounding matrix. The formation of nickel-rich precipitates causes a change in the composition of the alloy, which in turn reflects into a change in the martensitic transformation temperatures. In other words, the martensitic transformation temperature determined through DSC can be used to indirectly measure the matrix composition.
The relationship between NiTi composition and the martensitic transformation temperatures has been extensively studied in the literature, for example in Refs. [73–75]. In the current work, the DSC measurements of the martensitic transformation temperatures were converted to nickel composition (at. %) using curve fitting to the data presented in Frenzel et al. [76]. This yielded the 36-point experimental data set used henceforth. From this set, 31 observations will be used to calibrate (or train) the precipitation model as detailed next, and five will be used for testing and validation to assess the predictive performance of the calibrated model. Figure 1 and Table 3 in the Appendix present the whole experimental data set.

Fig. 1: Scatterplot matrix of the experimental data set showing the relative location of the test data points to the training data points
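As an illustration of this conversion step only, the sketch below fits a low-order polynomial to a few (martensitic transformation temperature, at. % Ni) pairs taken from Table 3 in the Appendix; in the study itself the curve was fit to the data of Frenzel et al. [76], and the polynomial degree used here is an arbitrary choice.

```python
import numpy as np

# A few (martensitic transformation temperature in degC, at.% Ni) pairs taken
# from Table 3 in the Appendix, used purely to illustrate the conversion step.
ms_temp = np.array([-81.0, -49.0, -25.0, -12.0, 0.0, 10.0, 24.3])
ni_at   = np.array([51.2715, 51.1417, 50.9902, 50.8820, 50.7665, 50.6619, 50.5032])

# Fit a low-order polynomial mapping Ms temperature to matrix Ni content
coeffs = np.polyfit(ms_temp, ni_at, deg=3)
ms_to_ni = np.poly1d(coeffs)

print(ms_to_ni(-5.0))   # estimated at.% Ni for a measured Ms of -5 degC
```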
Direct Calibration.
Following the notation of Sec. 4, we have $q_x = 3$ control inputs $\mathbf{x} = (x_1, x_2, x_3)$ and $q_\theta = 3$ calibration parameters $\boldsymbol{\theta} = (\theta_1, \theta_2, \theta_3)$. We define the region $\chi_x$ to include all values from the experimental set; that is, the hypercube with bounds $x_1 \in [50.7, 51.4]$ at. %, $x_2 \in [400, 550]$ °C, and $x_3 \in [36{,}000, 360{,}000]$ s. Similarly, the region $\chi_\theta$ is also defined as a hypercube; however, its bounds are specified by the modelers, based on their experience and prior knowledge of the process, to a region within which they have reason to believe the true values of $\boldsymbol{\theta}$ might lie. These bounds (with the lower endpoints not included) were chosen accordingly for $\theta_1$, $\theta_2$, and $\theta_3$.
Here, $d(\mathbf{u}, \mathbf{v})$ is a distance measure between inputs $\mathbf{u}$ and $\mathbf{v}$ known as the Mahalanobis distance [42], with length scale parameters $\omega_k$, which control the relative influence of each input dimension on the process. It can be seen that the covariance function is fully defined by the hyperparameters $\{\sigma^2, \omega_1, \ldots, \omega_6\}$. The smoothness parameter was kept constant, based on preliminary work by the authors and common practice in the literature [41,42].
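A minimal sketch of such a covariance function with per-dimension length scales is given below; the Matérn smoothness of 3/2 is assumed purely for illustration and is not necessarily the fixed value used in this work.

```python
import numpy as np

def matern32_cov(U, V, sigma2, lengthscales):
    """Matern-type covariance with per-dimension length scales (ARD).

    The smoothness is fixed at 3/2 here purely for illustration.

    U, V         : (n, q) and (m, q) arrays of input points
    sigma2       : marginal variance hyperparameter
    lengthscales : one length scale omega_k per input dimension; larger values
                   imply lower influence of that dimension on the covariance
    """
    diff = (U[:, None, :] - V[None, :, :]) / lengthscales   # scaled differences
    d = np.sqrt(np.sum(diff**2, axis=-1))                   # Mahalanobis-type distance
    return sigma2 * (1.0 + np.sqrt(3.0) * d) * np.exp(-np.sqrt(3.0) * d)
```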
Selection of Prior Distributions.
The reasoning behind this selection is straightforward. With the exception of the variance parameters $\sigma^2$ and $\sigma_\epsilon^2$, uninformative uniform distributions were selected for all the parameters due to the lack of prior information regarding the shape of their distributions. The scaling factor $\rho$ is defined to take values between 0 and 1, while each $\beta_j$ can take on any real value and each $\omega_j$ is limited to positive values due to the structure of the Mahalanobis distance. The calibration parameters $\theta_1$, $\theta_2$, and $\theta_3$ take values within the previously defined bounds of the study region $\chi_\theta$. Notice that an indicator function has been used in the prior distributions of $\theta_1$, $\theta_2$, and $\theta_3$. This function is equal to 1 if the final (output) nickel content predicted by the computer model differs from the initial (input) nickel content, and 0 otherwise. It was used to reject values of $\theta_1$, $\theta_2$, and $\theta_3$ for which the final and initial nickel content are equal, which indicates that precipitation did not occur. This represents an anomaly that needs to be discarded. Finally, inverse gamma prior distributions were selected for the variance parameters $\sigma^2$ and $\sigma_\epsilon^2$. These represent conjugate distributions with respect to the normal likelihood.
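The following sketch illustrates how such a uniform-plus-indicator prior can be evaluated; the function and argument names are hypothetical, the bounds are placeholders, and the inverse gamma priors on the variance parameters (and the priors on $\rho$, $\beta_j$, $\omega_j$) are omitted for brevity.

```python
import numpy as np

def log_prior(theta, x_obs, computer_model, theta_bounds):
    """Sketch of the uniform-with-indicator prior on (theta1, theta2, theta3).

    theta_bounds   : sequence of (low, high) tuples defining chi_theta (placeholders)
    computer_model : callable returning the predicted final matrix Ni content
                     (at. %) for control inputs x_obs and parameters theta
    """
    # Uniform support: zero prior density outside the study region chi_theta
    for t, (low, high) in zip(theta, theta_bounds):
        if not (low < t <= high):
            return -np.inf

    # Indicator term: reject parameter values for which the predicted final Ni
    # content equals the initial Ni content, i.e., no precipitation occurred
    if np.isclose(computer_model(x_obs, theta), x_obs[0]):
        return -np.inf

    return 0.0   # flat (uninformative) log density inside the support
```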
Model Calibration.
Next, we conduct Bayesian calibration of the model by obtaining the posterior distributions of the parameters above using MCMC simulation. We employ the Metropolis–Hastings algorithm, which requires the evaluation of the posterior distribution given in Eq. (6) at each iteration. This implies that $\mathbf{Y}$ needs to be evaluated, and therefore the computer model is executed $n$ times (once per experimental control input setting) at each of the MCMC iterations. We utilize parallel programming on the 862-node Intel x86-64 Linux cluster supercomputer at the Texas A&M high performance research computing facilities. MCMC simulations were implemented over ten cores on the supercomputer and run for 15,000 iterations, with a 25% burn-in period and thinning every fifth iteration. Results for the posterior distributions of the calibration parameters are shown in Fig. 2, where each plot displays a histogram with 20 bins and the corresponding kernel density estimate.

Fig. 2: Histograms and kernel density estimates of the posterior distribution for the calibration parameters using direct calibration: (a) θ1, (b) θ2, and (c) θ3
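The post-processing of the chains (burn-in, thinning, and the histogram and kernel density summaries of Fig. 2) can be sketched as follows; the `chain` array below is a synthetic placeholder standing in for the actual MCMC output.

```python
import numpy as np
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt

# Placeholder chain standing in for the (15,000 x 3) array of MCMC draws of
# (theta1, theta2, theta3); in practice this is the sampler output.
chain = np.random.default_rng(1).normal(size=(15_000, 3))

burn_in = int(0.25 * len(chain))            # discard the first 25% of iterations
samples = chain[burn_in::5]                 # keep every fifth draw (thinning)

theta1 = samples[:, 0]
plt.hist(theta1, bins=20, density=True)     # histogram with 20 bins
grid = np.linspace(theta1.min(), theta1.max(), 200)
plt.plot(grid, gaussian_kde(theta1)(grid))  # kernel density estimate overlay
plt.xlabel("theta_1 (interfacial energy)")
plt.show()
```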
In the graphs, we can see that θ1 and θ2 have well-defined, informative, unimodal posterior distributions, with θ1 having a single sharp peak and θ2 being skewed to the right. The posterior distribution of θ3 shows some uniformity, which implies that all values across its domain are nearly equally likely to characterize the real process. Table 1 presents the mean, mode (most frequent value), and standard deviation of the posterior distribution for each calibration parameter.
Posterior distribution values for calibration parameters
| Parameter | Posterior mean | Posterior mode | Posterior SD |
|---|---|---|---|
| Interfacial energy, θ1 | 0.0498 | 0.0536 | 0.0079 |
| Diffusion correction prefactor, θ2 | 75.4143 | 73.7015 | 13.9599 |
| Diffusion correction activation, θ3 | 533.6609 | 587.1682 | 274.9424 |
The shape of the θ1 distribution can be attributed to the fact that it appears as a cubic term within an exponential in the governing equations for precipitate evolution. The narrow nature of its distribution, in comparison to the other two parameters, can be explained by its appearance in the governing equation for precipitate nucleation, which is the materials science equivalent of initial conditions drastically influencing the solution of a differential equation. The interfacial energy is so dominant in precipitation models because relatively small values imply almost no barrier to nucleation, leading to the precipitation of all the precipitates that can form in the material (ultimately limited by thermodynamics) right at the beginning of the precipitation process. On the other hand, slightly larger values of this property may result in a barrier to nucleation that effectively results in no precipitation of secondary phases at all during reasonable heat treatment times. Thus, only a narrow range of interfacial energy values can yield physically meaningful precipitation predictions. In fact, the estimated mean and mode of θ1 are well within the range of values reported in the literature and very close to the value calculated through parametric sweep–experimental data coupling.
It is more difficult to ascertain direct significance from specific values of the θ2 and θ3 distributions since they represent corrections to the diffusivity values taken from the MatCalc NiTi kinetic database. These corrections are necessary because one does not really know the state of the microstructure before precipitation starts. Particularly relevant is the presence, for example, of grain boundaries, dislocations, and other defects that can catalyze (facilitate) the nucleation of the second-phase precipitate particles. The ranges for each parameter were selected such that only physically realistic values would be calculated for these two parameters. Specifically, θ2 and θ3 correspond to corrections to the diffusion coefficient and activation energy values, respectively. Although they represent correction factors and as such have no direct physical significance themselves, it can be inferred that these two diffusion correction values are less influential (i.e., have broader distributions) than θ1 for two main reasons. First, the equations that contain them are not as sensitive as a cubic term inside an exponential, and second, they enter at a later stage of the precipitate evolution, which reduces their influence relative to θ1.
Surrogate Calibration.
We now calibrate the computer model using the surrogate calibration to account for the computational burden associated with direct calibration. Calibration will be conducted based on the redefined statistical model in Eq. (10).
Surrogate Model Construction.
The first step toward surrogate calibration is to construct the computationally efficient surrogate model that can be used in lieu of the original computer model. We start by generating a data set of computer model simulations over different combinations of the inputs $(\mathbf{x}, \boldsymbol{\theta})$. A data set D1 with size N = 3025 points was generated. Latin hypercube sampling was used to select values of $(\mathbf{x}, \boldsymbol{\theta})$ uniformly over the space $\chi_x \times \chi_\theta$ at which the computer model is evaluated. This data set was again generated using parallelization on the supercomputer to reduce the execution time. The same data set of experimental observations used for direct calibration was utilized for surrogate model calibration. We denote it as D2, with size n = 31.
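A sketch of this design-generation step using Latin hypercube sampling is given below; the control-input bounds follow the experimental set (Table 3), whereas the calibration-parameter bounds shown are placeholders standing in for the region $\chi_\theta$ specified by the modelers.

```python
import numpy as np
from scipy.stats import qmc

# Bounds of the sampling space for (x1, x2, x3, theta1, theta2, theta3).
# Control-input bounds follow the experimental set; the calibration-parameter
# bounds below are placeholders for the region chi_theta defined by the modelers.
l_bounds = [50.7, 400.0, 36_000.0, 1e-3, 1.0, 1.0]
u_bounds = [51.4, 550.0, 360_000.0, 0.1, 100.0, 1000.0]

sampler = qmc.LatinHypercube(d=6, seed=0)
unit_sample = sampler.random(n=3025)                  # N = 3025 design points in [0, 1]^6
design = qmc.scale(unit_sample, l_bounds, u_bounds)   # rescale to the study region

# Each row of `design` is one (x, theta) combination at which the computer
# model (the MatCalc precipitation simulation) would be evaluated to build D1.
```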
The distance measures are now squared Mahalanobis distances. Similar to the previous case, the covariance function C1 is fully defined by the hyperparameters $\boldsymbol{\phi}_1$, and C2 by $\boldsymbol{\phi}_2$. Notice that these covariance functions represent special cases of the Matérn function when the smoothness parameter tends to infinity [42].
Selection of Prior Distributions.
Model Calibration.
Next, we estimate the hyperparameters of the GP models by maximizing their marginal posterior distributions in stages 1 and 2 as explained in Sec. 4.2. Subsequently, estimation of the calibration parameters was performed using MCMC with the same specified simulation procedure; that is, 15,000 iterations with 25% burn-in period and thinning set to every fifth iteration. The posterior distributions for the calibration parameters are shown in Fig. 3.

Fig. 3: Histograms and kernel density estimates of the posterior distribution for the calibration parameters using the surrogate model: (a) θ1, (b) θ2, and (c) θ3
By inspecting the obtained posterior distributions, we see some differences from those obtained using direct calibration. The distribution of θ1 exhibits informative bimodal behavior with two distinct peaks. The distribution of θ2 is skewed to the left, in contrast to the direct calibration case, and the distribution of θ3 shows similarly uninformative behavior to the direct calibration case. Posterior values are presented in Table 2 in the same manner as for the previous case.
Posterior distribution values for calibration parameters
| Parameter | Posterior mean | Posterior mode | Posterior SD |
|---|---|---|---|
| Interfacial energy, θ1 | 0.0401 | 0.0197 and 0.0544 | 0.0173 |
| Diffusion correction prefactor, θ2 | 44.9248 | 20.0463 | 27.4283 |
| Diffusion correction activation, θ3 | 512.0809 | 969.1429 | 294.1693 |
As with the previous case, θ1 is the parameter with direct physical significance, while θ2 and θ3 represent correction factors for the kinetic equations of the model. The bimodal nature of θ1 is most likely a result of the inability of the surrogate model to directly compare the initial and final nickel content of the matrix, a comparison used as a sanity check in the direct model. To elaborate, note that while the surrogate model replaces the computer model, it does not capture any physics, and thus does not recognize that cases where the initial and final nickel content are identical should be discarded.
To make this clearer: in the direct calibration model, an unreasonably small value of the interfacial energy would result in instantaneous saturation of the precipitate phase to the volume fraction predicted through thermodynamics (lever rule). On the other hand, unreasonably large values of this quantity would result in a practically infinite barrier to nucleation, leading to no precipitate phases regardless of the heat treatment time. The former case would result in complete nickel depletion at times close to t = 0, while the latter would correspond to no nickel depletion at all. In the direct model, these two problematic cases could be detected and discarded from the sampled distribution. That being said, both peaks are within the acceptable range of values seen in the literature.
The explanation of the θ2 and θ3 distributions remains essentially the same for the surrogate model as it was for the direct case. These two parameters are correction factors and as such have no direct physical connection. All that can be said is that the mean and mode values seen in Fig. 3 result in physically realistic diffusion coefficient and activation energy values. Again, these two distributions are broader because the parameters influence later stages of the precipitate evolution, which have less of an impact on the overall precipitate morphology than early-stage parameters such as θ1.
In terms of hyperparameters, the estimated variances are consistent with what we would expect. For instance, the marginal variance of the surrogate model, $\sigma_1^2$, is smaller than that of the discrepancy function, $\sigma_2^2$. This is justified since the discrepancy function accounts for the missing physics between the computer model and the experimental observations; therefore, it handles noisier data, which translates to larger uncertainty for the discrepancy function. Additionally, the surrogate model replaces a deterministic computer model; therefore, it is expected that the surrogate will have smaller uncertainty given that the simulation data set D1 does not involve any noise besides numerical errors.
Regarding the length scale parameters, a large value translates to low influence of that specific input dimension on the covariance function, and vice versa. The estimated values show several interesting results. Within the surrogate model (Eq. (17)), the most important input dimension corresponds to the interfacial energy (related to ω4), which makes sense as it highly influences the precipitation of nickel. On the other end, the aging time (ω3) was estimated to be the least impactful on the surrogate model. In Sec. 5.5, we present a variance-based sensitivity analysis of the model, which further explains the dependence of the process output on each of the inputs.
Predictive Performance of the Calibrated Models.
In Secs. 5.2 and 5.3, 31 experimental observations were used to calibrate the precipitation model using both the direct and surrogate modeling approaches. We now use the five remaining experimental observations (the test set) to assess their performance.
where $S$ is the total number of posterior distribution samples retained from MCMC (after the burn-in period and thinning) and $\boldsymbol{\theta}^{(i)}$ is the $i$th posterior sample. A similar procedure is utilized for the predictive distribution parameters in the remaining Eqs. (8), (14), and (15).
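A sketch of this averaging step is shown below; `pred_mean_fn` and `pred_var_fn` are placeholders for the predictive expressions in Eqs. (8), (14), and (15), and the variance combination shown is one natural choice based on the law of total variance.

```python
import numpy as np

def predictive_moments(pred_mean_fn, pred_var_fn, x_test, posterior_samples):
    """Average the GP predictive distribution over retained MCMC posterior samples.

    pred_mean_fn(x, theta), pred_var_fn(x, theta) : predictive mean/variance
        conditioned on a particular draw of the calibration parameters
        (placeholders for the expressions in Eqs. (8), (14), and (15))
    posterior_samples : (S, q_theta) array of retained MCMC draws
    """
    means = np.array([pred_mean_fn(x_test, t) for t in posterior_samples])
    variances = np.array([pred_var_fn(x_test, t) for t in posterior_samples])
    mean = means.mean(axis=0)
    # Law of total variance: mean within-draw variance + variance of the means
    var = variances.mean(axis=0) + means.var(axis=0)
    return mean, var
```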
First, we run a formal ten-fold cross-validation procedure focused solely on assessing how accurately the surrogate model approximates the computer model. Figure 4 shows the results of this cross-validation, where the abscissa is the simulated value (the computer model output) and the ordinate is the mean of the prediction obtained by the surrogate model; therefore, each point in the plot compares a simulation with a prediction for a specific input combination $(\mathbf{x}, \boldsymbol{\theta})$. The straight line is a reference indicating ideal predictions, i.e., the case where simulations and predictions are identical.
The surrogate model appears to adequately capture the behavior of the computer model, as demonstrated by the points being scattered close to the ideal red line. It is worth noting that we are comparing 3025 points from data set D1 and that the densest regions in the plot are those closest to the red line. Numerically, the average deviation from the ideal line is 0.0661 at. %, which further confirms these results.
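A sketch of the ten-fold cross-validation loop is given below; `fit_surrogate` and `predict` are placeholders for the GP training and prediction routines, and the mean absolute deviation is used here as one plausible choice for the "average deviation" quoted above.

```python
import numpy as np

def ten_fold_cv(inputs, outputs, fit_surrogate, predict, rng=None):
    """Ten-fold cross-validation of the surrogate against the simulation set D1.

    fit_surrogate(X, y) -> model and predict(model, X) -> predictions are
    placeholders for the GP training/prediction routines used in the study.
    Returns the mean absolute deviation between predictions and simulations.
    """
    rng = rng or np.random.default_rng(0)
    idx = rng.permutation(len(inputs))
    folds = np.array_split(idx, 10)
    deviations = []
    for k in range(10):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(10) if j != k])
        model = fit_surrogate(inputs[train], outputs[train])
        preds = predict(model, inputs[test])
        deviations.append(np.abs(preds - outputs[test]))
    return np.concatenate(deviations).mean()
```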
Next, we use the predictive distributions from Eq. (20) to assess the performance of both approaches, employing the set of data points not used for training the models and comparing their predictive distributions to the actual observed values (which might be noisy). Figure 5 displays model predictions for the test set of five points using the direct and surrogate calibration approaches, respectively. The abscissa is the experimental observation, while the ordinate shows the mean of the predicted value at the square marker. The confidence interval bars represent ±1 standard deviation of the predictive distribution.
The standard error of prediction is computed as $\sqrt{\tfrac{1}{n_t}\sum_{i=1}^{n_t}\left(z_i - \hat{z}_i\right)^2}$, where $z_i$ is the $i$th experimental observation from the test set, $\hat{z}_i$ is the $i$th predicted value (the mean of the predictive distribution in this case), and $n_t$ is the number of data points in the test set (five in our case). The standard error of prediction was equal to 0.0656 and 0.0675 for the direct and surrogate calibrations, respectively. These values are approximately 3% of the full range of experimental observations, which is acceptable given all the experimental uncertainties in these types of studies.
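A direct implementation of this error measure, assuming it is the root-mean-square deviation over the test set as written above, is:

```python
import numpy as np

def standard_error_of_prediction(z_test, z_pred):
    """Root-mean-square deviation between held-out observations and their
    predicted means (a sketch of the error measure quoted above)."""
    z_test, z_pred = np.asarray(z_test), np.asarray(z_pred)
    return np.sqrt(np.mean((z_test - z_pred) ** 2))
```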
Sensitivity Analysis.
where $u \subseteq \{1, \ldots, q\}$ denotes a subset of the input indices. Given two sample points, we denote by a hybrid point the point whose $j$th element equals that of the first sample if $j \in u$, and that of the second sample if $j \notin u$.
where a large value of the index represents a higher influence of the variable subset $u$ on the function $f$. For reporting purposes, the indices are frequently normalized by the variance of the function evaluated at the first sample set.
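One common Monte Carlo estimator of the total-effect indices built from such hybrid points is the Jansen estimator, sketched below; the exact estimator used in this study may differ.

```python
import numpy as np

def total_sobol_indices(f, A, B):
    """Monte Carlo estimate of total-effect Sobol indices (Jansen estimator).

    f    : vectorized model, f(X) -> (N,) outputs
    A, B : two independent (N, q) sample matrices over the input space
           (e.g., generated by Latin hypercube sampling as above)
    """
    fA = f(A)
    var = fA.var()
    q = A.shape[1]
    total = np.empty(q)
    for j in range(q):
        ABj = A.copy()
        ABj[:, j] = B[:, j]           # 'hybrid' points: column j taken from B
        total[j] = 0.5 * np.mean((fA - f(ABj)) ** 2) / var
    return total                       # normalized indices, one per input
```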
We apply this procedure to our application and first calculate the indices for the computer model itself, in order to analyze which of the six inputs (three control inputs and three calibration parameters) impact it the most. The first pane of Fig. 6 shows the calculated total sensitivity indices for the computer model, where the subsets $u$ were selected to be single inputs (i.e., the leftmost bar represents input $x_1$, hence its subset is $u = \{1\}$). As we can see, the initial nickel content ($x_1$) dominates the variance of the computer model, followed by the interfacial energy (θ1), which is consistent with expert knowledge and with the hyperparameter estimation results in Sec. 5.3.

Fig. 6: Sensitivity analysis using the Sobol indices approach: (a) indices for the computer model and (b) indices for the calibrated surrogate model
Similarly, we calculated the Sobol indices for the calibrated surrogate model. In this case, we use the posterior distributions of the calibration parameters reported in Fig. 3 to calculate the total sensitivity indices for each of the inputs. The results, visualized in the second pane of Fig. 6, are once again consistent, with input $x_1$ computed as the most influential input on the surrogate model variance.
Conclusions
Uncertainty quantification (UQ) of materials simulation models has not been a central question in the UQ literature despite their great importance. In this paper, we have applied an inverse UQ approach (commonly known as the calibration problem) to a materials simulation model that predicts the evolution of precipitates in nickel–titanium shape memory alloys (NiTi SMAs) subject to aging heat treatment. We used a Gaussian process (GP) based surrogate modeling approach to conduct Bayesian calibration. The surrogate modeling approach was used to account for the computational burden associated with the precipitation model. The predictive performance of this surrogate modeling approach was benchmarked against the classical Markov chain Monte Carlo (MCMC) approach using a case study involving real-world experimental observations.
The Bayesian approach used is invaluable both for solving the calibration problem and for quantifying the uncertainties in model predictions. Results show that the calibrated model using both approaches has good prediction accuracy, making it suitable for ICME-based design of shape memory alloys.
There are multiple follow-up studies currently being undertaken by the co-authors, namely, employing adaptive sampling techniques to accelerate and improve the predictive performance of the calibrated model, and coupling the calibrated precipitation model with other physics-based microstructure evolution models to quantify the uncertainty throughout the entire material processing chain, with particular emphasis on laser-based additive manufacturing of shape memory alloys.
Acknowledgment
This work was supported by an Early Stage Innovations grant from NASA's Space Technology Research Grants Program, Grant No. NNX15AD71G. Portions of this research were conducted with High Performance Research Computing resources provided by Texas A&M University. L. J. and R. A. also acknowledge the support of NSF through the NSF Research Traineeship (NRT) program under Grant No. NSF-DGE-1545403, "NRT-DESE: Data-Enabled Discovery and Design of Energy Materials (D3EM)." R. A. and I. K. also acknowledge the partial support from NSF through Grant No. NSF-CMMI-1534534.
Nomenclature
Appendix: Experimental data
Table 3: Experimental observations used in the study
| Set | Sample | Initial composition (at. % Ni) | Temperature (°C) | Time (s) | Martensitic transformation temperature (°C) | Final composition (at. % Ni) |
|---|---|---|---|---|---|---|
| Training | 1 | 51.0 | 400 | 43,200 | −49.0 | 51.1417 |
| | 2 | 51.0 | 400 | 86,400 | −25.0 | 50.9902 |
| | 3 | 51.0 | 400 | 172,800 | −12.0 | 50.8820 |
| | 4 | 51.0 | 400 | 360,000 | 0.0 | 50.7665 |
| | 5 | 51.0 | 450 | 36,000 | −12.0 | 50.8820 |
| | 6 | 51.0 | 450 | 172,800 | 5.0 | 50.7160 |
| | 7 | 51.0 | 450 | 259,200 | 9.0 | 50.6727 |
| | 8 | 51.0 | 450 | 360,000 | 10.0 | 50.6619 |
| | 9 | 51.0 | 500 | 36,000 | −7.0 | 50.8351 |
| | 10 | 51.0 | 500 | 86,400 | −1.0 | 50.7774 |
| | 11 | 51.0 | 500 | 172,800 | 7.0 | 50.6944 |
| | 12 | 51.0 | 500 | 259,200 | 7.0 | 50.6944 |
| | 13 | 51.0 | 500 | 360,000 | 9.0 | 50.6727 |
| | 14 | 51.0 | 550 | 36,000 | −28.0 | 51.0118 |
| | 15 | 51.4 | 400 | 36,000 | −81.0 | 51.2715 |
| | 16 | 51.4 | 400 | 86,400 | −35.0 | 51.0587 |
| | 17 | 51.4 | 400 | 259,200 | −24.0 | 50.9830 |
| | 18 | 51.4 | 400 | 360,000 | −17.0 | 50.9253 |
| | 19 | 51.4 | 450 | 36,000 | −23.0 | 50.9758 |
| | 20 | 51.4 | 450 | 86,400 | −8.0 | 50.8459 |
| | 21 | 51.4 | 450 | 172,800 | −3.0 | 50.7990 |
| | 22 | 51.4 | 450 | 259,200 | 4.0 | 50.7269 |
| | 23 | 51.4 | 450 | 360,000 | 8.0 | 50.6836 |
| | 24 | 51.4 | 500 | 36,000 | −14.0 | 50.9000 |
| | 25 | 51.4 | 500 | 86,400 | −8.0 | 50.8459 |
| | 26 | 51.4 | 500 | 172,800 | −2.0 | 50.7882 |
| | 27 | 51.4 | 500 | 259,200 | 2.0 | 50.7485 |
| | 28 | 50.7 | 500 | 86,400 | 22.5 | 50.5212 |
| | 29 | 50.7 | 500 | 172,800 | 21.5 | 50.5357 |
| | 30 | 50.7 | 500 | 259,200 | 24.3 | 50.5032 |
| | 31 | 50.7 | 500 | 360,000 | 23.4 | 50.5122 |
| Test | 32 | 51.0 | 400 | 259,200 | −5.0 | 50.8170 |
| | 33 | 51.0 | 450 | 86,400 | −1.0 | 50.7774 |
| | 34 | 51.4 | 400 | 172,800 | −24.0 | 50.9830 |
| | 35 | 51.4 | 500 | 360,000 | 4.0 | 50.7269 |
| | 36 | 50.7 | 500 | 36,000 | 14.4 | 50.6150 |