## Abstract

Die casting is a type of metal casting in which a liquid metal is solidified in a reusable die. In such a complex process, measuring and controlling the process parameters are difficult. Conventional deterministic simulations are insufficient to completely estimate the effect of stochastic variation in the process parameters on product quality. In this research, a framework to simulate the effect of stochastic variation together with verification, validation, and uncertainty quantification (UQ) is proposed. This framework includes high-speed numerical simulations of solidification, microstructure, and mechanical properties prediction models along with experimental inputs for calibration and validation. Experimental data and stochastic variation in the process parameters are combined with numerical modeling, thus enhancing the utility of traditional numerical simulations used in die casting and yielding a better prediction of product quality. Although the framework is being developed and applied to die casting, it can be generalized to any manufacturing process or other engineering problems as well.

## Introduction

The manufacturing industry uses a variety of processes to produce the finished product. These manufacturing processes are often very complex in nature and involve a large number of process parameters that affect the quality of the product. Die casting is one such manufacturing process in which a liquid metal is solidified in a reusable die to fabricate a desired part geometry. Automotive and housing industries are the main consumers of die-cast products. Simulations and experiments are often used to understand the physics of the process and to improve the product quality. Computer simulations are convenient and economical when compared to full-scale experiments, and are used to predict the effects of process parameters such as the alloy material properties, interface conditions at the mold, and thermal boundary conditions on the product quality in die casting. However, in a complex process, measuring and controlling these process parameters are difficult. The mean response of an output parameter can be different from its deterministic value. Moreover, conventional deterministic simulations are insufficient to completely estimate the effect of stochastic variation in the process parameters on the product quality. Thus, it is important to propagate the input uncertainty to the output to get reasonable estimates of the range of output.

Simulating a die casting process requires developing predictive models involving multiple physical phenomena such as solidification, fluid flow, and heat transfer; defining various uncertainties in process parameters; and then developing a solution algorithm. However, despite the high power of numerical algorithms and the availability of extensive property data, considerable uncertainties and errors exist in present-day simulation techniques. In addition, gaps in the knowledge base also exist, especially in the interaction of material microstructure with physical property fluctuations. Therefore, in order to obtain a comprehensive tool for accurately predicting the quality of die-cast products, a virtually guided certification methodology is proposed in this paper. Virtually guided certification aims to increase the accuracy of numerical simulations using a framework connecting experimental data, predictive models, and UQ. The present effort aims to reduce the uncertainty in the numerical simulation results through verification, validation, and UQ. To our knowledge, this is the first time such a framework has been developed for die casting. An accurate prediction of outputs with error bars can reduce scrap castings, thus giving monetary benefits.

## Framework Description

Individual components of the virtual certification framework are shown in Fig. 1. The top and bottom links connect the input process parameters to outputs using experiments and numerical simulations, respectively. Real-life die castings are used to calibrate the empirical models for microstructure and material property parameters. Temperature gradients and cooling rates estimated using the numerical simulations are inputs to these models. The numerical software is verified using published results for canonical problems and validated using the experimental results. UQ is a wrapper on the deterministic software in order to estimate the impact of stochastic variation in the process parameters on the final product quality. The following sections describe each module of the framework (Fig. 1) in detail.

## Die Casting Experiments for Calibration and Validation

As depicted in Fig. 1, the major outcome of the virtual certification methodology is the predictive framework that provides properties and behavior of die-cast materials under various conditions. Experimental validation establishes the accuracy of the mathematical models, uncertainties, and numerical simulation approaches used for simulating die casting manufacturing processes. Experimental data on a variety of real-life die casting parts from the industry are obtained for the validation. Initially, a part of the experimental data is used to compare the model predictions, and improvements in the computational models are made if necessary as a calibration step. The calibration experimental data serve as a useful platform to improve the computational models. After ensuring that the computational results match the calibration experimental data, the computational models are further tested with the remaining experimental data, that is, data not used as part of the calibration process. This two-step procedure ensures rigorous validation.

## Solidification Model

The numerical model includes the effects of solidification, fluid flow due to natural convection, and heat transfer. The governing equations are based on the work of Plotkowski et al. [1]. The velocity **u** is the mixture velocity, and the solid phase velocity is assumed to be zero. For die casting problems, this assumption is reasonable because solidification begins at the outside, near the mold surface, so the solid phase remains stationary, attached to the mold surface. Currently, the temperature dependence of the material properties is neglected. In this work, microstructure parameters are estimated based on the cooling rate only. Since the solute concentration is not a quantity of interest, solute transport and diffusion are neglected [1].

The term (*μ*/*K*)**u** is the Darcy drag term that represents the increased resistance to flow in the mushy zone. The isotropic permeability of the dendritic array (*K*) is given by the Blake–Kozeny model (Eq. (4)). In the fully liquid region, the solid fraction is zero, the permeability tends to infinity, and hence the Darcy drag term goes to zero; the effect of the Darcy drag thus vanishes in the liquid region. In the solid region, the solid fraction is unity, the permeability tends to zero, and the coefficient of the Darcy drag term goes to infinity. This coefficient enters the diagonal of the discretized system of linear equations, so in the solid region the velocities go to zero. In the mushy zone, the drag term reduces the velocities relative to the liquid zone.
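The limiting behavior of the Darcy drag described above can be sketched in a few lines of Python. This is a minimal sketch: the Blake–Kozeny prefactor, the dendrite length scale, and the viscosity are illustrative assumptions, not values from the paper.

```python
import numpy as np

def blake_kozeny_permeability(fs, lambda2=1e-5, c0=1.0 / 180.0):
    """Isotropic permeability K of the dendritic array vs. solid fraction fs.

    Assumed Blake-Kozeny form K = c0 * lambda2**2 * (1 - fs)**3 / fs**2;
    the prefactor c0 and length scale lambda2 are placeholders.
    """
    fs = np.clip(fs, 1e-12, 1.0)  # guard against division by zero at fs = 0
    return c0 * lambda2**2 * (1.0 - fs) ** 3 / fs**2

def darcy_drag_coefficient(fs, mu=1e-3):
    """Coefficient (mu/K) multiplying u in the momentum equation."""
    return mu / blake_kozeny_permeability(fs)

# Liquid region (fs -> 0): K -> infinity, so the drag coefficient -> 0.
# Solid region (fs -> 1): K -> 0, so the drag coefficient -> infinity,
# driving the velocities to zero through the diagonal of the linear system.
for fs in (1e-6, 0.5, 1.0 - 1e-6):
    print(f"fs = {fs:.6f}  mu/K = {darcy_drag_coefficient(fs):.3e}")
```

The monotone growth of the drag coefficient with solid fraction is what smoothly switches the momentum equation from free flow to a no-flow condition.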

The Boussinesq approximation is used to model the effects of natural convection. As shown by Spiegel and Veronis [2], if the length scale of the problem is significantly less than the scale height and the fluctuations in the pressure and density due to fluid motion are negligible compared with the total static variations, then the fluid can be modeled as incompressible with an additional term −**g***ρβ*(*T* − *T*_{ref}) in the momentum equation (2).

The latent heat release term (*ρL*_{f}(∂*f*_{s}/∂*t*)) is expressed in terms of the solid fraction. Hence, in order to close the system, a linear relation between temperature and solid fraction is assumed (Eq. (5)) [3].
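The linear closure of Eq. (5) can be written directly as a clamped ramp between the solidus and liquidus temperatures. The temperature values below are placeholders, not the alloy values used in the paper.

```python
def solid_fraction(T, T_sol=800.0, T_liq=900.0):
    """Linear temperature-solid fraction relation (the closure of Eq. (5)):
    fully solid below T_sol, fully liquid above T_liq, linear in between.
    T_sol and T_liq here are placeholder values."""
    if T <= T_sol:
        return 1.0
    if T >= T_liq:
        return 0.0
    return (T_liq - T) / (T_liq - T_sol)

print(solid_fraction(850.0))  # midway through the mushy zone: prints 0.5
```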

Equations (1)–(5) are solved with OpenCast, a newly developed software package written in an object-oriented fashion in C++. The discretized system of linear equations is obtained using the finite volume method on a collocated grid. The continuity and momentum equations are solved by the fractional step method [4].

In order to deal with practical die casting geometries, OpenCast is developed for unstructured grids. The open source software Gmsh [5] is used to generate tetrahedral/hexahedral meshes. Discretization of the equations generates a system of sparse linear equations, which is solved iteratively with the algebraic multigrid solver of the open source library hypre [6]. Multigrid solvers are efficient for sparse systems, as their execution time scales linearly with problem size [7].
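The discretize-and-solve pipeline can be illustrated on a deliberately simplified model problem: steady 1D heat conduction with fixed end temperatures, whose discretization on a uniform grid yields a tridiagonal system. Here it is solved directly with the Thomas algorithm; the paper's 3D unstructured meshes instead require iterative solvers such as algebraic multigrid. All values are illustrative.

```python
import numpy as np

# 1D steady heat conduction with fixed end temperatures. On a uniform grid
# with constant conductivity, the discretization reduces to the familiar
# tridiagonal stencil -T[i-1] + 2*T[i] - T[i+1] = 0 for interior nodes.
n = 50                        # number of interior unknowns (illustrative)
T_left, T_right = 1000.0, 500.0

lower = np.full(n - 1, -1.0)  # sub-diagonal
diag = np.full(n, 2.0)        # main diagonal
upper = np.full(n - 1, -1.0)  # super-diagonal
b = np.zeros(n)
b[0] += T_left                # Dirichlet boundary values folded into the RHS
b[-1] += T_right

def thomas(a, d, c, rhs):
    """Direct O(n) solve of a tridiagonal system (Thomas algorithm)."""
    d, rhs = d.copy(), rhs.copy()
    for i in range(1, len(d)):           # forward elimination
        w = a[i - 1] / d[i - 1]
        d[i] -= w * c[i - 1]
        rhs[i] -= w * rhs[i - 1]
    x = np.empty_like(rhs)
    x[-1] = rhs[-1] / d[-1]
    for i in range(len(d) - 2, -1, -1):  # back substitution
        x[i] = (rhs[i] - c[i] * x[i + 1]) / d[i]
    return x

T = thomas(lower, diag, upper, b)        # recovers a linear temperature profile
```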

## Microstructure and Material Behavior Model

The secondary dendrite arm spacing (SDAS) is estimated from the cooling rate using an empirical relation with constants *A*_{λ} = 44.6 and *B*_{λ} = −0.359 [8]. A relationship of a similar form, replacing the cooling rate with the solidification time, has also been suggested by Hunt [9].
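Using the constants quoted above, the SDAS estimate can be sketched as a power law in the cooling rate. The power-law form itself is an assumption of this sketch, inferred from the quoted constants and the Hunt-style alternative.

```python
def sdas_microns(cooling_rate):
    """SDAS (micrometers) from the cooling rate (K/s), assuming the
    power-law form lambda2 = A * (dT/dt)**B with the constants quoted
    in the text (A = 44.6, B = -0.359). The form is an assumption."""
    return 44.6 * cooling_rate ** (-0.359)

# Faster cooling produces a finer microstructure (smaller SDAS).
print(sdas_microns(1.0), sdas_microns(100.0))
```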

For the yield strength (*σ*_{0.2}), the following empirical relationship proposed by Okayasu et al. [10] is used, with *λ*_{2} in *μ*m.

## Parameter Uncertainty Quantification

The final product quality in die casting is influenced by many process parameters such as alloy material properties, interface conditions at the mold, and thermal boundary conditions. Measuring and controlling these parameters accurately are difficult due to the complexity of the process. This stochastic variation is dealt with as parameter uncertainty in the numerical simulations as conventional deterministic simulations alone are unable to estimate its effect on the product quality. From a modeling perspective, parameter UQ is a set of partial differential equations with coefficients, boundary conditions, and initial conditions varying stochastically. The basic idea is to consider the stochastic variables as dimensions of the problem in addition to space and time.

Various methods have been proposed in the literature to estimate the relation between stochastic process parameters and output parameters. The basic idea is to expand the output variables as a linear combination of polynomial basis functions in the stochastic dimension. Orthogonal polynomials are used as basis functions because their orthogonality helps in convergence. Xiu and Karniadakis [11] extended Wiener's polynomial chaos [12] to obtain an optimum basis from the Askey family of orthogonal polynomials, which leads to optimal convergence of the error. Depending on the weighting function of the orthogonal polynomial, Xiu and Karniadakis [11] determined which polynomial leads to exponential convergence for a given underlying probability distribution function that the stochastic variable follows. For example, the standard normal distribution is the weighting function of the Hermite polynomials. Hence, it is recommended to use Hermite basis functions if the stochastic variable follows normal distribution to obtain fast convergence.

The infinite expansion is truncated to a finite number of terms *n* for all practical purposes. The total number of terms in the *d*-dimensional polynomial chaos of highest order *l* is given by *n* + 1 = (*l* + *d*)!/(*l*!*d*!).
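The term-count formula above is a binomial coefficient and can be checked in one line; for the two-dimensional study later in the paper it reproduces the polynomial term counts listed in Table 1.

```python
from math import comb

def num_pc_terms(d, l):
    """Number of terms n + 1 = (l + d)!/(l! d!) in a d-dimensional
    polynomial chaos expansion truncated at total order l."""
    return comb(l + d, d)

# Two stochastic dimensions, total order 4: 15 terms, matching the
# "# Poly. terms" column for accuracy level 7 in Table 1.
print(num_pc_terms(2, 4))  # prints 15
```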

Various numerical techniques have been proposed in the literature to estimate the deterministic coefficients of the polynomial chaos expansion. In the spectral Galerkin method [13,14], the solution is projected onto finite-dimensional stochastic space. This projection requires modification of the existing deterministic software and hence can be nontrivial for many practical scenarios. This method is thus known as the intrusive method. Due to this difficulty, nonintrusive methods are popular.

In the stochastic collocation method, the idea is to choose *M* sample points (*ξ*^{m}) and enforce the condition *w*(**x**, *ξ*^{m}) = *w*_{sim}(**x**, *ξ*^{m}), where the left-hand side comes from the polynomial expansion and the right-hand side comes from each deterministic simulation. This provides *M* constraints that can be written in matrix–vector form [15]. Choosing *M* > *n* + 1 ensures that the Vandermonde system (Eq. (9)) is overdetermined; it is then solved for the coefficients [*w*_{0}(**x**) ⋯ *w*_{n}(**x**)]^{T} in the least-squares sense.

The choice of sample points (*ξ*^{m}) plays an important role in the successful implementation of the stochastic collocation method. Uniformly distributed points can lead to highly oscillatory basis functions. Hence, a popular choice for a single stochastic dimension is the roots of the orthogonal basis polynomial [15]. For multiple stochastic dimensions, a simple extension is the tensor product of the single-dimensional sample points, but the number of samples in a tensor product grows exponentially with the stochastic dimension, which is computationally expensive when each deterministic simulation is time consuming. The Smolyak algorithm [16] is therefore used to choose a minimal number of sample points in the multidimensional space while maintaining the accuracy of the interpolation. Ganapathysubramanian and Zabaras [17] and Smith [15] have discussed the implementation and efficiency of the sparse grid algorithm in detail. In this research, sparse grid nodes are taken from the work of Heiss and Winschel [18].
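A one-dimensional sketch of the collocation procedure is given below, using the roots of the probabilists' Hermite polynomials as sample points (via NumPy's `hermite_e` module), assembling the Vandermonde-type matrix of Eq. (9), and solving it in the least-squares sense. The output function is a hypothetical stand-in, not a die casting model.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

def w_sim(xi):
    """Hypothetical 'deterministic simulation' output at sample point xi."""
    return 1.0 + 0.5 * xi + 0.25 * xi**2

n = 4                          # truncation order of the expansion
M = 8                          # number of sample points, M > n + 1
xi, _ = He.hermegauss(M)       # roots of He_M: Gauss-Hermite nodes

# Column j of the Vandermonde-type matrix holds He_j at the sample points.
V = np.stack([He.hermeval(xi, np.eye(n + 1)[j]) for j in range(n + 1)], axis=1)

# Overdetermined system solved in the least-squares sense (Eq. (9)).
coef, *_ = np.linalg.lstsq(V, w_sim(xi), rcond=None)
print(coef)
```

Since 1 + 0.5ξ + 0.25ξ² = 1.25 + 0.5 He₁(ξ) + 0.25 He₂(ξ), the recovered coefficients are [1.25, 0.5, 0.25, 0, 0] up to roundoff, which also illustrates why Hermite bases converge quickly for smooth responses of normal inputs.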

## Verification Using Natural Convection

In the context of computer simulations, software verification is performed to confirm whether the model is correctly implemented. Here, results for three-dimensional natural convection in a differentially heated cubical enclosure are used for verification. The cube is meshed with 64^{3} uniform hexahedral elements. The steady-state solution is computed by time marching. For any scalar field φ, the nondimensional steady-state error is defined as max(‖φ^{new} − φ^{old}‖)/max(‖φ^{new}‖), where the maximum is computed over the entire domain. Steady state is assumed to be reached when this error over all the variables (temperature and three velocity components) is less than 10^{−4}. Fusegi et al. [19] have plotted temperature and velocity values along various lines in the midplanes of the domain for two Rayleigh numbers (10^{5} and 10^{6}). The percentage error for each case is defined as the L2 norm of the difference between the two discretized curves. Figures 2 and 3 show the temperature and velocity plots superimposed for Rayleigh number 10^{5}, and Fig. 4 plots the velocities for Rayleigh number 10^{6}. Overall, the error in temperature is less than 0.02% and the error in the velocities is less than 0.8%.
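The steady-state stopping criterion described above can be written compactly; this is a sketch of the criterion only, not code from OpenCast.

```python
import numpy as np

def steady_state_error(phi_new, phi_old):
    """Nondimensional steady-state error used in the verification study:
    max|phi_new - phi_old| / max|phi_new| over the entire domain."""
    return np.max(np.abs(phi_new - phi_old)) / np.max(np.abs(phi_new))

def converged(fields_new, fields_old, tol=1e-4):
    """Steady state is declared once the error over all fields
    (temperature and three velocity components) drops below tol."""
    return all(steady_state_error(fn, fo) < tol
               for fn, fo in zip(fields_new, fields_old))
```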

## Deterministic Solidification Results

The framework is applied to a practical die casting example of a window frame connector rib. Figure 5 shows the part, meshed with 308,000 elements; its bounding box is 2 cm × 10 cm × 12 cm. The mold is filled with an aluminum alloy at an initial temperature of 1000 K. Material properties are taken from Ref. [1]. All the boundaries are held at 500 K. For such thin die casting geometries, the solidification time is of the order of seconds, so natural convection velocities do not play an important role in the output parameters. Moreover, simulating solidification without natural convection reduces the computational effort significantly, as the solution of the flow equations is not required. Hence, this geometry is simulated without natural convection. The solidification time for the deterministic simulation was 0.76 s.

Figure 6 shows isosurfaces of the solid fraction at two different time steps during solidification. Figures 7(a) and 7(b) show SDAS and yield strength contours, respectively, along the planar cross-section *X* = 0.013 m. The vertical region is the thickest section and hence solidifies last. As the part cools down, the temperature gradients and cooling rates drop; hence, the core region typically has a coarse microstructure (large SDAS) and low yield strength. Similar trends are predicted by the empirical models for SDAS and yield strength.

## Parameter Uncertainty Quantification

The same window frame connector rib problem described in the previous section is used for the parameter uncertainty analysis. In practice, there are stochastic variations in the material properties, boundary conditions, initial temperature, etc. As an example, a two-dimensional UQ study is performed here by adding uncertainty to the boundary temperature and the latent heat. The mean values of the boundary temperature and latent heat are taken as 500 K and 3.9 × 10^{5} J/kg, respectively, and the standard deviations of both parameters as 1% of the means. The input parameters are assumed to follow normal distributions and to be independent of each other. Hence, Hermite polynomials are chosen as basis functions and the sample points are their roots. The Smolyak algorithm [16] is used to generate the two-dimensional samples. The impact of stochastic variation in these two input parameters is studied on three output parameters: the solidification time, the maximum value of SDAS over the entire domain, and the minimum value of yield strength over the entire domain.

In order to study the convergence of stochastic collocation, three accuracy levels of sample points are used to estimate the coefficients of the polynomial chaos expansion. To estimate the interpolation error, a uniform two-dimensional tensor product of size 6 × 6 (points [−2.5, −1.5, −0.5, 0.5, 1.5, 2.5] in each stochastic direction) is chosen. Deterministic simulations at these 36 points give one estimate of the output parameters; the polynomial chaos expansion independently gives another estimate of the same parameters. The error is defined as the maximum absolute difference between these two estimates, normalized by the maximum value of the parameter to make it nondimensional. The first column of Table 1 is the accuracy level of the sample points; accuracy level *l* integrates polynomials of total order 2*l* − 1 exactly [18]. The second column is the number of sample points in the two-dimensional sparse grid, which is the number of deterministic simulations required (*M* in Eq. (9)). The third column is the number of terms in the truncated polynomial expansion (*n* + 1 in Eqs. (8) and (9)). The last three columns list the nondimensional error in the computation of the three output parameters. The error is of the order 10^{−4} or less at accuracy level 7. This shows the convergence of the stochastic collocation, and hence level 7 is used for plotting the response surfaces.

Accuracy level | # Sample Pts. | # Poly. terms | Solid. time error | Max. SDAS error | Min. yield error |
---|---|---|---|---|---|
5 | 53 | 10 | 1.1E−3 | 8.1E−4 | 7.9E−5 |
6 | 89 | 10 | 8.2E−4 | 7.6E−4 | 6.6E−5 |
7 | 137 | 15 | 6.9E−4 | 8.2E−4 | 6.4E−5 |

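Once the expansion coefficients are known, output statistics follow directly from the orthogonality of the basis. The sketch below is one-dimensional (the study here is two-dimensional) and assumes probabilists' Hermite polynomials for a standard normal input, for which the norms are E[He_i He_j] = *i*! δ_{ij}.

```python
from math import factorial

def pce_mean_and_std(coef):
    """Mean and standard deviation from the coefficients of a 1D expansion
    in probabilists' Hermite polynomials He_i (standard normal input):
    mean = coef[0] and var = sum_{i>=1} coef[i]**2 * i!, using the norms
    E[He_i * He_j] = i! * delta_ij. A 1D sketch; the study in the paper
    is two-dimensional."""
    mean = coef[0]
    var = sum(c * c * factorial(i) for i, c in enumerate(coef) if i >= 1)
    return mean, var ** 0.5

# Example: w = 1.25 + 0.5*He_1 + 0.25*He_2
# has variance 0.5**2 * 1! + 0.25**2 * 2! = 0.375.
print(pce_mean_and_std([1.25, 0.5, 0.25]))
```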

Response surfaces are used to visualize the relation between the input and output parameters graphically. For two-dimensional stochastic problems, response surfaces are plotted as contours (Figs. 8–10). In each plot, the *X* and *Y* axes denote the latent heat and wall temperature, respectively, as these are the stochastic input parameters. Since both input variables are assumed to follow a normal distribution, they are plotted over the range (*μ* − 3*σ*, *μ* + 3*σ*). For example, the wall temperature is plotted over the range (500 − 3 × 5, 500 + 3 × 5).

During solidification, heat is released in the form of latent heat. Hence, a higher value of latent heat slows down solidification, as more heat has to be extracted from the body. The mold wall extracts heat, and the rate of heat extraction is proportional to the temperature difference between the body and the wall. A higher wall temperature reduces this temperature difference, thus lowering the heat transfer rate, so the solidification time increases with wall temperature. These trends can be observed in the response surface (Fig. 8). An increase in latent heat or wall temperature causes a drop in the cooling rate. A lower cooling rate (i.e., a higher solidification time) implies a higher SDAS, as the grains get more time to grow (Fig. 9). The yield strength is inversely related to the solidification time: the faster the material solidifies, the higher the yield strength. Thus, the yield strength decreases with an increase in latent heat and wall temperature, as seen in Fig. 10.

In this case, even though only two inputs are chosen, the utility of UQ can be seen from the response surfaces. For an input relative uncertainty (*σ*/*μ*) of 1%, the error bar is around 4% in the solidification time and 3% in the maximum SDAS. Sensitivity can also be estimated qualitatively from the response surface contours. For example, the contour lines of the minimum yield strength are steep, which means this output is more sensitive to the latent heat than to the wall temperature. The response surface contours are nearly linear, which implies that the local sensitivity is almost constant. In general, however, there is no reason to expect linearity.

## Conclusions

This paper describes a virtual certification framework applied to die casting. The framework includes numerical simulations, verification, validation using experimental methods, and UQ. The numerical model simulates solidification with fluid flow and heat transfer. The microstructure and structural properties are estimated empirically as functions of the temperature gradients and cooling rates obtained from the numerical simulation. Verification using published three-dimensional natural convection results shows that the outputs from the current software are in good agreement, with errors less than 0.8%. A state-of-the-art algebraic multigrid solver from the open source library hypre [6] is used for fast convergence. Deterministic solidification of an actual die casting geometry is demonstrated with results for the solid fraction, SDAS, and yield strength. Parameter UQ is utilized to quantify the effect of stochastic variation in the process parameters on the output parameters. UQ is modeled as a wrapper on the deterministic simulation and hence can be done without any modification to the deterministic software. A two-dimensional stochastic analysis is done on the same geometry and response surfaces are plotted.

As of now, work is in progress on the fronts of validation and calibration, for which controlled experimental results with all the process parameters measured are required. Given the process complexity of die casting, it is difficult to measure temperatures inside the casting during solidification, and the lack of this information is an obstacle to calibration and validation. The constants in the empirical relations are typically material dependent. Hence, experimental studies involving simpler geometries with controlled process conditions and in situ measurements are planned in the future to calibrate and validate the framework. In this work, only two inputs (wall temperature and latent heat) are chosen as representative stochastic inputs for demonstration of the framework. Other parameters, such as the initial alloy temperature and material properties like viscosity, thermal conductivity, density, and specific heat, also have uncertainty. In the future, a high-dimensional UQ combined with sensitivity analysis, dealing with a larger set of inputs for die casting, will be performed. It is possible that some other input parameters introduce higher uncertainty in the outputs; these will be highlighted in future papers with sensitivity analysis.

The virtual certification framework presented here can help enhance the product performance. It should be noted that although die casting is used as an example for demonstrating the framework, it can be easily generalized to other manufacturing processes or engineering problems as well. As the framework is the main focus of this paper, some simplifying modeling assumptions are made. These assumptions do not compromise the overall idea of the framework. Many of these assumptions will be relaxed in a later version of OpenCast and the results will be published subsequently.

## Acknowledgment

This work was funded by the Digital Manufacturing and Design Innovation Institute with support in part by the U.S. Department of the Army. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Department of the Army. The authors also thank Steve Udvardy and Beau Glim of the North American Die Casting Association (NADCA), Chicago White Metal Casting, Mercury Castings, RCM Industries, Twin City Die Castings Company, and Visi-Trak Worldwide for providing die-cast parts used for calibration and validation. Technical discussions with Alex Monroe of Mercury Castings and his suggestions are appreciated.

## References

## Nomenclature

*k* = thermal conductivity

*t* = time

**g** = gravity vector

**u** = mixture velocity vector (*u*, *v*, *w*)

**x** = spatial dimension

*K* = isotropic permeability

*P* = pressure

*f*_{s} = solid fraction

*C*_{p} = specific heat

*L*_{f} = latent heat of fusion

*T*_{ref} = reference temperature

*T*_{liq} = liquidus temperature

*T*_{sol} = solidus temperature

*β* = coefficient of thermal expansion

*θ* = an elementary event in Θ

Θ = set of elementary events

*λ* = dendrite arm spacing

*μ* = dynamic viscosity

**ξ** = random variable vector (*ξ*_{1}, *ξ*_{2}, …, *ξ*_{n})

*ρ* = density

**Ψ**_{i} = multidimensional orthogonal polynomial of order *i*