Testing new turbine cooling schemes at engine conditions becomes increasingly cost-prohibitive as the desired gas-path temperatures increase. As a result, the turbine component is simulated in a laboratory with a large-scale model, sized and constructed from a selected material so that the Biot number is matched between the laboratory and engine conditions. Furthermore, because the experimental temperatures are lower, the surface temperature that the metal component would experience in the engine is scaled via the overall cooling effectiveness, ϕ.
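The scaling above can be illustrated with the standard definition of overall cooling effectiveness, ϕ = (T∞ − Tw)/(T∞ − Tc,i). The temperature values below are purely illustrative assumptions, not data from this study:

```python
def overall_effectiveness(T_inf, T_wall, T_coolant_in):
    """Overall cooling effectiveness: phi = (T_inf - T_wall) / (T_inf - T_coolant_in).

    T_inf         -- freestream (gas-path) temperature
    T_wall        -- external surface temperature of the component
    T_coolant_in  -- coolant temperature at the internal supply
    """
    return (T_inf - T_wall) / (T_inf - T_coolant_in)

# Hypothetical engine condition: 1800 K gas path, 1200 K wall, 700 K coolant supply.
phi_engine = overall_effectiveness(1800.0, 1200.0, 700.0)

# A Biot-matched laboratory model is expected to reproduce the same phi at much
# lower temperatures; the matched phi then predicts the lab wall temperature.
T_inf_lab, T_cool_lab = 320.0, 260.0          # assumed lab conditions
T_wall_lab = T_inf_lab - phi_engine * (T_inf_lab - T_cool_lab)
```

With these assumed values, a matched ϕ of roughly 0.55 at engine conditions corresponds to a lab wall temperature of about 287 K, which is the sense in which the engine surface temperature is "scaled" to the experiment.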
Properly measuring ϕ requires that the relevant flow physics be matched, and thus that the relevant Reynolds numbers, both freestream and coolant, be matched along with the other scaling parameters, such as the mass flux, momentum flux, and velocity ratios. However, if the coolant-to-freestream density ratio does not match that of the engine condition, the mass flux ratio, momentum flux ratio, coolant and freestream Reynolds numbers, and coolant-to-freestream velocity ratio cannot all be matched to the engine condition simultaneously. Furthermore, these parameters do not account for the coolant's thermal properties, despite their large influence on the resulting overall effectiveness. While a good deal of research has focused on the effects of the coolant-to-freestream density ratio, this study specifically examines the influence of other thermodynamic properties, in particular the specific heat, which differ substantially between experimental and engine conditions. This study demonstrates the influence of various coolant properties on the overall effectiveness distribution on a leading edge by selectively matching the mass flux ratio (M), momentum flux ratio (I), and advective capacity ratio (ACR) with air, argon, and carbon dioxide coolants.
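The coupling between these parameters can be sketched from their common definitions, M = ρcUc/(ρ∞U∞), I = ρcUc²/(ρ∞U∞²), VR = Uc/U∞, and ACR = M·cp,c/cp,∞, which give I = M²/DR and VR = M/DR for density ratio DR = ρc/ρ∞. The numerical values below are assumed for illustration only and are not the conditions of this study:

```python
def scaling_ratios(M, DR, cp_ratio):
    """Given mass flux ratio M, coolant-to-freestream density ratio DR,
    and specific heat ratio cp_c/cp_inf, return (I, VR, ACR) using the
    identities I = M**2 / DR and VR = M / DR (ACR = M * cp_c/cp_inf)."""
    return M**2 / DR, M / DR, M * cp_ratio

# Assumed engine-like condition: cool dense coolant, DR = 2.0.
I_eng, VR_eng, ACR_eng = scaling_ratios(M=1.0, DR=2.0, cp_ratio=1.0)

# Assumed laboratory condition with air coolant at a lower DR = 1.2:
# matching M alone leaves I and VR mismatched because DR differs.
I_lab, VR_lab, ACR_lab = scaling_ratios(M=1.0, DR=1.2, cp_ratio=1.0)
```

Because I and VR are both fixed once M and DR are chosen, a laboratory DR that differs from the engine value forces the experimenter to match at most one of M, I, or VR at a time. ACR adds the coolant specific heat to this trade space, which is why coolants with different cp, such as argon and carbon dioxide, allow the parameters to be matched selectively.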