Abstract
The ever-increasing volume of data produced by HPC simulations necessitates scalable methods for data exploration and knowledge extraction. Scientific data analysis often involves complex queries across distributed datasets that manipulate multiple primary variables and generate derived data, which must be handled efficiently; this creates challenges for applications that need to parse many large datasets. Relying on individual applications to handle all intermediate data generally leads to redundant computation across studies and unnecessary data transfers. In this paper, we investigate the performance of approaches in which applications define derived variables as quantities of interest (QoIs) and offload the computation and transfer of these QoIs to the I/O library. Offloading significantly reduces redundancy and optimizes data movement across the distributed storage and processing infrastructure by allowing control over when and where derived variables are computed. We present a detailed analysis of the performance-storage trade-offs associated with different solutions and report results from our study on two large-scale datasets produced by climate and combustion simulations.