Abstract
The integrity and precision of nuclear data are crucial for a broad spectrum of applications, from national security and nuclear reactor design to medical diagnostics, where the associated uncertainties can significantly impact outcomes. A substantial portion of the uncertainty in nuclear data originates from subjective biases in the evaluation process, a crucial phase of the nuclear data production pipeline. Recent advancements indicate that automating certain routines can mitigate these biases, thereby standardizing the evaluation process, reducing uncertainty, and enhancing reproducibility. This article contributes to the development of a framework for testing automated evaluation techniques, emphasizing automated fitting methods that do not require the user to provide any prior information. This approach simplifies the process and reduces the manual effort needed in the initial evaluation stage. It highlights the framework's capability to validate and optimize subroutines, targeting the performance analysis and optimization of the fitting procedure using high-fidelity synthetic data (labeled experimental data) and the concept of a fully controlled computational experiment. An error metric is introduced to provide a clear and intuitive measure of fitting quality by quantifying the estimate's accuracy and performance across the specified energy range. This metric sets a scale for the comparison and optimization of routines and for hyperparameter selection, improving the overall evaluation methodology and increasing reproducibility and objectivity.