Abstract
This paper presents a machine-learned virtual cruise guide indicator (vCGI) for Chinook helicopters. Two temporal neural networks, one for the fore rotor and one for the aft rotor, were trained and evaluated on measured data from 55 flight tests to predict a vCGI value that protects 23 components from fatigue damage during steady-state conditions. Three classes of machine learning architectures were evaluated for predicting the vCGI from time sequences: a temporal convolutional neural network with 1D dilated causal convolutions, a long short-term memory recurrent neural network, and an attention-based transformer architecture. The final average model accuracy on unseen flight data is currently greater than 93% for CGI values that could result in fatigue damage and greater than 90% for normal-operation CGI values. Model accuracy was improved through a series of advancements in: (1) selection of optimal training data using temporal collective variables and unsupervised learning, (2) dataset augmentation with maximum-entropy temporal collective variables, and (3) implementation of a mixture-of-experts classification-regression scheme that uses adversarial classification to assign maneuver labels. Results are presented for each advancement in model development, along with lessons learned from training machine learning models on real-world, time-dependent rotorcraft data.
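For readers unfamiliar with the dilated causal convolutions underlying the temporal convolutional network named above, the following minimal NumPy sketch illustrates the operation: each output sample depends only on current and past inputs, and stacking layers with increasing dilation grows the receptive field over the load time history. The function name, filter taps, and dilation schedule are illustrative assumptions, not the implementation used in this work.

```python
import numpy as np

def dilated_causal_conv1d(x, weights, dilation):
    """Dilated causal 1D convolution over a univariate time sequence.

    x        : (T,) input time series
    weights  : (k,) filter taps
    dilation : spacing between taps; the output at time t depends only on
               x[t], x[t-d], x[t-2d], ..., never on future samples.
    """
    k = len(weights)
    # Left-pad so every output sample sees only past/current inputs (causality).
    pad = dilation * (k - 1)
    xp = np.concatenate([np.zeros(pad), x])
    y = np.zeros(len(x))
    for t in range(len(x)):
        # Taps are read backward from the current (padded) position.
        taps = xp[t + pad - dilation * np.arange(k)]
        y[t] = np.dot(taps, weights)
    return y

# Stacking layers with dilations 1, 2, 4, 8 grows the receptive field
# exponentially, which is how a TCN can cover long temporal contexts.
t = np.linspace(0.0, 1.0, 256)
signal = np.sin(2 * np.pi * 5 * t)       # stand-in for a measured load channel
out = signal
for d in (1, 2, 4, 8):
    out = np.tanh(dilated_causal_conv1d(out, np.array([0.5, 0.3, 0.2]), d))
```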