Abstract
We use a stable parallel approach to train Wasserstein Conditional Generative Adversarial Neural Networks (W-CGANs). The parallel training reduces the risk of mode collapse and enhances scalability by concurrently training multiple generators, each focusing on a single data label. The use of the Wasserstein metric reduces the risk of cycling by stabilizing the training of each generator. We apply the approach to the CIFAR10 and CIFAR100 datasets, two standard benchmark datasets with images of the same resolution but a different number of classes. Performance is assessed using the inception score, the Fréchet inception distance, and image quality. Both the inception score and the Fréchet inception distance improve compared to previous results obtained by applying the parallel approach to deep convolutional conditional generative adversarial neural networks (DC-CGANs). Weak scaling is attained on both datasets using up to 100 NVIDIA V100 GPUs on the OLCF supercomputer Summit.
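To make the parallel scheme concrete, the following is a minimal sketch, not the authors' implementation: it assumes a PyTorch-style setup in which each worker (one per class label and GPU) trains its own generator/critic pair with the Wasserstein loss and weight clipping; all network shapes, optimizer settings, and the `train_one_label` helper are illustrative assumptions.

```python
# Sketch (assumption: PyTorch): one generator/critic pair trained per class label,
# with independent pairs running concurrently on separate GPUs in the parallel scheme.
import torch
import torch.nn as nn

def make_generator(z_dim=128, img_dim=3 * 32 * 32):
    # Hypothetical fully connected generator; the paper's architecture may differ.
    return nn.Sequential(nn.Linear(z_dim, 512), nn.ReLU(),
                         nn.Linear(512, img_dim), nn.Tanh())

def make_critic(img_dim=3 * 32 * 32):
    # Hypothetical critic producing an unbounded score (no sigmoid), as in WGANs.
    return nn.Sequential(nn.Linear(img_dim, 512), nn.LeakyReLU(0.2),
                         nn.Linear(512, 1))

def train_one_label(real_images, steps=1000, z_dim=128, n_critic=5, clip=0.01):
    """Train a single (generator, critic) pair on images of one class label."""
    G, D = make_generator(z_dim), make_critic()
    opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
    opt_d = torch.optim.RMSprop(D.parameters(), lr=5e-5)
    n = real_images.size(0)
    for _ in range(steps):
        for _ in range(n_critic):  # several critic updates per generator update
            idx = torch.randint(0, n, (64,))
            real = real_images[idx]
            fake = G(torch.randn(64, z_dim)).detach()
            # Wasserstein critic objective: maximize D(real) - D(fake)
            loss_d = -(D(real).mean() - D(fake).mean())
            opt_d.zero_grad()
            loss_d.backward()
            opt_d.step()
            for p in D.parameters():  # weight clipping enforces the Lipschitz constraint
                p.data.clamp_(-clip, clip)
        # Generator objective: maximize the critic score of generated samples
        loss_g = -D(G(torch.randn(64, z_dim))).mean()
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()
    return G

# Hypothetical usage: flattened 32x32 RGB images of a single CIFAR class, scaled to [-1, 1].
# images_label0 = torch.rand(5000, 3 * 32 * 32) * 2 - 1
# G_label0 = train_one_label(images_label0, steps=10)
```

Because each label's generator is trained independently, the per-label calls can run concurrently on separate GPUs without inter-generator communication, which is the property the abstract's weak-scaling results rely on.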