Giourga, Maria (ORCID: https://orcid.org/0000-0001-7109-5818); Petropoulos, Ioannis; Stavros, Sofoklis; Potiris, Anastasios (ORCID: https://orcid.org/0000-0001-7716-1521); Goula, Kallirroi (ORCID: https://orcid.org/0009-0001-1399-0301); Moustakli, Efthalia (ORCID: https://orcid.org/0009-0005-3333-2620); Papahliou, Anthi-Maria; Daskalaki, Maria-Anastasia; Segou, Margarita; Rodolakis, Alexandros; Daskalakis, George (ORCID: https://orcid.org/0000-0001-7108-211X); Domali, Ekaterini (ORCID: https://orcid.org/0000-0001-8899-3040)
2026
A deep learning approach for classifying benign, malignant, and borderline ovarian tumors using convolutional neural networks and generative adversarial networks.
Medical Sciences, 14 (1), 89.
DOI: 10.3390/medsci14010089
Background/Objectives: Accurate preoperative characterization of ovarian masses is essential for appropriate clinical management, particularly for borderline ovarian tumors (BOTs), which are uncommon and often difficult to distinguish from benign or malignant lesions on ultrasound. Although expert subjective ultrasound assessment achieves high diagnostic accuracy, the limited availability of highly trained sonologists restricts its widespread application. Artificial intelligence-based approaches offer a potential solution; however, the low prevalence of BOTs hampers the development of robust deep learning models due to severe class imbalance. This study aimed to develop a Convolutional Neural Network (CNN)-based classifier enhanced with Generative Adversarial Networks (GANs) to improve the discrimination of ovarian masses as benign, malignant, or BOT using ultrasound images.

Methods: A total of 3816 ultrasound images from 636 ovarian masses were retrospectively analyzed, comprising 390 benign lesions, 202 malignant tumors, and 44 BOTs. To address class imbalance, a Deep Convolutional GAN (DCGAN) was used to generate 2000 synthetic BOT images for data augmentation. A three-class ensemble CNN model integrating the VGG16, ResNet50, and InceptionV3 architectures was developed. Performance was assessed on an independent test set and compared with a baseline model trained without DCGAN augmentation.

Results: The incorporation of DCGAN-generated BOT images significantly enhanced classification performance. The BOT F1-score increased from 68.4% to 86.5%, while overall accuracy improved from 84.7% to 91.5%. For BOT identification, the final model achieved a sensitivity of 88.2% and a specificity of 85.1%. Class-specific AUCs were 0.96 for benign lesions, 0.94 for malignant tumors, and 0.91 for BOTs.

Conclusions: DCGAN-based augmentation effectively expands limited ultrasound datasets and improves CNN performance, particularly for BOT detection. This approach demonstrates potential as a decision support tool for preoperative assessment of ovarian masses.
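The abstract describes an ensemble of three CNN backbones (VGG16, ResNet50, InceptionV3) but does not state how their outputs are combined. A common fusion rule is soft voting, i.e. averaging the per-class softmax probabilities; the sketch below illustrates that idea only, with hypothetical probability vectors standing in for real model outputs, and should not be read as the authors' exact method.

```python
# Illustrative soft-voting ensemble for the three-class problem
# (benign, malignant, BOT). The probability vectors are made-up
# stand-ins for the softmax outputs of VGG16, ResNet50, InceptionV3.

CLASSES = ["benign", "malignant", "BOT"]

def soft_vote(prob_vectors):
    """Average class probabilities across models; return (label, averaged probs)."""
    n_models = len(prob_vectors)
    avg = [sum(p[i] for p in prob_vectors) / n_models
           for i in range(len(CLASSES))]
    winner = max(range(len(CLASSES)), key=lambda i: avg[i])
    return CLASSES[winner], avg

# Hypothetical softmax outputs from the three backbones for one image.
vgg16_probs     = [0.20, 0.30, 0.50]
resnet50_probs  = [0.10, 0.35, 0.55]
inception_probs = [0.25, 0.30, 0.45]

label, avg = soft_vote([vgg16_probs, resnet50_probs, inception_probs])
print(label)  # → BOT (averaged probability 0.50 for the BOT class)
```

Soft voting is only one plausible choice; weighted averaging or majority voting over hard labels would be equally simple variations.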
medsci-14-00089-v2.pdf - Published Version
Available under License Creative Commons Attribution 4.0.