ProMUS-NET: Artificial intelligence detects more prostate cancer than urologists on micro-ultrasonography
Lughezzani, Giovanni; Fasulo, Vittorio;
2025-01-01
Abstract
Objectives: To improve the sensitivity and inter-reader consistency of prostate cancer localisation on micro-ultrasonography (MUS) by developing a deep learning model for automatic cancer segmentation, and to compare model performance with that of expert urologists.

Patients and Methods: We performed an institutional review board-approved prospective collection of MUS images from patients undergoing magnetic resonance imaging (MRI)-ultrasonography fusion-guided biopsy at a single institution. Patients underwent 14-core systematic biopsy and additional targeted sampling of suspicious MRI lesions. Biopsy pathology and MRI information were cross-referenced to annotate the locations of International Society of Urological Pathology Grade Group (GG) >= 2 clinically significant cancer on MUS images. We trained a no-new U-Net (nnU-Net) model, the Prostate Micro-Ultrasound Network (ProMUS-NET), to localise GG >= 2 cancer on these image stacks in a fivefold cross-validation. Performance was compared with that of six expert urologists in a matched sub-cohort.

Results: The artificial intelligence (AI) model achieved an area under the receiver operating characteristic curve of 0.92 and detected more cancers than the urologists (lesion-level sensitivity 73% vs 58%; patient-level sensitivity 77% vs 66%). AI lesion-level sensitivity for peripheral zone lesions was 86.2%.

Conclusions: Our AI model identified prostate cancer lesions on MUS with high sensitivity and specificity. Further work is ongoing to improve margin overlap, reduce false positives, and perform external validation. AI-assisted prostate cancer detection on MUS has great potential to improve biopsy diagnosis by urologists.
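The fivefold cross-validation in the methods can be sketched as below. This is an illustrative assumption, not the authors' code: it assumes splits are made at the patient level so that all images from one patient stay in the same fold, which is the standard way to avoid data leakage when several MUS images come from the same biopsy session.

```python
import random

def patient_level_folds(patient_ids, k=5, seed=0):
    """Partition patient IDs into k folds for cross-validation.

    All images belonging to a given patient end up in the same fold,
    so no patient appears in both the training and validation sets of
    any split. (Hypothetical helper for illustration only.)
    """
    ids = sorted(set(patient_ids))      # deduplicate, fix ordering
    rng = random.Random(seed)           # seeded for reproducibility
    rng.shuffle(ids)
    folds = [ids[i::k] for i in range(k)]  # round-robin assignment
    splits = []
    for i in range(k):
        val = set(folds[i])
        train = [p for p in ids if p not in val]
        splits.append((train, sorted(val)))
    return splits

# Each of the 5 splits trains on ~80% of patients and validates on ~20%.
splits = patient_level_folds([f"pt{i:03d}" for i in range(20)], k=5)
```

Reporting lesion-level and patient-level sensitivity separately, as the abstract does, then corresponds to pooling the validation predictions from all five splits so every patient is evaluated exactly once.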


