Deep Learning for the Radiographic Detection of Apical Lesions

 Published online: May 31, 2019

Abstract

Introduction

We applied deep convolutional neural networks (CNNs) to detect apical lesions (ALs) on panoramic dental radiographs.

Methods

Based on a synthesized data set of 2,001 tooth segments from panoramic radiographs, a custom-made 7-layer deep neural network, parameterized by a total of 4,299,651 weights, was trained and validated via 10-times-repeated group shuffling. Hyperparameters were tuned using a grid search. Our reference test was the majority vote of 6 independent examiners who detected ALs on an ordinal scale (0, no AL; 1, widened periodontal ligament, uncertain AL; 2, clearly detectable lesion, certain AL). Metrics were the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and positive/negative predictive values. Subgroup analysis for tooth types was performed, and different margins of agreement of the reference test were applied (base case: 2; sensitivity analysis: 6).
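To illustrate the reference-test construction, the following sketch (not the authors' code) shows one plausible way to binarize the 6 examiners' ordinal scores into a per-tooth label; the strict-majority rule and tie handling here are assumptions, and the paper's margin-of-agreement rules are not reproduced.

```python
# Hypothetical sketch of deriving a binary per-tooth reference label
# from 6 examiners scoring on the ordinal scale 0 / 1 / 2 described above.
# Assumption: "AL present" if a strict majority of examiners scored >= threshold.

def majority_reference(scores, threshold=1):
    """Return 1 if a strict majority of examiners rated the tooth
    at or above `threshold` on the 0/1/2 scale, else 0."""
    positives = sum(1 for s in scores if s >= threshold)
    return 1 if positives > len(scores) / 2 else 0

print(majority_reference([0, 1, 2, 2, 1, 0]))               # 4 of 6 scored >= 1 -> 1
print(majority_reference([0, 0, 0, 1, 0, 2]))               # only 2 of 6 -> 0
# "Certain AL only" analysis: require a score of 2
print(majority_reference([2, 2, 2, 2, 0, 1], threshold=2))  # 4 of 6 -> 1
```

Setting `threshold=2` mirrors the "only certain ALs" subanalysis reported in the Results.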

Results

The mean (standard deviation) tooth-level prevalence of both uncertain and certain ALs was 0.16 (0.03) in the base case. The AUC of the CNN was 0.85 (0.04). Sensitivity and specificity were 0.65 (0.12) and 0.87 (0.04), respectively. The resulting positive predictive value was 0.49 (0.10), and the negative predictive value was 0.93 (0.03). In molars, sensitivity was significantly higher than in other tooth types, whereas specificity was lower. When only certain ALs were assessed, the AUC was 0.89 (0.04). Increasing the margin of agreement to 6 significantly increased the AUC to 0.95 (0.02), mainly because the sensitivity increased to 0.74 (0.19).
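The reported predictive values follow from the reported sensitivity, specificity, and prevalence via Bayes' theorem; the short check below reproduces them from the base-case means in the abstract.

```python
# Worked check of the reported predictive values via Bayes' theorem,
# using the abstract's base-case means:
# prevalence 0.16, sensitivity 0.65, specificity 0.87.

def predictive_values(sens, spec, prev):
    """Return (PPV, NPV) for a binary test from Bayes' theorem."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

ppv, npv = predictive_values(0.65, 0.87, 0.16)
print(round(ppv, 2), round(npv, 2))  # 0.49 0.93
```

The low PPV relative to the NPV reflects the low lesion prevalence (0.16): even with a specificity of 0.87, false positives are common among the many healthy teeth.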

Conclusions

A moderately deep CNN trained on a limited amount of image data showed satisfactory discriminatory ability to detect ALs on panoramic radiographs.
