Figure 1: Wound Dataset Image Classification. (Count of Images per Label)
| Category    | Precision | Recall | F1-Score |
|-------------|-----------|--------|----------|
| Abrasion    | 0.94      | 0.96   | 0.95     |
| Bruises     | 0.93      | 0.92   | 0.92     |
| Burns       | 0.96      | 0.97   | 0.96     |
| Cuts        | 0.94      | 0.95   | 0.94     |
| Injuries    | 0.93      | 0.91   | 0.92     |
| Lacerations | 0.96      | 0.95   | 0.96     |
| Stab Wounds | 0.97      | 0.98   | 0.98     |
Table 1: Precision, Recall, and F1-Score for Each Wound Category.
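The per-class metrics in Table 1 can each be derived from a confusion matrix. A minimal sketch of that computation, using hypothetical counts for two classes (not the paper's actual data):

```python
def per_class_metrics(cm, labels):
    """Compute precision, recall, and F1 per class from a confusion matrix.

    cm[i][j] = number of samples with true label i predicted as label j.
    """
    n = len(labels)
    results = {}
    for k in range(n):
        tp = cm[k][k]
        fp = sum(cm[i][k] for i in range(n)) - tp  # predicted k, true label differs
        fn = sum(cm[k][j] for j in range(n)) - tp  # true label k, predicted differently
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        results[labels[k]] = (precision, recall, f1)
    return results

# Hypothetical counts for illustration only:
labels = ["Cuts", "Lacerations"]
cm = [[95, 5],   # 95 cuts classified correctly, 5 predicted as lacerations
      [4, 96]]   # 4 lacerations predicted as cuts, 96 classified correctly
metrics = per_class_metrics(cm, labels)
```

Here recall for "Cuts" is 95/100 = 0.95 and precision is 95/99 ≈ 0.96, mirroring how each row of Table 1 would be produced from the full seven-class matrix.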
| Hyperparameter    | Value                          |
|-------------------|--------------------------------|
| Learning Rate     | 0.001                          |
| Batch Size        | 32                             |
| Epochs            | 45 (initial), 25 (fine-tuning) |
| Dropout Rate      | 0.5                            |
| Data Augmentation | Yes                            |
| Optimizer         | Adam                           |
Table 2: Key Hyperparameters Used In Training
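The hyperparameters in Table 2 can be combined with a frozen VGG16 base into a transfer-learning setup. The sketch below assumes the Keras API; the dense-layer width (256) and input size (224×224) are illustrative assumptions, while the class count, dropout rate, learning rate, optimizer, and epoch split come from Tables 1 and 2:

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 7  # seven wound categories listed in Table 1

# Pretrained convolutional base, frozen for the initial training phase.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),   # illustrative custom-layer width
    layers.Dropout(0.5),                    # dropout rate from Table 2
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # Table 2
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# Phase 1: model.fit(train_ds, epochs=45, batch_size=32, ...)
# Phase 2: unfreeze part of the base, recompile with a lower learning
# rate, and fine-tune for the remaining 25 epochs.
```

This mirrors the two-phase schedule in Table 2 (45 initial epochs with the base frozen, then 25 fine-tuning epochs); the exact unfreezing depth is not specified in the tables.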
| True Label  | Predicted Label |
|-------------|-----------------|
| Cuts        | Lacerations     |
| Lacerations | Cuts            |
| Burns       | Bruises         |
Table 3: Examples of Misclassified Wounds.
Figure 3: VGG16 Custom Layers.
Figure 4: VGG16 Model Architecture (Convolutional Layers).
Figure 4: ROC Curves for Wound Classification Categories.
Figure 5: Confusion Matrix for Classification of Wound Images (axes: True Labels vs. Predicted Labels).