Journal of the Russian Universities. Radioelectronics

Analysis of MSTAR Object Classification Features Extracted by a Deep Convolutional Neural Network

https://doi.org/10.32603/1993-8985-2025-28-2-45-56

Abstract

Introduction. Deep convolutional neural networks are effective tools for classifying objects in radar images; however, their decision-making process is not transparent. This makes determining which image features actually drive the network's classification decisions a relevant research task. Such a study contributes to the field of explainable AI (XAI).
Aim. To determine the classification features of military objects that a deep convolutional neural network extracts during training.
Materials and methods. The convolutional neural network was designed, trained, and tested using the Keras and TensorFlow 2.0 libraries on the open part of the MSTAR dataset. The Grad-CAM method was used to visualize and determine the classification features of objects in the dataset.
Results. When using MSTAR images with an unsuppressed background, the object itself makes a significant contribution to the classification result for only 58 % of the images. For 6 % of the images, the classification result is determined by the object's radar shadow, and for 25 % of the images, by the background. For 11 % of the images, the most significant classification feature could not be established. For images with a suppressed background, the brightness distribution and the object contour made the main contribution to the classification result in 60 and 40 % of the cases, respectively.
Conclusion. In the MSTAR dataset, each class of objects is represented by a set of radar images of the same real object taken from different view angles. This gives rise to local background features, invisible to humans and unique to each class of objects. These features have a significant effect on the training outcome of the neural network. This effect can be eliminated by suppressing the background and reducing the image dimensionality. The obtained results also suggest the feasibility of further research into XAI capabilities in relation to modern neural detectors and radar image datasets.
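The paper's experiments rely on Keras/TensorFlow, but the core Grad-CAM step used to produce the visualizations can be illustrated without a deep-learning framework. The sketch below is a minimal, library-free NumPy version of the Grad-CAM combination rule from Selvaraju et al. (ref. 18): channel weights are obtained by global-average-pooling the gradients of the class score with respect to the last convolutional layer's activations, the feature maps are summed with those weights, and a ReLU keeps only positively contributing regions. The function name and array shapes are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

def grad_cam_heatmap(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Combine conv-layer activations and their gradients into a Grad-CAM map.

    feature_maps: activations of the last conv layer, shape (H, W, K).
    gradients:    d(class score)/d(activations), same shape (H, W, K).
    Returns a heatmap of shape (H, W), normalized to [0, 1].
    """
    # Channel importance weights alpha_k: global average pool of the gradients.
    alpha = gradients.mean(axis=(0, 1))                        # shape (K,)
    # Weighted sum of the feature maps over the channel axis.
    cam = np.tensordot(feature_maps, alpha, axes=([2], [0]))   # shape (H, W)
    # ReLU: keep only features with a positive influence on the class score.
    cam = np.maximum(cam, 0.0)
    # Normalize for visualization (e.g. overlaying on the radar image).
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In a framework setting, `gradients` would come from automatic differentiation (e.g. TensorFlow's `tf.GradientTape`); upsampling the resulting low-resolution map to the input image size then shows which regions (object, shadow, or background) drove the classification, as analyzed in the Results above.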

About the Author

I. F. Kupryashkin
Military Educational and Scientific Center of the Air Force "N. E. Zhukovsky and Yu. A. Gagarin Air Force Academy"
Russian Federation

Ivan F. Kupryashkin, Dr Sci. (Eng.) (2017), Associate Professor (2011), Head of the Department of Combat Use of Electronic Warfare Systems (with Aerospace Control Systems and Guided Weapons)

54 A, Starykh Bolshevikov St., Voronezh 394064



References

1. Zhu X., Montazeri S., Ali M., Hua Y., Wang Y., Mou L., Shi Y., Xu F., Bamler R. Deep Learning Meets SAR. Available at: https://arxiv.org/abs/2006.10027 (accessed 10.03.2025).

2. Anas H., Majdoulayne H., Chaimae A., Nabil S. M. Deep Learning for SAR Image Classification. Intelligent Systems and Applications. IntelliSys 2019. Advances in Intelligent Systems and Computing. Vol. 1037. Cham, Springer, 2020, pp. 890–898. doi: 10.1007/978-3-030-29516-5_67

3. Li J., Xu C., Su H., Gao L., Wang T. Deep Learning for SAR Ship Detection: Past, Present and Future. Remote Sensing. 2022, vol. 14, iss. 11, art. no. 2712. doi: 10.3390/rs14112712

4. Wu T.-D., Wang H.-F., Hsu P.-H., Tiong K.-K., Chang L.-C., Chang C.-H. Target Detection and Recognition in Synthetic Aperture Radar Images Using YOLO Deep Learning Methods. Intern. Conf. on Consumer Electronics, PingTung, Taiwan, 17–19 July 2023. IEEE, 2023, pp. 593–594. doi: 10.1109/ICCE-Taiwan58799.2023.10226736

5. Yu C., Shin Y. SMEP-DETR: Transformer-Based Ship Detection for SAR Imagery with Multi-Edge Enhancement and Parallel Dilated Convolutions. Remote Sensing. 2025, vol. 17, iss. 6, art. no. 953. doi: 10.3390/rs17060953

6. Zhu M., Hu G., Zhou H., Wang S., Feng Z., Yue S. A Ship Detection Method via Redesigned FCOS in Large-Scale SAR Images. Remote Sensing. 2022, vol. 14, iss. 5, art. no. 1153. doi: 10.3390/rs14051153

7. Li W., Yang W., Hou Y., Liu L., Liu X., Li X. SARATR-X: A Foundation Model for Synthetic Aperture Radar Images Target Recognition. Available at: https://arxiv.org/html/2405.09365v1 (accessed 10.03.2025).

8. Mishra P. Ob'yasnimyye modeli iskusstvennogo intellekta na Python. Model' iskusstvennogo intel-lekta. Ob'yasneniya s ispol'zovaniyem bibliotek, rasshireniy i freymvorkov na osnove yazyka Python [Practical Explainable AI Using Python: Artificial Intelligence Model Explanations Using Python-based Libraries, Extensions, and Frameworks]. Moscow, DMK Press, 2022, 298 p. (In Russ.)

9. Feng Z., Zhu M., Stankovic L., Ji H. Self-Matching CAM: A Novel Accurate Visual Explanation of CNNs for SAR Image Interpretation. Remote Sensing. 2021, vol. 13, iss. 9, art. no. 1772. doi: 10.3390/rs13091772

10. Zhu M., Zang B., Ding L., Lei T., Feng Z., Fan J. LIME-Based Data Selection Method for SAR Images Generation Using GAN. Remote Sensing. 2022, vol. 14, iss. 1, art. no. 204. doi: 10.3390/rs14010204

11. Fein-Ashley J., Kannan R., Prasanna V. Studying the Effects of Self-Attention on SAR Automatic Target Recognition. Available at: https://arxiv.org/abs/2409.00473 (accessed 10.03.2025).

12. Taufique A. M. N., Nagananda N., Savakis A. Visualization of Deep Transfer Learning in SAR Imagery. Available at: https://arxiv.org/abs/2103.11061 (accessed 10.03.2025).

13. Li Y., Li X., Li W., Hou Q., Liu L., Cheng M.-M., Yang J. SARDet-100K: Towards Open-Source Benchmark and ToolKit for Large-Scale SAR Object Detection. Available at: https://arxiv.org/abs/2403.06534 (accessed 10.03.2025).

14. Kechagias-Stamatis O., Aouf N. Automatic Target Recognition on Synthetic Aperture Radar Imagery: A Survey. Available at: https://arxiv.org/abs/2007.02106 (accessed 10.03.2025).

15. Liu Y., Li W., Liu L., Zhou J., Xiong X., Peng B., Song Y., Yang W., Liu T., Liu Z., Li X. NUDT4MSTAR: A New Dataset and Benchmark Towards SAR Target Recognition in the Wild. Available at: https://arxiv.org/abs/2501.13354 (accessed 10.03.2025).

16. Kupryashkin I. F., Mazin A. S. Visualization of Convolutional Neural Network Patterns in the Noisy Radar Images Classification Problem. Digital Signal Processing. 2021, no. 4, pp. 42–47. (In Russ.)

17. Chollet F. Deep Learning with Python. Shelter Island, Manning Publications, 2017, 384 p.

18. Selvaraju R. R., Cogswell M., Das A., Vedantam R., Parikh D., Batra D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization. Available at: https://arxiv.org/abs/1610.02391 (accessed 24.10.2024).

19. Kupryashkin I. F. Comparative Results of the Classification Accuracy of MSTAR Dataset Radar Images by Convolutional Neural Networks with Different Architectures. J. of Radio Electronics. 2021, no. 11, pp. 1–27. (In Russ.) doi: 10.30898/1684-1719.2021.11.14

20. Simonyan K., Zisserman A. Very Deep Convolutional Networks for Large-Scale Image Recognition. Available at: https://arxiv.org/abs/1409.1556 (accessed 24.10.2024).


For citations:


Kupryashkin I.F. Analysis of MSTAR Object Classification Features Extracted by a Deep Convolutional Neural Network. Journal of the Russian Universities. Radioelectronics. 2025;28(2):45-56. (In Russ.) https://doi.org/10.32603/1993-8985-2025-28-2-45-56


This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 1993-8985 (Print)
ISSN 2658-4794 (Online)