Preprocessing of Object Images Before Their Detection Using YOLO Neural Network

Authors

Serhiy Balovsyak, Serhii Stets
DOI:

https://doi.org/10.31861/sisiot2025.2.02002

Keywords:

digital image processing, artificial neural networks, YOLO, object detection, software

Abstract

Software was developed in Python for the digital preprocessing of object images before their detection by the You Only Look Once (YOLO) artificial neural network. Detection was carried out on images of objects, specifically vehicles and other road users. Image preprocessing was performed through histogram equalization and local contrast enhancement. The original images were read from video cameras or graphic files. Object detection was performed using the medium-sized YOLO convolutional neural network model, YOLOv8m, pre-trained on the Common Objects in Context (COCO) dataset. For each detected object, the YOLO network returns the recognition confidence, the coordinates of its center (x, y) in the image, and its width and height (w, h); detected objects, in particular cars, are marked with rectangular bounding boxes. To evaluate the accuracy of object detection, several metrics were used: Precision, Recall, Intersection over Union (IoU), F1, and Average Precision (AP). Ground-truth positioning of objects in the images was performed manually using the Roboflow tool, which allows accurate spatial localization of each object. The impact of image preprocessing on the detection accuracy of the YOLO network was studied using a typical traffic-scene image. After histogram equalization, the histogram becomes more symmetrical and uniform, improving the visual quality of both dark and bright areas. This improved object detection accuracy across almost all metrics: the increase in detection accuracy by the IoU metric is 0.169, and the combined improvement by the IoU, F1, and AP metrics is 0.502. After image preprocessing, detection accuracy increased and all vehicles were correctly detected.
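The global histogram-equalization step described in the abstract can be sketched in pure NumPy. This is an illustrative reconstruction, not the authors' code; the local contrast enhancement stage would typically be delegated to a library routine such as OpenCV's CLAHE.

```python
import numpy as np

def equalize_histogram(gray: np.ndarray) -> np.ndarray:
    """Global histogram equalization of an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]            # CDF value of the darkest occupied bin
    if cdf_min == gray.size:             # constant image: nothing to equalize
        return gray.copy()
    # Map every intensity through the normalized cumulative distribution,
    # which spreads the occupied gray levels over the full 0..255 range.
    lut = np.round((cdf - cdf_min) / (gray.size - cdf_min) * 255.0)
    return np.clip(lut, 0, 255).astype(np.uint8)[gray]
```

Equalizing the luminance channel in this way makes the histogram more uniform, which is the effect the study exploits to improve detection in dark and bright regions.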
The developed software can be used in various computer vision systems as well as in Internet of Things systems for remote monitoring and control.
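The IoU and F1 metrics mentioned above have simple closed forms. A minimal sketch, assuming boxes given in the (x_center, y_center, w, h) format that the YOLO network returns (the function names are illustrative, not taken from the paper):

```python
def iou_xywh(box_a, box_b):
    """Intersection over Union of two boxes given as (x_center, y_center, w, h),
    the box format returned by the YOLO network."""
    def corners(b):
        x, y, w, h = b
        return x - w / 2, y - h / 2, x + w / 2, y + h / 2
    ax1, ay1, ax2, ay2 = corners(box_a)
    bx1, by1, bx2, by2 = corners(box_b)
    # Overlap width/height are clamped to zero when the boxes do not intersect.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def f1_score(precision, recall):
    """F1 is the harmonic mean of Precision and Recall."""
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

Comparing such scores for detections on the original and on the preprocessed image gives per-metric accuracy differences of the kind reported in the abstract (e.g., +0.169 by IoU).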


Author Biographies

  • Serhiy Balovsyak, Yuriy Fedkovych Chernivtsi National University

In 1995, he graduated from Chernivtsi State University. In 2018, he defended his doctoral dissertation in the specialty "Computer Systems and Components". He currently works as an associate professor at the Department of Computer Systems and Networks of Chernivtsi National University. His research interests include digital signal processing, programming, and artificial neural networks.

  • Serhii Stets, Yuriy Fedkovych Chernivtsi National University

In 2021, he graduated from the National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute” with a degree in "Systems of Artificial Intelligence". In 2022, he entered a PhD program in the specialty "Software Engineering". His research interests include media processing, programming, and artificial neural networks.



Published

2025-12-30

Section

Articles

How to Cite

[1] S. Balovsyak and S. Stets, “Preprocessing of Object Images Before Their Detection Using YOLO Neural Network”, SISIOT, vol. 3, no. 2, p. 02002, Dec. 2025, doi: 10.31861/sisiot2025.2.02002.
