Analysis of Machine Learning Methods in Navigation and Trajectory Planning for Autonomous Control of Unmanned Systems

Authors

Roman Trembovetskyi, Inna Rozlomii, Vira Babenko

DOI:

https://doi.org/10.31861/sisiot2024.2.02009

Keywords:

machine learning, autonomous navigation, trajectory planning, unmanned systems, deep learning

Abstract

This article investigates the use of machine learning methods in navigation and trajectory planning for the autonomous control of unmanned systems. The main approaches, such as deep learning and reinforcement learning, are considered, offering innovative solutions to challenges arising in dynamic and complex environments. An overview of machine learning methods is presented, highlighting their advantages over traditional algorithms owing to their flexibility, adaptability, and ability to operate under uncertainty. The application of machine learning in trajectory planning is analyzed, including the use of autoencoders, generative models, and graph neural networks for predicting and optimizing routes. Existing problems and challenges are discussed, in particular ensuring safety and reliability, the need for large volumes of high-quality data, issues of model interpretability, and regulatory aspects. Prospects for development are identified, including the design of more efficient algorithms, enhancing model transparency, and establishing standards for the responsible deployment of autonomous systems. In conclusion, it is emphasized that machine learning is a transformative force in the field of autonomous navigation and trajectory planning. Overcoming current challenges and continuing innovation will unlock the full potential of unmanned systems, bringing significant benefits to society and the economy through widespread application across various sectors.


Author Biographies

  • Roman Trembovetskyi, Cherkasy State Technological University

    PhD student, Department of Information Security and Computer Engineering, Cherkasy State Technological University. Research interests: machine learning, unmanned mobile systems, drone engineering, Internet of Things.

  • Inna Rozlomii, Cherkasy State Technological University

    PhD, Associate Professor, Department of Information Security and Computer Engineering, Cherkasy State Technological University. Research interests: information security, methods of cryptographic information transformation, Internet of Things, databases.

  • Vira Babenko, Cherkasy State Technological University

    Doctor of Technical Sciences, Professor, Department of Information Security and Computer Engineering, Cherkasy State Technological University. Research interests: information protection in computer systems, security of information and communication systems, methodology for synthesizing cryptographic transformation operations, development of cryptographic primitives.

References

K. Nonami, M. Kartidjo, K.-J. Yoon, and A. Budiyono, Autonomous Control Systems and Vehicles, Springer Science & Business Media, 2013.

M. X. Evangeline and Karthik, Modeling, Simulation, and Control of AI Robotics and Autonomous Systems, IGI Global, 2024.

(n.d.). [Online]. Available: https://images.javatpoint.com/tutorial/machine-learning/images/machine-learning-models2.png. [Accessed: Dec. 12, 2024].

F. X. Govers III, Artificial Intelligence for Robotics: Build intelligent robots using ROS 2, Python, OpenCV, and AI/ML techniques for real-world tasks, 2nd ed., Packt Publishing, Birmingham, 2024.

G. Bonaccorso, Hands-On Unsupervised Learning with Python: Implement Machine Learning and Deep Learning Models Using Scikit-Learn, TensorFlow, and More, Packt Publishing, 2019.

S. Arshad and G.-W. Kim, "Role of Deep Learning in Loop Closure Detection for Visual and Lidar SLAM: A Survey," Sensors, vol. 21, no. 4, p. 1243, 2021. doi: 10.3390/s21041243.

A. M. Roth, D. Manocha, R. D. Sriram, and E. Tabassi, Explainable and Interpretable Reinforcement Learning for Robotics, Springer International Publishing, Cham, 2024. doi: 10.1007/978-3-031-47518-4.

(n.d.). [Online]. Available: https://miro.medium.com/v2/1*7cuAqjQ97x1H_sBIeAVVZg.png. [Accessed: Dec. 12, 2024].

Z. Fan, "An exploration of reinforcement learning and deep reinforcement learning," Appl. Comput. Eng., vol. 73, no. 1, pp. 154-159, 2024. doi: 10.54254/2755-2721/73/20240386.

Y. Huang, "Deep Q-Networks," in Deep Reinforcement Learning: Fundamentals, Research and Applications, H. Dong, Z. Ding, and S. Zhang, Eds., Springer Singapore, 2020, pp. 135-160. doi: 10.1007/978-981-15-4095-0_4.

J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, "Proximal policy optimization algorithms," arXiv preprint arXiv:1707.06347, 2017.

E. Dhulkefl, A. Durdu, and H. Terzioğlu, "Dijkstra algorithm using UAV path planning," Konya J. Eng. Sci., vol. 8, pp. 92-105, 2020.

J. Ding, Y. Zhou, X. Huang, K. Song, S. Lu, and L. Wang, "An Improved RRT* Algorithm for Robot Path Planning Based on Path Expansion Heuristic Sampling," J. Comput. Sci., art. 101937, 2023. doi: 10.1016/j.jocs.2022.101937.

A. Marashian and A. Razminia, "Mobile robot’s path-planning and path-tracking in static and dynamic environments: Dynamic programming approach," Robot. Auton. Syst., vol. 172, p. 104592, 2024. doi: 10.1016/j.robot.2023.104592.

D. Arce, J. Solano, and C. Beltrán, "A Comparison Study between Traditional and Deep-Reinforcement-Learning-Based Algorithms for Indoor Autonomous Navigation in Dynamic Scenarios," Sensors, vol. 23, no. 24, p. 9672, 2023. doi: 10.3390/s23249672.

A. Tampuu, T. Matiisen, M. Semikin, D. Fishman, N. Muhammad, "A Survey of End-to-End Driving: Architectures and Training Methods," IEEE Trans. Neural Netw. Learn. Syst., vol. 32, pp. 1-21, 2020. doi: 10.1109/tnnls.2020.3043505.

R. Dwivedi et al., "Explainable AI (XAI): Core Ideas, Techniques and Solutions," ACM Comput. Surv., vol. 54, no. 3, pp. 1-42, 2022. doi: 10.1145/3561048.

B. Y. Suprapto et al., "Designing an Autonomous Vehicle Using Sensor Fusion Based on Path Planning and Deep Learning Algorithms," SAIEE Afr. Res. J., vol. 115, no. 3, pp. 86-98, 2024. doi: 10.23919/saiee.2024.10551314.

S. Grigorescu, B. Trasnea, T. Cocias, and G. Macesanu, "A survey of deep learning techniques for autonomous driving," J. Field Robot., vol. 37, no. 3, pp. 362-386, 2020. doi: 10.1002/rob.21918.

B. R. Kiran et al., "Deep Reinforcement Learning for Autonomous Driving: A Survey," IEEE Trans. Intell. Transp. Syst., vol. 22, no. 1, pp. 1-18, 2021. doi: 10.1109/tits.2021.3054625.

Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu, "A Comprehensive Survey on Graph Neural Networks," IEEE Trans. Neural Netw. Learn. Syst., vol. 32, no. 1, pp. 4-24, 2021. doi: 10.1109/tnnls.2020.2978386.

(n.d.). [Online]. Available: https://www.pi-research.org/project/gnn/featured_hu7d801bb7c748537ad832916140f12452_219604_720x0_resize_q75_lanczos.jpg. [Accessed: Dec. 12, 2024].

V. Mnih et al., "Asynchronous methods for deep reinforcement learning," arXiv preprint arXiv:1602.01783, 2016.

H. Tan, "Reinforcement learning with deep deterministic policy gradient," in 2021 International Conference on Artificial Intelligence, Big Data and Algorithms (CAIBDA), IEEE, 2021, pp. 82-85.

H. Wang, Z. Wang, and X. Cui, "Multi-objective Optimization Based Deep Reinforcement Learning for Autonomous Driving Policy," J. Phys.: Conf. Ser., vol. 1861, no. 1, p. 012097, 2021. doi: 10.1088/1742-6596/1861/1/012097.

M. Bansal, A. Krizhevsky, and A. Ogale, "ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst," 2018. [Online]. Available: https://arxiv.org/abs/1812.03079.

M. Kim, J. Kim, M. Jung, and H. Oh, "Towards monocular vision-based autonomous flight through deep reinforcement learning," Expert Syst. With Appl., vol. 198, p. 116742, 2022. doi: 10.1016/j.eswa.2022.116742.

M. Zeller, "Safety Assurance of Autonomous Systems using Machine Learning: An Industrial Case Study and Lessons Learnt," INCOSE Int. Symp., vol. 33, no. 1, pp. 320-333, 2023. doi: 10.1002/iis2.13024.

P. Sun et al., "Scalability in Perception for Autonomous Driving: Waymo Open Dataset," 2020. [Online]. Available: https://arxiv.org/abs/1912.04838.

R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, and D. Pedreschi, "A Survey of Methods for Explaining Black Box Models," ACM Comput. Surv., vol. 51, no. 5, pp. 1-42, 2019. doi: 10.1145/3236009.

Y. Cheng, D. Wang, P. Zhou, and T. Zhang, "Model Compression and Acceleration for Deep Neural Networks: The Principles, Progress, and Challenges," IEEE Signal Process. Mag., vol. 35, no. 1, pp. 126-136, 2018. doi: 10.1109/msp.2017.2765695.


Published

2024-12-30

Issue

Section

Articles

How to Cite

[1]
R. Trembovetskyi, I. Rozlomii, and V. Babenko, “Analysis of Machine Learning Methods in Navigation and Trajectory Planning for Autonomous Control of Unmanned Systems”, SISIOT, vol. 2, no. 2, p. 02009, Dec. 2024, doi: 10.31861/sisiot2024.2.02009.
