
Deep-Q-Network Hybridization with Extended Kalman Filter for Accelerated Learning in Autonomous Navigation with an Auxiliary Security Module
  • Carlos D. S. Bezerra,
  • Flávio Vieira,
  • Anderson da Silva Soares
Carlos D. S. Bezerra
Universidade Federal de Goias - Instituto de Informatica

Corresponding Author: [email protected]

Flávio Vieira
Universidade Federal de Goias - Campus Samambaia
Anderson da Silva Soares
Universidade Federal de Goias - Instituto de Informatica

Abstract

This article proposes an algorithm for autonomous navigation of mobile robots, namely EKF-DQN, that merges Reinforcement Learning with the Extended Kalman Filter (EKF) as a localization technique, aiming to accelerate learning and improve the reward values obtained during the learning process. More specifically, Deep Q-Networks (DQN) are used to control the trajectory of an autonomous vehicle in an indoor environment. Owing to the EKF's ability to predict states, it is used as a learning accelerator for the DQN, predicting states ahead and inserting this information into the replay memory. To ensure the safety of the navigation process, a visual safety system is also proposed that prevents collisions between the mobile vehicle and people moving in the environment. The efficiency of the proposed algorithm is verified through computer simulations using the CoppeliaSim simulator with control code written in Python. The simulation results show that the EKF-DQN algorithm accelerates the maximization of the obtained rewards and achieves a higher success rate in fulfilling the proposed mobile robot mission compared to the DQN and Q-Learning algorithms.
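
As a rough illustration of the replay-augmentation idea described above, the sketch below stores, alongside each real transition, an EKF-predicted look-ahead transition in the DQN replay buffer. This is a minimal sketch under assumed models, not the authors' implementation: the Jacobians F and B, the covariances P and Q, the control vector associated with the discrete action, and the reward_model estimator are hypothetical placeholders.

import random
from collections import deque

import numpy as np


class ReplayBuffer:
    """Fixed-capacity experience replay memory."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)


def ekf_predict(state, control, F, B, P, Q):
    """One EKF prediction step with an assumed linearized motion model."""
    state_pred = F @ state + B @ control
    P_pred = F @ P @ F.T + Q
    return state_pred, P_pred


def store_with_ekf_lookahead(buffer, state, action, control, reward,
                             next_state, done, F, B, P, Q, reward_model):
    """Store the real transition, then an EKF-predicted look-ahead transition."""
    buffer.push(state, action, reward, next_state, done)
    pred_state, _ = ekf_predict(next_state, control, F, B, P, Q)
    buffer.push(next_state, action, reward_model(pred_state), pred_state, False)


if __name__ == "__main__":
    n = 3                                     # state: (x, y, heading)
    rng = np.random.default_rng(0)
    buf = ReplayBuffer()
    F, B = np.eye(n), np.eye(n)               # trivial placeholder Jacobians
    P, Q = np.eye(n) * 0.1, np.eye(n) * 0.01
    store_with_ekf_lookahead(
        buf,
        state=rng.normal(size=n), action=1, control=np.array([0.1, 0.0, 0.05]),
        reward=1.0, next_state=rng.normal(size=n), done=False,
        F=F, B=B, P=P, Q=Q,
        reward_model=lambda s: -float(np.linalg.norm(s)),  # assumed proxy reward
    )
    print(len(buf.buffer))                    # 2: real + EKF-predicted transition

In the article itself, the predicted transitions come from the same EKF used for localization; the placeholder models above only indicate where that prediction step would plug into the replay pipeline.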
31 Oct 2022: Submitted to Transactions on Emerging Telecommunications Technologies
01 Nov 2022: Submission Checks Completed
01 Nov 2022: Assigned to Editor
01 Nov 2022: Review(s) Completed, Editorial Evaluation Pending
20 Jan 2023: Reviewer(s) Assigned
11 Jul 2023: Editorial Decision: Revise Major
12 Sep 2023: 1st Revision Received
13 Sep 2023: Review(s) Completed, Editorial Evaluation Pending
13 Sep 2023: Submission Checks Completed
13 Sep 2023: Assigned to Editor
05 Oct 2023: Reviewer(s) Assigned
18 Nov 2023: Editorial Decision: Revise Minor