DOUBLE DEEP Q-LEARNING AND FASTER R-CNN-BASED AUTONOMOUS VEHICLE NAVIGATION AND OBSTACLE AVOIDANCE IN DYNAMIC ENVIRONMENT



Autonomous vehicle navigation in an unknown dynamic environment is crucial for both supervised- and Reinforcement-Learning-based autonomous maneuvering. The cooperative fusion of these two learning approaches has the potential to be an effective mechanism for tackling indefinite environmental dynamics. Most state-of-the-art autonomous vehicle navigation systems are trained on a specific mapped model with familiar environmental dynamics. This research, however, focuses on the cooperative fusion of supervised and Reinforcement Learning techniques for autonomous navigation of land vehicles in a dynamic, unknown environment. The Faster R-CNN, a supervised learning approach, identifies ambient environmental obstacles so that the autonomous vehicle can maneuver untroubled.
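Faster R-CNN itself is too large to sketch here, but the role it plays in this pipeline is to turn camera frames into labeled detections that the navigation agent treats as obstacles. A minimal sketch of that filtering step follows; the `(class_name, confidence, box)` tuple format, the `OBSTACLE_CLASSES` set, and the confidence threshold are illustrative assumptions, not the paper's actual interface:

```python
# Assumed detection format: (class_name, confidence, (x1, y1, x2, y2)).
# Real Faster R-CNN implementations (e.g. torchvision) return tensors of
# boxes, labels, and scores; this sketch abstracts that away.
OBSTACLE_CLASSES = {"car", "truck", "pedestrian", "bicycle"}  # assumed label set

def extract_obstacles(detections, min_confidence=0.5):
    """Keep only high-confidence detections whose class counts as an obstacle.

    The surviving boxes would then be passed to the RL agent as part of
    its state observation.
    """
    return [
        det for det in detections
        if det[0] in OBSTACLE_CLASSES and det[1] >= min_confidence
    ]
```

In a full system this filter would run once per frame, with the resulting obstacle boxes encoded into the state the Double Deep Q-Learning agent observes.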

The training policies of Double Deep Q-Learning, a Reinforcement Learning approach, in turn enable the autonomous agent to learn effective navigation decisions from the dynamic environment. The proposed model is primarily tested in a gaming environment similar to the real world, where it exhibits overall efficiency and effectiveness in the maneuvering of autonomous land vehicles.
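The defining idea of Double Deep Q-Learning is to decouple action selection from action evaluation when forming the bootstrap target: the online network picks the greedy next action, while a separate target network scores it, which reduces the Q-value overestimation of vanilla DQN. A minimal NumPy sketch of that target computation, assuming batched Q-value arrays rather than the actual networks used in the paper:

```python
import numpy as np

def double_dqn_targets(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """Compute Double DQN bootstrap targets for a batch of transitions.

    rewards       : shape (B,)   immediate rewards
    next_q_online : shape (B, A) online network's Q-values for next states
    next_q_target : shape (B, A) target network's Q-values for next states
    dones         : shape (B,)   1.0 if the episode ended, else 0.0
    """
    # Online network selects the greedy next action...
    greedy_actions = np.argmax(next_q_online, axis=1)
    # ...but the target network evaluates it (the "double" decoupling).
    evaluated = next_q_target[np.arange(len(rewards)), greedy_actions]
    # Terminal transitions bootstrap nothing.
    return rewards + gamma * evaluated * (1.0 - dones)
```

During training, the online network would be regressed toward these targets on minibatches sampled from a replay buffer, with the target network's weights periodically synchronized from the online network.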
