Abstract:
Data dissemination in Vehicular Ad Hoc Networks (VANETs) is vital for the development and operation of intelligent transportation systems (ITS), as it enables the rapid and reliable exchange of critical information among vehicles and infrastructure. However, the dynamic nature of VANETs, characterized by high node mobility and frequently changing network topologies, poses significant challenges for conventional routing protocols, which struggle with scalability, Quality of Service (QoS), and efficient data dissemination. Traditional machine learning (ML) based routing algorithms typically rely on predefined datasets for training and can fail to adapt to the dynamic and unpredictable nature of VANET environments. In contrast, reinforcement learning (RL) learns from real-time interaction with the environment, allowing RL-based routing algorithms to adaptively optimize routing decisions under constantly changing network conditions such as vehicle density, mobility patterns, and communication link quality. This chapter explores the potential of RL to address these challenges by enabling adaptive routing protocols that dynamically adjust to network conditions. We provide a comprehensive overview of the fundamentals of RL and examine how these concepts can be applied to develop RL-based routing strategies in VANETs. Through detailed analysis and discussion, the chapter demonstrates the ability of RL to enhance the scalability, QoS, and overall performance of data dissemination in VANETs, paving the way for more robust and efficient vehicular communications in future ITS deployments.