Intelligent data dissemination in vehicular networks: leveraging reinforcement learning


dc.contributor.author Bhatia, Jitendra
dc.contributor.author Shah, Maanit
dc.contributor.author Prajapati, Rushi
dc.contributor.author Shah, Khush
dc.contributor.author Shah, Premal
dc.contributor.author Trivedi, Harshal
dc.contributor.author Joshi, Dhaval
dc.coverage.spatial Singapore
dc.date.accessioned 2025-06-20T08:01:03Z
dc.date.available 2025-06-20T08:01:03Z
dc.date.issued 2025-06
dc.identifier.citation Bhatia, Jitendra; Shah, Maanit; Prajapati, Rushi; Shah, Khush; Shah, Premal; Trivedi, Harshal and Joshi, Dhaval, "Intelligent data dissemination in vehicular networks: leveraging reinforcement learning", in Deep learning based solutions for vehicular adhoc networks, DOI: 10.1007/978-981-96-5190-0_9, Singapore: Springer, pp. 195-218, Jun. 2025, ISBN: 9789819651924.
dc.identifier.isbn 9789819651924
dc.identifier.uri https://doi.org/10.1007/978-981-96-5190-0_9
dc.identifier.uri https://repository.iitgn.ac.in/handle/123456789/11533
dc.description.abstract Data dissemination in Vehicular Ad Hoc Networks (VANETs) is vital for the development and operation of intelligent transportation systems (ITS), as it enables the rapid and reliable exchange of critical information among vehicles and infrastructure. However, the dynamic nature of VANETs, characterised by high node mobility and frequently changing network topologies, poses significant challenges for conventional routing protocols. Traditional routing protocols struggle with scalability, Quality of Service (QoS), and efficient data dissemination. Machine learning (ML)-based routing algorithms typically rely on predefined datasets for training and can struggle to adapt to the dynamic and unpredictable nature of VANET environments. In contrast, reinforcement learning (RL) excels by learning from interactions with the environment in real time. RL-based routing algorithms can adaptively optimize routing decisions based on constantly changing network conditions, such as vehicle density, mobility patterns, and communication link quality. This chapter explores the potential of RL to address these challenges by enabling adaptive routing protocols that dynamically adjust to network conditions. We provide a comprehensive overview of the fundamentals of RL and examine how these concepts can be applied to develop RL-based routing strategies in VANETs. Through detailed analysis and discussion, the chapter demonstrates the ability of RL to enhance the scalability, QoS, and overall performance of data dissemination in VANETs, paving the way for more robust and efficient vehicular communications in future ITS deployments.
dc.description.statementofresponsibility by Jitendra Bhatia, Maanit Shah, Rushi Prajapati, Khush Shah, Premal Shah, Harshal Trivedi and Dhaval Joshi
dc.format.extent pp. 195-218
dc.language.iso en_US
dc.publisher Springer
dc.title Intelligent data dissemination in vehicular networks: leveraging reinforcement learning
dc.type Book Chapter
dc.relation.journal Deep learning based solutions for vehicular adhoc networks
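
To make the RL-based routing idea described in the abstract concrete, the following is a minimal, illustrative sketch (not taken from the chapter) of tabular Q-learning applied to next-hop selection. The state encoding, neighbour identifiers, and reward signal (e.g. delivery success or measured link quality) are hypothetical placeholders.

import random
from collections import defaultdict

class QLearningRouter:
    """Toy Q-learning agent that picks a next hop for a packet."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # Q[(state, next_hop)] -> estimated return
        self.alpha = alpha            # learning rate
        self.gamma = gamma            # discount factor
        self.epsilon = epsilon        # exploration probability

    def choose_next_hop(self, state, neighbours):
        # Epsilon-greedy policy: occasionally explore a random neighbour,
        # otherwise forward via the neighbour with the highest Q-value.
        if not neighbours:
            return None
        if random.random() < self.epsilon:
            return random.choice(neighbours)
        return max(neighbours, key=lambda n: self.q[(state, n)])

    def update(self, state, next_hop, reward, next_state, next_neighbours):
        # One-step Q-learning update:
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max((self.q[(next_state, n)] for n in next_neighbours), default=0.0)
        td_target = reward + self.gamma * best_next
        self.q[(state, next_hop)] += self.alpha * (td_target - self.q[(state, next_hop)])

# Example use with made-up states and neighbours; the reward here would come
# from observed delivery success, progress toward the destination, or link quality.
router = QLearningRouter()
hop = router.choose_next_hop(state="zone_A", neighbours=["veh_12", "veh_37"])
router.update("zone_A", hop, reward=1.0, next_state="zone_B", next_neighbours=["veh_37", "rsu_2"])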

