Restricted Exploration Problem of Reinforcement Learning-based Traffic Signal Control Model and Development of Transferable Policy using Graph Neural Networks

Reinforcement learning (RL) has emerged as an alternative approach for optimizing traffic signal control systems. However, restricted exploration arises when the signal control model is trained in a traffic simulation with a predefined travel demand scenario. Under restricted exploration, the model obtains a partially trained policy that is valid only for a small part of the state space and not for unexplored ('never-before-seen') states. Although this issue critically affects the robustness of the signal control model, it has not been addressed in the literature. Therefore, this research aims to obtain a transferable policy as an effective way to enhance training efficiency when the model holds a partially trained policy due to restricted exploration. The key idea is to represent the state variable as a graph and to train the model with graph neural networks (GNNs). The policy can then infer a solution for an unexplored state by reusing knowledge already learned from a topologically equivalent state. The experiment is conducted with five test demand scenarios at different levels to investigate the transferability of the policy. The results show that the proposed GNN-based model adapts to changes in traffic states more transferably than a model that does not use the graph representation.
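The transfer mechanism described in the abstract rests on a general property of GNNs: because the aggregation weights are shared across all nodes, the network's output is permutation-equivariant, so two traffic states that are topologically equivalent (the same graph under a relabeling of nodes) produce correspondingly equivalent embeddings. The sketch below is not the authors' model; it is a minimal NumPy illustration, with all names (`message_passing`, the mean-aggregation rule, the interpretation of node features as queue lengths) assumed for the example.

```python
import numpy as np

def message_passing(adj, features, weight):
    """One round of mean-aggregation message passing.

    adj:      (n, n) adjacency matrix of the intersection graph
    features: (n, d) per-node traffic state (e.g. queue lengths)
    weight:   (d, h) weight matrix shared by every node -- this
              sharing is what makes the output depend only on the
              graph's topology, not on node labels.
    """
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                      # avoid division by zero
    neighbor_mean = (adj @ features) / deg   # average over neighbors
    return np.maximum((features + neighbor_mean) @ weight, 0.0)  # ReLU

# Permutation equivariance: relabeling the nodes of the state graph
# permutes the embeddings in the same way, so knowledge learned on one
# state carries over to any topologically equivalent state.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)  # a 4-node ring
X = rng.standard_normal((4, 3))
W = rng.standard_normal((3, 5))
P = np.eye(4)[[2, 0, 3, 1]]                  # a node relabeling

out_permuted = message_passing(P @ adj @ P.T, P @ X, W)
out_original = P @ message_passing(adj, X, W)
print(np.allclose(out_permuted, out_original))  # True
```

A plain fully-connected policy network, by contrast, ties each input position to a fixed detector, so the same relabeled state looks entirely new to it; this is one way to see why the graph representation helps on unexplored states.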
Publisher
The National Academies of Sciences
Issue Date
2021-01-27
Language
English
Citation

TRB 100th Annual Meeting

URI
http://hdl.handle.net/10203/305000
Appears in Collection
CE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
