Multi-agent reinforcement learning-based arterial traffic signal control offers flexible, scalable control for complex networks through adaptive responsiveness and distributed coordination. Incorporating neighboring intersections' traffic information into the state features may improve control performance. Experiments are conducted to test a linear RL model's behavior under two training environments and different combinations of local and neighboring state features. The results show that both a joint training environment and information sharing among agents lead to better coordination and control performance, and that state features based on vehicle counts generally outperform those based on queue length in the main direction but underperform in the other directions.
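To illustrate the general setup described above, the following is a minimal sketch of a linear value-based agent whose state concatenates local and neighbor-shared traffic features. The class name, feature arguments, and hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class LinearSignalAgent:
    """Hypothetical linear Q-learning agent for one intersection (sketch only)."""

    def __init__(self, n_features, n_phases, alpha=0.01, gamma=0.95, epsilon=0.1):
        # One weight vector per signal phase (action).
        self.weights = np.zeros((n_phases, n_features))
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.rng = np.random.default_rng()

    def features(self, local_obs, neighbor_obs):
        # State: local vehicle counts or queue lengths concatenated with
        # information shared by neighboring intersections.
        return np.concatenate([local_obs, neighbor_obs])

    def q_values(self, phi):
        return self.weights @ phi

    def act(self, phi):
        # Epsilon-greedy phase selection.
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(self.weights.shape[0]))
        return int(np.argmax(self.q_values(phi)))

    def update(self, phi, action, reward, phi_next):
        # One-step Q-learning update on the linear approximator.
        td_target = reward + self.gamma * np.max(self.q_values(phi_next))
        td_error = td_target - self.q_values(phi)[action]
        self.weights[action] += self.alpha * td_error * phi
```

In this sketch, switching between "local-only" and "local plus neighbor" state features amounts to changing what is passed to `features()`, which mirrors the feature-combination comparisons described in the abstract.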