Le Lézard

Chung-Ang University Researchers Develop a Meta-Reinforcement Learning Algorithm for Traffic Signal Control


SEOUL, South Korea, Nov. 14, 2022 /PRNewswire/ -- Traffic signal control affects the daily lives of people living in urban areas. Existing systems rely on theory- or rule-based controllers that alter traffic lights according to traffic conditions. The objective is to reduce vehicle delay under unsaturated traffic conditions and to maximize vehicle throughput during congestion. However, existing traffic signal controllers cannot switch between these objectives on their own, and a human controller can oversee only a few intersections. In view of this, recent advances in artificial intelligence have focused on alternative approaches to traffic signal control.

Current research on this front has explored reinforcement learning (RL) algorithms as a possible approach. However, RL algorithms do not always work well owing to the dynamic nature of traffic environments: traffic at an intersection depends on conditions at nearby junctions. While multi-agent RL can tackle this interference, its dimensionality grows exponentially with the number of intersections.

Recently, a team of researchers from Chung-Ang University in Korea led by Prof. Keemin Sohn proposed a meta-RL model to solve this issue. Specifically, the team developed a context-based meta-RL model for traffic signal control that incorporates an extended deep Q-network (EDQN). "Existing studies have devised meta-RL algorithms based on intersection geometry, traffic signal phases, or traffic conditions. The present research deals with the non-stationary aspect of signal control according to the congestion levels. The meta-RL works autonomously in detecting traffic states, classifying traffic regimes, and assigning signal phases," explains Prof. Sohn. The study was made available online on 30 September 2022 and published in the journal Computer-Aided Civil and Infrastructure Engineering.

The model works as follows. It determines the traffic regime (saturated or unsaturated) by utilizing a latent variable that indicates the overall environmental condition. Based on the traffic flow, the model either maximizes throughput or minimizes delay, much like a human controller. It does so by implementing traffic signal phases (the action). As with other learning agents, the action is guided by a "reward": here, the reward function is set to +1 or -1 according to whether traffic handling improved or worsened relative to the previous interval. Further, the EDQN acts as a decoder to jointly control traffic signals for multiple intersections.
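The regime-dependent reward described above can be sketched in a few lines of Python. This is an illustrative reading of the press release, not the authors' code: the function names and the choice of throughput and delay as the two metrics are assumptions based on the description.

```python
def reward(current_metric: float, previous_metric: float, saturated: bool) -> int:
    """Return +1 if traffic handling improved over the previous interval, else -1.

    Under saturation the metric is vehicle throughput (higher is better);
    under unsaturated conditions it is vehicle delay (lower is better).
    """
    if saturated:
        improved = current_metric > previous_metric  # throughput went up
    else:
        improved = current_metric < previous_metric  # delay went down
    return 1 if improved else -1
```

The same +1/-1 signal thus trains a single agent to pursue either objective, with the saturation regime deciding which direction counts as "better."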

Following its theoretical development, the researchers trained and tested their meta-RL algorithm using Vissim v21.0, a commercial traffic simulator, to mimic real-world traffic conditions. Further, a transportation network in southwest Seoul consisting of 15 intersections was chosen as a real-world testbed. Following meta-training, the model could adapt to new tasks during meta-testing without adjusting its parameters.
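Adapting to new tasks at test time without adjusting parameters is the hallmark of context-based meta-RL: the trained network's weights stay frozen, and adaptation happens by inferring a latent context from recent observations. The sketch below illustrates that idea in a minimal form; the function names, the mean-based context summary, and the Q-network interface are hypothetical stand-ins, not the paper's learned encoder or EDQN.

```python
import numpy as np

def infer_context(recent_observations: np.ndarray) -> np.ndarray:
    """Summarize recent traffic observations into a latent context vector.

    Here this is simply the mean over the observation window; the actual
    model would use a learned encoder to produce the latent variable.
    """
    return recent_observations.mean(axis=0)

def select_phase(q_network, observation: np.ndarray, context: np.ndarray) -> int:
    """Pick the signal phase with the highest Q-value, conditioned on context.

    q_network maps a concatenated (observation, context) vector to one
    Q-value per candidate signal phase. No weights are updated here.
    """
    q_values = q_network(np.concatenate([observation, context]))
    return int(np.argmax(q_values))
```

Because adaptation is pure inference over the context, the same frozen policy can behave differently in saturated and unsaturated traffic.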

The simulation experiments revealed that the proposed model could switch control tasks (via transitions) without any explicit traffic information. It could also differentiate between rewards according to the saturation level of traffic conditions.

The researchers pointed to the need for an even more precise algorithm to consider different saturation levels from intersection to intersection. "Existing research has employed reinforcement learning for traffic signal control with a single fixed objective. In contrast, this work has devised a controller that can autonomously select the optimal target based on the latest traffic condition. The framework, if adopted by traffic signal control agencies, could yield travel benefits that have never been experienced before," concludes an optimistic Prof. Sohn. 

Reference
DOI: https://doi.org/10.1111/mice.12924

Title of original paper: A meta-reinforcement learning algorithm for traffic signal control to automatically switch different reward functions according to the saturation level of traffic flows

Journal: Computer-Aided Civil and Infrastructure Engineering

About Chung-Ang University

Website: https://neweng.cau.ac.kr/index.do 

Contact:
Se-Jin Oh
02-820-6614
[email protected]

SOURCE Chung-Ang University

