SCIS & ISIS
SCIS & ISIS 2004
Displaying 1-4 of 4 articles from this issue
  • Toshihiko Miyagi
    Session ID: TUA-1-1
    Published: 2004
    Released on J-STAGE: November 12, 2018
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper proposes a new algorithm for finding disaggregate user equilibria on a congested network, where each driver is assumed to be an agent who performs reinforcement learning to obtain the maximal payoff (minimum loss) under limited route information. A reinforcement learning scheme with endogenously determined learning-efficiency parameters is presented, and its relation to the user equilibrium is also explored. (An illustrative sketch of this kind of learning dynamic follows the PDF link below.)
    Download PDF (188K)
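A minimal sketch of learning-based route choice of the kind the abstract describes, not the paper's exact algorithm: the two-route network, the congestion functions, and the fixed learning-efficiency (step-size) parameter ALPHA are all assumptions for illustration. Each agent keeps a cost estimate per route, mostly picks the route it believes is cheapest, and nudges its estimate toward the experienced travel time; flows tend to settle where the used routes have similar times.

```python
import random

N_AGENTS = 100          # assumed population size
N_DAYS = 500            # assumed number of learning iterations
ALPHA = 0.1             # learning-efficiency (step-size) parameter, fixed here

def travel_time(route, flow):
    # Hypothetical congestion functions for two parallel routes
    if route == 0:
        return 10.0 * (1.0 + (flow / 60.0) ** 2)
    return 15.0 * (1.0 + (flow / 80.0) ** 2)

values = [[0.0, 0.0] for _ in range(N_AGENTS)]   # per-agent route cost estimates

for day in range(N_DAYS):
    # epsilon-greedy choice under limited route information
    choices = [min((0, 1), key=lambda r: values[i][r])
               if random.random() > 0.1 else random.randint(0, 1)
               for i in range(N_AGENTS)]
    flows = [choices.count(r) for r in (0, 1)]
    for i, r in enumerate(choices):
        cost = travel_time(r, flows[r])
        # move the estimate toward the experienced cost
        values[i][r] += ALPHA * (cost - values[i][r])

print("final flows:", flows)
print("route times:", [round(travel_time(r, flows[r]), 2) for r in (0, 1)])
```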
  • Yos Sunitiyoso, Shoji Matsumoto
    Pages TUA-1-2-
    Published: 2004
    Released on J-STAGE: November 11, 2018
    CONFERENCE PROCEEDINGS FREE ACCESS

    This study applies an agent-based approach to modelling a transportation system. Exploiting the advantage of an agent-based model, namely that it can be validated at the individual level, a social dilemma situation of travel mode choice is modeled and viewed as a complex system. The inductive-learning capability of travelers is combined with an evolutionary approach in order to simulate the travelers' learning process. A user-equilibrium point, as predicted by conventional equilibrium analysis, could be reached and stabilized. The stable situation is produced by the interaction process among agents and by the behavioral change process of each agent, without a central or external rule that organizes an objective function for the system. The study also reveals some conditions that may produce other stable situations in addition to the user-equilibrium point. An emergent situation, combined with travelers' sensitivity to payoff differences, is observed to be influential. (A minimal mode-choice sketch follows the PDF link below.)

    Download PDF (355K)
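A minimal illustrative sketch of a mode-choice social dilemma with evolutionary-style imitation, under assumed payoff functions and a hypothetical SENSITIVITY parameter (none of these values come from the paper): travelers choose between car and bus, car payoff falls as the car share grows, and a traveler copies a randomly met peer with a probability that grows with the payoff difference. The population share settles near the point where the two payoffs are equal.

```python
import random

N = 200                 # assumed number of travelers
SENSITIVITY = 0.5       # assumed sensitivity to payoff differences
modes = [random.choice(("car", "bus")) for _ in range(N)]

def payoff(mode, car_share):
    if mode == "car":
        return 1.0 - 1.5 * car_share   # congestion penalty grows with car share
    return 0.4                          # constant bus payoff

for step in range(300):
    car_share = modes.count("car") / N
    payoffs = [payoff(m, car_share) for m in modes]
    # pairwise imitation: an evolutionary-style behavioral change rule
    for i in range(N):
        j = random.randrange(N)
        diff = payoffs[j] - payoffs[i]
        if diff > 0 and random.random() < SENSITIVITY * diff:
            modes[i] = modes[j]

print("car share after learning:", modes.count("car") / N)
```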
  • Shoichiro Nakayama, Jun-ichi Takayama, Tetsuo Sato
    Pages TUA-1-3-
    Published: 2004
    Released on J-STAGE: November 11, 2018
    CONFERENCE PROCEEDINGS FREE ACCESS

    In general, a transportation system consists of many drivers who choose their routes, learning from their own experiences and from the information provided. In this study, drivers are assumed to reason and learn inductively based on their experiences. We develop an agent-based transportation system simulation model in which each agent learns which route to choose from its experiences; we call such a learning agent an adaptive agent. We examine the behavior of the agents and the network flow through simulation. The results of the numerical experiments can be summarized as follows: 1) the system converges to Wardrop equilibrium; 2) the grades (the number of times an agent chooses the fastest route) vary among agents; 3) the differences in grades arise contingently; and 4) agents who choose routes randomly greatly degrade the system's stability. (A minimal sketch of such adaptive route learning follows the PDF link below.)

    Download PDF (310K)
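A minimal sketch of adaptive agents of the kind the abstract describes, with an assumed three-route network and an assumed averaging rule rather than the paper's model: each agent tracks its average experienced time per route, mostly picks the route with the smallest average, and a "grade" counts how often an agent happens to be on the fastest route that day. Used routes should end up with roughly equal times (a Wardrop-like state), while grades differ across agents.

```python
import random

N, DAYS = 120, 400
routes = (0, 1, 2)
free_flow = {0: 8.0, 1: 10.0, 2: 12.0}     # hypothetical free-flow times
capacity = {0: 40.0, 1: 50.0, 2: 60.0}     # hypothetical capacities

avg_time = [{r: 0.0 for r in routes} for _ in range(N)]
counts = [{r: 0 for r in routes} for _ in range(N)]
grades = [0] * N

for day in range(DAYS):
    picks = [min(routes, key=lambda r: avg_time[i][r])
             if random.random() > 0.05 else random.choice(routes)
             for i in range(N)]
    flow = {r: picks.count(r) for r in routes}
    times = {r: free_flow[r] * (1 + (flow[r] / capacity[r]) ** 2) for r in routes}
    fastest = min(routes, key=times.get)
    for i, r in enumerate(picks):
        counts[i][r] += 1
        # running average of experienced travel time on the chosen route
        avg_time[i][r] += (times[r] - avg_time[i][r]) / counts[i][r]
        if r == fastest:
            grades[i] += 1

# Wardrop check: used routes should end up with (roughly) equal travel times
print("route times:", {r: round(times[r], 2) for r in routes})
print("grade spread:", min(grades), "-", max(grades))
```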
  • Bona Frazila Russ, Tadashi Yamada, Jun Castro
    Pages TUA-1-4-
    Published: 2004
    Released on J-STAGE: November 11, 2018
    CONFERENCE PROCEEDINGS FREE ACCESS

    This paper describes a model for determining an optimal freight transport network expansion plan by selecting a set of feasible actions from a number of possible actions. The model is developed within the framework of a bi-level programming problem, where a multimodal multi-user assignment technique is incorporated in the lower-level problem and the combination of actions for capacity expansion is optimised using genetic algorithm-based procedures in the upper-level problem. Several genetic-algorithm-based procedures are applied to investigate the performance of the model. Application of the model to the actual freight transport network of Java Island, Indonesia, reveals that it can adequately select the best combination of actions for optimal multimodal freight network expansion. (A rough sketch of the bi-level GA structure follows the PDF link below.)

    Download PDF (327K)
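A rough sketch of the bi-level structure only, with made-up candidate actions, budget, and a stand-in lower level (the paper's lower level is a multimodal multi-user assignment, which is replaced here by a simple cost function rewarding added capacity within a budget): the upper level searches over combinations of expansion actions with a basic genetic algorithm using one-point crossover and bit-flip mutation.

```python
import random

ACTIONS = [  # (capacity gain, investment cost) for hypothetical candidate actions
    (30, 5), (20, 3), (50, 9), (10, 1), (25, 4), (40, 7)
]
BUDGET = 15

def lower_level_cost(plan):
    """Stand-in for the multimodal assignment: returns a total system cost."""
    gain = sum(a[0] for a, used in zip(ACTIONS, plan) if used)
    spend = sum(a[1] for a, used in zip(ACTIONS, plan) if used)
    if spend > BUDGET:
        return float("inf")          # infeasible combination of actions
    return 1000.0 / (100.0 + gain)   # congestion cost falls as capacity grows

def evolve(pop_size=30, generations=60):
    pop = [[random.random() < 0.5 for _ in ACTIONS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lower_level_cost)            # evaluate via the lower level
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(ACTIONS))   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                 # bit-flip mutation
                k = random.randrange(len(ACTIONS))
                child[k] = not child[k]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lower_level_cost)

best = evolve()
print("selected actions:", [i for i, used in enumerate(best) if used])
```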