Publications

Working documents

  • Evaluation of Pedestrian Behaviour in 157 Cities with 285 Hours of Dashcam Footage from YouTube
    Alam, M. S., Martens, M., Bazilinskyy, P.
    In preparation.
    Interaction between future cars and pedestrians should be designed to be understandable and safe globally. While previous research has studied vehicle-pedestrian interactions within specific cities or countries, this study offers a more scalable and robust approach by examining pedestrian behaviour worldwide. We present a dataset, PYT, which includes 285 hours of day and night dashcam YouTube footage from 157 cities in 59 countries. We detected pedestrian movements, focusing on crossing speed and crossing decision time during road crossings, derived from the bounding boxes produced by YOLO. Videos were carefully selected against specific criteria to ensure urban settings and adequate pedestrian interactions. Results revealed statistically significant cross-cultural variations in pedestrian behaviour, influenced by socioeconomic and environmental factors such as Gross Metropolitan Product (GMP), traffic-related mortality, and literacy. The dataset is publicly available to encourage further research into global pedestrian behaviour.
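As a rough illustration of the kind of measurement involved, the sketch below estimates a pedestrian's on-screen speed from per-frame YOLO bounding boxes. The function names and the (x1, y1, x2, y2) box format are assumptions for illustration, not the paper's actual pipeline, which would also need tracking and camera-motion compensation.

```python
def centroid(box):
    """Centre point of an (x1, y1, x2, y2) bounding box, in pixels."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def pixel_speed(boxes, fps):
    """Mean centroid displacement per second over one pedestrian's track.

    boxes: list of (x1, y1, x2, y2) tuples, one per consecutive frame.
    fps:   frame rate of the source video.
    """
    if len(boxes) < 2:
        return 0.0
    total = 0.0
    for prev, curr in zip(boxes, boxes[1:]):
        (px, py), (cx, cy) = centroid(prev), centroid(curr)
        total += ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
    # Average per-frame displacement, scaled to pixels per second.
    return total / (len(boxes) - 1) * fps
```

A pixel-space speed like this would still need a ground-plane calibration step to convert to metres per second.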
  • Generating realistic traffic scenarios: A deep learning approach using generative adversarial networks (GANs)
    Alam, M. S., Martens, M., Bazilinskyy, P.
    Submitted for publication.
    Diverse and realistic traffic scenarios are crucial for testing systems and human behaviour in transportation research. This study leverages Generative Adversarial Networks (GANs) for video-to-video translation to generate a variety of traffic scenes, capturing the nuances of urban driving environments and enriching both realism and breadth. One advantage of this approach is the ability to model how road users adapt and behave differently across the varying conditions depicted in the translated videos: certain scenarios may exhibit more cautious driver behaviour, while others may involve heavier traffic and higher speeds. Maintaining consistent driving patterns in the translated videos improves their resemblance to real-world scenarios, increasing the reliability of the data for testing and validation. Ultimately, this approach provides researchers and practitioners with a valuable method for evaluating algorithms and systems under challenging conditions, advancing transportation models and automated driving technologies.
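For readers unfamiliar with the adversarial objective underlying GANs, the minimal sketch below computes the standard (non-saturating) generator and discriminator losses for single scalar discriminator outputs. This is the generic GAN objective, not the specific video-to-video model used in the paper.

```python
import math

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy: push D(real) -> 1 and D(fake) -> 0.

    d_real, d_fake: discriminator outputs in (0, 1) for a real sample
    and a generated sample.
    """
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake):
    """Non-saturating generator loss: push D(G(z)) -> 1."""
    return -math.log(d_fake)
```

In a video-to-video setting these losses are applied per frame (or per clip), usually alongside temporal-consistency terms that keep driving patterns coherent across frames.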

2024

  • Harnessing traditional controllers for fast-track training of deep reinforcement learning control strategies
    Alam, M. S., & Carlucho, I.
    Journal of Marine Engineering & Technology (2024)

    In recent years, autonomous ships have become a focal point for research, with a specific emphasis on improving ship autonomy. Machine learning controllers, especially those based on reinforcement learning, have seen significant progress. However, addressing the substantial computational demands and intricate reward structures required for their training remains critical. This paper introduces an approach that bridges conventional maritime control methods with deep reinforcement learning (DRL) techniques for vessels, exploring the synergies between stable traditional controllers and adaptive DRL methods known for their ability to handle complexity. To tackle the time-intensive nature of DRL training, we use existing traditional controllers to expedite training by transferring knowledge from these controllers to guide DRL exploration. We assess the effectiveness of this approach through various ship maneuvering scenarios, including different trajectories and external disturbances such as wind. The results demonstrate accelerated DRL training while maintaining stringent safety standards. This approach can bridge the gap between traditional maritime practice and contemporary DRL advances, facilitating the integration of autonomous systems into maritime operations, with promising implications for vessel efficiency, cost-effectiveness, and safety.
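One common way to let a traditional controller guide DRL exploration is to follow the controller's action with a probability that is annealed towards zero as training progresses. The sketch below is a hedged illustration of that idea; the linear annealing schedule and function names are assumptions, not the paper's exact scheme.

```python
import random

def guided_action(policy_action, controller_action, episode, anneal_episodes=100):
    """Blend a traditional controller into DRL exploration.

    With probability beta (decaying linearly from 1 to 0 over
    anneal_episodes) the agent follows the stable traditional
    controller; otherwise it follows its own learned policy.
    """
    beta = max(0.0, 1.0 - episode / anneal_episodes)  # 1 -> 0 over training
    if random.random() < beta:
        return controller_action  # exploit the prior controller's knowledge
    return policy_action          # explore with the DRL policy
```

Early episodes thus generate trajectories dominated by the safe, known-good controller, which the replay buffer turns into informative training data for the DRL agent.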
  • From A to B with ease: User-centric interfaces for shuttle buses
    Alam, M. S., Subramanian, T., Martens, M., Remlinger, W., & Bazilinskyy, P.
    16th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutoUI) Stanford, CA, USA (2024)

    User interfaces are crucial for easy travel. To understand user preferences for travel information during automated shuttle rides, we conducted an online survey with 51 participants from 8 countries. The survey focused on the information passengers wish to access and their preferences for using mobile, private, and public screens during boarding and travelling on the bus. It also gathered opinions on the usage of Near-Field Communication (NFC) for shuttle bus confirmation and viewing assistance to help passengers stand precisely where the shuttle will arrive, overcoming navigation and language barriers. Results showed that 72.6% of participants indicated a need for NFC and 82.4% for viewing assistance. There was a strong correlation between preferences for shuttle bus schedules, route details (r=0.55), and next-stop information (r=0.57) on mobile screens, suggesting that passengers who value one type of information are likely to value related kinds too.

2023

  • AI on the water: Applying DRL to autonomous vessel navigation
    Alam, M. S., Sanjeev Kumar, R.S., & Somayajula, A.
    Proceedings of the Sixth International Conference in Ocean Engineering (ICOE2023) (2023)

    Human decision-making errors cause a majority of globally reported marine accidents. As a result, automation in the marine industry has been gaining more attention in recent years. Obstacle avoidance is particularly challenging for an autonomous surface vehicle in an unknown environment. We explore the feasibility of using the Deep Q-Network (DQN) algorithm, a deep reinforcement learning approach, to control an underactuated autonomous surface vehicle that follows a known path while avoiding collisions with static and dynamic obstacles. The ship's motion is described using a three-degree-of-freedom (3-DOF) dynamic model. The KRISO container ship (KCS) is chosen for this study because it is a benchmark hull used in several studies, and its hydrodynamic coefficients are readily available for numerical modelling. This study shows that Deep Reinforcement Learning (DRL) can successfully achieve path following and collision avoidance, and is a promising candidate for further investigation towards human-level or better decision-making for autonomous marine vehicles.
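At the core of DQN is the Q-learning Bellman backup, which the network approximates. The tabular sketch below shows that single update step; the states, actions, and default hyperparameters are placeholders for illustration, not the ship-model specifics used in the paper.

```python
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One Q-learning backup: Q(s,a) += alpha * (target - Q(s,a)).

    q: nested dict mapping state -> {action: value}. DQN replaces this
    table with a neural network trained on the same target.
    """
    # Greedy bootstrap value of the successor state (0 if unseen).
    best_next = max(q[next_state].values()) if q.get(next_state) else 0.0
    target = reward + gamma * best_next
    q.setdefault(state, {}).setdefault(action, 0.0)
    q[state][action] += alpha * (target - q[state][action])
    return q[state][action]
```

In the path-following setting, a state would encode cross-track error and heading, actions would be discrete rudder commands, and the reward would penalize deviation from the desired path.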
  • Navigating the Ocean with DRL: Path following for marine vessels
    Jose, J., Alam, M. S., & Somayajula, A.S.
    Proceedings of the Sixth International Conference in Ocean Engineering (ICOE2023) (2023)

    Human error is a substantial factor in marine accidents, accounting for 85% of all reported incidents. By reducing the need for human intervention in vessel navigation, AI-based methods can potentially reduce the risk of accidents. AI techniques such as Deep Reinforcement Learning (DRL) have the potential to improve vessel navigation in challenging conditions, such as restricted waterways and the presence of obstacles, because DRL algorithms can optimize multiple objectives, such as path following and collision avoidance, while being more efficient to implement than traditional methods. In this study, a DRL agent is trained using the Deep Deterministic Policy Gradient (DDPG) algorithm for path following and waypoint tracking. The trained agent is then evaluated against a traditional PD controller with an Integral Line of Sight (ILOS) guidance system on the same tasks. The KRISO Container Ship (KCS) is used as a test case for evaluating the performance of the different controllers. The ship's dynamics are modelled using the Maneuvering Modelling Group (MMG) model. This mathematical simulation is used to train the DRL-based controller and to tune the gains of the traditional PD controller. The simulation environment is also used to assess the controllers' effectiveness in the presence of wind.
  • Data-driven control for marine vehicle maneuvering
    Alam, M. S.
    (2023)

    The majority of global marine accidents are caused by human decision-making errors, which has resulted in increased interest in automation within the marine industry. However, obstacle avoidance for autonomous surface vehicles in unknown environments is particularly difficult. This study investigates the use of deep reinforcement learning (DRL) to control an underactuated autonomous surface vehicle following a predetermined path while avoiding collisions with static and dynamic obstacles. The ship's movement is modelled using a three-degree-of-freedom (3-DOF) dynamic model, with the KRISO container ship (KCS) selected due to its extensive use in previous research and its readily available hydrodynamic coefficients for numerical modelling. The study evaluates the performance of various DRL algorithms, including Deep Q-Network (DQN), Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO), for path following and their effectiveness in the presence of wind, comparing them to a traditional PD controller. The study also explores DQN and DDPG for both static and dynamic obstacle avoidance and proposes a hybrid architecture that uses two networks for improved path-following and obstacle-avoidance capabilities.
  • Deep reinforcement learning based controller for ship navigation
    Deraj, R., Sanjeev Kumar, R.S., Alam, M. S., & Somayajula, A.
    Ocean Engineering (2023)

    A majority of marine accidents can be attributed to errors in human decisions. Through automation, the occurrence of such incidents can be minimized; automation in the marine industry has therefore been receiving increased attention in recent years. This paper investigates the automation of the path-following action of a ship. A deep Q-learning approach is proposed to solve the path-following problem. This method falls under the broader area of deep reinforcement learning (DRL) and is well suited for such tasks, as it can learn to take optimal decisions through sufficient experience while balancing the exploration and exploitation of an agent operating in an environment. A three-degree-of-freedom (3-DOF) dynamic model is adopted to describe the ship's motion. The KRISO container ship (KCS) is chosen for this study as it is a benchmark hull used in several studies, and its hydrodynamic coefficients are readily available for numerical modelling. Numerical simulations of the turning-circle and zig-zag maneuver tests are performed to verify the accuracy of the proposed dynamic model. A reinforcement learning (RL) agent is trained to interact with this numerical model to achieve waypoint tracking. Finally, the proposed approach is investigated not only through numerical simulations but also through model experiments using a 1:75.5 scale model.
  • Comparison of path following in ships using modern and traditional controllers
    Sanjeev Kumar, R.S., Alam, M. S., Reddy, B., & Somayajula, A.S.
    Proceedings of the Sixth International Conference in Ocean Engineering (ICOE2023) (2023)

    Vessel navigation is difficult in restricted waterways and in the presence of static and dynamic obstacles. This difficulty can be attributed to the high-level decisions taken by humans during these maneuvers, which is evident from the fact that 85% of reported marine accidents are traced back to human error. Artificial-intelligence-based methods offer a way to eliminate human intervention in vessel navigation. Newer methods like Deep Reinforcement Learning (DRL) can optimize multiple objectives, such as path following and collision avoidance, at the same time while being computationally cheaper to implement than traditional approaches. Before addressing the challenge of collision avoidance alongside path following, the performance of DRL-based controllers on the path-following task alone must be established. Therefore, this study trains a DRL agent using the Proximal Policy Optimization (PPO) algorithm and tests it against a traditional PD controller guided by an Integral Line of Sight (ILOS) guidance system. The KRISO Container Ship (KCS) is chosen to test the different controllers. The ship dynamics are mathematically simulated using the Maneuvering Modelling Group (MMG) model developed in Japan. The simulation environment is used to train the DRL-based controller and to tune the gains of the traditional PD controller. The effectiveness of the controllers in the presence of wind is also investigated.

Download all papers as a BibTeX (.bib) file here.

* Joint first author.