Research

YouTube as the data source for research on global traffic behaviour

The interactions between future cars and pedestrians should be designed to be understandable and safe worldwide. Crowdsourcing helps to go beyond WEIRD (Western, educated, industrialised, rich, democratic) samples, but we can go further and cover the whole world to gain a true understanding of how people behave around the globe and a better grasp of how future cars and transportation infrastructure should be designed. In the 21st century the internet has become ubiquitous, and widespread access to technology has created the phenomenon of ASMR driving videos. I established the work on populating the Pedestrians in YouTube (PYT) dataset, which includes 2051 hand-picked hours of day and night urban dashcam footage from 1268 towns and cities in 216 sovereign states and dependent territories. Using YOLO, we analysed aggregated pedestrian behaviour at both the city and country levels, and we are now working on going beyond YOLO to obtain a more precise and complete picture of what exactly happens on the streets of cities on all continents.
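To give a flavour of how such aggregation can work, here is a minimal sketch (not the actual PYT pipeline) that counts pedestrians in a dashcam clip with a pre-trained YOLO model from the Ultralytics package; the file name, weights, and sampling stride are illustrative assumptions.

```python
# Minimal sketch: counting pedestrians per sampled frame of dashcam footage
# with a pre-trained YOLO model. Weights, video path, and stride are
# placeholders, not the settings used for the PYT dataset.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pre-trained COCO weights; class 0 = "person"

def count_pedestrians(video_path: str, stride: int = 30) -> list[int]:
    """Return the number of detected pedestrians in every `stride`-th frame."""
    counts = []
    # stream=True yields results frame by frame instead of loading the whole video
    for i, result in enumerate(
        model.predict(source=video_path, classes=[0], stream=True, verbose=False)
    ):
        if i % stride == 0:
            counts.append(len(result.boxes))
    return counts

counts = count_pedestrians("dashcam_clip.mp4")
print(f"Mean pedestrians per sampled frame: {sum(counts) / max(len(counts), 1):.2f}")
```

Per-frame counts like these can then be aggregated per video and rolled up to city- and country-level statistics.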

Do we need human participants for human factors research?

The logical next step is to question whether we even need human participants to conduct (basic) human factors research. Of course, some hypotheses must be tested in a highly controlled environment with expensive equipment such as eye trackers. But some research questions can perhaps be answered through the “wisdom of humanity up until a certain point in time”, which is arguably what an LLM is. In this study, we used the GPT-4V vision-language model to compare LLM-based assessments of risk with findings from a crowdsourced study with 1378 participants. The conclusion was that population-level human risk assessments can be predicted using AI with a high degree of accuracy. We also explored the use of 11 LLMs to evaluate external human-machine interfaces (eHMIs) in automated vehicles.
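To make the idea concrete, below is a minimal sketch of how an image-based risk rating can be requested from a vision-language model through the OpenAI Python SDK; the model name, prompt wording, and rating scale are illustrative assumptions, not the exact protocol used in the study.

```python
# Minimal sketch: asking a vision-language model for a numeric risk rating of a
# traffic scene. Model name, prompt, and scale are placeholders.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def rate_risk(image_path: str) -> str:
    """Ask the model for a 0-100 risk rating of the traffic scene in the image."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the study used GPT-4V-era models
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "On a scale from 0 (no risk) to 100 (extreme risk), "
                         "how risky is this traffic scene for the ego vehicle? "
                         "Answer with a single number."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

print(rate_risk("scene.jpg"))
```

Repeating such queries over many scenes yields model-side ratings that can be correlated with crowdsourced human responses.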

Accelerating Autonomous Ship Control with Deep Reinforcement Learning and Maritime Expertise

The future of maritime autonomy hinges on intelligent control strategies that are both adaptive and robust, and my master’s research brings together the best of classic maritime control wisdom with the latest in artificial intelligence. Autopilots based on proven PID or PD control and line-of-sight (LOS) guidance have safely navigated ships for decades, but the emergence of deep reinforcement learning (DRL) offers a way to automate and optimize these tasks with unprecedented adaptability. In these works, we address the central challenge of DRL for autonomous vessels—the need for vast amounts of training data and computational time—by leveraging traditional controllers to “fast-track” DRL training through behavioral cloning, allowing the agent to learn efficiently from stable, time-tested strategies before exploring on its own. Our approach was rigorously validated both in simulation and in scaled experimental setups using a benchmark container ship model (KCS), where DRL agents not only achieved effective path-following and maneuvering but also demonstrated faster convergence and improved safety in challenging scenarios, including strong winds and dynamic conditions. The synergy of DRL with established maritime control not only bridges the gap between reliable practice and innovation but also sets a foundation for safer, more cost-effective, and future-ready autonomous ships. Read the full research in Ocean Engineering and Journal of Marine Engineering & Technology.
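The sketch below illustrates the behavioral-cloning warm start in its simplest form: a classical PD heading controller labels sampled states with rudder commands, and a small policy network is trained to imitate it before DRL fine-tuning. The gains, state ranges, and network are illustrative placeholders, not the actual KCS controller or agent from the papers.

```python
# Minimal sketch of behavioral cloning from a classical PD heading controller.
# All gains, ranges, and architectures are illustrative placeholders.
import numpy as np
import torch
import torch.nn as nn

KP, KD = 2.0, 1.0              # illustrative PD gains
RUDDER_MAX = np.radians(35.0)  # rudder saturation limit

def pd_rudder(heading_err: float, yaw_rate: float) -> float:
    """Classical PD law: rudder command from heading error and yaw rate."""
    return float(np.clip(-KP * heading_err - KD * yaw_rate, -RUDDER_MAX, RUDDER_MAX))

# Collect a demonstration dataset from the PD "expert" over sampled states.
states = np.random.uniform([-np.pi, -0.1], [np.pi, 0.1], size=(10_000, 2))
actions = np.array([[pd_rudder(e, r)] for e, r in states], dtype=np.float32)

policy = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
X = torch.tensor(states, dtype=torch.float32)
Y = torch.tensor(actions)

# Behavioral cloning is supervised regression onto the expert's actions.
for epoch in range(200):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(policy(X), Y)
    loss.backward()
    optimizer.step()

# The cloned policy can now initialize the DRL actor, so exploration starts
# from a stable, time-tested control law instead of random behavior.
```

Starting DRL from such a cloned policy is what shortens training, since the agent only has to refine a sensible baseline rather than discover basic course-keeping from scratch.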

Demo of a DRL-based controller for an autonomous ship.