Docent Lecture by Lars Hammarstrand: "Where am I? – An existential question for a self-driving vehicle?".
On Wednesday 27 March Lars Hammarstrand will hold a lecture in connection with his promotion to the academic title Oavlönad docent at the department. His lecture is titled "Where am I? - An existential question for a self-driving vehicle?".
What is your main point in your promotion lecture?
The existential questions for a truly self-driving vehicle are: how can it localise itself on the road and in the road network, and how can it do this with sufficient accuracy, in all weather and in every season, year in and year out? These questions are tough, and as of yet we do not have any definitive answers. What we do have are bits and pieces that on their own can solve parts of the problem.
One promising approach is to use a combination of different sensing modalities, e.g., radar, camera, and lidar, together with updatable maps describing the positions of stable "landmarks" that the sensors can observe. Using these maps, we can position the vehicle by matching our current sensor observations with the landmarks in the map. In the lecture, we will look at different approaches to doing this and discuss possible stable landmarks, especially for radar and camera sensors.
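The matching idea can be made concrete with a small sketch. Assuming the data association is already done (each sensor observation is paired with its map landmark), one common way to recover the vehicle's pose is a least-squares rigid alignment, often called the Kabsch or Procrustes method. This is an illustrative choice, not necessarily the method discussed in the lecture, and all numbers below are synthetic.

```python
import numpy as np

def estimate_pose(obs, map_pts):
    """Least-squares rigid alignment (Kabsch): find rotation R and
    translation t such that R @ obs[i] + t ~= map_pts[i].
    R is then the vehicle's heading and t its position in the map."""
    obs = np.asarray(obs, float)          # landmarks seen in the vehicle frame, (N, 2)
    map_pts = np.asarray(map_pts, float)  # the same landmarks in the map frame, (N, 2)
    mu_o, mu_m = obs.mean(axis=0), map_pts.mean(axis=0)
    H = (obs - mu_o).T @ (map_pts - mu_m)        # 2x2 cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_m - R @ mu_o
    return R, t

# Synthetic check: a vehicle at (10, 5), heading rotated 90 degrees
# relative to the map's x-axis, observing three mapped landmarks.
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
p_true = np.array([10.0, 5.0])
map_pts = np.array([[12.0, 7.0], [8.0, 9.0], [11.0, 2.0]])
obs = (map_pts - p_true) @ R_true  # what the onboard sensor would measure
R_est, t_est = estimate_pose(obs, map_pts)
```

In practice the hard part is upstream of this step: deciding which observations are stable landmarks and associating them correctly with the map, which is exactly where the sensor-specific questions in the lecture come in.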
What has made you especially interested in research on localisation for self-driving vehicles?
I think it is a very challenging problem that is highly relevant to solve. In order to solve it, I believe that you need to use a mix of Bayesian statistics, machine learning and classical geometry, all of which I find fascinating on their own but even more so when combined to solve a challenging real-world problem.
What, or who, inspires you as a researcher?
The part that I enjoy most about being a researcher is the possibility to discuss and come up with new, creative and innovative ideas and solutions together with other skilled researchers.
ABSTRACT
"Where am I? – An existential question for a self-driving vehicle?".
One of the core problems that a self-driving vehicle needs to solve is to localise itself on the road and in the road network. Knowing the position of the vehicle, be it in a local or a global frame, is essential in order to interpret the traffic scene (scene understanding), plan a safe and comfortable path (path planning), and navigate to the final destination (navigation). The key challenge here is to perform the localisation with sufficient accuracy, in all weather and in every season, year in and year out.
One might be tempted to think that the localisation problem was solved long ago with the introduction of global navigation satellite systems (GNSS), such as GPS. However (spoiler alert), although there are GNSS variants, e.g. real-time kinematics (RTK), that can achieve the desired accuracy under favourable conditions, such systems are not robust enough to handle all situations. This is especially true in urban environments, where line-of-sight to the satellites is limited.
So, instead of relying on a single approach to perform the localisation, a self-driving vehicle needs to combine many different independent approaches to achieve the required accuracy and robustness. In this lecture, we will try to give an overview of some of these methods. We will focus on methods that use current observations from onboard sensors, e.g., radar, camera and lidar, to perform localisation in a pre-built map containing the positions of stable landmarks that the sensors can observe. More specifically, we will look at using observations from radar and camera sensors, discuss which landmarks are stable for these sensors, and show how they can be used to address the key challenges listed above.
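As a toy illustration of combining independent localisation sources, the sketch below fuses two one-dimensional position estimates, say a GNSS fix and a landmark-based estimate along one axis, by inverse-variance weighting. This is the minimum-variance combination for independent Gaussian errors (and the essence of a Kalman-filter measurement update); the specific numbers are made up for illustration.

```python
def fuse_estimates(estimates):
    """Inverse-variance weighted fusion of independent 1D position
    estimates, each given as (value, variance). The result is the
    minimum-variance unbiased combination under independent
    Gaussian error assumptions."""
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * x for (x, _), w in zip(estimates, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var

# Made-up numbers: a GNSS fix (103.0 m, variance 2.0 m^2) and a more
# precise landmark-based estimate (100.0 m, variance 0.5 m^2).
position, variance = fuse_estimates([(103.0, 2.0), (100.0, 0.5)])
# The fused estimate lands closer to the lower-variance source,
# and its variance is smaller than either input's.
```

The same principle extends to full vehicle poses with covariance matrices, which is how multiple independent localisation methods can jointly deliver the robustness a single method lacks.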
