When a mobile unit autonomously navigates an area, it needs to recognize infrastructure elements and interpret that information. In a complex, changing environment, it must identify obstacles, build a virtual representation of the environment, and use that representation for localization and navigation.
Simultaneous Localization and Mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of a mobile unit's location within it. Despite the great potential of SLAM for inspection, mapping, and localization operations, and the advanced state of the art in academic research, adoption of SLAM for practical applications in industries that use mobile systems remains very limited.
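To make the problem concrete, the following toy example (a hypothetical 1D illustration, not the project's actual method) shows the two intertwined estimates: a robot's drifting pose and the position of an initially unknown landmark, each refined from noisy observations of the other.

```python
import random

random.seed(0)

true_pose = 0.0
true_landmark = 10.0   # position of a fixed landmark, unknown to the robot

est_pose = 0.0         # estimated robot position (localization)
est_landmark = None    # estimated landmark position (mapping)
gain = 0.5             # simple correction gain (illustrative, not tuned)

for step in range(20):
    # Motion: move forward 1.0; odometry noise makes the estimate drift.
    true_pose += 1.0
    est_pose += 1.0 + random.gauss(0.0, 0.1)

    # Observation: noisy range measurement to the landmark.
    z = (true_landmark - true_pose) + random.gauss(0.0, 0.05)

    if est_landmark is None:
        # Mapping: initialize the landmark from the first observation.
        est_landmark = est_pose + z
    else:
        # Localization and mapping correct each other: the innovation
        # (measured minus predicted range) pulls both estimates.
        innovation = z - (est_landmark - est_pose)
        est_pose -= gain * innovation
        est_landmark += 0.2 * innovation

print(f"pose error: {abs(est_pose - true_pose):.2f}")
print(f"landmark error: {abs(est_landmark - true_landmark):.2f}")
```

The key point the sketch makes is the coupling: neither the map nor the pose can be estimated accurately without the other, which is what makes SLAM harder than either localization or mapping alone.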
The focus of this project is on improving SLAM by including semantic information in the map and by making localization more robust to changes in the environment. To this end, we will investigate hybrid representations consisting of multiple layers, which make it possible to incorporate semantic information and to merge information from different sensors and from different mobile units.
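As a rough illustration of what such a hybrid representation could look like (a sketch under our own assumptions, not the project's actual design), the structure below pairs a metric layer holding geometry with a semantic layer holding labels, and supports merging maps from different units:

```python
from dataclasses import dataclass, field

@dataclass
class MetricLayer:
    # Occupied cells of a 2D occupancy grid, keyed by (x, y) index.
    occupied: set = field(default_factory=set)

@dataclass
class SemanticLayer:
    # Semantic label per cell, e.g. "wall", "door", "charging_station".
    labels: dict = field(default_factory=dict)

@dataclass
class HybridMap:
    metric: MetricLayer = field(default_factory=MetricLayer)
    semantic: SemanticLayer = field(default_factory=SemanticLayer)

    def add_observation(self, cell, label=None):
        # Geometry always goes into the metric layer; a semantic label,
        # when available, is attached to the same cell.
        self.metric.occupied.add(cell)
        if label is not None:
            self.semantic.labels[cell] = label

    def merge(self, other):
        # Merging maps from different sensors or mobile units:
        # union of geometry, with the other map's labels taking precedence.
        self.metric.occupied |= other.metric.occupied
        self.semantic.labels.update(other.semantic.labels)

# Usage: two mobile units map overlapping areas and merge their results.
m1, m2 = HybridMap(), HybridMap()
m1.add_observation((0, 0), "wall")
m2.add_observation((0, 1), "door")
m1.merge(m2)
print(sorted(m1.semantic.labels.items()))
```

Keeping geometry and semantics in separate layers means localization can fall back on stable semantic elements (walls, doors) even when the metric layer changes, which is the robustness the project targets.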
Do you want to benefit from this basic research and discover where it leads? We are always looking for companies that can propose interesting cases to validate the results or that can participate in the user group. By doing so, you can follow the progress closely.