Recent state-of-the-art advances in perception for autonomous driving are driven by deep learning. Autonomous vehicles are typically equipped with different sensors (cameras, LiDARs, etc.), and to achieve accurate and reliable results, a multimodal fusion model is required that exploits the complementary properties of these sensors. In this context, many multimodal fusion methods have been proposed to tackle the perception problem.
Within the research project BEINTELLI, we are developing a scalable software solution for our vehicle, edge, and cloud infrastructure. The aim of this master thesis is to research and develop a generalized pipeline for multimodal fusion, with a focus on early versus late fusion techniques. The proposed method will be implemented and evaluated on both simulated and real-world data.
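For orientation, the early/late distinction can be sketched as follows. This is a minimal illustrative example, not the project's actual architecture: all names, feature dimensions, and the random-weight stand-ins for learned layers are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sensor feature vectors (sizes are illustrative).
cam_feat = rng.standard_normal(64)    # e.g. camera backbone output
lidar_feat = rng.standard_normal(32)  # e.g. LiDAR backbone output

def linear(x, out_dim, seed):
    """Toy stand-in for a learned layer: fixed random weights."""
    w = np.random.default_rng(seed).standard_normal((out_dim, x.shape[0]))
    return w @ x

# Early fusion: concatenate low-level features from both sensors,
# then run a single joint prediction head on the fused representation.
early_scores = linear(np.concatenate([cam_feat, lidar_feat]), 10, seed=1)

# Late fusion: run a separate head per modality,
# then combine the modality-level decisions (here: simple averaging).
cam_scores = linear(cam_feat, 10, seed=2)
lidar_scores = linear(lidar_feat, 10, seed=3)
late_scores = 0.5 * (cam_scores + lidar_scores)

print(early_scores.shape, late_scores.shape)  # both (10,)
```

Early fusion lets the joint head model cross-modal correlations but couples the pipeline to all sensors being available; late fusion keeps the per-sensor branches independent (and more robust to a failing sensor) at the cost of losing low-level cross-modal interactions. Exploring this trade-off is part of the thesis topic.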
- Good programming skills; experience with ROS is a plus
- Good knowledge of concepts related to autonomous driving (perception, planning, and control)
- Experience with deep learning frameworks (PyTorch, TensorFlow, ONNX)
- Self-reliant thinking and working, high motivation and commitment
- Note: concrete tasks will be formulated based on an interview and your interests
- D. Feng, C. Haase-Schütz, L. Rosenbaum, H. Hertlein, C. Gläser, F. Timm, W. Wiesbeck, and K. Dietmayer, “Deep multi-modal object detection and semantic segmentation for autonomous driving: Datasets, methods, and challenges,” IEEE Trans. Intelligent Transportation Systems, 2021.
- X. Chen, H. Ma, J. Wan, B. Li, and T. Xia, “Multi-view 3D object detection network for autonomous driving,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2017, pp. 6526–6534.
- J. Ku, M. Mozifian, J. Lee, A. Harakeh, and S. Waslander, “Joint 3D proposal generation and object detection from view aggregation,” in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, Oct. 2018, pp. 1–8.
We look forward to receiving your application as a single PDF, including the following documents:
- Current transcript of records
- Curriculum vitae in tabular form