Conflict Detection and Resolution in Sensor Fusion from Multi-Perspective Scenes

Level: Master Thesis

The main focus of the thesis is the detection and, where possible, resolution of conflicts in automated detections obtained from multiple (two or more) perspectives of the same scene. When several perspectives of the same observed scene are available, the combined information can be used to automatically (cross-)validate the output of visual detectors (CNNs/DNNs). Agreements and conflicts between the views can then be used to gain insight into failure modes and to improve the detection methods by generating training samples from corrected misclassifications.
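
To make the idea concrete, the following minimal sketch compares detections from two views on a common ground plane and reports agreements, label conflicts, and detections visible in only one view. It is an illustration only, not the prescribed method: it assumes each detection can be reduced to a ground-plane point via a known image-to-ground homography, and all function names, parameters, and thresholds are hypothetical.

    # Minimal sketch of cross-view conflict detection (illustrative only).
    # Assumes each detection is a (pixel_xy, class_label) pair and that the
    # homographies mapping image coordinates to a common ground plane are known.
    import numpy as np

    def to_ground_plane(points_px, H):
        # Project Nx2 pixel coordinates to the ground plane via homography H (3x3).
        pts = np.hstack([points_px, np.ones((len(points_px), 1))])  # homogeneous coords
        proj = pts @ H.T
        return proj[:, :2] / proj[:, 2:3]                           # dehomogenize

    def cross_validate(det_a, det_b, H_a, H_b, max_dist=1.0):
        # Returns (agreements, class_conflicts, missed):
        #   agreements      -- index pairs close in space with the same label
        #   class_conflicts -- index pairs close in space but with different labels
        #   missed          -- detections seen in only one of the two views
        ga = to_ground_plane(np.array([p for p, _ in det_a]), H_a)
        gb = to_ground_plane(np.array([p for p, _ in det_b]), H_b)
        agreements, class_conflicts, matched_b = [], [], set()
        for i, pa in enumerate(ga):
            # Greedy nearest-neighbour matching; a real implementation would
            # need a proper assignment strategy and temporal synchronization.
            dists = np.linalg.norm(gb - pa, axis=1)
            j = int(np.argmin(dists))
            if dists[j] <= max_dist:
                matched_b.add(j)
                if det_a[i][1] == det_b[j][1]:
                    agreements.append((i, j))
                else:
                    class_conflicts.append((i, j))   # same object, conflicting labels
        matched_a = {i for i, _ in agreements + class_conflicts}
        missed = ([i for i in range(len(det_a)) if i not in matched_a],
                  [j for j in range(len(det_b)) if j not in matched_b])
        return agreements, class_conflicts, missed

Matched pairs with differing labels, and detections missing from one view, are exactly the kind of conflicts whose automatic detection and resolution the thesis would investigate.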

Main Research Focus

  • Literature review of related work: (intermediate/late) sensor fusion, consensus, and automatic resolution strategies (with a focus on ML/DL methods)
  • Development of conflict detection and resolution methods (a toy consensus rule is sketched after this list)
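
As a purely illustrative starting point, one very simple resolution strategy is a confidence-weighted vote over the class labels reported by the individual views. The function below is a hypothetical sketch, not part of the thesis specification.

    # Hypothetical example of a simple resolution strategy: a confidence-weighted
    # vote over the class labels reported by the individual views.
    from collections import defaultdict

    def resolve_label(view_labels):
        # view_labels: list of (class_label, confidence) tuples, one per view.
        # Returns the label with the highest summed confidence.
        scores = defaultdict(float)
        for label, conf in view_labels:
            scores[label] += conf
        return max(scores, key=scores.get)

    # e.g. resolve_label([("car", 0.9), ("truck", 0.6), ("car", 0.7)]) -> "car"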

Prerequisites

  • Decent programming skills (Python and/or C)
  • Experience with sensor fusion, synchronization, interpolation, or numerical libraries (e.g. NumPy) would be helpful
  • Previous work in computer vision or with handling 2D/3D data (annotated images, point clouds) would also be helpful

References

  • B. Khaleghi, A. Khamis, F. O. Karray, and S. N. Razavi, “Multisensor data fusion: A review of the state-of-the-art,” Information Fusion, 2013.
  • M. Gabb, H. Digel, T. Müller and R. Henn, “Infrastructure-supported Perception and Track-level Fusion using Edge Computing,” IEEE Intelligent Vehicles Symposium (IV), 2019.
  • S. Seebacher et al., “Infrastructure Data Fusion for Validation and Future Enhancements of Autonomous Vehicles’ Perception on Austrian Motorways,” IEEE International Conference on Connected Vehicles and Expo (ICCVE), 2019.

Curious? Get in touch.

We look forward to receiving your application. Please include your current transcript of records and a curriculum vitae in tabular form; all documents should be in PDF format.