Ensuring Federated Learning Reliability for Infrastructure-Enhanced Autonomous Driving
Abstract
The application of machine learning techniques, particularly to autonomous driving, has grown rapidly in recent years, making the collection of high-quality datasets a prerequisite for training new models. However, concerns about privacy and data usage have created a growing demand for decentralized methods that can train models without centrally collecting data. Federated learning (FL) offers a potential solution by enabling individual clients to contribute to the learning process through model updates rather than raw training data. While FL has proven successful in many cases, new challenges have emerged, especially regarding network availability during training: because a single global instance is responsible for collecting updates from local clients, the network suffers downtime whenever the global server fails. In this study, we propose a concept that addresses this single point of failure by adding redundancy to the network. Rather than deploying a single global model, we deploy multiple global model replicas and use consensus algorithms to synchronize these replicas and keep them up to date. With these replicas, the network remains available even if a global instance fails. Our solution thus enables the development of reliable FL systems, particularly in system architectures suited to infrastructure-enhanced autonomous driving, and thereby supports the more effective realization of use cases in the context of cooperative, connected, and automated mobility.
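The redundancy scheme summarized above can be illustrated with a minimal sketch. This is not the paper's implementation: all names (`ReplicaServer`, `ReplicatedAggregator`, `fed_avg`) are hypothetical, and the consensus step is simplified to primary-backup replication with implicit failover, whereas a production system would use a full consensus protocol such as Raft.

```python
# Illustrative sketch only: federated averaging with replicated global
# aggregators, so training survives the failure of a global server.
from dataclasses import dataclass, field
from typing import List


def fed_avg(updates: List[List[float]]) -> List[float]:
    """Element-wise average of client model updates (FedAvg aggregation)."""
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]


@dataclass
class ReplicaServer:
    """One global-model replica with its current state."""
    model: List[float]
    round: int = 0
    alive: bool = True


class ReplicatedAggregator:
    """Keeps several global-model replicas synchronized (primary-backup)."""

    def __init__(self, num_replicas: int, init_model: List[float]):
        self.replicas = [ReplicaServer(list(init_model))
                         for _ in range(num_replicas)]

    @property
    def primary(self) -> ReplicaServer:
        # The first live replica acts as primary; if it fails,
        # the next live replica takes over with the latest state.
        return next(r for r in self.replicas if r.alive)

    def train_round(self, client_updates: List[List[float]]) -> List[float]:
        new_model = fed_avg(client_updates)
        # Synchronize: replicate the new global state to all live replicas.
        for r in self.replicas:
            if r.alive:
                r.model = list(new_model)
                r.round += 1
        return new_model


# Example: the network stays available even when the primary fails.
agg = ReplicatedAggregator(num_replicas=3, init_model=[0.0, 0.0])
agg.train_round([[1.0, 2.0], [3.0, 4.0]])   # global model -> [2.0, 3.0]
agg.replicas[0].alive = False                # primary server goes down
agg.train_round([[2.0, 2.0], [4.0, 4.0]])   # a backup continues training
```

Because every live replica holds the latest aggregated model, a client can submit its next update to any surviving server, which is the availability property the abstract claims.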