Zhaojian Li's project, "Collaborative Learning for Multi-robot Systems with Model-enabled Privacy Protection and Safety Supervision," has been officially funded by the NSF.

Abstract:

Robots are increasingly deployed in environments that change over time, so it is imperative to empower them with the ability to continuously learn and adapt to novel situations. Reinforcement learning shows great potential to fulfill this goal due to its exploratory and adaptive nature. However, existing reinforcement learning algorithms fall short when applied to real-world robotic systems. The primary reason is that reinforcement learning is essentially a trial-and-error process that may violate system constraints during learning, which can lead to disastrous consequences such as collisions and robot breakdown. Another important factor hindering the application of reinforcement learning in robotics is privacy, because reinforcement learning relies heavily on the intensive collection and sharing of data, particularly in multi-robot settings with close interaction or cooperation. In fact, privacy concerns about data collection and sharing have severely hindered networked-robot deployment in European urban areas and government drone use in North America. Conventional privacy mechanisms either trade accuracy for privacy or incur heavy computation/communication overhead, and are hence ill-suited to robotic systems subject to stringent accuracy and real-time constraints. Led by investigators with synergistic expertise in robotics, multi-agent reinforcement learning, and privacy for dynamical systems, the project aims to develop a unified framework and novel methodologies for collaborative reinforcement learning in multi-robot systems with safety and privacy guarantees.

This project combines model-based safety regulation with model-free reinforcement learning to make reinforcement learning applicable to safety-critical multi-robot systems. It will first address single-robot reinforcement learning, using deep Koopman-based safety regulation for general nonlinear robotic systems to guarantee safety while retaining learning efficiency. The results will then be extended to multi-robot collective reinforcement learning, where robots are deployed in shared, contested, or resource-constrained environments. Structured constraint decoupling and efficient learning mechanisms will be designed to enable a fully scalable, decentralized multi-agent reinforcement learning paradigm. By exploiting the inherent dynamics of collaborative learning, the project will also enable dynamics-based privacy protection for the data collected and shared during learning. Unlike conventional privacy mechanisms, which either trade accuracy for privacy (differential privacy) or incur heavy computation/communication overhead (encryption), the dynamics-enabled privacy approach can maintain learning optimality while incurring little computation/communication overhead. The proposed algorithms and frameworks will be evaluated in both numerical simulations and experiments with real connected vehicles on real tracks. Results of the project will be used to enrich both graduate and undergraduate courses. The PIs will also use various ongoing outreach activities to spark interest in STEM among K-12 students and community college technicians.
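To give a flavor of how model-based safety regulation can wrap a model-free learner, here is a minimal, illustrative sketch (not the project's actual method): a learned linear model in the style of a Koopman predictor, x' = Ax + Bu, screens each action proposed by an RL policy and substitutes the closest action whose one-step prediction stays inside a safe set. The dynamics, constraint bounds, and function names below are all hypothetical placeholders.

```python
import numpy as np

# Hypothetical one-step linear model x' = A x + B u (Koopman-style predictor).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])      # simple double-integrator-like dynamics
B = np.array([[0.0],
              [0.1]])
X_MAX = np.array([1.0, 0.5])    # safe set: |x_i| <= X_MAX[i]
U_MAX = 2.0                     # actuator limit

def safe_action(x, u_rl):
    """Return an action close to u_rl whose predicted next state stays safe."""
    candidates = np.linspace(-U_MAX, U_MAX, 201)
    # Keep only candidates whose one-step prediction satisfies the state constraints.
    feasible = [u for u in candidates
                if np.all(np.abs(A @ x + B.flatten() * u) <= X_MAX)]
    if not feasible:
        return 0.0  # fall back to a neutral action if nothing is feasible
    # Minimal intervention: pick the feasible action closest to the RL proposal.
    return min(feasible, key=lambda u: abs(u - u_rl))
```

When the RL proposal is already safe, the supervisor passes it through essentially unchanged; near the constraint boundary, it clips the action just enough to keep the predicted next state inside the safe set. This "minimal intervention" structure is what lets safety supervision coexist with learning efficiency.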