Neural Abstraction-Based Controller Synthesis and Deployment


Bibliographic Details
Published in: arXiv.org
Main Authors: Majumdar, Rupak; Salamati, Mahmoud; Soudjani, Sadegh
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 07.07.2023
Summary: Abstraction-based techniques are an attractive approach for synthesizing correct-by-construction controllers that satisfy high-level temporal requirements. A main bottleneck for the successful application of these techniques is their memory requirement, both during controller synthesis and in controller deployment. We propose memory-efficient methods that mitigate the high memory demands of abstraction-based techniques using neural network representations. To perform synthesis for reach-avoid specifications, we propose an on-the-fly algorithm that relies on compressed neural network representations of the forward and backward dynamics of the system. In contrast to usual applications of neural representations, our technique maintains soundness of the end-to-end process: we correct the output of the trained neural network so that the corrected output representations are sound with respect to the finite abstraction. For deployment, we provide a novel training algorithm that finds a neural network representation of the synthesized controller, and we show experimentally that the controller can be represented correctly as a combination of a neural network and a look-up table that requires substantially less memory. We demonstrate experimentally that our approach significantly reduces the memory requirements of abstraction-based methods. For the selected benchmarks, our approach reduces the memory requirements for synthesis and deployment by factors of \(1.31\times 10^5\) and \(7.13\times 10^3\) on average, and up to \(7.54\times 10^5\) and \(3.18\times 10^4\), respectively. Although this reduction comes at the cost of increased off-line computation to train the neural networks, all steps of our approach are parallelizable and can be implemented on machines with a larger number of processing units to reduce the required computational time.
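The deployment idea described in the summary, representing the synthesized controller as a neural network patched by a small look-up table for the states where the network disagrees, can be sketched in miniature. This is a hedged illustration only: the `predictor` stands in for the trained neural network, and the toy controller, state space, and all names are assumptions, not the paper's implementation.

```python
# Hypothetical sketch: represent a synthesized controller (a finite map
# from abstract states to control inputs) as an approximate predictor
# plus a small look-up table that patches every state where the
# predictor disagrees, so the combination reproduces the controller
# exactly. In the paper the predictor role is played by a trained
# neural network; here it is a toy function for illustration.

def build_patched_controller(controller, predictor):
    """Return (patch_table, lookup) with lookup(s) == controller[s] for all s."""
    # Store only the states where the predictor is wrong.
    patch = {s: u for s, u in controller.items() if predictor(s) != u}

    def lookup(state):
        # Consult the small patch table first, fall back to the predictor.
        return patch.get(state, predictor(state))

    return patch, lookup

# Toy example: controller over abstract states 0..99 choosing input s % 3,
# and a cheap predictor that is correct except on some multiples of 10.
controller = {s: s % 3 for s in range(100)}
predictor = lambda s: (s % 3) if s % 10 else 0

patch, lookup = build_patched_controller(controller, predictor)
assert all(lookup(s) == controller[s] for s in controller)  # exact representation
print(len(patch), "of", len(controller), "states stored explicitly")
```

The memory saving in the paper comes from the same trade-off this sketch exhibits: when the learned predictor agrees with the controller on most abstract states, only the disagreeing states need explicit storage, while correctness of the combined representation is preserved by construction.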
ISSN: 2331-8422