Learning symbolic abstractions from system execution traces
Main Author | |
---|---|
Format | Dissertation |
Language | English |
Published | University of Oxford, 2022 |
Subjects | |
Summary: This dissertation shows that symbolic abstractions for a system can be inferred from a set of system execution traces using a combination of Boolean satisfiability and program synthesis. In addition, the degree of completeness of an inferred abstraction can be evaluated by equivalence checking with simulation relations, which can in turn be used to iteratively infer an overapproximating system abstraction with provable completeness guarantees.

The first part of this dissertation presents a novel algorithm to infer a symbolic abstraction for a system, as a finite state automaton, from system execution traces. Given a set of execution traces, the algorithm uses Boolean satisfiability to learn a finite state automaton that accepts (at least) all traces in the set. To learn a symbolic system abstraction over large and possibly infinite alphabets, the algorithm uses program synthesis to consolidate trace information into syntactic expressions that serve as transition predicates in the learned model. The system behaviours admitted by the inferred abstraction are limited to those manifest in the set of execution traces; the abstraction may therefore be only a partial model of the system and may not admit all system behaviours.

The second part of this dissertation presents a novel procedure to evaluate the degree of completeness of an inferred system abstraction. The structure of the abstraction is used to extract a set of conditions that collectively encode a completeness hypothesis. The hypothesis is formulated such that its satisfaction is sufficient to guarantee that a simulation relation can be constructed between the system and the abstraction, and the existence of a simulation relation is in turn sufficient to guarantee that the inferred abstraction is overapproximating. Counterexamples to the hypothesis can be used to construct new traces and iteratively learn new abstractions, until the completeness hypothesis is satisfied and an overapproximating system abstraction is obtained.
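The first part of the summary describes learning a finite state automaton from traces with Boolean satisfiability. The sketch below is a minimal illustration of one standard encoding of that idea, not the dissertation's algorithm: every prefix of every trace is assigned one of n automaton states, and determinism constraints force prefixes that share a state and a next symbol to agree on the successor state. The traces, symbol names, and state bound n are hypothetical, and the Z3 SMT solver stands in for a plain SAT encoding.

```python
# Minimal sketch of passive automaton learning from positive traces via
# constraint solving (illustrative only; not the dissertation's algorithm).
from z3 import Solver, Int, And, Implies, sat

traces = [["req", "ack"], ["req", "req", "ack"]]   # hypothetical traces
n = 2                                              # assumed state bound

# Collect every prefix of every trace; each prefix gets a state variable.
prefixes = {()}
for t in traces:
    for i in range(1, len(t) + 1):
        prefixes.add(tuple(t[:i]))

state = {p: Int("q_" + ("_".join(p) or "eps")) for p in prefixes}
alphabet = {sym for t in traces for sym in t}

s = Solver()
for q in state.values():
    s.add(And(q >= 0, q < n))          # each prefix maps to a state 0..n-1
s.add(state[()] == 0)                  # fix the initial state

# Determinism: prefixes mapped to the same state and extended by the same
# symbol must land in the same successor state.
for p1 in prefixes:
    for p2 in prefixes:
        for a in alphabet:
            if p1 + (a,) in prefixes and p2 + (a,) in prefixes:
                s.add(Implies(state[p1] == state[p2],
                              state[p1 + (a,)] == state[p2 + (a,)]))

# Only positive traces are given, so every reachable state can be marked
# accepting; the learned automaton then accepts at least all input traces.
if s.check() == sat:
    m = s.model()
    for p in sorted(prefixes, key=len):
        print(p, "->", m[state[p]])
```

The dissertation additionally uses program synthesis to replace concrete symbols with syntactic predicate expressions on transitions, which is what allows the learned model to range over large or infinite alphabets; the sketch keeps a small finite alphabet for brevity.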
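The second part of the summary rests on the fact that a simulation relation between the system and the learned abstraction guarantees overapproximation. A standard formulation of that condition, not necessarily the dissertation's exact one, for a concrete system S with states Q_S and an abstraction A with states Q_A is:

```latex
% Simulation relation (standard textbook formulation, shown for context):
% R relates concrete states of the system S to states of the abstraction A.
\[
  R \subseteq Q_S \times Q_A \text{ is a simulation iff } \forall (s, a) \in R:\;
  s \xrightarrow{\sigma}_{S} s' \;\implies\;
  \exists a' \in Q_A.\; a \xrightarrow{\sigma}_{A} a' \,\wedge\, (s', a') \in R.
\]
% If the initial states are related, every trace of S is also a trace of A,
% i.e. the learned abstraction overapproximates the system:
\[
  (q_S^0, q_A^0) \in R \;\Longrightarrow\; \mathit{Traces}(S) \subseteq \mathit{Traces}(A).
\]
```

A counterexample to the completeness hypothesis then corresponds to a concrete step with no matching abstract step; it can be turned into a new trace and fed back into the learning loop, as the summary describes.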
Bibliography: Semiconductor Research Corporation; Jason Hu Scholarship