Summary - TerpreT: A Probabilistic Programming Language for Program Induction
Main Authors | Alexander L. Gaunt, Marc Brockschmidt, Rishabh Singh, Nate Kushman, Pushmeet Kohli, Jonathan Taylor, Daniel Tarlow
---|---
Format | Journal Article
Language | English
Published | 02.12.2016
Summary | We study machine learning formulations of inductive program synthesis; that is, given input-output examples, synthesize source code that maps inputs to corresponding outputs. Our key contribution is TerpreT, a domain-specific language for expressing program synthesis problems. A TerpreT model is composed of a specification of a program representation and an interpreter that describes how programs map inputs to outputs. The inference task is to observe a set of input-output examples and infer the underlying program. From a TerpreT model we automatically perform inference using four different back-ends: gradient descent (thus each TerpreT model can be seen as defining a differentiable interpreter), linear program (LP) relaxations for graphical models, discrete satisfiability solving, and the Sketch program synthesis system. TerpreT has two main benefits. First, it enables rapid exploration of a range of domains, program representations, and interpreter models. Second, it separates the model specification from the inference algorithm, allowing proper comparisons between different approaches to inference.
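To make the setup concrete, the sketch below mirrors the shape of such a model in plain Python for a toy domain: the program representation is a tuple of discrete opcodes, the interpreter executes them on an input, and inference searches for code consistent with the observed examples. The instruction set, names, and brute-force search (standing in here for the satisfiability and Sketch back-ends) are illustrative assumptions, not TerpreT's actual syntax.

```python
# Illustrative toy analogue of a TerpreT model; not TerpreT's actual API.
from itertools import product

K = 8  # machine values: integers mod K

# Program representation: each source-code slot holds one discrete opcode.
INSTRUCTIONS = {
    0: lambda x: (x + 1) % K,  # INC
    1: lambda x: (x - 1) % K,  # DEC
    2: lambda x: (2 * x) % K,  # DOUBLE
}

def interpret(code, x):
    """Interpreter model: run the opcode sequence `code` on input `x`."""
    for op in code:
        x = INSTRUCTIONS[op](x)
    return x

def infer(examples, length=2):
    """Inference task: find source code mapping every input to its output.
    Brute-force enumeration stands in for a constraint-solver back-end."""
    for code in product(INSTRUCTIONS, repeat=length):
        if all(interpret(code, x) == y for x, y in examples):
            return code
    return None

examples = [(1, 4), (2, 6), (3, 0)]  # generated by INC then DOUBLE
print(infer(examples))               # -> (0, 2), i.e. INC, DOUBLE
```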
We illustrate the value of TerpreT by developing several interpreter models and performing an extensive empirical comparison between alternative inference algorithms on a variety of program models. To our knowledge, this is the first work to compare gradient-based search over program space to traditional search-based alternatives. Our key empirical finding is that constraint solvers dominate the gradient descent and LP-based formulations.

This is a workshop summary of a longer report at arXiv:1608.04428.
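The gradient-descent back-end, and why a constraint solver can dominate it, can be pictured with a second hedged sketch: the discrete opcode choices from the toy model above are relaxed to softmax distributions, the interpreter is lifted to propagate distributions over machine values, and the log-likelihood of the observed outputs is maximized by gradient descent. Again, this is an assumption-laden stand-in (numpy with finite-difference gradients), not code that TerpreT generates.

```python
# Illustrative sketch of a differentiable interpreter; not TerpreT output.
import numpy as np

K = 8

def as_matrix(f):
    """Lift a deterministic op x -> f(x) to a K x K transition matrix."""
    M = np.zeros((K, K))
    for x in range(K):
        M[f(x), x] = 1.0
    return M

OPS = [as_matrix(lambda x: (x + 1) % K),   # INC
       as_matrix(lambda x: (x - 1) % K),   # DEC
       as_matrix(lambda x: (2 * x) % K)]   # DOUBLE

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def run(logits, x):
    """Relaxed interpreter: mix the ops' transition matrices by softmax
    weights and push a distribution over values through the program."""
    state = np.eye(K)[x]                   # one-hot input value
    for z in logits:                       # one softmax per code slot
        M = sum(w * op for w, op in zip(softmax(z), OPS))
        state = M @ state
    return state                           # distribution over outputs

def loss(logits, examples):
    return -sum(np.log(run(logits, x)[y] + 1e-12) for x, y in examples)

examples = [(1, 4), (2, 6), (3, 0)]
logits = np.zeros((2, 3))                  # 2 slots, 3 instructions
eps, lr = 1e-4, 1.0
for _ in range(500):                       # finite-difference descent
    grad = np.zeros_like(logits)
    for idx in np.ndindex(*logits.shape):
        d = np.zeros_like(logits)
        d[idx] = eps
        grad[idx] = (loss(logits + d, examples) -
                     loss(logits - d, examples)) / (2 * eps)
    logits -= lr * grad
print(logits.argmax(axis=1))  # ideally [0 2] (INC, DOUBLE); as the paper
                              # reports, gradient search can also stall.
```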
DOI | 10.48550/arxiv.1612.00817