BAGEL: Bootstrapping Agents by Guiding Exploration with Language

Bibliographic Details
Published in: arXiv.org
Main Authors: Murty, Shikhar; Manning, Christopher; Shaw, Peter; Joshi, Mandar; Lee, Kenton
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 09.06.2024

More Information
Summary: Following natural language instructions by executing actions in digital environments (e.g. web browsers and REST APIs) is a challenging task for language model (LM) agents. Unfortunately, LM agents often fail to generalize to new environments without human demonstrations. This work presents BAGEL, a method for bootstrapping LM agents without human supervision. BAGEL converts a seed set of randomly explored trajectories or synthetic instructions into demonstrations via round-trips between two noisy LM components: an LM labeler, which converts a trajectory into a synthetic instruction, and a zero-shot LM agent, which maps the synthetic instruction into a refined trajectory. By performing these round-trips iteratively, BAGEL quickly shifts the initial distribution of trajectories towards those that are well described by natural language. We use BAGEL demonstrations to adapt a zero-shot LM agent at test time via in-context learning over retrieved demonstrations, and find improvements of 2-13% absolute on ToolQA and MiniWob++, with up to a 13x reduction in execution failures.
ISSN: 2331-8422
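
The summary above outlines an iterative bootstrapping loop; as a reading aid only, here is a minimal Python sketch of that loop. It is not the authors' implementation: the callables explore, label, and act, the type aliases, and the default iteration counts are assumptions introduced purely for illustration.

from typing import Callable, List, Tuple

# Hypothetical type aliases for illustration only: a trajectory is a list of
# environment action strings and an instruction is plain text.
Trajectory = List[str]
Instruction = str

def bagel_bootstrap(
    explore: Callable[[], Trajectory],            # random exploration in the environment
    label: Callable[[Trajectory], Instruction],   # LM labeler: trajectory -> synthetic instruction
    act: Callable[[Instruction], Trajectory],     # zero-shot LM agent: instruction -> refined trajectory
    n_seeds: int = 100,
    n_rounds: int = 3,
) -> List[Tuple[Instruction, Trajectory]]:
    """Turn seed trajectories into (instruction, trajectory) demonstrations
    by iterating labeler/agent round-trips, as the summary describes."""
    demos: List[Tuple[Instruction, Trajectory]] = []
    for _ in range(n_seeds):
        trajectory = explore()                    # seed: a randomly explored trajectory
        instruction = label(trajectory)
        for _ in range(n_rounds):                 # round-trips pull trajectories toward
            trajectory = act(instruction)         # behaviour that language describes well
            instruction = label(trajectory)
        demos.append((instruction, trajectory))
    return demos

At test time, per the summary, demonstrations similar to the test instruction would be retrieved and placed in the zero-shot agent's context; that retrieval step is not shown in the sketch.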