Tuning computer vision models with task rewards

Bibliographic Details
Published in: arXiv.org
Main Authors: André Susano Pinto, Alexander Kolesnikov, Yuge Shi, Lucas Beyer, Xiaohua Zhai
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 16.02.2023
Summary: Misalignment between model predictions and intended usage can be detrimental for the deployment of computer vision models. The issue is exacerbated when the task involves complex structured outputs, as it becomes harder to design procedures which address this misalignment. In natural language processing, this is often addressed using reinforcement learning techniques that align models with a task reward. We adopt this approach and show its surprising effectiveness across multiple computer vision tasks, such as object detection, panoptic segmentation, colorization and image captioning. We believe this approach has the potential to be widely useful for better aligning models with a diverse range of computer vision tasks.
ISSN: 2331-8422
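
The summary describes the approach only at a high level: take a model and fine-tune it with reinforcement learning so that its sampled outputs score well under a task reward. Below is a minimal, self-contained sketch of that idea using a REINFORCE-style gradient with a mean-reward baseline. The toy linear model, the match-based task_reward, and all hyperparameters are illustrative assumptions standing in for the paper's actual models and rewards (e.g. detection or captioning metrics), not a reproduction of its method.

```python
# Minimal sketch: REINFORCE-style tuning of a model against a
# non-differentiable task reward. Everything here is a toy stand-in.
import torch

torch.manual_seed(0)

K = 10                                      # size of the toy output space
model = torch.nn.Linear(16, K)              # stand-in for a pretrained model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def task_reward(samples, targets):
    # Hypothetical non-differentiable reward: 1 if the sampled output
    # matches the target, else 0 (stands in for metrics such as mAP or CIDEr).
    return (samples == targets).float()

for step in range(100):
    x = torch.randn(32, 16)                 # batch of inputs
    targets = torch.randint(0, K, (32,))    # ground truth for the toy reward

    logits = model(x)
    dist = torch.distributions.Categorical(logits=logits)
    samples = dist.sample()                 # sample outputs from current model
    rewards = task_reward(samples, targets)

    baseline = rewards.mean()               # simple baseline to reduce variance
    log_probs = dist.log_prob(samples)
    # REINFORCE objective: increase log-probability of samples that
    # scored above the baseline, decrease it for those below.
    loss = -((rewards - baseline) * log_probs).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice the same estimator can be applied to structured outputs (boxes, segmentations, captions) by sampling complete predictions and scoring them with the task metric, but the specifics above are assumptions for illustration rather than the authors' published recipe.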