Selective batching for inference system for transformer-based generation tasks


Bibliographic Details
Main Authors: Kim, Soojeong; Yu, Gyeongin; Kim, Geon-Woo; Chun, Byung-Gon; Jeong, Joo Seong
Format: Patent
Language: English
Published: 19.03.2024

More Information
Summary: An inference system applies a machine-learning transformer model to a batch of requests with variable input length, variable target length, or variable internal state length by selectively batching a subset of operations in the transformer model while processing requests in the batch individually for another subset of operations. In one embodiment, the operation processed individually is an attention operation of an encoder or a decoder of the transformer model. Through selective batching, the inference system can perform batched operations over requests with variable input, target, or internal state lengths, exploiting the parallel computation capabilities of hardware accelerators while avoiding the unnecessary computation incurred by workarounds that constrain all requests in a batch to the same length.
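
To illustrate the idea in the abstract, below is a minimal sketch (not the patent's implementation) of selective batching for one transformer layer, written in PyTorch. The dimensions, weight tensors, and the helper selective_batch_layer are hypothetical names chosen for this example. Token-wise operations (the QKV and output projections) are batched across all requests by concatenating their tokens, while the attention operation runs per request because it depends on each request's own sequence length, so no padding to a common length is needed.

import torch
import torch.nn.functional as F

# Hypothetical dimensions for illustration (not taken from the patent).
HIDDEN = 64
NUM_HEADS = 4
HEAD_DIM = HIDDEN // NUM_HEADS

# Shared projection weights, applied to all requests at once.
w_qkv = torch.randn(HIDDEN, 3 * HIDDEN) * 0.02
w_out = torch.randn(HIDDEN, HIDDEN) * 0.02

def selective_batch_layer(requests):
    """One transformer layer with selective batching.

    `requests` is a list of tensors of shape (seq_len_i, HIDDEN),
    where each seq_len_i may differ across requests.
    """
    lengths = [r.shape[0] for r in requests]

    # Batched part: concatenate the tokens of all requests and apply
    # the QKV projection in a single matmul, regardless of per-request
    # sequence length.
    tokens = torch.cat(requests, dim=0)      # (sum(lengths), HIDDEN)
    qkv = tokens @ w_qkv                     # (sum(lengths), 3*HIDDEN)
    q, k, v = qkv.chunk(3, dim=-1)

    # Unbatched part: split back into per-request tensors and run the
    # attention operation individually for each request.
    outputs = []
    for qi, ki, vi in zip(q.split(lengths), k.split(lengths), v.split(lengths)):
        L = qi.shape[0]
        qh = qi.view(L, NUM_HEADS, HEAD_DIM).transpose(0, 1)  # (heads, L, head_dim)
        kh = ki.view(L, NUM_HEADS, HEAD_DIM).transpose(0, 1)
        vh = vi.view(L, NUM_HEADS, HEAD_DIM).transpose(0, 1)
        attn = F.scaled_dot_product_attention(qh, kh, vh, is_causal=True)
        outputs.append(attn.transpose(0, 1).reshape(L, HIDDEN))

    # Batched part again: token-wise output projection over all requests.
    merged = torch.cat(outputs, dim=0) @ w_out
    return list(merged.split(lengths))

# Three requests with different sequence lengths; no padding is required.
reqs = [torch.randn(n, HIDDEN) for n in (5, 9, 2)]
outs = selective_batch_layer(reqs)
print([o.shape for o in outs])  # each output keeps its request's own length

The design point the sketch makes concrete: only the attention step is length-sensitive, so everything else can be fused into large batched matmuls across the concatenated tokens of all requests, which is where hardware accelerators gain their parallel efficiency.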
Bibliography: Application Number: US202217969645