Are Protein Language Models Compute Optimal?

Bibliographic Details
Published in: arXiv.org
Main Authors: Serrano, Yaiza; Ciudad, Álvaro; Molina, Alexis
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 26.06.2024

Summary: While protein language models (pLMs) have transformed biological research, the scaling laws governing their improvement remain underexplored. By adapting methodologies from NLP scaling laws, we investigated the optimal ratio between model parameters and training tokens within a fixed compute budget. Our study reveals that pLM sizes scale sublinearly with compute budget, showing diminishing returns in performance as model size increases, and we identify a plateau in training loss comparable to that reported in related work. Our findings suggest that widely used pLMs might not be compute-optimal, indicating that larger models could achieve convergence more efficiently. Training a 35M model on a reduced token set, we attained perplexity results comparable to those of larger models such as ESM-2 (15B) and xTrimoPGLM (100B) with a single pass over the dataset. This work paves the way towards more compute-efficient pLMs, democratizing their training and practical application in computational biology.
ISSN: 2331-8422
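
As a rough illustration of the parameter/token trade-off described in the summary, the sketch below uses the common approximation C ≈ 6·N·D (training FLOPs ≈ 6 × parameters × tokens) together with a sublinear power-law allocation N_opt ∝ C^a. The exponent and coefficient are hypothetical placeholders chosen only to show the shape of such an allocation, not fitted values from this paper.

# Minimal sketch of a Chinchilla-style compute-optimal allocation.
# Assumes C ≈ 6 * N * D and a hypothetical power-law fit N_opt = k * C**a
# with a < 1 (sublinear); k and a below are illustrative, not the paper's results.

def optimal_allocation(compute_flops: float, a: float = 0.5, k: float = 0.1) -> tuple[float, float]:
    """Return (optimal parameter count, optimal token count) for a compute budget."""
    n_opt = k * compute_flops ** a          # model size grows sublinearly with compute
    d_opt = compute_flops / (6.0 * n_opt)   # remaining budget is spent on training tokens
    return n_opt, d_opt

if __name__ == "__main__":
    for c in (1e18, 1e20, 1e22):
        n, d = optimal_allocation(c)
        print(f"C = {c:.0e} FLOPs -> N_opt ~ {n:.2e} params, D_opt ~ {d:.2e} tokens")

Under these placeholder constants, a 100x increase in compute raises the "optimal" model size by only 10x while the token count grows by the remaining factor, which is the qualitative behaviour (diminishing returns to model size) the summary describes.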