NativQA: Multilingual Culturally-Aligned Natural Query for LLMs

Bibliographic Details
Main Authors: Hasan, Md. Arid; Hasanain, Maram; Ahmad, Fatema; Laskar, Sahinur Rahman; Upadhyay, Sunaya; Sukhadia, Vrunda N; Kutlu, Mucahid; Chowdhury, Shammur Absar; Alam, Firoj
Format: Journal Article
Language: English
Published: 13.07.2024

Summary: Natural Question Answering (QA) datasets play a crucial role in developing and evaluating the capabilities of large language models (LLMs), ensuring their effective use in real-world applications. Despite the many QA datasets that have been developed, there is a notable lack of region-specific datasets generated by native users in their own languages. This gap hinders effective benchmarking of LLMs for regional and cultural specificities. In this study, we propose a scalable framework, NativQA, to seamlessly construct culturally and regionally aligned QA datasets in native languages for LLM evaluation and tuning. To demonstrate the efficacy of the proposed framework, we built a multilingual natural QA dataset, MultiNativQA, consisting of ~72K QA pairs in seven languages, ranging from high to extremely low resource, based on queries from native speakers covering 18 topics. We benchmark the MultiNativQA dataset with open- and closed-source LLMs. Both the NativQA framework and the MultiNativQA dataset are publicly available to the community (https://nativqa.gitlab.io).
DOI: 10.48550/arxiv.2407.09823