S-Eval: Automatic and Adaptive Test Generation for Benchmarking Safety Evaluation of Large Language Models
Published in | arXiv.org
Main Authors | Xiaohan Yuan, Jinfeng Li, Dongxia Wang, Yuefeng Chen, Xiaofeng Mao, Longtao Huang, Hui Xue, Wenhai Wang, Kui Ren, Jingyi Wang
Format | Paper
Genre | Working Paper/Pre-Print
Language | English
Published | Ithaca: Cornell University Library, arXiv.org, 23.05.2024
EISSN | 2331-8422
Subjects | Benchmarks; Large language models; Parameters; Risk; Taxonomy
Open Access | Yes
Peer Reviewed | No
Copyright | 2024. This work is published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Online Access | https://www.proquest.com/docview/3059646981
Abstract | Large Language Models (LLMs) have gained considerable attention for their revolutionary capabilities. However, there is also growing concern over their safety implications, making a comprehensive safety evaluation for LLMs urgently needed before model deployment. In this work, we propose S-Eval, a new comprehensive, multi-dimensional and open-ended safety evaluation benchmark. At the core of S-Eval is a novel LLM-based automatic test prompt generation and selection framework, which trains an expert testing LLM Mt, combined with a range of test selection strategies, to automatically construct a high-quality test suite for safety evaluation. The key to automating this process is a novel expert safety-critique LLM Mc, which quantifies the riskiness score of an LLM's response and additionally produces risk tags and explanations. The generation process is further guided by a carefully designed risk taxonomy with four levels, covering comprehensive and multi-dimensional safety risks of concern. On this basis, we systematically construct a new, large-scale safety evaluation benchmark for LLMs consisting of 220,000 evaluation prompts: 20,000 base risk prompts (10,000 in Chinese and 10,000 in English) and 200,000 corresponding attack prompts derived from 10 popular adversarial instruction attacks against LLMs. Moreover, considering the rapid evolution of LLMs and the accompanying safety threats, S-Eval can be flexibly configured and adapted to include new risks, attacks and models. S-Eval is extensively evaluated on 20 popular and representative LLMs. The results confirm that S-Eval better reflects and informs the safety risks of LLMs than existing benchmarks. We also explore the impacts of parameter scales, language environments, and decoding parameters on the evaluation, providing a systematic methodology for evaluating the safety of LLMs.
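The abstract describes a two-stage pipeline: base risk prompts (produced by the testing LLM Mt) are expanded by adversarial instruction attacks into the full test suite, and the safety-critique LLM Mc then scores each response's riskiness with a tag and explanation. Below is a minimal Python sketch of that loop under stated assumptions; all names (Critique, build_test_suite, evaluate, the model and critic callables) are hypothetical illustrations, not the authors' actual API.

```python
# Minimal sketch of the evaluation loop described in the abstract.
# The models under test, the attacks, and the critique LLM (Mc) are
# stubbed as plain callables; none of these names come from the paper.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Critique:
    riskiness: float   # score assigned by the safety-critique LLM (Mc)
    risk_tag: str      # label drawn from the four-level risk taxonomy
    explanation: str   # natural-language justification for the score

def build_test_suite(base_prompts: List[str],
                     attacks: List[Callable[[str], str]]) -> List[str]:
    """Keep every base risk prompt and expand each one with every
    adversarial instruction attack."""
    suite = list(base_prompts)
    for attack in attacks:
        suite.extend(attack(p) for p in base_prompts)
    return suite

def evaluate(model_under_test: Callable[[str], str],
             critic: Callable[[str, str], Critique],
             suite: List[str]) -> List[Tuple[str, str, Critique]]:
    """Query the model under test, then score each response with Mc."""
    results = []
    for prompt in suite:
        response = model_under_test(prompt)
        results.append((prompt, response, critic(prompt, response)))
    return results
```

Under the paper's numbers, this expansion reproduces the reported suite size: 20,000 base prompts plus 10 attacks applied to each gives 20,000 + 10 × 20,000 = 220,000 evaluation prompts.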