Protocol for using protein language models to predict, and subsequently experimentally validate, function-enhancing variants of thymine-N-glycosylase
Published in | STAR protocols Vol. 5; no. 3; p. 103188 |
Main Authors | He, Yan; Zhou, Xibin; Yuan, Fajie; Chang, Xing |
Format | Journal Article |
Language | English |
Published | Elsevier Inc, United States, 12.07.2024 |
Abstract | Protein language models (PLMs) are machine learning tools trained to predict masked amino acids within protein sequences, offering opportunities to enhance protein function without prior knowledge of their specific roles. Here, we present a protocol for optimizing thymine-DNA-glycosylase (TDG) using PLMs. We describe steps for “zero-shot” enzyme optimization, construction of plasmids, double plasmid transfection, and high-throughput sequencing and data analysis. This protocol holds promise for streamlining the engineering of gene editing tools, delivering improved activity while minimizing the experimental workload.
For complete details on the use and execution of this protocol, please refer to He et al.1
• Optimizing thymine-N-glycosylase using protein language models (PLMs)
• Use of PLMs to optimize enzymes without extensive task-specific training data
• Protocol for “zero-shot” enzyme optimization
Publisher’s note: Undertaking any experimental protocol requires adherence to local institutional guidelines for laboratory safety and ethics.
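The “zero-shot” step described in the abstract ranks candidate variants by how strongly a masked-language model prefers a substitution over the wild-type residue, with no task-specific training data. The sketch below is illustrative only, not the authors' pipeline: the probability table is made up, and `zero_shot_score`/`rank_single_mutants` are hypothetical names; in a real run the per-position probabilities would come from a protein language model (e.g. an ESM-style model) queried with each position masked.

```python
import math

# Hypothetical stand-in for PLM output: probs[i][aa] = model's probability
# of amino acid `aa` at position i when that position is masked.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def zero_shot_score(wt_seq, pos, mut_aa, probs):
    """log P(mutant) - log P(wild type) at `pos`: the usual zero-shot
    proxy for whether a substitution improves fitness."""
    p = probs[pos]
    return math.log(p[mut_aa]) - math.log(p[wt_seq[pos]])

def rank_single_mutants(wt_seq, probs, top_k=3):
    """Enumerate every single substitution and return the top_k variant
    names (e.g. 'K2R') ordered by zero-shot score, best first."""
    scored = []
    for i, wt_aa in enumerate(wt_seq):
        for aa in AMINO_ACIDS:
            if aa != wt_aa:
                name = f"{wt_aa}{i + 1}{aa}"  # 1-based position labels
                scored.append((zero_shot_score(wt_seq, i, aa, probs), name))
    scored.sort(reverse=True)
    return [name for _, name in scored[:top_k]]

if __name__ == "__main__":
    wt = "MKT"  # toy 3-residue "protein"
    uniform = {aa: 1 / len(AMINO_ACIDS) for aa in AMINO_ACIDS}
    probs = [dict(uniform) for _ in wt]
    probs[1]["R"] = 0.5  # made-up model preference for R at position 2
    print(rank_single_mutants(wt, probs, top_k=1))  # K2R ranks first
```

Only the handful of top-ranked substitutions then proceed to plasmid construction and sequencing, which is how the protocol minimizes experimental workload.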
ArticleNumber | 103188 |
Authors | He, Yan (School of Medicine, Westlake University, Hangzhou, Zhejiang 310014, China); Zhou, Xibin (School of Engineering, Westlake University, Hangzhou, Zhejiang 310014, China); Yuan, Fajie (School of Engineering, Westlake University, Hangzhou, Zhejiang 310014, China; yuanfajie@westlake.edu.cn); Chang, Xing (School of Medicine, Westlake University, Hangzhou, Zhejiang 310014, China; changxing@westlake.edu.cn; ORCID 0000-0002-5072-9225) |
Copyright | © 2024 The Authors. Published by Elsevier Inc. All rights reserved. |
DOI | 10.1016/j.xpro.2024.103188 |
EISSN | 2666-1667 |
ISSN | 2666-1667 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 3 |
Keywords | Biotechnology and bioengineering; CRISPR; Computer sciences |
License | This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Copyright © 2024 The Authors. Published by Elsevier Inc. All rights reserved. |
Notes | Technical contact; These authors contributed equally; Lead contact |
ORCID | 0000-0002-5072-9225 |
OpenAccessLink | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11298943/ |
PMID | 39002134 |
PublicationDateYYYYMMDD | 2024-07-12 |
PublicationPlace | United States |
PublicationTitle | STAR protocols |
PublicationTitleAlternate | STAR Protoc |
PublicationYear | 2024 |
Publisher | Elsevier Inc Elsevier |
References |
1. He, Y., et al. (2024). Protein language models-assisted optimization of a uracil-N-glycosylase variant enables programmable T-to-G and T-to-C base editing. Mol. Cell 84, 1257–1270.e6. https://doi.org/10.1016/j.molcel.2024.01.021
2. Brown, T., et al. (2020). Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 33, 1877–1901.
3. Devlin, J., et al. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Proc. NAACL-HLT 2019, Vol. 1, 4171–4186.
4. Rao, R., et al. (2021). MSA Transformer. bioRxiv.
5. Alayrac, J.-B., et al. (2022). Flamingo: a Visual Language Model for Few-Shot Learning. arXiv.
6. Liu, et al. (2019). Hi-TOM: a platform for high-throughput tracking of mutations induced by CRISPR/Cas systems. Sci. China Life Sci. 62, 1–7. https://doi.org/10.1007/s11427-018-9402-9
7. Clement, K., et al. (2019). CRISPResso2 provides accurate and rapid genome editing sequence analysis. Nat. Biotechnol. 37, 224–226. https://doi.org/10.1038/s41587-019-0032-3 |
StartPage | 103188 |
SubjectTerms | Biotechnology and bioengineering; Computer sciences; CRISPR; Protocol |