Attention assessment based on multi‐view classroom behaviour recognition

Bibliographic Details
Published in: IET Computer Vision, Vol. 19, No. 1
Main Authors: Zheng, ZhouJie; Liang, GuoJun; Luo, HuiBin; Yin, HaiChang
Format: Journal Article
Language: English
Published: 01.01.2025
Abstract: In recent years, artificial intelligence has been applied in many fields, and education has attracted increasing attention; a growing number of behaviour detection and recognition algorithms are being applied in education. Students' attention in class is key to improving teaching quality, and classroom behaviour is a direct manifestation of that attention. To address the generally low accuracy of students' classroom behaviour recognition, we apply deep learning to multi‐view behaviour detection, which detects and recognizes behaviours from different perspectives, in order to evaluate students' classroom attention. First, an improved detection model based on YOLOv5 is proposed: the CBL module is improved throughout the entire network to optimize the model, and SIoU is used as the loss function to speed up convergence of the prediction box. Second, a quantitative evaluation standard for students' classroom attention is established, and training and verification are conducted on collected multi‐view classroom datasets. Finally, environmental variation is increased during the training phase so that the model generalizes better. Experiments demonstrate that our method can effectively detect and identify students' classroom behaviours from different angles, with good robustness and feature extraction capability.
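The abstract states that SIoU replaces the default bounding-box loss to speed up convergence of the prediction box. As a minimal sketch of what an SIoU-style box loss computes, the function below follows Gevorgyan's SCYLLA-IoU formulation (1 − IoU plus an angle-modulated distance cost and a shape cost); the record does not give the paper's hyperparameters, so the shape exponent `theta=4.0` and the function name `siou_loss` are assumptions:

```python
import math

def siou_loss(box1, box2, theta=4.0, eps=1e-9):
    """SIoU-style loss for two axis-aligned boxes given as (x1, y1, x2, y2).

    Sketch of Gevorgyan's SCYLLA-IoU: 1 - IoU plus the average of an
    angle-modulated distance cost and a shape cost. Hyperparameters are
    assumptions, not taken from the indexed article.
    """
    # Intersection over union
    iw = max(0.0, min(box1[2], box2[2]) - max(box1[0], box2[0]))
    ih = max(0.0, min(box1[3], box2[3]) - max(box1[1], box2[1]))
    inter = iw * ih
    w1, h1 = box1[2] - box1[0], box1[3] - box1[1]
    w2, h2 = box2[2] - box2[0], box2[3] - box2[1]
    iou = inter / (w1 * h1 + w2 * h2 - inter + eps)

    # Centre offsets and the smallest enclosing box
    s_cw = (box2[0] + box2[2] - box1[0] - box1[2]) * 0.5
    s_ch = (box2[1] + box2[3] - box1[1] - box1[3]) * 0.5
    cw = max(box1[2], box2[2]) - min(box1[0], box2[0])
    ch = max(box1[3], box2[3]) - min(box1[1], box2[1])

    # Angle cost: maximal when the centre line sits at 45 degrees
    sigma = math.hypot(s_cw, s_ch) + eps
    sin_a = min(abs(s_cw), abs(s_ch)) / sigma  # smaller angle's sine
    angle_cost = math.cos(2.0 * math.asin(sin_a) - math.pi / 2.0)

    # Distance cost, damped by the angle cost (gamma in [-2, -1])
    gamma = angle_cost - 2.0
    rho_x = (s_cw / (cw + eps)) ** 2
    rho_y = (s_ch / (ch + eps)) ** 2
    distance_cost = 2.0 - math.exp(gamma * rho_x) - math.exp(gamma * rho_y)

    # Shape cost: penalises width/height mismatch between the boxes
    omega_w = abs(w1 - w2) / (max(w1, w2) + eps)
    omega_h = abs(h1 - h2) / (max(h1, h2) + eps)
    shape_cost = ((1 - math.exp(-omega_w)) ** theta
                  + (1 - math.exp(-omega_h)) ** theta)

    return 1.0 - iou + 0.5 * (distance_cost + shape_cost)
```

For identical boxes every term vanishes and the loss is (numerically) zero; shifting one box raises both the IoU term and the distance cost, which is the gradient signal the abstract credits with faster prediction-box convergence.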
Author details:
1. Zheng, ZhouJie — Zhuhai Technician College, Zhuhai, China
2. Liang, GuoJun (ORCID: 0000-0001-7845-1408) — Zhuhai Technician College, Zhuhai, China; Faculty of Information Technology, Macau University of Science and Technology, Macau, China
3. Luo, HuiBin — Faculty of Information Technology, Macau University of Science and Technology, Macau, China
4. Yin, HaiChang — Faculty of Information Technology, Macau University of Science and Technology, Macau, China
Cited by (Crossref DOIs):
10.3390/systems11070372
10.3390/s23115205
10.3390/app131810426
10.1007/s11036-023-02251-2
10.1016/j.procs.2024.03.206
10.3390/fishes8040186
10.1007/s10639-025-13330-0
10.3390/s25020373
10.4018/IJGCMS.371423
10.3390/app14010230
DOI: 10.1049/cvi2.12146
Discipline: Applied Sciences
EISSN: 1751-9640
ISSN: 1751-9632
IsDoiOpenAccess: false
IsOpenAccess: true
IsPeerReviewed: true
IsScholarly: true
Issue: 1
ORCID: 0000-0001-7845-1408
Open Access Link: https://onlinelibrary.wiley.com/doi/pdf/10.1049/cvi2.12146
Publication Date: 2025-01-01
Publication Title: IET Computer Vision
Volume: 19
Link Provider: Wiley-Blackwell