Skeleton-based image feature extraction for automated behavioral analysis in human-animal relationship tests

Bibliographic Details
Published in: Applied Animal Behaviour Science, Vol. 277, p. 106347
Main Authors: Oczak, Maciej; Rault, Jean-Loup; Truong, Suzanne; Schmitt, Oceane
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.08.2024
ISSN: 0168-1591
EISSN: 1872-9045
DOI: 10.1016/j.applanim.2024.106347

Abstract Arena tests are used to address various research questions related to animal behavior and human-animal relationships, e.g. how animals perceive specific human beings or people in general. Recent advancements in computer vision, specifically in the application of key point detection models, may offer a way to extract the variables most often recorded in these tests automatically. The objective of this study was to measure two variables in a human-pig arena test with computer vision techniques: the distance between the subjects and a proxy of the pig's visual attention towards pen areas, including a human. Human-pig interaction tests were organized inside a test arena measuring 147 × 168 cm. Thirty female pigs took part in the arena tests from 8 to 11 weeks of age, for a total of 210 tests (7 tests per pig), each 10 min long. In total, 35 hours of human-pig interaction tests were video-recorded. To automatically detect human and pig skeletons, 4 models were trained on 100 images of labeled data: two YOLOv8 models to detect human and pig locations and two ViTPose models to detect their skeletons. Models were validated on 50 images. The best-performing models were selected to extract human and pig skeletons from the recorded videos. Human-pig distance was calculated as the shortest Euclidean distance between all key points of the human and the pig. The visual attention proxy towards selected areas of the arena was calculated by extracting the pig's head direction and computing the intersection of a line indicating the head's direction with the lines specifying the areas, i.e. either the edges of the quadrangles for the entrance and the window or the lines joining the key points of the human skeleton. The performance of the YOLOv8 models for detection of the human and the pig was 0.86 mAP and 0.85 mAP, respectively, and of the ViTPose models 0.65 mAP and 0.78 mAP, respectively. The average distance between the human and the pig was 31.03 cm (SD = 35.99). Out of the three predefined areas in the arena, pigs spent most of their time with their head directed toward the human: 12 hrs 11 min (34.83% of test duration). The developed method could be applied in human-animal relationship tests to automatically measure the distance between a human and a pig or another animal, the visual attention proxy, or other variables of interest.
• We automated skeleton-based image feature extraction in an arena test.
• The human's and the pig's skeletons were detected with 0.65 mAP and 0.78 mAP.
• The average distance between the human and the pig was 31.03 cm.
• Pigs spent 34.83% of the test duration with their head directed toward the human.
• The developed method can reduce the need for time-consuming manual observations.
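The skeleton extraction described above follows a standard two-stage, top-down design: an object detector first localizes each subject, then a pose model predicts key points inside the detected box. The sketch below illustrates only that control flow; `detect_subject` and `estimate_keypoints` are hypothetical stand-ins for the trained YOLOv8 and ViTPose models, not code from the study.

```python
import numpy as np

# Hypothetical stand-ins for the two trained stages: a YOLOv8 detector
# returning one bounding box per subject, and a ViTPose model returning
# key points for the cropped box. Names and signatures are illustrative.
def detect_subject(frame: np.ndarray, subject: str) -> tuple[int, int, int, int]:
    raise NotImplementedError("stand-in for YOLOv8 inference")

def estimate_keypoints(crop: np.ndarray, subject: str) -> np.ndarray:
    raise NotImplementedError("stand-in for ViTPose inference")

def skeletons_for_frame(frame: np.ndarray) -> dict[str, np.ndarray]:
    """Detect each subject, then estimate its key points inside the crop."""
    skeletons = {}
    for subject in ("human", "pig"):
        x1, y1, x2, y2 = detect_subject(frame, subject)
        kpts = estimate_keypoints(frame[y1:y2, x1:x2], subject)  # (n, 2) pixels
        kpts += np.array([x1, y1], dtype=kpts.dtype)  # back to frame coordinates
        skeletons[subject] = kpts
    return skeletons
```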
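The distance variable itself is simple once both skeletons are available: take the minimum Euclidean distance over all human-pig key point pairs. A minimal NumPy sketch, assuming key points arrive as (n, 2) arrays in pixel coordinates and that a pixel-to-centimetre calibration factor (`cm_per_px`, not specified in the abstract) is known:

```python
import numpy as np

def shortest_distance(human_kpts: np.ndarray, pig_kpts: np.ndarray,
                      cm_per_px: float = 1.0) -> float:
    """Shortest Euclidean distance between any human and any pig key point.

    Both inputs have shape (n_keypoints, 2) in pixel coordinates; cm_per_px
    is an assumed calibration factor converting pixels to centimetres.
    """
    # Broadcast to all pairwise differences: shape (n_human, n_pig, 2).
    diffs = human_kpts[:, None, :] - pig_kpts[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    return float(dists.min() * cm_per_px)

# Example with made-up pixel coordinates:
human = np.array([[100.0, 200.0], [120.0, 240.0]])
pig = np.array([[300.0, 210.0], [260.0, 230.0]])
print(shortest_distance(human, pig))  # ≈ 140.36 (pixels; cm_per_px defaults to 1.0)
```

Broadcasting yields the full pairwise distance matrix in one step, which stays cheap at typical skeleton sizes of a few dozen key points per subject.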
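The visual attention proxy reduces to a ray-segment intersection test: a ray cast along the pig's head direction either crosses a boundary line of an area (a quadrangle edge for the entrance or window, or a segment joining two human key points) or it does not. A sketch using the standard parametric solution; defining the head direction from a mid-ears key point towards the snout is an illustrative assumption, not necessarily the paper's exact definition:

```python
import numpy as np

def _cross(a: np.ndarray, b: np.ndarray) -> float:
    """2D cross product (z-component)."""
    return float(a[0] * b[1] - a[1] * b[0])

def ray_hits_segment(origin, direction, p, q) -> bool:
    """True if the ray origin + t*direction (t >= 0) crosses segment p-q.

    Solves origin + t*direction = p + u*(q - p); the ray hits the segment
    when t >= 0 and 0 <= u <= 1.
    """
    origin, direction = np.asarray(origin, float), np.asarray(direction, float)
    p, q = np.asarray(p, float), np.asarray(q, float)
    s = q - p
    denom = _cross(direction, s)
    if abs(denom) < 1e-12:  # ray parallel to (or collinear with) the segment
        return False
    t = _cross(p - origin, s) / denom
    u = _cross(p - origin, direction) / denom
    return t >= 0.0 and 0.0 <= u <= 1.0

# Head direction from an assumed mid-ears key point towards the snout:
ears_mid = np.array([50.0, 50.0])
snout = np.array([60.0, 55.0])
# One boundary segment of an area, e.g. joining two human key points (made up):
print(ray_hits_segment(snout, snout - ears_mid, [80.0, 40.0], [80.0, 80.0]))  # True
```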
ArticleNumber 106347
Author Oczak, Maciej
Rault, Jean-Loup
Truong, Suzanne
Schmitt, Oceane
Author_xml – sequence: 1
  givenname: Maciej
  surname: Oczak
  fullname: Oczak, Maciej
  email: Maciej.Oczak@vetmeduni.ac.at
  organization: Precision Livestock Farming Hub, The University of Veterinary Medicine Vienna (Vetmeduni Vienna), Veterinärplatz 1, Vienna 1210, Austria
– sequence: 2
  givenname: Jean-Loup
  surname: Rault
  fullname: Rault, Jean-Loup
  organization: Center for Animal Nutrition and Welfare, The University of Veterinary Medicine Vienna (Vetmeduni Vienna), Veterinärplatz 1, Vienna 1210, Austria
– sequence: 3
  givenname: Suzanne
  surname: Truong
  fullname: Truong, Suzanne
  organization: Center for Animal Nutrition and Welfare, The University of Veterinary Medicine Vienna (Vetmeduni Vienna), Veterinärplatz 1, Vienna 1210, Austria
– sequence: 4
  givenname: Oceane
  surname: Schmitt
  fullname: Schmitt, Oceane
  organization: Center for Animal Nutrition and Welfare, The University of Veterinary Medicine Vienna (Vetmeduni Vienna), Veterinärplatz 1, Vienna 1210, Austria
ContentType Journal Article
Copyright 2024 The Authors
DOI 10.1016/j.applanim.2024.106347
Discipline Veterinary Medicine
Zoology
Psychology
EISSN 1872-9045
ISSN 0168-1591
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Keywords Computer vision
Human-animal
Key point detection
Distance
Object detection
Language English
License This is an open access article under the CC BY license.
OpenAccessLink https://www.sciencedirect.com/science/article/pii/S0168159124001953
PublicationDate August 2024
PublicationTitle Applied animal behaviour science
PublicationYear 2024
Publisher Elsevier B.V
StartPage 106347
SubjectTerms animal behavior
automation
Computer vision
Distance
females
head
Human-animal
human-animal relations
humans
Key point detection
Object detection
people
skeleton
swine
Title Skeleton-based image feature extraction for automated behavioral analysis in human-animal relationship tests
URI https://dx.doi.org/10.1016/j.applanim.2024.106347
https://www.proquest.com/docview/3153680249
Volume 277