E-Health Self-Help Diagnosis from Feces Images in Real Scenes


Bibliographic Details
Published in: Electronics (Basel), Vol. 12, no. 2, p. 344
Main Authors: Liao, Fengxiang; Wan, Jiahao; Leng, Lu; Kim, Cheonshik
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.01.2023

Summary: Deep learning models and computer vision are commonly integrated for e-health self-help diagnosis. The abnormal colors and traits of feces can reveal the risks of cancer and digestive diseases. As such, this paper develops a self-help diagnostic system to conveniently analyze users’ health conditions from feces images at home, which can reduce dependence on professional skills and examination equipment. Unfortunately, real scenes at home suffer from several severe challenges, including the lack of labeled data, complex backgrounds, varying illumination, etc. A semi-supervised learning strategy is employed to solve the scarcity of labeled data and reduce the burden of manual labeling. The unlabeled data are classified by an initial model that is pretrained on a small number of training data. Then, the labels with high confidence are allocated to the unlabeled samples in order to extend the training data accordingly. With regard to the small feces areas in certain samples, an adaptive upsampling method is proposed to enlarge the suitable local area according to the proportion of the foreground. Synthesized feces images in real scenes are tested to confirm the effectiveness and efficiency of the proposed method. In terms of accuracy, our proposed model achieves 100% and 99.2% on color and trait recognition in medical scenes, respectively, and 99.1% and 100% on color and trait recognition in real scenes, respectively. The related datasets and codes will be released on GitHub.
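The semi-supervised step described above (pseudo-labeling unlabeled samples whose predicted confidence is high, then adding them to the training set) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the confidence threshold of 0.95 and the plain-list probability format are assumptions.

```python
# Minimal sketch of confidence-based pseudo-labeling: keep only the
# unlabeled samples whose top class probability clears a threshold,
# and return (sample index, assigned label) pairs to extend the
# training data. Threshold and data layout are illustrative choices.

def pseudo_label(probs, threshold=0.95):
    """probs: list of per-sample class-probability lists from the
    pretrained initial model. Returns (index, label) pairs for the
    confidently classified samples."""
    accepted = []
    for i, p in enumerate(probs):
        best = max(range(len(p)), key=lambda c: p[c])  # argmax class
        if p[best] >= threshold:
            accepted.append((i, best))
    return accepted

# Example: three unlabeled samples scored by the initial model.
probs = [
    [0.98, 0.01, 0.01],  # confident -> pseudo-labeled as class 0
    [0.50, 0.30, 0.20],  # uncertain -> stays unlabeled
    [0.02, 0.96, 0.02],  # confident -> pseudo-labeled as class 1
]
print(pseudo_label(probs))  # [(0, 0), (2, 1)]
```

In practice this loop would be repeated: the model is retrained on the extended set, and the remaining unlabeled pool is rescored.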
ISSN: 2079-9292
DOI: 10.3390/electronics12020344