Deep Convolutional Generative Adversarial Network-Based Food Recognition Using Partially Labeled Data
Published in | IEEE Sensors Letters, Vol. 3, no. 2, pp. 1-4 |
Main Authors | , , |
Format | Journal Article |
Language | English |
Published | Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.02.2019 |
Summary | Traditional machine learning algorithms that use hand-crafted feature extraction techniques (such as the local binary pattern) have limited accuracy on food recognition tasks because of high variation among images of the same class (intraclass variation). In recent works, convolutional neural networks (CNNs) have been applied to this task with better results than all previously reported methods. However, CNNs perform best when trained with large amounts of annotated (labeled) food images, and obtaining such images in large volume is problematic because it is expensive, laborious, and often impractical. This article aims to develop an efficient deep CNN learning-based method for food recognition that alleviates these limitations by using partially labeled training data with generative adversarial networks (GANs). We make new enhancements to the unsupervised training architecture introduced by Goodfellow et al., which was originally aimed at generating new data by sampling from a dataset. In this article, we modify deep convolutional GANs to make them robust and efficient for classifying food images. Experimental results on benchmark datasets show the superiority of the proposed method over current state-of-the-art methods, even when trained with partially labeled training data. |
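The summary describes repurposing a deep convolutional GAN so that its discriminator also serves as a food classifier trained on partially labeled data. The paper's exact architecture is not reproduced in this record; the sketch below is only an illustration of a common way to realize that idea, where the discriminator outputs K real food classes plus one extra "generated/fake" class. All names, layer sizes, the 64x64 input resolution, and the Food-101 class count are assumptions, not taken from the paper.

```python
# Hypothetical sketch (not the authors' released code): a DCGAN-style
# discriminator whose final layer outputs K food classes plus one extra
# "fake" class, so labeled, unlabeled, and generated images can all
# contribute to training.
import torch
import torch.nn as nn

NUM_CLASSES = 101  # e.g. Food-101; an assumption, not from the paper


class DCGANDiscriminator(nn.Module):
    def __init__(self, num_classes=NUM_CLASSES, base_channels=64):
        super().__init__()
        self.features = nn.Sequential(
            # 3 x 64 x 64 -> base x 32 x 32
            nn.Conv2d(3, base_channels, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            # -> 2*base x 16 x 16
            nn.Conv2d(base_channels, base_channels * 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(base_channels * 2),
            nn.LeakyReLU(0.2, inplace=True),
            # -> 4*base x 8 x 8
            nn.Conv2d(base_channels * 2, base_channels * 4, 4, stride=2, padding=1),
            nn.BatchNorm2d(base_channels * 4),
            nn.LeakyReLU(0.2, inplace=True),
            # -> 8*base x 4 x 4
            nn.Conv2d(base_channels * 4, base_channels * 8, 4, stride=2, padding=1),
            nn.BatchNorm2d(base_channels * 8),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # K real food classes + 1 "fake" class for generated samples
        self.classifier = nn.Linear(base_channels * 8 * 4 * 4, num_classes + 1)

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h.flatten(1))
```

In this kind of semi-supervised setup, labeled images are trained with cross-entropy on their true class, unlabeled real images are only penalized for being assigned to the fake class, and generator samples are pushed toward the fake class, which is one plausible reading of how partially labeled data could be exploited here.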
ISSN | 2475-1472 |
DOI | 10.1109/LSENS.2018.2886427 |