PM2.5 Monitoring: Use Information Abundance Measurement and Wide and Deep Learning
| Published in | IEEE Transactions on Neural Networks and Learning Systems, Vol. 32, no. 10, pp. 4278–4290 |
|---|---|
| Main Authors | , , , , , |
| Format | Journal Article |
| Language | English |
| Published | Piscataway: IEEE, 01.10.2021 (The Institute of Electrical and Electronics Engineers, Inc.) |
| Summary | This article devises a photograph-based monitoring model to estimate real-time PM2.5 concentrations, overcoming the shortcomings of currently popular electrochemical sensor-based PM2.5 monitoring methods, such as low-density spatial distribution and time delay. With the proposed monitoring model, photographs taken by various camera devices (e.g., surveillance cameras, automobile data recorders, and mobile phones) can be used to monitor PM2.5 concentration widely across megacities. This provides helpful decision-making information for atmospheric forecasting and control, thus helping reduce the epidemic of COVID-19. Specifically, the proposed model fuses Information Abundance measurement and Wide and Deep learning, dubbed IAWD, for PM2.5 monitoring. First, our model extracts two categories of features in a newly proposed DS transform space to measure the information abundance (IA) of a given photograph, since growth in PM2.5 concentration decreases a photograph's IA. Second, to simultaneously possess the advantages of memorization and generalization, a new wide and deep neural network is devised to learn a nonlinear mapping between the above-mentioned extracted features and the ground-truth PM2.5 concentration. Experiments on two recently established datasets comprising more than 100,000 photographs in total demonstrate the effectiveness of our extracted features and the superiority of our proposed IAWD model compared to state-of-the-art relevant computing techniques. |
|---|---|
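The wide-and-deep mapping mentioned in the summary (a wide, memorization-oriented branch plus a deep, generalization-oriented branch, summed into one estimate) can be sketched as follows. This is a minimal NumPy sketch, not the authors' implementation: the feature count, hidden size, and activation are assumptions, and real use would require the paper's IA features and training on labeled data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 8 hand-crafted information-abundance features
# feed both the wide (memorization) and deep (generalization) branches.
n_features, hidden = 8, 16

# Wide branch: a single linear layer on the raw features.
W_wide = rng.normal(scale=0.1, size=(n_features, 1))

# Deep branch: a small MLP (one hidden ReLU layer) on the same features.
W1 = rng.normal(scale=0.1, size=(n_features, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.1, size=(hidden, 1))

def predict_pm25(x: np.ndarray) -> np.ndarray:
    """Sum the wide and deep branch outputs into one PM2.5 estimate per row."""
    wide = x @ W_wide                       # memorization path
    deep = np.maximum(x @ W1 + b1, 0) @ W2  # generalization path
    return (wide + deep).ravel()

# A batch of 4 photographs' feature vectors -> 4 concentration estimates.
features = rng.normal(size=(4, n_features))
print(predict_pm25(features).shape)  # (4,)
```

In the Wide & Deep design the two branches share one output unit, so the wide part can memorize simple feature-to-concentration correlations while the deep part generalizes to unseen feature combinations; both sets of weights would be fit jointly by regression against ground-truth PM2.5 readings.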
| ISSN | 2162-237X |
| EISSN | 2162-2388 |
| DOI | 10.1109/TNNLS.2021.3105394 |