Multichannel data from temporal and contextual information for early wildfire detection

Bibliographic Details
Published in: 2023 8th International Conference on Smart and Sustainable Technologies (SpliTech), pp. 1-6
Main Authors: Krstinic, Damir; Seric, Ljiljana; Ivanda, Antonia; Bugaric, Marin
Format: Conference Proceeding
Language: English
Published: University of Split, FESB, 20.06.2023

Summary: Modern forest fire surveillance systems offer automatic observers to assist human monitoring. Intelligent algorithms analyze the video stream for early visual signs of fire (smoke during the day, flames at night) at large expected detection distances. In the early stage of a fire, smoke occupies a very small part of the image. Degradations such as mist, dust, camera shake, pronounced sunlight effects, and dirt on the camera lens lower image quality. In this phase, smoke is often hard to distinguish even for the human operator responsible for confirming an alarm. All of the above make detecting early visible signs of a forest fire a complex task. Deep learning algorithms applied to emerging-smoke footage typically perform worse than on other problems, with a high false alarm rate. In this paper, we study the possibility of using other available information that defines the context and dynamic characteristics of an image. This information is merged into a multi-channel image. The information content of the resulting data set is evaluated by applying the same neural network architecture to the original RGB images collected from surveillance cameras and to the compiled multichannel images. The obtained results encourage further research in this direction.
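The abstract describes merging contextual and dynamic information with the camera's RGB frame into a single multi-channel image before feeding it to the detector network. A minimal sketch of that idea in NumPy, with two hypothetical extra channels (a temporal-difference map and a static context map; the specific channels used in the paper are not listed in this record):

```python
import numpy as np

def build_multichannel_image(rgb, temporal_diff, context_map):
    """Stack an RGB frame with extra per-pixel channels.

    rgb           -- H x W x 3 uint8 frame from the surveillance camera
    temporal_diff -- H x W map of frame-to-frame change (dynamics), hypothetical
    context_map   -- H x W map of static scene context (e.g. sky mask), hypothetical
    Returns an H x W x 5 float32 array suitable as network input.
    """
    rgb = rgb.astype(np.float32) / 255.0               # normalize color channels
    extra = np.stack([temporal_diff, context_map],     # H x W x 2 extra channels
                     axis=-1).astype(np.float32)
    return np.concatenate([rgb, extra], axis=-1)       # H x W x 5

# Toy example on a 4x4 frame
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
diff = np.ones((4, 4), dtype=np.float32)
ctx = np.zeros((4, 4), dtype=np.float32)
multi = build_multichannel_image(rgb, diff, ctx)
print(multi.shape)  # (4, 4, 5)
```

Because the extra information enters as additional input channels, the same convolutional architecture can be trained on plain RGB (3 channels) and on the compiled multichannel images (here 5 channels) by changing only the first layer's input depth, which matches the comparison described in the abstract.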
DOI:10.23919/SpliTech58164.2023.10192982