Comparative evaluation of machine learning algorithms for rainfall prediction to improve rice crops production

Bibliographic Details
Published in: Mehran University Research Journal of Engineering and Technology, Vol. 43, No. 3, pp. 1-14
Main Authors: Akram, Beenish Ayesha; Zafar, Amna; Waheed, Talha; Khurshid, Khaldoon; Mahmood, Tayyab
Format: Journal Article
Language: English
Published: Mehran University of Engineering and Technology, 01.07.2024
Summary: Rainfall has a huge impact on agriculture because it is one of the key causes of crop devastation. Farmers face a slew of problems when unexpected heavy rains fall and their planted crops are washed away or damaged. Pakistan is an agricultural country where new methods and techniques are needed to improve traditional farming practices. This research aims to help protect crops from severe rains by using machine learning to accurately anticipate the likelihood of rainfall, a well-known agricultural problem. Weather factors such as temperature, humidity, and atmospheric pressure can be used to predict rainfall patterns. Rainfall prediction can identify and furnish future rainfall descriptions for agricultural planning and food security, allowing farmers to take precautionary measures to safeguard rice fields. Naïve Bayes, LogitBoost, RIPPER, Decision Stump, AdaBoost, Random Forest, Artificial Neural Network (ANN), and K* were evaluated for rainfall prediction in terms of accuracy, precision, recall, F1-measure, Root Mean Squared Error, area under the receiver operating characteristic curve, elapsed training time, and elapsed testing time. The results indicate that the best performance is achieved by Random Forest, with a maximum accuracy of 83.2%, followed by ANN (82.5%), LogitBoost (82.2%), RIPPER (82%), Naïve Bayes (80.3%), AdaBoost (80.2%), and K* (79.2%). K*, a lazy learner, required the minimum training time but the maximum testing time, while Random Forest consumed the maximum training time and Decision Stump took the minimum testing time.
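The summary compares the classifiers using accuracy, precision, recall, and F1-measure. As a minimal illustrative sketch (not the paper's code), these four metrics can be computed from binary rain/no-rain predictions as shown below; the label vectors are invented for demonstration only:

```python
def binary_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 for a binary
    rain (1) / no-rain (0) classification task."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    # Guard against division by zero when a class is never predicted
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# Hypothetical labels: 1 = rain, 0 = no rain
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
acc, prec, rec, f1 = binary_metrics(y_true, y_pred)
```

In practice a study like this would obtain `y_pred` from each trained classifier on a held-out test set and compare the resulting metric values across models.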
ISSN: 0254-7821, 2413-7219
DOI: 10.22581/muet1982.2232