No Smurfs: Revealing Fraud Chains in Mobile Money Transfers

Bibliographic Details
Published in: 2014 Ninth International Conference on Availability, Reliability and Security, pp. 11-20
Main Authors: Zhdanova, Maria; Repp, Jürgen; Rieke, Roland; Gaber, Chrystel; Hémery, Baptiste
Format: Conference Proceeding
Language: English
Published: IEEE, 01.09.2014

Summary: Mobile Money Transfer (MMT) services provided by mobile network operators enable funds transfers made on the mobile devices of end-users, using a digital equivalent of cash (electronic money) without any bank accounts involved. MMT simplifies banking relationships and facilitates financial inclusion, and is therefore expanding rapidly around the world, especially in developing countries. MMT systems are subject to the same controls as those required for financial institutions, including the detection of Money Laundering (ML), a source of concern for MMT service providers. In this paper we focus on an often-practiced ML technique known as micro-structuring of funds, or smurfing, and introduce a new method for detecting fraud chains in MMT systems. Whereas classical detection methods are based on machine learning and data mining, this work builds on Predictive Security Analysis at Runtime (PSA@R), a model-based approach for event-driven process analysis. We provide an extension to PSA@R that allows us to identify fraudsters in an MMT service by monitoring the network behavior of its end-users. We evaluate our method on simulated transaction logs containing approximately 460,000 transactions for 10,000 end-users and compare it with classical fraud detection approaches. With 99.81% precision and 90.18% recall, we achieve better recognition performance than the state of the art.
DOI: 10.1109/ARES.2014.10
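
The summary names the target pattern (micro-structuring, or smurfing) and reports precision and recall on simulated transaction logs, but this record contains no implementation details. The sketch below is not the paper's PSA@R extension; it is a minimal, generic threshold heuristic for the smurfing pattern (many sub-threshold deposits collected and quickly forwarded), together with the standard precision/recall computation, to make the evaluated quantities concrete. The Txn record, all thresholds, and the helper names are assumptions made for illustration only.

from collections import defaultdict
from dataclasses import dataclass

# Hypothetical transaction record; the field names are illustrative and not
# taken from the paper's simulated logs.
@dataclass
class Txn:
    sender: str
    receiver: str
    amount: float
    timestamp: int  # seconds since some epoch

# Assumed parameters for the illustration (not values from the paper).
REPORTING_THRESHOLD = 1000.0   # deposits below this look "structured"
MIN_SMALL_DEPOSITS = 5         # minimum number of small deposits to consider
FORWARD_WINDOW = 24 * 3600     # pass-through window after the last deposit, in seconds
FORWARD_RATIO = 0.8            # share of collected funds that must be forwarded

def flag_smurfs(txns):
    """Flag accounts that collect many sub-threshold deposits and quickly
    forward most of the collected sum: a simple smurfing signature."""
    inflow = defaultdict(list)   # receiver -> [(timestamp, amount), ...]
    outflow = defaultdict(list)  # sender   -> [(timestamp, amount), ...]
    for t in txns:
        inflow[t.receiver].append((t.timestamp, t.amount))
        outflow[t.sender].append((t.timestamp, t.amount))

    flagged = set()
    for acct, deposits in inflow.items():
        small = [(ts, amt) for ts, amt in deposits if amt < REPORTING_THRESHOLD]
        if len(small) < MIN_SMALL_DEPOSITS:
            continue
        total_in = sum(amt for _, amt in small)
        last_in = max(ts for ts, _ in small)
        forwarded = sum(amt for ts, amt in outflow.get(acct, [])
                        if 0 <= ts - last_in <= FORWARD_WINDOW)
        if forwarded >= FORWARD_RATIO * total_in:
            flagged.add(acct)
    return flagged

def precision_recall(flagged, true_fraudsters):
    """Standard precision and recall, the metrics reported in the summary."""
    tp = len(flagged & true_fraudsters)
    fp = len(flagged - true_fraudsters)
    fn = len(true_fraudsters - flagged)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

Note that the paper's actual approach is model-based and event-driven (PSA@R) rather than a fixed threshold rule; the heuristic above only illustrates the kind of transaction-log evidence such a fraud-chain detector is scored against.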