Designing Multi-Modal Embedding Fusion-Based Recommender
| Published in | Electronics (Basel), Vol. 11, No. 9, p. 1391 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | Basel: MDPI AG, 01.05.2022 |
Summary: Recommendation systems have recently become popular worldwide; however, they often need to be adapted to the particular data and use case. We have developed a machine-learning-based recommendation system that can be easily applied to almost any domain of items and/or actions. In contrast to existing recommendation systems, ours supports multiple types of interaction data with various modalities of metadata through a multi-modal fusion of different data representations. We have deployed the system in numerous e-commerce stores, e.g., food and beverages, shoes, fashion items, and telecom operators. We present our system and its main algorithms for data representation and multi-modal fusion, show benchmark results on open datasets that outperform prior state-of-the-art work, and demonstrate use cases for different e-commerce sites.
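The summary only states that item metadata of different modalities is combined through multi-modal fusion of data representations; the concrete algorithm is described in the full text. As an illustrative sketch under that assumption, the snippet below shows one common way to fuse per-modality item embeddings (interaction, text, and image embeddings are hypothetical examples, as are all class, parameter, and dimension names) into a single item vector via per-modality projections and a learned weighting. It is not the authors' method, only a generic instance of embedding fusion.

```python
import torch
import torch.nn as nn

class MultiModalItemFusion(nn.Module):
    """Illustrative sketch: fuse per-modality item embeddings into one vector.

    Modality names and dimensions are hypothetical; the paper's actual fusion
    algorithm is given in the full text, not in this record.
    """

    def __init__(self, dims: dict, fused_dim: int = 128):
        super().__init__()
        # One linear projection per modality, mapping each embedding
        # into a shared fused_dim-sized space.
        self.projections = nn.ModuleDict(
            {name: nn.Linear(d, fused_dim) for name, d in dims.items()}
        )
        # One learnable scalar weight per modality (softmax-normalised below).
        self.weights = nn.Parameter(torch.zeros(len(dims)))
        self.order = list(dims)

    def forward(self, embeddings: dict) -> torch.Tensor:
        # Project every modality, then combine with a learned convex weighting.
        projected = torch.stack(
            [self.projections[name](embeddings[name]) for name in self.order],
            dim=0,
        )
        alpha = torch.softmax(self.weights, dim=0).view(-1, 1, 1)
        return (alpha * projected).sum(dim=0)  # shape: (batch, fused_dim)


# Hypothetical usage: a batch of 4 items with three metadata modalities.
fusion = MultiModalItemFusion(
    {"interactions": 64, "text": 300, "image": 512}, fused_dim=128
)
batch = {
    "interactions": torch.randn(4, 64),
    "text": torch.randn(4, 300),
    "image": torch.randn(4, 512),
}
item_vectors = fusion(batch)
print(item_vectors.shape)  # torch.Size([4, 128])
```

In a setup like this, the fused item vectors could then be scored against a user representation (e.g., by dot product or nearest-neighbour search) to produce recommendations; missing modalities can be handled by omitting or zero-weighting the corresponding projection.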
ISSN: 2079-9292
DOI: 10.3390/electronics11091391