Multimodal Recommendation
16 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
A Tale of Two Graphs: Freezing and Denoising Graph Structures for Multimodal Recommendation
Based on this finding, we propose a simple yet effective model, dubbed FREEDOM, that FREEzes the item-item graph and DenOises the user-item interaction graph simultaneously for multimodal recommendation.
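The two ideas in the blurb can be sketched with plain NumPy: the item-item graph is built once from (pretrained) multimodal item features and never updated during training, while the user-item interaction matrix is denoised each step. Random edge dropout stands in for the denoising here; the paper's exact pruning scheme may differ, and all array sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical multimodal item features (100 items, 64-dim)
item_feats = rng.standard_normal((100, 64))

# 1) "freezing": build the item-item similarity graph ONCE from the
#    multimodal features; it receives no gradients and is never rebuilt
norm = item_feats / np.linalg.norm(item_feats, axis=1, keepdims=True)
item_graph = norm @ norm.T  # frozen cosine-similarity graph

# 2) "denoising": perturb the binary user-item interaction matrix each
#    training step, sketched here as random edge dropout
interactions = (rng.random((50, 100)) < 0.05).astype(float)
drop_mask = rng.random(interactions.shape) >= 0.1  # drop ~10% of edges
denoised = interactions * drop_mask
```

The key design point is asymmetry: the item-item graph is treated as trustworthy (it comes from content features), while observed interactions are treated as noisy.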
A Comprehensive Survey on Multimodal Recommender Systems: Taxonomy, Evaluation, and Future Directions
Recommender systems have become popular and effective tools that help users discover items of interest by modeling user preferences and item properties from implicit interactions (e.g., purchases and clicks).
End-to-end training of Multimodal Model and ranking Model
In this paper, we propose an industrial multimodal recommendation framework named EM3 (End-to-end training of Multimodal Model and ranking Model). EM3 fully exploits multimodal information and lets personalized ranking tasks directly train the core modules of the multimodal model, yielding more task-oriented content features without excessive resource consumption.
MMGCN: Multi-modal Graph Convolution Network for Personalized Recommendation of Micro-video
Existing works on multimedia recommendation largely exploit multi-modal contents to enrich item representations, while less effort is made to leverage information interchange between users and items to enhance user representations and further capture user's fine-grained preferences on different modalities.
Mining Latent Structures for Multimedia Recommendation
To be specific, in the proposed LATTICE model, we devise a novel modality-aware structure learning layer, which learns item-item structures for each modality and aggregates multiple modalities to obtain latent item graphs.
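The "modality-aware structure learning" described above can be illustrated as a small sketch: per modality, build an item-item kNN graph from feature similarity, then aggregate the per-modality graphs into one latent item graph. The function names, `k`, and the uniform aggregation weights are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def knn_graph(features, k=10):
    """Illustrative item-item graph from one modality's item features."""
    # cosine similarity between item feature vectors
    norm = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = norm @ norm.T
    # kNN sparsification: keep each item's top-k neighbours
    # (self-similarity included), zero out the rest
    adj = np.zeros_like(sim)
    top = np.argsort(-sim, axis=1)[:, :k]
    rows = np.arange(sim.shape[0])[:, None]
    adj[rows, top] = sim[rows, top]
    return adj

def aggregate_modalities(modality_feats, weights=None):
    """Combine per-modality graphs into one latent item graph
    (uniform weights here; LATTICE learns the importance weights)."""
    graphs = [knn_graph(f) for f in modality_feats]
    if weights is None:
        weights = [1.0 / len(graphs)] * len(graphs)
    return sum(w * g for w, g in zip(weights, graphs))
```

Usage: with visual and textual item features `[v_feats, t_feats]` of shape `(n_items, d)`, `aggregate_modalities([v_feats, t_feats])` returns an `(n_items, n_items)` latent graph ready for message passing.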
Enhancing Dyadic Relations with Homogeneous Graphs for Multimodal Recommendation
On top of this finding, we propose a model that enhances the dyadic relations by learning Dual RepresentAtions of both users and items via constructing homogeneous Graphs for multimOdal recommeNdation.
MMRec: Simplifying Multimodal Recommendation
This paper presents MMRec, an open-source toolbox for multimodal recommendation.
Multimodal Recommendation Dialog with Subjective Preference: A New Challenge and Benchmark
Existing multimodal task-oriented dialog datasets fail to capture the diverse expressions of users' subjective preferences and recommendation acts that arise in real-life shopping scenarios.
Ducho: A Unified Framework for the Extraction of Multimodal Features in Recommendation
Motivated by the outlined aspects, we propose Ducho, a unified framework for the extraction of multimodal features in recommendation.
LightGT: A Light Graph Transformer for Multimedia Recommendation
Considering the challenges in effectiveness and efficiency, we propose a novel Transformer-based recommendation model, termed Light Graph Transformer (LightGT).