Author: Budhiraja, Amar
Title: Revisiting SVD to generate powerful Node Embeddings for Recommendation Systems
Document date: 2021_10_5
ID: 3joctugn
Document: Graph Representation Learning (GRL) is an emerging and promising area in recommendation systems. In this paper, we revisit the Singular Value Decomposition (SVD) of the adjacency matrix to generate user and item embeddings, and train a two-layer neural network on top of these embeddings to learn the relevance of user-item pairs. Inspired by the success of higher-order learning in GRL, we further extend this method to include two-hop neighbors by applying SVD to the second order of the adjacency matrix, and demonstrate improved performance over the simple SVD method, which uses only one-hop neighbors. Empirical validation on three publicly available recommendation-system datasets shows that the proposed methods, despite being simple, beat many state-of-the-art methods, and on two of the three datasets outperform all of them by a margin of up to 10%. Through our research, we want to shed light on the effectiveness of matrix factorization approaches, specifically SVD, in the deep learning era, and to show that these methods still serve as important baselines in recommendation systems.
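
The pipeline the abstract describes is simple enough to sketch. The following is a minimal illustration, not the authors' code: the embedding dimension, the MLP width, the loss, and the negative-sampling step are all assumptions for illustration, and the paper's exact second-order (two-hop) construction is only gestured at in a comment.

import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds
import torch
import torch.nn as nn

# Toy binary user-item interaction (adjacency) matrix.
rng = np.random.default_rng(0)
n_users, n_items, d = 500, 300, 32
A = sparse_random(n_users, n_items, density=0.02, format="csr", random_state=0)
A.data[:] = 1.0

# One-hop variant: truncated SVD of A yields user and item embeddings,
# with the singular values split evenly across both sides.
U, s, Vt = svds(A, k=d)
user_emb = U * np.sqrt(s)
item_emb = Vt.T * np.sqrt(s)

# Two-hop variant (assumption: one way to fold in second-order structure
# is to factorize A + alpha * (A @ A.T @ A); the paper's exact construction
# of the second order of the adjacency matrix may differ).

# Two-layer relevance network on concatenated user/item embeddings.
class Relevance(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * d, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, u, v):
        return self.net(torch.cat([u, v], dim=-1)).squeeze(-1)

model = Relevance(d)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# One toy training step: positives are observed user-item pairs,
# negatives are randomly sampled items for the same users.
rows, cols = A.nonzero()
pos_u = torch.tensor(user_emb[rows[:256]], dtype=torch.float32)
pos_i = torch.tensor(item_emb[cols[:256]], dtype=torch.float32)
neg_i = torch.tensor(item_emb[rng.integers(0, n_items, 256)], dtype=torch.float32)
logits = torch.cat([model(pos_u, pos_i), model(pos_u, neg_i)])
labels = torch.cat([torch.ones(256), torch.zeros(256)])
loss = bce(logits, labels)
opt.zero_grad(); loss.backward(); opt.step()

At inference time, relevance scores for a user are obtained by running the trained network over that user's embedding paired with each candidate item embedding and ranking by the resulting logits.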