Java资源分享网 - a professional Java learning site
Knowledge Graph + Recommender System Paper - PDF Download
Posted by an anonymous user on 2024-07-05 10:03:15

[Figure 1]


Sample content:

3.3 Learning Algorithm
Through a single KGCN layer, the final representation of an entity
is dependent on itself as well as its immediate neighbors, which we
name the 1-order entity representation. It is natural to extend KGCN
from one layer to multiple layers to reasonably explore users' potential
interests in a broader and deeper way. The technique is intuitive:
Propagating the initial representation of each entity (0-order repre-
sentation) to its neighbors leads to 1-order entity representation,
then we can repeat this procedure, i.e., further propagating and ag-
gregating 1-order representations to obtain 2-order ones. Generally
speaking, the h-order representation of an entity is a mixture of
initial representations of itself and its neighbors up to h hops away.
This is an important property for KGCN, which we will discuss in
the next subsection.
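To make the h-hop mixing concrete, here is a toy sketch (not the paper's implementation): repeated averaging of self and neighbor representations on a small hand-made path graph. The graph, entity IDs, and one-hot initial vectors are illustrative assumptions; they only serve to show that after h rounds an entity's vector carries mass from nodes up to h hops away.

```python
# Toy path graph 0-1-2-3 (made up for illustration).
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

# 0-order representations: one-hot vectors, so the mixing is easy to read off.
reps = {e: [1.0 if i == e else 0.0 for i in range(4)] for e in neighbors}

def propagate(reps, neighbors):
    """One layer: new representation = average of own rep and neighbor reps."""
    new = {}
    for e, nbrs in neighbors.items():
        group = [reps[e]] + [reps[n] for n in nbrs]
        new[e] = [sum(col) / len(group) for col in zip(*group)]
    return new

h1 = propagate(reps, neighbors)   # 1-order: self plus 1-hop neighbors
h2 = propagate(h1, neighbors)     # 2-order: self plus nodes up to 2 hops away

# Entity 2 is two hops from entity 0: the 1-order rep of entity 0 has no
# mass on component 2, but its 2-order rep does.
print(h1[0][2], h2[0][2])
```

With one-hot inputs, h1[0][2] is exactly 0 while h2[0][2] is positive, mirroring the claim that the h-order representation mixes initial representations up to h hops away.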
The formal description of the above steps is presented in Algorithm 1.
H denotes the maximum depth of the receptive field (or equivalently,
the number of aggregation iterations), and a suffix [h] attached to a
representation vector denotes h-order. For a given user-item pair
(u, v) (line 2), we first calculate the receptive field M of v in an
iterative, layer-by-layer manner (lines 3, 13-19). Then the
aggregation is repeated H times (line 5): in iteration h, we calculate
the neighborhood representation of each entity e ∈ M[h] (line 7),
then aggregate it with its own representation e^u[h−1] to obtain
the one to be used in the next iteration (line 8). The final H-order
entity representation is denoted v^u (line 9), which is fed into a
function f : R^d × R^d → R together with the user representation u to
predict the probability:
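The per-pair computation described above can be sketched as follows. This is a simplified, self-contained rendition, not the paper's exact setup: the toy KG, embedding dimension, random embeddings, the plain-mean neighborhood representation (the paper weights neighbors by user-relation attention), the tanh "sum"-style aggregator without learned weights, and the sigmoid inner-product scorer f are all illustrative assumptions.

```python
import math
import random

random.seed(0)
D, H = 4, 2                               # embedding dim, max depth (assumed)
kg = {0: [1, 2], 1: [3], 2: [3], 3: [1]}  # entity -> sampled neighbors (toy)

ent = {e: [random.uniform(-1, 1) for _ in range(D)] for e in kg}
user = [random.uniform(-1, 1) for _ in range(D)]

def forward(u, v):
    # Receptive field, layer by layer: M[H] = {v};
    # M[h] = M[h+1] plus the neighbors of everything in M[h+1].
    M = {H: {v}}
    for h in range(H - 1, -1, -1):
        M[h] = set(M[h + 1])
        for e in M[h + 1]:
            M[h].update(kg[e])

    # 0-order representations are the raw entity embeddings.
    rep = {e: list(ent[e]) for e in M[0]}
    for h in range(1, H + 1):
        nxt = {}
        for e in M[h]:
            # Neighborhood representation: plain mean of the neighbors'
            # (h-1)-order reps (stand-in for the paper's attention weighting).
            nbr = [rep[n] for n in kg[e]]
            mean = [sum(col) / len(nbr) for col in zip(*nbr)]
            # Simplified aggregator: tanh(self + neighborhood).
            nxt[e] = [math.tanh(a + b) for a, b in zip(rep[e], mean)]
        rep = nxt

    v_u = rep[v]  # final H-order item representation
    # f: inner product squashed by a sigmoid -> predicted probability.
    score = sum(a * b for a, b in zip(u, v_u))
    return 1.0 / (1.0 + math.exp(-score))

p = forward(user, 0)
print(p)
```

Note how the receptive field shrinks as h grows: only the entities still needed for later iterations are kept in M[h], so by iteration H just v itself remains, yielding v^u.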