Bootstrapping User and Item Representations for One-Class Collaborative Filtering

DC Field: Value (Language)
dc.contributor.author: Lee, Dongha (ko)
dc.contributor.author: Kang, SeongKu (ko)
dc.contributor.author: Ju, Hyunjun (ko)
dc.contributor.author: Park, Chanyoung (ko)
dc.contributor.author: Yu, Hwanjo (ko)
dc.date.accessioned: 2021-07-14T01:30:40Z
dc.date.available: 2021-07-14T01:30:40Z
dc.date.created: 2021-05-13
dc.date.issued: 2021-07-11
dc.identifier.citation: ACM SIGIR Conference on Research and Development in Information Retrieval
dc.identifier.uri: http://hdl.handle.net/10203/286692
dc.description.abstract: The goal of one-class collaborative filtering (OCCF) is to identify user-item pairs that are positively related but whose interactions have not yet been observed, given that only a small portion of positive user-item interactions (e.g., users’ implicit feedback) is available. For discriminative modeling of positive and negative interactions, most previous work relies to some extent on negative sampling, i.e., treating unobserved user-item pairs as negative, since actual negative pairs are unknown. However, negative sampling has a critical limitation: it may select “positive but unobserved” pairs as negative. This paper proposes a novel OCCF framework, named BUIR, which does not require negative sampling. To make the representations of positively related users and items similar to each other while avoiding a collapsed solution, BUIR adopts two distinct encoder networks that learn from each other; the first encoder is trained to predict the output of the second encoder as its target, while the second encoder provides consistent targets by slowly approximating the first encoder. In addition, BUIR effectively alleviates the data sparsity issue of OCCF by applying stochastic data augmentation to encoder inputs. Based on the neighborhood information of users and items, BUIR randomly generates augmented views of each positive interaction each time it is encoded, and further trains the model with this self-supervision. Our extensive experiments demonstrate that BUIR consistently and significantly outperforms all baseline methods by a large margin, especially on highly sparse datasets in which assumptions about negative interactions are less valid.
dc.language: English
dc.publisher: ACM
dc.title: Bootstrapping User and Item Representations for One-Class Collaborative Filtering
dc.type: Conference
dc.type.rims: CONF
dc.citation.publicationname: ACM SIGIR Conference on Research and Development in Information Retrieval
dc.identifier.conferencecountry: CN
dc.identifier.conferencelocation: Virtual
dc.identifier.doi: 10.1145/3404835.3462935
dc.contributor.localauthor: Park, Chanyoung
dc.contributor.nonIdAuthor: Lee, Dongha
dc.contributor.nonIdAuthor: Kang, SeongKu
dc.contributor.nonIdAuthor: Ju, Hyunjun
dc.contributor.nonIdAuthor: Yu, Hwanjo
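
The following is a minimal PyTorch sketch of the bootstrapping scheme summarized in the abstract, included for illustration only. It is not the authors' released implementation: the class name BUIRSketch, the embedding-based encoders, the linear predictor head, the momentum value, and the cosine-similarity loss are assumptions chosen to mirror the description (an online encoder is trained to predict the output of a target encoder, while the target encoder provides consistent targets by slowly approximating the online encoder via an exponential moving average); the neighborhood-based data augmentation is omitted.

    import copy
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BUIRSketch(nn.Module):
        """Illustrative sketch: two encoders that learn from each other, no negative sampling."""

        def __init__(self, num_users, num_items, dim=64, momentum=0.995):
            super().__init__()
            self.momentum = momentum
            # Online encoder: ID embeddings plus a predictor head.
            self.online_user = nn.Embedding(num_users, dim)
            self.online_item = nn.Embedding(num_items, dim)
            self.predictor = nn.Linear(dim, dim)
            # Target encoder: a slowly-moving copy that is never updated by gradients.
            self.target_user = copy.deepcopy(self.online_user)
            self.target_item = copy.deepcopy(self.online_item)
            for p in list(self.target_user.parameters()) + list(self.target_item.parameters()):
                p.requires_grad = False

        @torch.no_grad()
        def update_target(self):
            # The target encoder slowly approximates the online encoder (exponential moving average).
            pairs = [(self.online_user, self.target_user), (self.online_item, self.target_item)]
            for online, target in pairs:
                for po, pt in zip(online.parameters(), target.parameters()):
                    pt.mul_(self.momentum).add_(po, alpha=1.0 - self.momentum)

        def forward(self, users, items):
            # For each observed positive pair, the online side predicts the target
            # representation of the other side; no unobserved pair is treated as negative.
            pred_from_user = self.predictor(self.online_user(users))
            pred_from_item = self.predictor(self.online_item(items))
            with torch.no_grad():  # targets provide no gradient
                target_u = self.target_user(users)
                target_i = self.target_item(items)
            loss = (2.0
                    - F.cosine_similarity(pred_from_user, target_i, dim=-1)
                    - F.cosine_similarity(pred_from_item, target_u, dim=-1)).mean()
            return loss

A hypothetical training step on a batch of observed (user, item) interactions would then back-propagate through the online encoder and predictor only, and move the target encoder slightly toward the online one afterwards:

    model = BUIRSketch(num_users=1000, num_items=2000)
    optimizer = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-3)
    users = torch.randint(0, 1000, (256,))   # toy batch of observed positive pairs
    items = torch.randint(0, 2000, (256,))
    loss = model(users, items)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    model.update_target()
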
Appears in Collection
IE-Conference Papers (학술회의논문)
Files in This Item
There are no files associated with this item.
