GPUTucker: Large-Scale GPU-Based Tucker Decomposition Using Tensor Partitioning

Tucker decomposition is used extensively for modeling multi-dimensional data represented as tensors. Owing to the growing number of nonzero values in real-world tensors, demand has emerged for fast and scalable Tucker decomposition techniques. Several graphics processing unit (GPU)-accelerated techniques have been proposed to speed up Tucker decomposition. However, these approaches often struggle with large tensors because their memory demands exceed the available GPU memory. This study presents a scalable GPU-based technique for Tucker decomposition called GPUTucker. The proposed method partitions large tensors into smaller sub-tensors, referred to as tensor blocks, and implements a GPU-based data pipeline that processes these tensor blocks asynchronously. Extensive experiments demonstrate that GPUTucker outperforms state-of-the-art Tucker decomposition methods in terms of decomposition speed and scalability.
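The core idea sketched in the abstract, partitioning a large sparse tensor into smaller tensor blocks so that each block fits in GPU memory, can be illustrated in a few lines. This is a minimal sketch under assumed names and a uniform block grid; it is not GPUTucker's actual API or partitioning scheme.

```python
# Hypothetical sketch of tensor-block partitioning: nonzeros of an N-way
# sparse tensor in COO form are grouped into a grid of sub-tensors
# ("tensor blocks"). Function and parameter names are illustrative
# assumptions, not taken from GPUTucker.
import numpy as np
from collections import defaultdict

def partition_coo(indices, values, dims, grid):
    """Group nonzeros of an N-way COO tensor into a grid of tensor blocks.

    indices : (nnz, N) int array of nonzero coordinates
    values  : (nnz,) array of nonzero values
    dims    : tensor shape, length N
    grid    : number of blocks along each mode, length N
    Returns a dict mapping a block coordinate tuple to (local_indices, values).
    """
    dims = np.asarray(dims)
    grid = np.asarray(grid)
    block_size = -(-dims // grid)  # ceil division: extent of a block per mode
    blocks = defaultdict(lambda: ([], []))
    for idx, val in zip(indices, values):
        bid = tuple(idx // block_size)             # which block the nonzero lands in
        local = idx - np.asarray(bid) * block_size  # coordinates inside that block
        blocks[bid][0].append(local)
        blocks[bid][1].append(val)
    return {b: (np.array(ix), np.array(vs)) for b, (ix, vs) in blocks.items()}

# Toy 4x4x4 tensor with 3 nonzeros, split into a 2x2x2 block grid.
idx = np.array([[0, 0, 0], [3, 3, 3], [0, 3, 0]])
val = np.array([1.0, 2.0, 3.0])
parts = partition_coo(idx, val, dims=(4, 4, 4), grid=(2, 2, 2))
```

In a GPU pipeline of the kind the abstract describes, each such block would be transferred and processed independently, which is what makes asynchronous overlap of transfer and computation possible; the sketch above only shows the partitioning step itself.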
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
Issue Date
2024-03
Language
English
Article Type
Article
Citation
EXPERT SYSTEMS WITH APPLICATIONS, v.237
ISSN
0957-4174
DOI
10.1016/j.eswa.2023.121445
URI
http://hdl.handle.net/10203/314348
Appears in Collection
CS-Journal Papers (Journal Papers)