Large-scale incremental processing for MapReduce

Abstract

An important property of today's big data processing is that the same computation is often repeated on datasets that evolve over time, such as web and social network data. The same style of repeated computation also appears in many iterative algorithms. While repeating the full computation over the entire dataset is feasible with distributed computing frameworks such as Hadoop, it is obviously inefficient and wastes resources.

In this dissertation, we present HadUP (Hadoop with Update Processing), a modified Hadoop architecture tailored to large-scale incremental processing for conventional MapReduce algorithms. Several approaches have been proposed to achieve a similar goal using task-level memoization: they keep the previous results of tasks permanently and reuse them when the same computation on the same task input is needed again. However, task-level memoization detects changes in datasets at a coarse-grained level, which often makes such approaches ineffective. Our analysis reveals that task-level memoization can be effective only if each task processes a few KB of input data. In contrast, HadUP detects and computes changes in datasets at a fine-grained level using a deduplication-based snapshot differential algorithm (D-SD) and update propagation.

Update propagation is the key primitive for efficient incremental processing in HadUP. Many applications for today's big data processing consist of data-parallel operations, where an operation transforms one or more input datasets into one output dataset. For each operation, the same computation is applied concurrently to a single input record or to a group of input records. The independence between these executions allows us to compute the records to be inserted into or deleted from the output dataset, provided that the records inserted into or deleted from the input datasets are explicitly given. In this way, update propagation computes the updated result without full recomputation. Our evaluation shows that HadUP provides high pe...
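To make the update-propagation idea above concrete, the following is a minimal, self-contained Java sketch of a delta-based incremental word count. It is illustrative only and does not reflect HadUP's actual API; the names DeltaRecord, propagate, and mergeWithPrevious are hypothetical. Given the records inserted into (+1) or deleted from (-1) an input dataset, it computes the corresponding output deltas and folds them into the previously stored result, avoiding full recomputation.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of delta-based update propagation for an
// incremental word count. Not HadUP's actual interface.
public class UpdatePropagationSketch {

    // A record inserted into (+1) or deleted from (-1) an input dataset.
    record DeltaRecord(String line, int sign) {}

    // Apply the per-record computation only to the changed input records.
    static Map<String, Long> propagate(List<DeltaRecord> inputDeltas) {
        Map<String, Long> outputDeltas = new HashMap<>();
        for (DeltaRecord d : inputDeltas) {
            for (String word : d.line().split("\\s+")) {
                if (word.isEmpty()) continue;
                // Each inserted occurrence contributes +1, each deleted one -1.
                outputDeltas.merge(word, (long) d.sign(), Long::sum);
            }
        }
        return outputDeltas;
    }

    // Fold the output deltas into the previously computed result.
    static Map<String, Long> mergeWithPrevious(Map<String, Long> previous,
                                               Map<String, Long> deltas) {
        Map<String, Long> updated = new HashMap<>(previous);
        deltas.forEach((word, delta) -> {
            long count = updated.getOrDefault(word, 0L) + delta;
            if (count == 0) updated.remove(word);   // record deleted from the output
            else updated.put(word, count);          // record inserted or updated
        });
        return updated;
    }

    public static void main(String[] args) {
        Map<String, Long> previous = new HashMap<>(Map.of("big", 3L, "data", 5L));
        List<DeltaRecord> inputDeltas = List.of(
                new DeltaRecord("big data processing", +1),  // newly added line
                new DeltaRecord("data", -1));                // removed line
        Map<String, Long> updated = mergeWithPrevious(previous, propagate(inputDeltas));
        System.out.println(updated);  // counts: big=4, data=5, processing=1
    }
}

In a real MapReduce setting, propagate would roughly correspond to running the unchanged per-record computation over only the delta records identified by a snapshot differential such as D-SD, and mergeWithPrevious to merging the resulting output deltas with the previous output dataset.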
Advisors
Maeng, Seung-Ryoul (맹승렬)
Publisher
KAIST (Korea Advanced Institute of Science and Technology)
Issue Date
2014
Identifier
568602/325007 / 020037429
Language
eng
Description

Doctoral dissertation - KAIST : Department of Computer Science, 2014.2, [ vii, 69 p. ]

Keywords

Big data processing; Incremental processing; MapReduce; Hadoop; Data deduplication

URI
http://hdl.handle.net/10203/197814
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=568602&flag=dissertation
Appears in Collection
CS-Theses_Ph.D.(박사논문)
Files in This Item
There are no files associated with this item.
