DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Yoo, Chang Dong | - |
dc.contributor.advisor | 유창동 | - |
dc.contributor.author | Vu, Thang | - |
dc.date.accessioned | 2021-05-11T19:33:47Z | - |
dc.date.available | 2021-05-11T19:33:47Z | - |
dc.date.issued | 2019 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=875361&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/283067 | - |
dc.description | Thesis (Master's) - 한국과학기술원: 전기및전자공학부, 2019.8, [iv, 27 p.] | - |
dc.description.abstract | This thesis presents the End-to-end Residual Instance Segmentation (ERIS) network, which improves both mask prediction performance and inference speed by addressing the challenges of integrating multi-stage detection into segmentation. Compared to existing multi-stage instance segmentation methods, the proposed ERIS differs in two aspects: (1) it enhances sample diversity and model capacity in the mask branch by using a single deep mask subnetwork shared across the detection stages, and (2) it improves information flow through the network by using residual blocks in the mask (and possibly box) heads. Extensive experiments on the MSCOCO dataset show that ERIS achieves better segmentation performance than state-of-the-art methods while having faster inference speed. In particular, without bells and whistles, ERIS achieves mask AP gains of 1.3% over Cascade Mask R-CNN and 0.5% over Hybrid Task Cascade while being 10% faster. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | Instance segmentation; object detection; deep neural network | - |
dc.subject | Instance segmentation; object detection; deep neural network; two-stage object detector; multi-stage architecture | - |
dc.title | End-to-End residual learning for instance segmentation | - |
dc.title.alternative | 객체 분할을 위한 엔드 투 엔드 레지듀얼 러닝 | - |
dc.type | Thesis (Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | 한국과학기술원: 전기및전자공학부 | - |
dc.contributor.alternativeauthor | 탕부 | - |
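
The abstract above describes the two ideas behind ERIS: one deep mask subnetwork shared by every detection stage, and residual blocks inside that subnetwork to improve information flow. The following PyTorch sketch is a minimal, hypothetical illustration of that sharing pattern, not the thesis implementation; all class names, channel sizes, and the placeholder per-stage box heads are assumptions made for the example.

```python
# Minimal sketch (assumed PyTorch design, not the thesis code): a cascade of
# detection stages that all feed one shared, deep mask head built from residual blocks.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """3x3 conv block with an identity skip connection."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)  # residual connection eases information flow


class SharedResidualMaskHead(nn.Module):
    """One deep mask subnetwork reused by every detection stage."""

    def __init__(self, channels: int = 256, num_blocks: int = 4, num_classes: int = 80):
        super().__init__()
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(num_blocks)])
        self.upsample = nn.ConvTranspose2d(channels, channels, 2, stride=2)
        self.predict = nn.Conv2d(channels, num_classes, 1)

    def forward(self, roi_features: torch.Tensor) -> torch.Tensor:
        x = self.blocks(roi_features)
        x = torch.relu(self.upsample(x))
        return self.predict(x)  # per-class mask logits


class CascadeWithSharedMaskHead(nn.Module):
    """Toy cascade: each stage refines features independently, but every stage
    calls the same residual mask head (the sharing described in the abstract)."""

    def __init__(self, num_stages: int = 3, channels: int = 256):
        super().__init__()
        self.box_heads = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_stages)]
        )
        self.mask_head = SharedResidualMaskHead(channels)  # single shared instance

    def forward(self, roi_features: torch.Tensor):
        mask_logits = []
        x = roi_features
        for box_head in self.box_heads:
            x = torch.relu(box_head(x))            # placeholder stage-specific refinement
            mask_logits.append(self.mask_head(x))  # same mask head at every stage
        return mask_logits


if __name__ == "__main__":
    rois = torch.randn(2, 256, 14, 14)  # dummy RoI-aligned features
    outputs = CascadeWithSharedMaskHead()(rois)
    print([o.shape for o in outputs])
```

Because the same `SharedResidualMaskHead` instance is called at every stage, it is trained on proposals from all stages, which mirrors the sample-diversity and model-capacity argument made in the abstract.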