DC Field | Value | Language |
---|---|---|
dc.contributor.author | Hong, Ji Woo | ko |
dc.contributor.author | Yoon, Sunjae | ko |
dc.contributor.author | Kim, Junyeong | ko |
dc.contributor.author | Yoo, Chang-Dong | ko |
dc.date.accessioned | 2023-06-05T07:01:07Z | - |
dc.date.available | 2023-06-05T07:01:07Z | - |
dc.date.created | 2023-06-05 | - |
dc.date.issued | 2023 | - |
dc.identifier.citation | IEEE ACCESS, v.11, pp.43267 - 43275 | - |
dc.identifier.issn | 2169-3536 | - |
dc.identifier.uri | http://hdl.handle.net/10203/307052 | - |
dc.description.abstract | 3D human pose and shape estimation (3D-HPSE) from video aims to generate a sequence of 3D meshes that depict the human body in the video. Current deep-learning-based 3D-HPSE networks that take video input have focused on improving temporal consistency among the sequence of 3D joints by supervising the acceleration error between predicted and ground-truth human motion. However, these methods overlook geometric misalignment: the persistent discrepancy between the geometric path drawn by the sequence of predicted joints and that of the ground-truth joints. To this end, we propose the Joint Path Alignment (JPA) framework, a model-agnostic approach that mitigates geometric misalignment by introducing a Temporal Procrustes Alignment Regularization (TPAR) loss that performs group-wise sequence learning of joint movement paths. Unlike previous methods that rely solely on per-frame supervision for accuracy, our framework adds sequence-level accuracy supervision with the TPAR loss by performing Procrustes analysis on the geometric paths drawn by sequences of predicted joints. Our experiments show that the JPA framework advances the network beyond previous state-of-the-art performance on benchmark datasets in both per-frame accuracy and video smoothness metrics. | - |
dc.language | English | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | Joint Path Alignment Framework for 3D Human Pose and Shape Estimation From Video | - |
dc.type | Article | - |
dc.identifier.wosid | 000986563800001 | - |
dc.identifier.scopusid | 2-s2.0-85159671076 | - |
dc.type.rims | ART | - |
dc.citation.volume | 11 | - |
dc.citation.beginningpage | 43267 | - |
dc.citation.endingpage | 43275 | - |
dc.citation.publicationname | IEEE ACCESS | - |
dc.identifier.doi | 10.1109/ACCESS.2023.3271285 | - |
dc.contributor.localauthor | Yoo, Chang-Dong | - |
dc.contributor.nonIdAuthor | Kim, Junyeong | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Pose estimation | - |
dc.subject.keywordAuthor | Three-dimensional displays | - |
dc.subject.keywordAuthor | Task analysis | - |
dc.subject.keywordAuthor | Solid modeling | - |
dc.subject.keywordAuthor | Human factors | - |
dc.subject.keywordAuthor | Feature extraction | - |
dc.subject.keywordAuthor | Visualization | - |
dc.subject.keywordAuthor | 3D human pose and shape estimation from video | - |
dc.subject.keywordAuthor | temporal alignment | - |
dc.subject.keywordAuthor | Procrustes analysis | - |
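The abstract above describes sequence-level supervision obtained by Procrustes-aligning the path traced by a sequence of predicted joints against the ground-truth path. The record contains no code, so the snippet below is only a minimal, illustrative sketch of that idea: the function names (`procrustes_align`, `tpar_loss`) and the choice to align the full T×J joint path as a single point set are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def procrustes_align(X, Y):
    """Least-squares similarity alignment (scale, rotation, translation)
    of point set X (N, 3) onto point set Y (N, 3); returns aligned X."""
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
    X0, Y0 = X - mu_x, Y - mu_y
    # SVD of the cross-covariance gives the optimal rotation (Kabsch).
    U, S, Vt = np.linalg.svd(X0.T @ Y0)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # correct an improper (reflected) rotation
        Vt[-1] *= -1
        S[-1] *= -1
        R = Vt.T @ U.T
    scale = S.sum() / (X0 ** 2).sum() # optimal isotropic scale
    return scale * X0 @ R.T + mu_y

def tpar_loss(pred_joints, gt_joints):
    """Sequence-level alignment error in the spirit of the TPAR loss.
    pred_joints, gt_joints: (T, J, 3) joint sequences. The whole sequence
    is flattened into one (T*J, 3) path and aligned as a group, so the
    penalty reflects the discrepancy between temporal joint paths rather
    than per-frame pose alone (grouping scheme assumed for illustration)."""
    T, J, _ = pred_joints.shape
    pred_path = pred_joints.reshape(T * J, 3)
    gt_path = gt_joints.reshape(T * J, 3)
    aligned = procrustes_align(pred_path, gt_path)
    return np.mean(np.linalg.norm(aligned - gt_path, axis=-1))
```

In this sketch the sequence-level term would be added to the usual per-frame supervision; in a training setting the same alignment would need a differentiable implementation (e.g., in PyTorch) so gradients can flow through the Procrustes step.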