Joint Path Alignment Framework for 3D Human Pose and Shape Estimation From Video

DC Field | Value | Language
dc.contributor.author | Hong, Ji Woo | ko
dc.contributor.author | Yoon, Sunjae | ko
dc.contributor.author | Kim, Junyeong | ko
dc.contributor.author | Yoo, Chang-Dong | ko
dc.date.accessioned | 2023-06-05T07:01:07Z | -
dc.date.available | 2023-06-05T07:01:07Z | -
dc.date.created | 2023-06-05 | -
dc.date.issued | 2023 | -
dc.identifier.citation | IEEE ACCESS, v.11, pp.43267 - 43275 | -
dc.identifier.issn | 2169-3536 | -
dc.identifier.uri | http://hdl.handle.net/10203/307052 | -
dc.description.abstract | 3D human pose and shape estimation (3D-HPSE) from video aims to generate a sequence of 3D meshes that depicts the human body in the video. Current deep-learning-based 3D-HPSE networks that take video input have focused on improving temporal consistency among the sequence of 3D joints by supervising the acceleration error between predicted and ground-truth human motion. However, these methods overlook the persistent geometric discrepancy between the path drawn by the sequence of predicted joints and that drawn by the ground-truth joints. To this end, we propose the Joint Path Alignment (JPA) framework, a model-agnostic approach that mitigates this geometric misalignment by introducing a Temporal Procrustes Alignment Regularization (TPAR) loss that performs group-wise sequence learning of joint movement paths. Unlike previous methods that rely solely on per-frame supervision for accuracy, our framework adds sequence-level accuracy supervision with the TPAR loss by performing Procrustes analysis on the geometric paths drawn by sequences of predicted joints. Our experiments show that the JPA framework enables the network to exceed previous state-of-the-art performance on benchmark datasets in both per-frame accuracy and video smoothness metrics. | -
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | Joint Path Alignment Framework for 3D Human Pose and Shape Estimation From Video | -
dc.type | Article | -
dc.identifier.wosid | 000986563800001 | -
dc.identifier.scopusid | 2-s2.0-85159671076 | -
dc.type.rims | ART | -
dc.citation.volume | 11 | -
dc.citation.beginningpage | 43267 | -
dc.citation.endingpage | 43275 | -
dc.citation.publicationname | IEEE ACCESS | -
dc.identifier.doi | 10.1109/ACCESS.2023.3271285 | -
dc.contributor.localauthor | Yoo, Chang-Dong | -
dc.contributor.nonIdAuthor | Kim, Junyeong | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Pose estimation | -
dc.subject.keywordAuthor | Three-dimensional displays | -
dc.subject.keywordAuthor | Task analysis | -
dc.subject.keywordAuthor | Solid modeling | -
dc.subject.keywordAuthor | Human factors | -
dc.subject.keywordAuthor | Feature extraction | -
dc.subject.keywordAuthor | Visualization | -
dc.subject.keywordAuthor | 3D human pose and shape estimation from video | -
dc.subject.keywordAuthor | temporal alignment | -
dc.subject.keywordAuthor | Procrustes analysis | -
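The abstract describes supervising the geometric path traced by each joint over time via Procrustes analysis. The idea can be illustrated with a minimal sketch, assuming a standard similarity Procrustes alignment (translation, rotation, uniform scale) followed by a residual path error; the function names and exact loss form here are hypothetical and are not the paper's implementation.

```python
import numpy as np

def procrustes_align(pred, gt):
    """Align a predicted joint path pred (T, 3) to its ground-truth
    path gt (T, 3) with similarity Procrustes analysis: remove
    translation, then solve for the optimal rotation and scale."""
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    Xc, Yc = pred - mu_p, gt - mu_g
    # Optimal rotation from the SVD of the cross-covariance matrix.
    U, S, Vt = np.linalg.svd(Xc.T @ Yc)
    d = np.sign(np.linalg.det(U @ Vt))   # guard against reflections
    D = np.array([1.0, 1.0, d])
    R = (U * D) @ Vt                     # columns of U scaled by D
    s = (S * D).sum() / (Xc ** 2).sum()  # optimal uniform scale
    return s * Xc @ R + mu_g

def path_alignment_error(pred_paths, gt_paths):
    """Mean per-point L2 error between each Procrustes-aligned
    predicted path and its ground-truth path. Shapes: (J, T, 3),
    i.e. J joints, each tracing a T-step path in 3D."""
    errs = [np.linalg.norm(procrustes_align(p, g) - g, axis=-1).mean()
            for p, g in zip(pred_paths, gt_paths)]
    return float(np.mean(errs))
```

Because the alignment factors out any global translation, rotation, and scale of a path, the residual measures only the shape mismatch between the predicted and ground-truth trajectories, which is the kind of sequence-level signal the abstract attributes to the TPAR loss.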
Appears in Collection
EE-Journal Papers(저널논문)
Files in This Item
There are no files associated with this item.
