Transformer Network-Aided Relative Pose Estimation for Non-cooperative Spacecraft Using Vision Sensor

DC Field | Value | Language
dc.contributor.author | Ahmed, Jamal | ko
dc.contributor.author | Arshad, Awais | ko
dc.contributor.author | Bang, Hyochoong | ko
dc.contributor.author | Choi, Yoonhyuk | ko
dc.date.accessioned | 2024-09-05T10:00:06Z | -
dc.date.available | 2024-09-05T10:00:06Z | -
dc.date.created | 2024-08-29 | -
dc.date.issued | 2024-07 | -
dc.identifier.citation | INTERNATIONAL JOURNAL OF AERONAUTICAL AND SPACE SCIENCES, v.25, no.3, pp.1146 - 1165 | -
dc.identifier.issn | 2093-274X | -
dc.identifier.uri | http://hdl.handle.net/10203/322700 | -
dc.description.abstract | The objective of the proposed work is to perform monocular vision-based relative 6-DOF pose estimation of a non-cooperative target spacecraft with respect to the chaser satellite in rendezvous operations. In this work, the convolutional neural network (CNN) is replaced by a high-resolution transformer network to predict the feature points of the target satellite. The self-attention mechanism inside the transformer overcomes the limitations of CNNs with respect to translation equivariance, 2D neighborhood awareness, and long-range dependencies. First, the 3D model of the target satellite is reconstructed using the inverse direct linear transform (IDLT) method. Then, the pose estimation pipeline is developed with a learning-based image-processing subsystem and geometric optimization of the pose solver. The image-processing subsystem performs target localization using a CNN-based architecture, after which the keypoint detection network regresses 2D keypoints using the transformer-based network. The predicted keypoints, selected by their confidence scores, are matched to the corresponding 3D points, and the pose is computed using the efficient perspective-n-point (EPnP) method and refined with the non-linear iterative Gauss-Newton method. The proposed architecture is trained and tested on the spacecraft pose estimation dataset and shows superior accuracy in both translation and rotation. Owing to the self-attention mechanism, the architecture is robust against drastically changing cluttered backgrounds and lighting conditions in space images. Moreover, the method consumes fewer computational resources, using fewer floating-point operations and trainable parameters at a low input image resolution. (A minimal sketch of the pose-solver step is given after this metadata listing.) | -
dc.language | English | -
dc.publisher | SPRINGER | -
dc.title | Transformer Network-Aided Relative Pose Estimation for Non-cooperative Spacecraft Using Vision Sensor | -
dc.type | Article | -
dc.identifier.wosid | 001186162600002 | -
dc.identifier.scopusid | 2-s2.0-85187929027 | -
dc.type.rims | ART | -
dc.citation.volume | 25 | -
dc.citation.issue | 3 | -
dc.citation.beginningpage | 1146 | -
dc.citation.endingpage | 1165 | -
dc.citation.publicationname | INTERNATIONAL JOURNAL OF AERONAUTICAL AND SPACE SCIENCES | -
dc.identifier.doi | 10.1007/s42405-023-00703-3 | -
dc.identifier.kciid | ART003097240 | -
dc.contributor.localauthor | Bang, Hyochoong | -
dc.contributor.nonIdAuthor | Arshad, Awais | -
dc.contributor.nonIdAuthor | Choi, Yoonhyuk | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Monocular vision | -
dc.subject.keywordAuthor | Pose estimation | -
dc.subject.keywordAuthor | Perspective-n-point | -
dc.subject.keywordAuthor | Gauss-Newton method | -
dc.subject.keywordAuthor | Convolutional neural network | -
dc.subject.keywordAuthor | Transformer | -
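
The abstract above describes matching confidence-filtered 2D keypoints to 3D model points, solving the pose with EPnP, and refining it with an iterative Gauss-Newton scheme. The sketch below illustrates that pose-solver stage only, under stated assumptions: it uses OpenCV's SOLVEPNP_EPNP flag for the initial estimate and a generic hand-written Gauss-Newton loop with a finite-difference Jacobian for refinement. The function name estimate_pose, the confidence threshold, and the iteration settings are illustrative choices, not taken from the paper.

    # Minimal sketch of the EPnP + Gauss-Newton pose-solver stage (assumed
    # OpenCV/NumPy implementation, not the authors' code).
    import cv2
    import numpy as np

    def estimate_pose(points_3d, points_2d, confidences, K,
                      conf_thresh=0.5, iters=10):
        """Pose from 2D-3D correspondences.

        points_3d   : (N, 3) keypoints on the reconstructed target model
        points_2d   : (N, 2) keypoints predicted by the detection network
        confidences : (N,)  per-keypoint confidence scores
        K           : (3, 3) camera intrinsic matrix
        """
        # Keep only confident correspondences, as described in the abstract.
        mask = confidences >= conf_thresh
        obj = points_3d[mask].astype(np.float64)
        img = points_2d[mask].astype(np.float64)

        # Initial pose with the efficient perspective-n-point (EPnP) solver.
        ok, rvec, tvec = cv2.solvePnP(obj, img, K, None,
                                      flags=cv2.SOLVEPNP_EPNP)
        if not ok:
            raise RuntimeError("EPnP failed")

        # Gauss-Newton refinement of the 6-DOF pose (rvec, tvec) by
        # minimizing the 2D reprojection error.
        x = np.hstack([rvec.ravel(), tvec.ravel()])

        def residual(p):
            proj, _ = cv2.projectPoints(obj, p[:3], p[3:], K, None)
            return (proj.reshape(-1, 2) - img).ravel()

        eps = 1e-6
        for _ in range(iters):
            r = residual(x)
            J = np.zeros((r.size, 6))
            for j in range(6):
                dx = np.zeros(6)
                dx[j] = eps
                J[:, j] = (residual(x + dx) - r) / eps
            # Gauss-Newton step: least-squares solution of J * step = -r
            step, *_ = np.linalg.lstsq(J, -r, rcond=None)
            x += step
            if np.linalg.norm(step) < 1e-8:
                break

        return x[:3].reshape(3, 1), x[3:].reshape(3, 1)  # refined rvec, tvec

In the paper's pipeline, points_3d would come from the IDLT-reconstructed target model and points_2d with confidences from the transformer keypoint head; an analytic Jacobian (e.g., the one returned by cv2.projectPoints) could replace the finite-difference one for speed.
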
Appears in Collection
AE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
