DC Field | Value | Language |
---|---|---|
dc.contributor.author | Baek, Seung-Yeob | ko |
dc.contributor.author | Wang, Joon-Ho | ko |
dc.contributor.author | Song, Insub | ko |
dc.contributor.author | Lee, Kunwoo | ko |
dc.contributor.author | Lee, Jehee | ko |
dc.contributor.author | Koo, Seungbum | ko |
dc.date.accessioned | 2019-04-15T15:32:59Z | - |
dc.date.available | 2019-04-15T15:32:59Z | - |
dc.date.created | 2018-09-10 | - |
dc.date.issued | 2013-02 | - |
dc.identifier.citation | COMPUTER-AIDED DESIGN, v.45, no.2, pp.505 - 510 | - |
dc.identifier.issn | 0010-4485 | - |
dc.identifier.uri | http://hdl.handle.net/10203/255078 | - |
dc.description.abstract | Anatomical landmarks on bones play important roles in musculoskeletal simulations and surgical planning. This study develops an anatomically deformable model of the femur to predict bone landmarks automatically and quantifies its prediction accuracy. Forty-three angiographic computed tomography (CT) images of femurs were collected and 14 bone landmarks were manually marked on these images by experts. Surface mesh models of the femur were extracted from the CT images and combined with the bone landmark information to create an anatomical deformable model. The anatomical deformation technique developed in this study predicted bone landmarks automatically as the surface of a deformable model was matched to the surface of a given femur model. The prediction accuracy was quantified using the leave-one-out cross-validation method. The average prediction error for the 14 landmarks ranged from 2.80 to 5.93 mm. While the prediction accuracies of anterior and posterior cruciate ligaments and lateral epicondyle sites were high with averages (standard deviation) of 3.00 (± 1.55), 2.80 (± 1.76) and 2.97 (± 1.87) mm, respectively, those of gluteus minimus, ligament of head of femur and piriformis sites were low with averages of 5.93 (± 3.77), 4.89 (± 3.49) and 4.87 (± 2.70) mm, respectively. Accuracy can be expected to increase with the use of more population data, as is the nature of a population-based statistical deformable model. © 2012 Elsevier Ltd. All rights reserved. | - |
dc.language | English | - |
dc.publisher | ELSEVIER SCI LTD | - |
dc.subject | TOTAL KNEE ARTHROPLASTY | - |
dc.subject | INTRAOBSERVER ERRORS | - |
dc.subject | REGISTRATION PROCESS | - |
dc.subject | CT-SCAN | - |
dc.subject | IMAGES | - |
dc.subject | REPRODUCIBILITY | - |
dc.subject | RECONSTRUCTION | - |
dc.subject | WALKING | - |
dc.subject | MODELS | - |
dc.title | Automated bone landmarks prediction on the femur using anatomical deformation technique | - |
dc.type | Article | - |
dc.identifier.wosid | 000311972700041 | - |
dc.identifier.scopusid | 2-s2.0-84868212385 | - |
dc.type.rims | ART | - |
dc.citation.volume | 45 | - |
dc.citation.issue | 2 | - |
dc.citation.beginningpage | 505 | - |
dc.citation.endingpage | 510 | - |
dc.citation.publicationname | COMPUTER-AIDED DESIGN | - |
dc.identifier.doi | 10.1016/j.cad.2012.10.033 | - |
dc.contributor.localauthor | Koo, Seungbum | - |
dc.contributor.nonIdAuthor | Baek, Seung-Yeob | - |
dc.contributor.nonIdAuthor | Wang, Joon-Ho | - |
dc.contributor.nonIdAuthor | Song, Insub | - |
dc.contributor.nonIdAuthor | Lee, Kunwoo | - |
dc.contributor.nonIdAuthor | Lee, Jehee | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Bone landmarks | - |
dc.subject.keywordAuthor | Anatomical deformation technique | - |
dc.subject.keywordAuthor | Femur | - |
dc.subject.keywordAuthor | Statistical shape analysis | - |
dc.subject.keywordAuthor | Joint biomechanics | - |
dc.subject.keywordPlus | TOTAL KNEE ARTHROPLASTY | - |
dc.subject.keywordPlus | INTRAOBSERVER ERRORS | - |
dc.subject.keywordPlus | REGISTRATION PROCESS | - |
dc.subject.keywordPlus | CT-SCAN | - |
dc.subject.keywordPlus | IMAGES | - |
dc.subject.keywordPlus | REPRODUCIBILITY | - |
dc.subject.keywordPlus | RECONSTRUCTION | - |
dc.subject.keywordPlus | WALKING | - |
dc.subject.keywordPlus | MODELS | - |