A loss-based patch label denoising method for improving whole-slide image analysis using a convolutional neural network

Cited 11 times in Web of Science; cited 0 times in Scopus
| DC Field | Value | Language |
| --- | --- | --- |
| dc.contributor.author | Ashraf, Murtaza | ko |
| dc.contributor.author | Robles, Willmer Rafell Quinones | ko |
| dc.contributor.author | Kim, Mujin | ko |
| dc.contributor.author | Ko, Young Sin | ko |
| dc.contributor.author | Yi, Mun Yong | ko |
| dc.date.accessioned | 2022-02-08T06:40:27Z | - |
| dc.date.available | 2022-02-08T06:40:27Z | - |
| dc.date.created | 2022-02-07 | - |
| dc.date.issued | 2022-01 | - |
| dc.identifier.citation | SCIENTIFIC REPORTS, v.12, no.1, pp.1 - 18 | - |
| dc.identifier.issn | 2045-2322 | - |
| dc.identifier.uri | http://hdl.handle.net/10203/292094 | - |
| dc.description.abstract | This paper proposes a deep learning-based patch label denoising method (LossDiff) for improving the classification of whole-slide images of cancer using a convolutional neural network (CNN). Automated whole-slide image classification is often challenging, requiring a large amount of labeled data. Pathologists annotate the region of interest by marking malignant areas, which poses a high risk of introducing patch-based label noise by involving benign regions that are typically small in size within the malignant annotations, resulting in low classification accuracy with many Type-II errors. To overcome this critical problem, this paper presents a simple yet effective method for noisy patch classification. The proposed method, validated using stomach cancer images, provides a significant improvement compared to other existing methods in patch-based cancer classification, with accuracies of 98.81%, 97.30% and 89.47% for binary, ternary, and quaternary classes, respectively. Moreover, we conduct several experiments at different noise levels using a publicly available dataset to further demonstrate the robustness of the proposed method. Given the high cost of producing explicit annotations for whole-slide images and the unavoidable error-prone nature of the human annotation of medical images, the proposed method has practical implications for whole-slide image annotation and automated cancer diagnosis. | - |
| dc.language | English | - |
| dc.publisher | NATURE RESEARCH | - |
| dc.title | A loss-based patch label denoising method for improving whole-slide image analysis using a convolutional neural network | - |
| dc.type | Article | - |
| dc.identifier.wosid | 000749232200010 | - |
| dc.identifier.scopusid | 2-s2.0-85123601871 | - |
| dc.type.rims | ART | - |
| dc.citation.volume | 12 | - |
| dc.citation.issue | 1 | - |
| dc.citation.beginningpage | 1 | - |
| dc.citation.endingpage | 18 | - |
| dc.citation.publicationname | SCIENTIFIC REPORTS | - |
| dc.identifier.doi | 10.1038/s41598-022-05001-8 | - |
| dc.contributor.localauthor | Yi, Mun Yong | - |
| dc.contributor.nonIdAuthor | Ko, Young Sin | - |
| dc.description.isOpenAccess | N | - |
| dc.type.journalArticle | Article | - |
| dc.subject.keywordPlus | GASTRIC-CANCER; CLASS NOISE; DEEP; CLASSIFICATION; PATHOLOGY; TRENDS | - |
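The abstract describes discarding mislabeled patches (e.g. benign tissue inside a malignant annotation) based on their training loss. As illustrative context only, here is a minimal sketch of the generic small-loss filtering heuristic that loss-based denoising methods build on; the function name `small_loss_filter` and the keep-ratio parameter are assumptions for this sketch, not the paper's LossDiff algorithm:

```python
def small_loss_filter(losses, keep_ratio=0.8):
    """Return the (sorted) indices of the patches with the smallest loss.

    The heuristic: a network tends to fit clean labels before noisy ones,
    so patches with unusually high loss are treated as likely label noise
    and excluded from the next training round.
    """
    n_keep = max(1, round(keep_ratio * len(losses)))
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    return sorted(order[:n_keep])

# Example: patches 3 and 5 have much higher loss than the rest,
# so they are flagged as noisy and dropped.
per_patch_loss = [0.05, 0.10, 0.08, 2.30, 0.12, 1.90]
kept = small_loss_filter(per_patch_loss, keep_ratio=2/3)
# → [0, 1, 2, 4]
```

The keep ratio is a hyperparameter tied to the assumed noise level; the experiments at different noise levels mentioned in the abstract probe exactly this kind of sensitivity.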
Appears in Collection
IE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
This item is cited by 11 other documents in Web of Science.
