CLASSIFICATION OF OIL PAINTING USING MACHINE LEARNING WITH VISUALIZED DEPTH INFORMATION

DC Field: Value (Language)
dc.contributor.author: Ji Hoon, Kim (ko)
dc.contributor.author: SHIM, HYESEUNG (ko)
dc.contributor.author: Ahn, Jaehong (ko)
dc.contributor.author: Jun, Ji Young (ko)
dc.contributor.author: Minki, Hong (ko)
dc.date.accessioned: 2021-03-02T07:10:22Z
dc.date.available: 2021-03-02T07:10:22Z
dc.date.created: 2021-02-25
dc.date.issued: 2019-09-05
dc.identifier.citation: 2019 | 27th CIPA International Symposium - Documenting the past for a better future, pp.617 - 623
dc.identifier.issn: 1682-1750
dc.identifier.uri: http://hdl.handle.net/10203/281099
dc.description.abstract: In the past few decades, a number of scholars have studied painting classification based on image processing or computer vision technologies. Further, as machine learning technology has rapidly developed, painting classification using machine learning has been carried out. However, due to the lack of information about brushstrokes in photographs, typical models cannot use more precise information about a painter's style. We hypothesized that visualized depth information of brushstrokes can improve the accuracy of machine learning models for painting classification. This study proposes a new data utilization approach in machine learning with Reflectance Transformation Imaging (RTI) images, which maximizes the visualization of the three-dimensional shape of brushstrokes. An artist's unique brushstrokes can be revealed in RTI images, which are difficult to obtain with regular photographs. If these new types of images are used as training data for a machine learning model, classification can be based not only on shape and color but also on depth information. We used Convolutional Neural Networks (CNNs), models optimized for image classification, with the VGG-16, ResNet-50, and DenseNet-121 architectures. We conducted a two-stage experiment using the works of two Korean artists. In the first experiment, we captured a key part of each painting as both RTI data and photographic data. In the second experiment, on the second artist's work, a larger quantity of data was acquired and the whole artwork was captured. The results showed that the RTI-trained model achieved higher accuracy than the non-RTI-trained model. In this paper, we propose a method that uses machine learning and RTI technology to analyze and classify paintings more precisely, verifying our hypothesis.
dc.language: English
dc.publisher: ICOMOS CIPA Heritage Documentation
dc.title: CLASSIFICATION OF OIL PAINTING USING MACHINE LEARNING WITH VISUALIZED DEPTH INFORMATION
dc.type: Conference
dc.identifier.wosid: 000583155100083
dc.identifier.scopusid: 2-s2.0-85072200020
dc.type.rims: CONF
dc.citation.beginningpage: 617
dc.citation.endingpage: 623
dc.citation.publicationname: 2019 | 27th CIPA International Symposium - Documenting the past for a better future
dc.identifier.conferencecountry: SP
dc.identifier.conferencelocation: Avila
dc.identifier.doi: 10.5194/isprs-archives-XLII-2-W15-617-2019
dc.contributor.nonIdAuthor: Ji Hoon, Kim
dc.contributor.nonIdAuthor: Minki, Hong
Appears in Collection: RIMS Conference Papers
Files in This Item: There are no files associated with this item.
