Drivers' Visual Perception Quantification Using 3D Mobile Sensor Data for Road Safety

Cited 4 times in Web of Science; cited 3 times in Scopus
  • Hits: 420
  • Downloads: 219
DC Field: Value [Language]
dc.contributor.author: Choi, Kanghee [ko]
dc.contributor.author: Byun, Giyoung [ko]
dc.contributor.author: Kim, Ayoung [ko]
dc.contributor.author: Kim, Youngchul [ko]
dc.date.accessioned: 2020-06-30T02:20:11Z
dc.date.available: 2020-06-30T02:20:11Z
dc.date.created: 2020-06-29
dc.date.issued: 2020-05
dc.identifier.citation: SENSORS, v.20, no.10
dc.identifier.issn: 1424-8220
dc.identifier.uri: http://hdl.handle.net/10203/275046
dc.description.abstract: To prevent driver accidents in cities, local governments have established policies to limit city speeds and create child protection zones near schools. However, if the same policy is applied throughout a city, it can be difficult to obtain smooth traffic flows. A driver generally obtains visual information while driving, and this information is directly related to traffic safety. In this study, we propose a novel geometric visual model to measure drivers' visual perception and analyze the corresponding information using the line-of-sight method. Three-dimensional point cloud data are used to analyze on-site three-dimensional elements in a city, such as roadside trees and overpasses, which are normally neglected in urban spatial analyses. To investigate drivers' visual perceptions of roads, we have developed an analytic model of three types of visual perception. By using this proposed method, this study creates a risk-level map according to the driver's visual perception degree in Pangyo, South Korea. With the point cloud data from Pangyo, it is possible to analyze actual urban forms such as roadside trees, building shapes, and overpasses that are normally excluded from spatial analyses that use a reconstructed virtual space.
dc.language: English
dc.publisher: MDPI
dc.title: Drivers' Visual Perception Quantification Using 3D Mobile Sensor Data for Road Safety
dc.type: Article
dc.identifier.wosid: 000539323700017
dc.identifier.scopusid: 2-s2.0-85084802236
dc.type.rims: ART
dc.citation.volume: 20
dc.citation.issue: 10
dc.citation.publicationname: SENSORS
dc.identifier.doi: 10.3390/s20102763
dc.contributor.localauthor: Kim, Ayoung
dc.contributor.localauthor: Kim, Youngchul
dc.contributor.nonIdAuthor: Choi, Kanghee
dc.description.isOpenAccess: Y
dc.type.journalArticle: Article
dc.subject.keywordAuthor: visibility
dc.subject.keywordAuthor: visual perception
dc.subject.keywordAuthor: point cloud
dc.subject.keywordAuthor: driver's safety
dc.subject.keywordPlus: CALIBRATION
dc.subject.keywordPlus: NORMALITY
dc.subject.keywordPlus: GEOMETRY
dc.subject.keywordPlus: NOVICE
dc.subject.keywordPlus: SCENE
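
The abstract above describes a line-of-sight analysis over 3D point cloud data to estimate what a driver can see. The paper's actual geometric visual model is not reproduced here; the sketch below is only a minimal illustration, assuming the point cloud is voxelized at a fixed resolution and visibility is decided by sampling the eye-to-target segment at a fixed step. The function names, voxel size, sampling step, and eye height are arbitrary choices for the example, not values from the paper.

# Hypothetical sketch (not the authors' released code): a minimal
# line-of-sight test over a 3D point cloud. A target is treated as
# "visible" when no occupied voxel lies on the straight segment
# between the driver's eye point and the target.
import numpy as np

def build_voxel_set(points: np.ndarray, voxel_size: float) -> set:
    """Map each (x, y, z) point to its integer voxel index and collect them."""
    indices = np.floor(points / voxel_size).astype(int)
    return {tuple(idx) for idx in indices}

def line_of_sight(eye: np.ndarray, target: np.ndarray,
                  occupied: set, voxel_size: float,
                  step: float = 0.25) -> bool:
    """Return True if no occupied voxel blocks the eye-to-target segment."""
    direction = target - eye
    distance = np.linalg.norm(direction)
    if distance == 0.0:
        return True
    direction /= distance
    # Sample the segment at a fixed step; skip the endpoints so the
    # target's own voxel does not count as an occluder.
    for t in np.arange(step, distance - step, step):
        sample = eye + t * direction
        voxel = tuple(np.floor(sample / voxel_size).astype(int))
        if voxel in occupied:
            return False
    return True

if __name__ == "__main__":
    # Toy example: a "roadside tree" of points between the driver and a sign.
    rng = np.random.default_rng(0)
    tree = rng.normal(loc=[10.0, 0.0, 2.0], scale=0.5, size=(500, 3))
    occupied = build_voxel_set(tree, voxel_size=0.5)
    eye = np.array([0.0, 0.0, 1.2])    # assumed driver eye height (m)
    sign = np.array([20.0, 0.0, 2.5])  # target behind the tree
    print("sign visible:", line_of_sight(eye, sign, occupied, voxel_size=0.5))

Fixed-step sampling is the simplest possible occlusion test; an implementation working at city scale would more likely use an exact voxel traversal or a spatial index (such as a k-d tree or octree) over the point cloud.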
