Continuous Prediction of Pointing Targets With Motion and Eye-Tracking in Virtual Reality

We present a study on continuously predicting the direction to a pointing target in virtual environments using motion and eye-tracking data throughout the pointing process. We first collect time-series motion and eye-tracking data in a cursorless, single-target pointing task. Analyzing fixation points from different sensors and velocity profiles over the course of pointing provides insights into how to optimally configure features for predicting the target angles. Following this analysis, we train a recurrent neural network on sliding-window inputs for continuous target-direction prediction from the start to the end of pointing. Each input window contains historical data from past frames up to the current frame, capturing temporal changes in the features. Given this input, our model can predict the direction of the target at any point during pointing. Our findings demonstrate that incorporating eye-tracking data into the prediction model improves the maximum achievable accuracy by a factor of 2.5 compared with baselines without eye-tracking inputs. The results suggest that combining eye-tracking and joint-motion features yields both higher prediction performance and faster stabilization of output values in the starting phase of pointing.
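The sliding-window formulation described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the window length, the per-frame feature dimensions, the GRU architecture, and the (yaw, pitch) output parameterization are all assumptions made for the example.

```python
# Minimal sketch (assumptions throughout): a recurrent model that consumes a
# sliding window of per-frame features (joint motion + eye tracking) and
# regresses the direction to the pointing target at every frame.
import torch
import torch.nn as nn

WINDOW = 30        # assumed window length in frames (past .. current)
MOTION_DIM = 21    # assumed per-frame joint-motion features
GAZE_DIM = 6       # assumed per-frame eye-tracking features
FEATURE_DIM = MOTION_DIM + GAZE_DIM

class TargetDirectionRNN(nn.Module):
    """GRU over one sliding window; predicts a 2-D target angle (yaw, pitch)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(FEATURE_DIM, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # (yaw, pitch) toward the target

    def forward(self, window):             # window: (batch, WINDOW, FEATURE_DIM)
        _, h = self.rnn(window)            # h: (1, batch, hidden), last hidden state
        return self.head(h[-1])            # (batch, 2) predicted target angles

def sliding_windows(sequence, window=WINDOW):
    """Yield overlapping windows ending at each frame of one pointing trial.

    sequence: (T, FEATURE_DIM) tensor of per-frame features. Earlier frames
    are zero-padded so a prediction exists from the very first frame.
    """
    padded = torch.cat([torch.zeros(window - 1, sequence.shape[1]), sequence])
    for t in range(sequence.shape[0]):
        yield padded[t:t + window]          # window ending at frame t

# Usage: continuous prediction over one trial, start to finish.
model = TargetDirectionRNN()
trial = torch.randn(120, FEATURE_DIM)       # stand-in for one recorded trial
windows = torch.stack(list(sliding_windows(trial)))  # (T, WINDOW, FEATURE_DIM)
angles = model(windows)                      # (T, 2): one direction estimate per frame
```

The zero-padding at the start is one way to obtain an output at every frame, matching the paper's goal of prediction operating continuously from the beginning of the pointing motion; the actual feature set and padding strategy used by the authors may differ.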
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Issue Date
2024-01
Language
English
Article Type
Article
Citation

IEEE Access, v.12, pp. 5933-5946

ISSN
2169-3536
DOI
10.1109/access.2024.3350788
URI
http://hdl.handle.net/10203/318918
Appears in Collection
GCT-Journal Papers (Journal Papers)
