This study presents an effort to support human reliability analysis (HRA) data collection using simplified simulators. It experimentally investigates how human performance measures and human errors, for both expert and non-expert operators, vary with simulator complexity. A randomized factorial experiment was designed with two independent variables: operator expertise and simulator complexity. Four human performance measures were considered in the experiment. Thirty-six professional operators and thirty-six student operators participated, using two simplified simulators with different complexity levels. Statistical analyses of the experimental data were conducted to identify differences in the participants' performance measures between the simplified simulators. The error data from the experiment were compared with those from full-scope HRA data collection. Finally, an approach is proposed for inferring HRA data for full-scope simulators from data collected with simplified simulators.