We present a method for interactively generating virtual fixtures for shared teleoperation in unstructured remote environments. The method allows a human operator to intuitively assign various types of virtual fixtures on the fly, providing virtual guidance forces that help the operator accomplish a given task while reducing cognitive workload. It augments the visual feedback image from the slave robot's video camera with automatically extracted geometric features (shapes, surfaces, etc.) computed from a depth and color video sensor mounted next to the slave robot's base. The human operator can select a feature on the computer screen, which is then automatically associated with a virtual haptic fixture. The performance of the proposed method was evaluated in a peg-in-hole task, and the experiment showed improvements in teleoperation performance.
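To illustrate the kind of guidance force a selected fixture could generate, the following is a minimal sketch of a line-type virtual fixture (e.g., attracting the tool tip toward the axis of a selected hole). The function name, gain, and spring-like control law are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def line_fixture_force(tool_pos, line_point, line_dir, stiffness=200.0):
    """Guidance force pulling the tool toward a line-shaped virtual fixture.

    tool_pos   : 3-vector, current slave tool-tip position (m)
    line_point : 3-vector, a point on the selected line feature (e.g., a hole axis)
    line_dir   : 3-vector, direction of the line feature
    stiffness  : virtual spring stiffness in N/m (hypothetical value)
    """
    d = np.asarray(line_dir, dtype=float)
    d /= np.linalg.norm(d)
    # Vector from the line's reference point to the tool tip
    r = np.asarray(tool_pos, dtype=float) - np.asarray(line_point, dtype=float)
    # Component of r perpendicular to the line: the positional error to correct
    error = r - np.dot(r, d) * d
    # Spring-like force pushing the tool back onto the line
    return -stiffness * error

# Example: tool tip slightly off a vertical hole axis passing through the origin
f = line_fixture_force([0.02, 0.0, 0.1], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0])
print(f)  # ~[-4, 0, 0] N, pulling the tool toward the axis
```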