Avatar-mediated 3D telepresence aims to allow a user in a local space to interact with a 3D virtual avatar that appears in the local space and represents another user in a remote space. If the two spaces have different spatial layouts and furniture arrangements, directly applying the remote user’s motion to her avatar may fail to convey the semantics of the motion and cause miscommunication. A solution is to adapt the remote user’s placement and motion to the local space so as to preserve the semantics of her motion. To this end, this paper presents methods to determine the placement of an avatar and to create its deictic gestures. First, we develop a method that learns, from training data obtained by a user survey, the correspondence probability of a candidate placement of an avatar in the local space with respect to the placement of the remote user. From this, we find the optimal placement of the avatar given the layout of the local space and the placement of the remote user. Second, we develop a simple yet effective method to retarget a remote user’s deictic gesture to her avatar. Depending on the placements of the local user and the attended object, our method modifies the avatar’s head and hand poses to preserve the gaze and pointing targets of the remote user. Evaluations show that our methods improve user engagement and social presence in the telepresence environment.
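The core idea of the gesture retargeting can be illustrated with a minimal sketch (the function names, data layout, and y-up coordinate convention are our assumptions for illustration, not the paper’s implementation): rather than copying the remote user’s joint angles, recompute the gaze and pointing directions so that the avatar’s head and hand aim at the local stand-in for the attended object.

```python
import math

def aim_angles(source, target):
    """Yaw and pitch (radians) that orient a joint at `source` toward `target`.

    Assumes a y-up coordinate frame; both points are (x, y, z) tuples.
    """
    dx, dy, dz = (t - s for t, s in zip(target, source))
    yaw = math.atan2(dx, dz)                    # rotation about the up (y) axis
    pitch = math.atan2(dy, math.hypot(dx, dz))  # elevation toward the target
    return yaw, pitch

def retarget_deictic(avatar_head, avatar_hand, local_target):
    """Re-aim the avatar's head (gaze) and hand (pointing) at the position of
    the attended object in the local space, preserving the gesture's target."""
    return {
        "head": aim_angles(avatar_head, local_target),
        "hand": aim_angles(avatar_hand, local_target),
    }
```

For example, a target straight ahead of the head along +z yields zero yaw and pitch, while a target off to the side produces the yaw needed to turn toward it; the same angles can then drive the avatar’s head and arm joints.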