With advances in artificial intelligence, Voice-based Conversational Agents (VCAs) can now imitate human abilities, sometimes almost indistinguishably from real humans. However, concerns have been raised that excessive perceived similarity can trigger threats and fears among users. This raises a question: should VCAs be able to imitate humans perfectly? To address it, we explored what drives the negative aspects of user experience with human-like VCAs. We conducted a qualitative exploratory study that elicited participants' perceptions of and feelings toward human-like VCAs through comparable video prototypes of human-agent and human-human conversations. We found that human-like dialogue falling outside a VCA's expressed purpose, as well as expressions pretending to come from a human identity, could lead to negative experiences with VCAs. Based on our findings, we discuss design directions for overcoming potential issues of human imitation.