Artificial intelligence (AI) systems are increasingly deployed in healthcare settings to assist patients with health-related tasks. However, the effectiveness of these systems depends not only on their technical capabilities but also on patients' perceptions and acceptance of them. This study investigates how different types of AI explanations affect human perceptions of patient-facing, AI-powered healthcare systems. Through an experimental study, we examine how explanations of an AI system's capabilities, limitations, and decision-making processes influence users' trust, perceived usefulness, and intention to use such systems. Our findings provide design insights for making AI-powered healthcare systems more effective and acceptable for patient use.