ChatGPT Health, OpenAI's new feature that analyzes your medical records and wellness app data, is gaining traction among patients seeking quick health insights. But emerging research shows the AI tool has significant limitations in medical decision-making that both patients and physicians need to understand. The platform, which launched earlier this year, lets users upload data from their medical records and wellness apps, ask questions about their health, and explore potential treatment options. However, a February 2026 study published in Nature Medicine found that ChatGPT Health has notable blind spots when recommending which patients need immediate medical attention.

What Is ChatGPT Health and How Does It Work?

ChatGPT Health represents a significant shift in how patients can interact with their health information. The platform integrates data from medical records, wearable devices, and wellness applications, allowing users to ask questions about their symptoms, test results, and potential treatment options in a conversational format. Rather than simply searching for information online, patients can now have their personal health data analyzed by a large language model (LLM), an artificial intelligence system trained on vast amounts of text to understand and generate human-like responses.

The appeal is clear: patients get personalized insights without scheduling a doctor's appointment, and they can explore health questions on their own timeline. Some users have reported finding value in the tool. One Washington Post journalist analyzed a decade of Apple Watch data through ChatGPT Health and then discussed the findings with their doctor, while an Axios contributor described how the AI provided emotional closure after a near-death experience that their physicians hadn't fully addressed.

The Research Reveals Critical Blind Spots in AI Triage

Despite its promise, new research has exposed serious limitations.
A study conducted at Mount Sinai, co-authored by first-year medical student Alvira Tyagi and published in Nature Medicine in February 2026, tested ChatGPT Health's ability to recommend which patients needed urgent medical attention. The research identified blind spots in the AI's triage recommendations, meaning the system sometimes failed to flag patients who actually needed immediate care.

This finding matters because triage, the process of determining which patients need urgent attention, is a critical function in healthcare. If an AI system misses warning signs, patients could delay seeking necessary emergency care. The Mount Sinai research highlights why ChatGPT Health should be viewed as a supplementary tool rather than a replacement for medical judgment.

How Are Doctors and Patients Actually Using AI Chatbots for Health?

The adoption of large language models in healthcare is happening faster than many realize. Physicians are beginning to integrate these tools into their workflows, and patients are increasingly turning to AI chatbots for health advice before, or instead of, consulting their doctors.

A webinar hosted by the Association of Health Care Journalists brought together experts to discuss real-world usage patterns, potential benefits, and critical cautions. The panelists, including health journalists, physicians, and researchers, emphasized that while these tools can help patients understand their health data and explore treatment options, they come with important limitations. One Harvard AI researcher noted that large language models can be "uncomfortable" for physicians and IT leaders, suggesting that healthcare providers are grappling with how to integrate these systems responsibly into clinical practice.

Steps to Use ChatGPT Health Safely and Effectively

- Always Verify with Your Doctor: Use ChatGPT Health to organize your thoughts and questions before your appointment, but never rely solely on its recommendations for medical decisions.
Share what the AI suggested with your healthcare provider and ask for their professional interpretation.
- Understand Its Limitations: Remember that ChatGPT Health cannot perform physical examinations, order tests, or understand the full context of your medical history the way a trained physician can. The AI may miss subtle warning signs that require urgent attention.
- Use It for Health Literacy, Not Diagnosis: The tool works best when you use it to better understand your existing test results, learn about treatment options your doctor mentioned, or prepare questions for your next appointment, not as a diagnostic tool.
- Be Cautious with Urgent Symptoms: If you're experiencing chest pain, difficulty breathing, severe bleeding, or other emergency symptoms, call 911 or go to an emergency room immediately. Do not rely on ChatGPT Health to determine whether your symptoms are serious.
- Protect Your Privacy: Uploading medical records to any online platform carries privacy considerations. Review OpenAI's privacy policy and understand how your health data will be stored and used.

What Do Experts Say About the Future of AI in Healthcare?

Healthcare professionals recognize that large language models will play an increasingly important role in medicine, but they emphasize the need for careful implementation. The webinar panelists included Dr. Henry Bair, a resident physician at Wills Eye Hospital in Philadelphia who previously researched digital health at Stanford University, and Rachael Robertson, an investigative health journalist covering medical misinformation. Both highlighted the importance of transparency about what these tools can and cannot do. "People are using AI chatbots for health advice," Robertson noted in her coverage for the Associated Press, "and here's what to know," emphasizing that consumers need clear guidance on appropriate use.
The consensus among experts is that ChatGPT Health and similar tools should enhance the doctor-patient relationship, not replace it. These platforms can help patients become more informed and engaged with their healthcare, but they require human oversight and clinical judgment to ensure safety.

As healthcare systems continue to adopt AI-powered tools, the focus is shifting from whether these technologies should be used to how they can be implemented responsibly. OpenAI has already launched ChatGPT for Healthcare at several large health systems, suggesting that integration into clinical workflows is accelerating. The key takeaway for patients is clear: use these tools to empower yourself with health knowledge, but keep your doctor as the final authority on your medical care.