On July 25, 2025, OpenAI CEO Sam Altman, speaking in San Francisco on Theo Von’s podcast This Past Weekend, warned that conversations with ChatGPT lack the legal confidentiality of discussions with doctors, lawyers, or therapists, per TechCrunch. With millions of people, including India’s 547-million-strong OTT audience, turning to ChatGPT for personal advice, this gap in privacy protection raises serious concerns: without legal safeguards, sensitive chats could be disclosed in court. Current law allows OpenAI to share user data if subpoenaed, prompting calls for an “AI privilege” framework, a debate trending on X under the #AIPrivacy hashtag.
In This Article:
- Altman’s Statement and Its Implications
- Why This Matters
- The Path Forward
Altman’s Statement and Its Implications
Altman highlighted that users, particularly young people, treat ChatGPT as a therapist or life coach, sharing intimate details about relationships and mental health. Unlike doctor-patient or attorney-client privilege, which legally protect confidentiality, ChatGPT conversations have no such safeguard. “If you talk to ChatGPT about your most sensitive stuff and there’s a lawsuit, we could be required to produce that,” Altman said, calling it “very screwed up.” This vulnerability stems from the absence of any legal framework for AI interactions, a concern amplified by OpenAI’s ongoing lawsuit with The New York Times, which demands the retention of user chats (enterprise accounts excluded).
Why This Matters
The lack of confidentiality could deter users: 72% of teens use AI companions at least monthly, per a 2024 Common Sense Media survey. In India, where mental-health stigma persists (60% avoid therapy, per NIMHANS 2024), reliance on AI for emotional support is growing. Yet OpenAI’s policy allows staff to access chats for model training and misuse monitoring, and free-tier data is deleted within 30 days unless retention is legally required, per News18. This contrasts with end-to-end-encrypted apps like WhatsApp and heightens privacy fears, especially after the overturning of Roe v. Wade, when users shifted to secure platforms such as Apple Health.
The Path Forward
Altman advocates therapist-level privacy for AI chats and urges urgent policy reform. India’s Digital Personal Data Protection Act, 2023 offers some safeguards, but global standards are needed. Until then, users should exercise caution and treat AI chats as unsecured communication, so that India’s digital generation can balance innovation with privacy protection.
- By Manoj H

