OpenAI’s decision to allow its AI models to be deployed in classified military operations has triggered one of the largest user backlashes in the company’s history. Within 24 hours of the announcement, ChatGPT uninstalls surged 295% day-over-day, while Anthropic’s Claude shot to the number one position in the App Store.
The deal, which allows OpenAI models to be used in classified defense scenarios, caught the AI community off guard. Reports had previously indicated that OpenAI intended to follow the same red lines as competitor Anthropic, which has maintained firm restrictions against military applications of its technology.
The User Exodus
The numbers tell a stark story. On the day following OpenAI’s announcement, app analytics firms tracked a 295% spike in ChatGPT uninstalls across iOS and Android. Simultaneously, downloads of competing AI assistants surged — Anthropic’s Claude reached the top of the App Store’s free apps chart for the first time.
Social media reactions ranged from disappointment to outrage. Many longtime ChatGPT users said the military deal contradicted OpenAI’s founding mission of developing AI that broadly benefits humanity, arguing that classified defense work is incompatible with that commitment.
What the Deal Actually Involves
OpenAI’s agreement allows its models to operate within classified government environments, meaning the AI could be used for intelligence analysis, operational planning, and other defense applications that cannot be disclosed publicly.
The company defended the move as necessary for national security, arguing that American AI companies should ensure U.S. defense agencies have access to the most capable AI tools rather than leaving a capability gap that adversaries could exploit.
Critics counter that classified deployment means there is no public oversight of how the AI is being used, which directly conflicts with OpenAI’s stated commitment to transparency and safety.
The Competitive Fallout
The backlash has created an unexpected opening for Anthropic, Google, and other AI providers. Anthropic, which has consistently maintained that its Claude models should not be used for military purposes, has positioned itself as the ethical alternative — and Claude saw its largest single-day download spike since launch.
Google’s Gemini also saw increased interest, though its own government contracts complicate the "ethical alternative" narrative.
What Happens Next
Industry analysts predict the uninstall spike will subside within weeks as the news cycle moves on. However, the episode has lastingly shifted the conversation about AI ethics in the industry. The question of whether AI companies should work with military and intelligence agencies is no longer theoretical — it is now the defining fault line between major AI providers.
For ChatGPT’s 900 million weekly active users, the choice between convenience and conscience has suddenly become very real.