U.S. app uninstalls of ChatGPT’s mobile app jumped 295% day-over-day on Saturday, February 28, as consumers responded to the news of OpenAI’s deal with the Department of Defense (DoD), which has been rebranded under the Trump administration as the Department of War.
This data, which comes from market intelligence provider Sensor Tower, represents a sizable increase compared with ChatGPT’s typical day-over-day uninstall rate of 9%, as measured over the past 30 days.
Meanwhile, U.S. downloads of OpenAI competitor Anthropic’s Claude jumped 37% day-over-day on Friday, February 27, and 51% as of Saturday, February 28, after the company announced that it would not partner with the U.S. defense department. Anthropic said it was not able to agree on the deal terms over concerns that AI would be used to surveil Americans and in fully autonomous weaponry, which AI is not yet ready to do safely.
What really sucks is Google is likely to remove all those 1 star reviews, they’ve done it before when shit like this happens.
Edit: i feel like a good review could be…
1 star. By using this service you’ll be helping ChatGPT autonomously surveil you through the government and murder you with robots without any human in the loop. Don’t let ChatGPT kill you!
I am not subscribed to openai. Is there a way for me to not subscribe even harder?
I’d say pirate it but well cloud based and all kinda takes the wind from those sails.
I didn’t have a subscription but I still deleted my account.
same
i don’t have the app, but may install it to prop up the uninstall numbers
I’m-doing-my-part.gif
(I seem to have lost the ability to embed gifs.)
But the fewer people who use it the less money they’ll lose
How are they tracking uninstalls?
Anthropic has at least one active two-year deal with the DoD that expires ~2027: https://www.anthropic.com/news/anthropic-and-the-department-of-defense-to-advance-responsible-ai-in-defense-operations
Anthropic said it was not able to agree on the deal terms over concerns that AI would be used to surveil Americans and be used in fully autonomous weaponry, which AI is not yet ready to do safely.
To add to this, Anthropic also said (paraphrasing, I don’t recall the exact words) that even if AI could do it safely, there would be ethical concerns about oversight.
Copilot says these are ethical alternatives