Since the beginning of the summer, ChatGPT Plus cancellations have been rising, reflecting growing dissatisfaction among paying users with OpenAI’s offering. Although OpenAI continues to develop its AI, many subscribers feel the service no longer meets their needs. Below are the major factors behind this trend.
Subscription cost versus value proposition
Cost concerns top the list of complaints from members who have already canceled. In August 2025, 38 percent of former ChatGPT Plus users said the $20/month price was simply too much, especially when stacked against other subscription services. Many feel they are getting diminishing returns on what is supposed to be a premium product: intermittent slowdowns and feature issues undermine the value a paid tier promises. For budget-conscious students and professionals, this often tips the decision toward canceling, since the free version, or occasional use of the basic tier, would suffice.
Performance and accessibility limitations
Even Plus fails to meet the expectations of a paid service. Performance and access limits account for 31% of cancellations, with users hitting the 40-message cap during peak usage. Slow or lagging responses, especially from the GPT-4 models, are another common complaint: they break workflows and undermine reliability. When paying users still confront “capacity” messages or multi-minute response times, the premium label rings hollow.
Competitors offering pay-per-use flexibility are capitalizing on OpenAI’s flat pricing. Approximately 23% of departing subscribers have shifted to API-based services such as laozhang.ai for more predictable, usage-based fees. Others have moved to rival platforms such as Claude Pro or Microsoft Copilot Pro, which cost about the same but integrate better with office suites and impose fewer caps. These alternatives are especially attractive to developers and occasional heavy users because they tie costs directly to actual usage.
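To make the pay-per-use appeal concrete, here is a minimal back-of-the-envelope sketch comparing a flat $20/month subscription with usage-based API billing. The per-token rates, message sizes, and request volumes below are illustrative assumptions, not actual prices from OpenAI, laozhang.ai, or any other provider.

```python
# Back-of-the-envelope comparison: flat subscription vs. pay-per-use API billing.
# NOTE: the rates below are illustrative placeholders, not real provider prices.

FLAT_MONTHLY_FEE = 20.00            # flat ChatGPT Plus-style fee, USD/month
PRICE_PER_1K_INPUT_TOKENS = 0.005   # assumed API rate, USD
PRICE_PER_1K_OUTPUT_TOKENS = 0.015  # assumed API rate, USD

def api_monthly_cost(requests_per_day: int,
                     avg_input_tokens: int = 500,
                     avg_output_tokens: int = 700,
                     days: int = 30) -> float:
    """Estimated monthly cost if every request were billed per token."""
    per_request = (avg_input_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS \
                + (avg_output_tokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS
    return per_request * requests_per_day * days

if __name__ == "__main__":
    for requests_per_day in (5, 20, 50, 100):
        cost = api_monthly_cost(requests_per_day)
        cheaper = "pay-per-use" if cost < FLAT_MONTHLY_FEE else "flat subscription"
        print(f"{requests_per_day:>3} requests/day -> ${cost:6.2f}/month "
              f"(cheaper option: {cheaper})")
```

Under these assumed rates, light users come out well below $20/month, which is the gist of why usage-based billing appeals to occasional users, while heavy daily users are still better served by a flat subscription.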
Technical and usability frustrations
Beyond cost and speed, users also report inconsistent adherence to prompts and overly cautious content filters. ChatGPT not infrequently ignores clear instructions and produces generic or irrelevant output instead. Equally annoying, the model’s guardrails can trigger unwarranted refusals of innocuous requests, forcing subscribers into elaborate rephrasing just to get straightforward information or creative prose. Both issues hinder productivity and drive away users looking for seamless, practical AI use.
Long-term context loss and memory wipes
Trust goes out the window once a subscriber is cut off from previous work. In early 2025, a memory-architecture change erased years of chat history for many users, derailing ongoing projects and research threads. Even without such wholesale data loss, ChatGPT’s session-based context and lack of persistent memory frustrate users who expect an AI to remember across sessions. For those treating ChatGPT as a collaborative notebook, this unpredictability rules it out as a viable option.
Abrupt model changes
OpenAI’s move to GPT-5 proved hugely controversial, largely because GPT-4o was withdrawn without prior notice (covered in the related article “What is the difference between GPT-5 and GPT-4?”). Many subscribers mourned the personality and creativity they associated with the earlier models and described GPT-5 as “cold and disappointing.” Access to GPT-4o was later restored, but the abrupt removal felt like a betrayal, driving some subscribers to cancel on principle rather than risk future feature churn.
Customer service and policy concerns
Lastly, problems with billing support compound the frustration. Plus members expect quick assistance but instead encounter slow response times and messy cancellation flows on both web and mobile. Trust erodes when account issues sit unresolved, or when users must jump through hoops to cancel through Apple’s or Google’s ecosystems rather than directly through OpenAI’s portal.
Though ChatGPT remains a preeminent generative AI, the mismatch between cost and value, performance hiccups, rigid usage policies, data-loss incidents, and unilateral product changes have triggered a flurry of subscription cancellations. To stem the outflow and retain its paying user base, OpenAI must address these pain points: improve reliability, offer more flexible plans, restore trust in data integrity, and communicate changes transparently.