The rollout of GPT-5 has sparked one of the most dramatic backlashes in AI history. As someone who’s been closely following these developments, I’ve watched the internet erupt with complaints about OpenAI’s latest model, with users lamenting everything from personality changes to basic mathematical errors.
But is GPT-5 truly “the worst model ever,” as some claim, or are we witnessing the growing pains of a technology still finding its footing? After examining the evidence and testing the model myself, I believe the truth lies somewhere between the hype and the hate.
Let’s start with what went wrong. OpenAI’s pre-launch hype created expectations no product could realistically meet. When Sam Altman posted an image of the Death Star rising above a planet to tease GPT-5, he wasn’t just announcing a product; he was promising a revolution. The actual release, while impressive in some ways, couldn’t possibly live up to such cosmic expectations.
The Model Switcher Fiasco
On launch day, OpenAI made a decision that infuriated many users: they removed access to older models such as GPT-4o and GPT-4.5. This forced migration felt like a bait-and-switch to loyal users who had built workflows around specific models.
The new “model router” system, which automatically selects which model to use based on your query, appeared to be a cost-cutting measure rather than an improvement. Many suspected OpenAI was routing queries to cheaper, less capable models whenever possible.
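To make the routing idea concrete, here is a minimal, purely hypothetical sketch of how a query router might triage prompts by estimated complexity. The model names, keywords, and thresholds below are my own illustrative assumptions, not OpenAI’s actual logic:

```python
# Hypothetical model router: send simple prompts to a fast, cheap model
# and complex ones to a slower "thinking" model. All names and heuristics
# here are illustrative assumptions, not OpenAI's implementation.

REASONING_HINTS = ("prove", "debug", "step by step", "derive", "refactor")

def route(prompt: str) -> str:
    """Pick a model tier based on crude complexity signals."""
    text = prompt.lower()
    complex_query = (
        len(text.split()) > 40  # long prompts get the heavier model
        or any(hint in text for hint in REASONING_HINTS)  # reasoning keywords
    )
    return "gpt-5-thinking" if complex_query else "gpt-5-fast"

print(route("What's the capital of France?"))     # gpt-5-fast
print(route("Debug this function step by step"))  # gpt-5-thinking
```

Even a crude heuristic like this shows why routing can misfire: a short prompt that genuinely needs careful reasoning would be sent to the fast tier, which matches the kinds of complaints users reported at launch.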
To their credit, OpenAI listened to the backlash. They’ve since restored access to legacy models for Plus users, allowing people to continue using GPT-4o if they prefer. This quick response shows they’re at least willing to adapt when users revolt.
Performance Issues That Matter
After testing GPT-5 extensively, I found several legitimate concerns:
- It’s frustratingly slow at times, even for simple questions
- It occasionally freezes or gets stuck “thinking” indefinitely
- Its coding abilities don’t match Claude Opus 4.1 or even o3 Pro in some tests
- It makes concerning accuracy errors on basic problems
The accuracy issues are particularly troubling. When I tested GPT-5 with the same problems others had reported, it answered some correctly (like the “How many Bs in blueberry?” question) but failed others outright (like a simple algebraic equation). This inconsistency means users can’t fully trust its responses without independent verification.
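Claims like the blueberry one are cheap to verify yourself, and it’s a habit worth building whenever accuracy matters. Here’s a quick Python check (the algebra example uses an equation of my own choosing, since the original failing problem wasn’t specified):

```python
# Independently verify "How many Bs are in 'blueberry'?"
word = "blueberry"
b_count = word.lower().count("b")
print(b_count)  # prints 2

# The same habit applies to simple algebra: verify a claimed solution
# by substituting it back. For example, for 2x + 3 = 11, x = 4:
x = 4
assert 2 * x + 3 == 11, "x = 4 does not satisfy 2x + 3 = 11"
print("solution checks out")
```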
The Personality Debate
Many users complained that GPT-5’s personality felt cold and mechanical compared to GPT-4o’s warmth. In my own testing, I didn’t notice a dramatic difference when comparing responses to prompts like “Baby just walked” or roleplay scenarios.
However, GPT-5 does tend to give slightly shorter responses, and that brevity may be contributing to the perception that it has less personality. It’s also possible that during the initial rollout, when the model router wasn’t working correctly, users were getting responses from the “thinking” model, which prioritizes logic over warmth.
The Future of AI Assistants
Despite its flaws, I believe GPT-5 represents a stepping stone toward a more personalized AI future. OpenAI seems to be moving toward a system where each user’s ChatGPT experience is customized to their preferences, whether that’s warm and friendly or direct and efficient.
The current version isn’t the final destination but rather a work in progress. Many of the current concerns are valid, but they’re also fixable with time and user feedback.
I remain cautiously optimistic about where this technology is heading. The goal shouldn’t be to create an AI that impresses us with party tricks, but one that genuinely makes our lives better through reliable, helpful assistance.
For now, GPT-5 exists in that middle ground: sometimes impressive, sometimes frustrating. It’s neither the disaster its harshest critics claim nor the revolution its marketing suggested. It’s simply the latest iteration of a technology that’s still evolving, with all the growing pains that entails.
Frequently Asked Questions
Q: Is GPT-5 actually worse than previous models?
In some ways, yes. My testing showed that GPT-5 struggles with certain tasks that older models handled well, particularly coding and some math problems, while performing better in other areas. The biggest issue is inconsistency rather than an across-the-board decline in quality.
Q: Why did OpenAI remove access to older models initially?
OpenAI likely wanted to simplify the user experience by having their system automatically select the appropriate model for each query. This approach makes sense for casual users who don’t know which model to choose. However, they underestimated how many power users had built workflows around specific models and their unique capabilities.
Q: Can I still use GPT-4o if I prefer it over GPT-5?
Yes, OpenAI has restored access to legacy models for Plus subscribers. You can enable this feature in your settings under the “General” tab by turning on “Show legacy models.” This will give you access to GPT-4o again.
Q: Is the model router just a cost-cutting measure?
While cost efficiency is likely a factor, the router also serves a legitimate user experience purpose. Most casual users don’t understand the differences between models and shouldn’t need to choose manually. The problem was that the router initially wasn’t working properly, often selecting inappropriate models for certain queries.
Q: Will GPT-5 improve over time?
Almost certainly. OpenAI has shown they’re responsive to user feedback by restoring legacy models and making adjustments to how the router works. Sam Altman has also acknowledged some of the initial issues. As with previous models, we can expect ongoing improvements and refinements to address the current limitations.