According to ZDNet, OpenAI has released its latest ChatGPT model, GPT-5.2, which it calls the “most capable model series yet for professional knowledge work.” In a series of 10 text-based tests worth 100 points and four image-generation tests worth 20 points, the new model scored 109 out of a possible 120. That’s one point more on text and one point less on images than the previous GPT-5.1 model available on the free tier, for an identical overall total. The testing also revealed a new and potentially annoying behavior: the model sometimes asks for a “go signal” before delivering longer answers, even for prompts that don’t seem especially complex. It also showed a strong tendency toward very brief responses, and it notably failed a coding test that its predecessor had aced.
The go signal problem
Here’s the thing that’s going to drive pro users nuts. The new model has a habit of stopping mid-flow to ask permission before giving a full answer. ZDNet’s tester got messages like, “I’m ready to answer, but this request would require a longer, multi-paragraph explanation. I’m waiting for your go signal before proceeding.” For a truly massive request, maybe that’s considerate. But the tester found the behavior inconsistent: it appeared on some shorter prompts and not at all on others of similar length. It breaks your workflow. You’re in a zone, asking complex questions, and suddenly you have to click a button to let the AI do its job. That’s not an upgrade; it’s a speed bump. And for a tool you’re paying $20 a month for, it’s a tough sell.
Brevity and coding stumbles
The model is also super brief now. Like, “give me two one-sentence reasons” brief. The answers were technically accurate, but it feels like the AI is phoning it in. You can always ask for more, but the whole point of a “more capable” model should be richer, more nuanced output from the get-go, right? The bigger red flag was the coding test. GPT-5.2 botched a regular expression validation function, introducing errors that would cause the program to crash on certain inputs. GPT-5.1 aced the same test. How does the “better” model for professional work fail at a basic programming task? That’s a serious question for developers who might rely on it for code assistance. A sketch of what that failure mode can look like follows below.
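ZDNet doesn’t publish the failing code, so this is only a minimal sketch of the failure class described, under the assumption the bug was a classic one: a validator that dereferences a regex match without checking it first. The function names and the dollar-amount pattern here are hypothetical stand-ins, not the actual test.

```python
import re

# Hypothetical sketch -- not ZDNet's actual test code.
# In Python, a regex validator that "crashes with certain inputs" often
# fails like this: re.match() returns None when the input doesn't match,
# and calling .group() on None raises AttributeError.

DOLLAR_PATTERN = re.compile(r"^\$\d+(\.\d{2})?$")  # assumed example pattern

def validate_amount_buggy(text: str) -> str:
    # Works for "$19.99" but raises AttributeError for "19.99",
    # because .match() returned None.
    return DOLLAR_PATTERN.match(text).group()

def validate_amount_fixed(text: str) -> str | None:
    # Safe version: check for a match before dereferencing it.
    match = DOLLAR_PATTERN.match(text)
    return match.group() if match else None

if __name__ == "__main__":
    print(validate_amount_fixed("$19.99"))  # $19.99
    print(validate_amount_fixed("19.99"))   # None
    # validate_amount_buggy("19.99")        # AttributeError: 'NoneType' ...
```

If GPT-5.2’s error was in this family, it’s exactly the kind of edge case a code-assistance tool has to get right, because the happy path looks perfectly fine in a quick demo.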
What it did well
Look, it wasn’t all bad. The model nailed creative writing, generating a fun 3,286-word story. It excelled at explaining complex topics simply, like breaking down educational constructivism for a five-year-old. Its image generation was solid, producing a particularly cool dieselpunk robot scene. And it handled research, pattern recognition (like the Fibonacci sequence), and thematic analysis with ease. Basically, for a lot of common chatbot tasks, it’s perfectly competent. But that’s just it—it’s only *marginally* better, not a revolutionary leap. When the headline feature is a new confirmation dialog that interrupts you, you have to wonder where the real innovation is.
Is this progress?
This feels like a tweak, not a breakthrough. The overall score, identical to GPT-5.1’s, tells the whole story. For businesses or professionals considering an upgrade, or investing in AI integration, this kind of incremental, sometimes frustrating update raises tough questions. Are we hitting a plateau? Is the focus now on guardrails and cost-saving brevity rather than raw capability? The promise of AI was exponential growth, but GPT-5.2 delivers something that feels suspiciously linear. The “go signal” might just be a signal that we’re waiting for the next real leap forward.
