An unreleased OpenAI image generation model briefly appeared on LMArena under three different codenames before being taken down within hours. The model, internally referred to as GPT-Image-2, was live under aliases including maskingtape-alpha, gaffer-tape-alpha, and packingtape-alpha — a naming pattern that, according to The AI Corner, suggests OpenAI was stress-testing multiple variants simultaneously to evaluate which performed best in blind comparisons ahead of a potential public release.
Users who accessed the model before it disappeared reported significant improvements over its predecessor. The persistent warm yellow color cast that had become a signature flaw of GPT-Image-1 appears to have been eliminated. Photorealistic portraits, including complex beach scenes with multiple subjects, accurate hand anatomy, and realistic sunglass reflections, were described as indistinguishable from real photographs by viewers shown them without context.
World Knowledge: The Gap GPT-Image-2 Seems to Have Closed
Where GPT-Image-1 often defaulted to generic aesthetics, GPT-Image-2 demonstrated what early testers described as genuine world knowledge: IKEA storefronts at night rendered with architectural accuracy, YouTube and Windows interfaces reproduced closely enough to pass as real screenshots, and Minecraft scenes with correct in-game UI and art style. A fabricated “Claude Opus 5 Internal Document” embedded inside a Minecraft world generated by the model went viral on X, accumulating over 439,000 views and circulating widely as a case study in the model’s attention to detail.
This command of fine-grained real-world detail sets the model apart from earlier image generators, and it mirrors the race dynamic already playing out in language models: DeepSeek recently launched its V4 model built on Huawei chips, forcing competitors to accelerate their own roadmaps, as Frontierbeat reported.
The model’s text rendering was cited across multiple reports as a key improvement. Rather than floating text awkwardly over images, a persistent limitation of most AI image generators, GPT-Image-2 integrates written language into scenes naturally: The AI Corner reported handwritten medical notes with convincing penmanship and comic book panels with readable speech bubbles and accurate costume details on characters like Spider-Man and Batman. The three codenames running simultaneously (masking tape, gaffer tape, packing tape) suggest OpenAI was conducting parallel safety and quality evaluations before selecting a final version to ship, a process that typically signals a release is weeks away at most.
OpenAI Closes the Gap With Google DeepMind
Prior to the leak, Google DeepMind’s Nano Banana Pro had established itself as the benchmark for photorealism in early 2026, consistently outperforming GPT-Image-1.5 in head-to-head comparisons across AI benchmarking trackers. The early GPT-Image-2 samples suggest OpenAI has closed that gap and, in some categories, surpassed its rival. Independent testers noted the model outperforms Nano Banana Pro in realism, text rendering, and world knowledge, a combination that, if it holds in the final release, would represent a rare sweep of all three categories at once.
OpenAI has been consolidating its product lineup around the GPT-5 family after retiring GPT-4o in February, as Frontierbeat previously covered, and the appearance of a new image model in that orbit suggests the company is not limiting its upgrades to language capabilities alone.
The rapid removal of all three variants, within hours of being identified by users, is itself a signal. Leaks happen in AI, but companies rarely leave a near-production model exposed long enough to generate viral samples. The consensus among early analysts: this was not an accidental publish. It was a model about to ship.