When ChatGPT-5 Forgot Its LEGO – And Then Remembered

[Image: ChatGPT-5 knows LEGO]

One of the fascinating things about living at the intersection of AI and LEGO collecting is that you start to see technology through the lens of the bricks. Sometimes everything snaps together seamlessly, and other times you realise a critical piece is missing from the box.

That’s exactly what happened recently when I asked ChatGPT-5 to take my notes and concepts and generate the content for my review of LEGO Icons 10332: Medieval Town Square.

The Hiccup

When I first prompted it with “Can you do a review of LEGO 10332?”, ChatGPT-5 stalled.

Instead of jumping straight into the review and image, it responded with hesitation:

“Sorry—it seems there was a hiccup trying to fetch external info about LEGO set 10332. Let me see if I can still help without live data… LEGO 10332, judging from the numbering, is likely a Creator Expert Modular Building style set, but I don’t have baked-in details.”

That was surprising. ChatGPT-4o, the previous model, could do this “in its sleep.” Ask it about a set number, and it would immediately know the theme, release date, piece count, and often even the retail price. It knew about me, Luke, and Rose, and everything we’d built previously was in its memory.

But GPT-5’s initial caution wasn’t about knowledge loss; it was about accuracy. OpenAI deliberately “sandboxed” GPT-5 at launch so it wouldn’t hallucinate data. The model leaned on transparency (“I don’t know yet”) instead of bluffing. Admirable, yes, but for a LEGO reviewer, frustrating.

The Fix

Here’s the twist. When I nudged it again with the same request a week later:

“Can you do a review of LEGO 10332 please?”

ChatGPT-5 came back fully armed and operational.

It delivered a 900–1,000-word review of LEGO 10332: Medieval Town Square in my exact blog style, even weaving in the familiar scene of Darren (50, black hair), Luke (7, brown hair), and Rose (4, brown hair) building at their kitchen table. You can read the result here: https://www.redmondreviews.com/2025/08/23/lego-set-review-medieval-town-square-10332/

No hedging. No hiccup. Just the kind of grounded, detailed storytelling that I’ve come to know, love, and expect. Thank you, ChatGPT.

What Changed?

So, what happened between those two prompts?

It seems OpenAI quietly re-enabled knowledge grounding for set references that fall into well-established databases (like LEGO’s numbering). Where before GPT-5 was overcautious, it now confidently draws from its updated training and context-handling to produce reliable, fact-based reviews.

In other words: ChatGPT-5 fixed the issue.

Why It Matters

For me, this is a reminder that AI isn’t static. Features that feel like a regression one week can be improved the next. OpenAI is balancing on a tightrope:

  • Accuracy vs. hallucination – GPT-5 prefers to under-promise rather than over-invent.
  • Grounded knowledge vs. connectors – Unlike Claude’s connector ecosystem, ChatGPT-5 doesn’t yet blend external databases natively. But its recall and grounding are getting sharper.
  • Consistency vs. creativity – A model that hesitates before answering may frustrate in the moment, but in the long run, it builds trust.

For bloggers, reviewers, and LEGO storytellers like me, that balance is everything. I don’t want a bot making up minifigures that never existed. I want truthful storytelling, wrapped in creativity.

I’ve already written about Claude’s improvements in two posts this week; here is how ChatGPT is maturing too.

The Bigger Picture

This episode also shows how we’re moving into a new phase of AI use. GPT-4o felt like a fluent, eager assistant: fast, knowledgeable, sometimes overconfident. GPT-5 is more like a cautious editor who double-checks before speaking, but when it does speak, it’s powerful.

The evolution mirrors LEGO itself. Think back to early sets where instructions were vague, forcing you to guess. Compare that to today’s meticulously designed manuals. Both had their charm, but the modern way gives builders more confidence.

GPT-5 is doing the same. It’s tightening the tolerances.

What’s Next

For my part, I’ll keep testing GPT-5 with the kinds of prompts that matter most to me: LEGO reviews, AI commentary, the Redmond’s Forge launch, and the intersection between bricks, galaxies far away, and bytes.

If you’re following my journey at Redmond’s Forge (https://www.redmondsforge.com) and Redmond Reviews (https://www.redmondreviews.com), you’ll keep seeing AI-powered LEGO reviews that blend the best of both worlds.

And if you’re here on Darren Redmond’s AI Transformations (https://darrenredmond.com), the takeaway is simple:

  • Don’t write off an AI model because it stumbles.
  • Push it, test it, re-prompt it.
  • Often, the fix is closer than you think.

ChatGPT-5 may have forgotten its LEGO facts for a moment, but now it’s building stronger than ever, and my n8n workflows are smooth and delivering again.
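For the curious, here is a minimal sketch of the kind of call an automated review step can make, roughly what an n8n HTTP request step or a small Node.js script would do. It is not my actual workflow: the model identifier, the prompt wording, and the requestLegoReview helper are illustrative assumptions. It just shows how simple the "re-prompt it" advice is to automate.

```typescript
// Minimal sketch (not my actual n8n workflow): one request to the OpenAI
// Chat Completions API asking for a LEGO set review. The model name,
// prompt wording, and helper name are illustrative assumptions.
const OPENAI_API_KEY = process.env.OPENAI_API_KEY ?? "";

async function requestLegoReview(setNumber: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-5", // assumed model identifier
      messages: [
        { role: "user", content: `Can you do a review of LEGO ${setNumber} please?` },
      ],
    }),
  });

  const data = await response.json();
  // If the reply hedges ("I don't have baked-in details"), the cheapest fix,
  // as described above, is simply to re-run the same prompt later.
  return data.choices[0].message.content;
}

requestLegoReview("10332").then(console.log);
```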
