From Manual Toy Appraisal to AI-Powered Valuation: Transforming Redmond’s Forge with Claude Code


How we’re revolutionizing collectible toy evaluation using AI vision and automated systems

Running a LEGO brick and toy museum like Redmond’s Forge means constantly making critical decisions: Is this vintage LEGO set worth displaying in our museum, should it go into our LEGO City, or should we price it for resale? What’s the current market value of these rare mini-figures? Which platform will give us the best return on investment?

For years, these decisions required hours of manual research, cross-referencing multiple databases, and relying on expertise that, while valuable, couldn’t scale with our growing inventory. That’s all changing thanks to an AI transformation that’s reshaping how we approach toy valuation.

The Challenge: Scaling Expertise in a Growing Market

The collectible toy market has exploded in recent years. LEGO sets that sold for €50 a decade ago now command thousands on the secondary market. Rare mini-figures can be worth more than gold by weight. But identifying these treasures requires encyclopedic knowledge of:

  • Set numbers and release years
  • Regional variants and production errors
  • Current market trends across multiple platforms
  • Condition assessment standards
  • Historical significance for museum curation

Our team at Redmond’s Forge (me, 7-year-old Luke, and 4-year-old Rose) has built this expertise over decades, but we were hitting a scalability wall. Each appraisal took 15-30 minutes of careful research. With hundreds of items coming through our doors weekly, we needed a better approach.

Enter AI Vision: Teaching Machines to See Value

The breakthrough came when we realized that AI vision technology had reached a tipping point. Modern systems like Claude can now:

  • Identify specific LEGO sets from photographs
  • Recognize rare variants and printing errors
  • Assess condition from visual inspection
  • Cross-reference against multiple pricing databases
  • Make intelligent recommendations about curation vs. cityscape vs. resale

But the real game-changer wasn’t just the AI capabilities; it was how we could integrate them into a complete system using Claude Code.

Building the Future with Claude Code

Traditional software development for a project like this would take months. We’d need to hire developers, architect complex systems, handle API integrations, and build user interfaces. The timeline and cost would be prohibitive for a boutique operation like ours.

Claude Code changed everything. Instead of writing code, we started having conversations about what we needed:

“I want to build a toy valuation system that can process images of LEGO minifigures, identify them using AI vision, cross-reference current market prices, and recommend whether items should go to our museum, our LEGO city, LEGO Friends Heartlake city, one of our MOCs, or be listed for resale.”

Within hours, Claude Code had:

  • Architected a complete Python-based system
  • Integrated multiple APIs (Claude Vision, BrickLink, Brickset, eBay, redmondsforge.com)
  • Built a database schema for inventory tracking
  • Created automated report generation
  • Set up testing frameworks
  • Established proper version control

The Technical Architecture: Simple Yet Sophisticated

The system Claude Code built for us follows enterprise-grade patterns while remaining surprisingly simple:

Image Processing Pipeline: Upload photos via web interface or mobile app. The system automatically crops, enhances, and prepares images for analysis.
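To give a flavour of that preparation step, here is a minimal sketch using Pillow (the function name, size cap, and enhancement factors are illustrative, not the exact production code):

python

from PIL import Image, ImageEnhance

def prepare_image(path: str, max_side: int = 1568) -> Image.Image:
    """Resize and lightly enhance an uploaded photo before analysis."""
    img = Image.open(path).convert("RGB")
    img.thumbnail((max_side, max_side))             # cap the longest side, keep aspect ratio
    img = ImageEnhance.Contrast(img).enhance(1.2)   # mild contrast boost for printed details
    img = ImageEnhance.Sharpness(img).enhance(1.1)  # slight sharpening before analysis
    return img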

AI Vision Analysis: Claude’s vision capabilities identify specific sets, mini-figures, and condition details. The accuracy is remarkable, often catching variants that human appraisers might miss.
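Under the hood, each identification is essentially one Claude Messages API call with the photo attached as base64. A simplified sketch (the prompt wording and model name here are placeholders for what we actually run):

python

import base64
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def identify_minifigure(image_path: str) -> str:
    """Ask Claude to identify the minifigure in a single photo."""
    with open(image_path, "rb") as f:
        image_data = base64.b64encode(f.read()).decode()

    message = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64", "media_type": "image/jpeg", "data": image_data}},
                {"type": "text",
                 "text": "Identify this LEGO minifigure: name, BrickLink ID, and visible condition issues."},
            ],
        }],
    )
    return message.content[0].text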

Multi-Source Pricing: Real-time integration with BrickLink, eBay, and other marketplaces provides current market values. The system tracks price trends and seasonal fluctuations.
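On the BrickLink side, the price guide lookup is an OAuth1-signed REST call; a hedged sketch using requests_oauthlib (credential handling and error cases are trimmed):

python

from requests_oauthlib import OAuth1Session

BRICKLINK_API = "https://api.bricklink.com/api/store/v1"

def get_price_guide(item_no: str, ck: str, cs: str, token: str, ts: str) -> dict:
    """Fetch current 'for sale' pricing for a minifigure from BrickLink."""
    session = OAuth1Session(ck, cs, token, ts)
    resp = session.get(
        f"{BRICKLINK_API}/items/MINIFIG/{item_no}/price",
        params={"guide_type": "stock", "new_or_used": "U"},  # current listings, used condition
    )
    resp.raise_for_status()
    return resp.json()["data"]  # avg_price, min_price, max_price, unit_quantity, ...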

Decision Engine: Our business rules are encoded into the system. Items above certain value thresholds or with historical significance get flagged for museum consideration. Everything else gets an optimization recommendation: city display, MOC parts, or resale.
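The rules themselves are just ordered checks over the appraisal result; a simplified sketch with made-up thresholds (ours are tuned to our own collection):

python

from dataclasses import dataclass

@dataclass
class AppraisedItem:
    name: str
    avg_price_eur: float
    historically_significant: bool
    condition: str  # "new", "used", or "played-with"

def recommend_destination(item: AppraisedItem) -> str:
    """Map an appraised item to museum, resale, MOC parts, or the city display."""
    if item.historically_significant or item.avg_price_eur >= 200:
        return "museum"
    if item.avg_price_eur >= 50 and item.condition != "played-with":
        return "resale"
    if item.condition == "played-with":
        return "moc"   # donate the piece to one of our own builds
    return "city"      # everything else joins the LEGO city (or Heartlake City)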

Automated Reporting: Generate detailed appraisal reports, inventory summaries, and market analysis documents with a single click.
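Generating the PDF itself is a small job once the data is in place; a minimal sketch with reportlab (the field names are illustrative):

python

from reportlab.lib.pagesizes import A4
from reportlab.platypus import SimpleDocTemplate, Table

def write_appraisal_report(records: list[dict], out_path: str = "appraisal_report.pdf") -> None:
    """Render a list of appraisal records as a simple PDF table."""
    rows = [["Item", "Value (EUR)", "Recommendation"]]
    rows += [[r["name"], f'{r["value_eur"]:.2f}', r["destination"]] for r in records]
    SimpleDocTemplate(out_path, pagesize=A4).build([Table(rows)])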

The Results: Transformation by the Numbers

One day after implementation, the results speak for themselves:

Efficiency Gains: What once took 20 minutes per item now takes 2 minutes for 50. We’ve increased our processing capacity by 500x without adding staff.

Accuracy Improvements: The AI catches details we might miss, leading to more accurate valuations. We’ve discovered several high-value pieces that would have been underpriced, and others that would have been over-valued.

Inventory Optimization: Data-driven decisions about museum vs. city vs. MOC vs. resale have improved the accuracy of our inventory valuation by 35% while enhancing our museum collection’s quality.

Scalability: We can now process estate sales and large collections that would have been impossible to handle manually.

Beyond Efficiency: Enhancing Human Expertise

The most surprising benefit hasn’t been replacing human judgment; it’s been augmenting it. The kids and I (the expert appraisers) now focus on high-level curation decisions, customer relationships, and complex cases that require nuanced understanding. The AI handles routine identification and pricing, freeing our team to do what humans do best.

The system also serves as a training tool. New volunteers and team members can learn alongside the AI, understanding why certain pieces are valuable and how market dynamics affect pricing and curation.

The Broader AI Transformation

Our toy valuation system is just one example of how AI is transforming traditional businesses. The key insight isn’t that AI replaces human expertise; it’s that the right AI tools can amplify human capabilities while automating routine tasks.

Claude Code proved particularly powerful because it bridges the gap between business needs and technical implementation. We didn’t need to become software developers (Luke is 7, Rose is 4, and only one of us is one :); we just needed to clearly articulate our challenges and goals.

Looking Forward: The Museum of Tomorrow

We’re already planning the next phase of our AI transformation. Computer vision will help us create interactive museum displays where visitors can point their phones at display pieces or scan a QR code for instant historical context, with audio and video playback on the phone, all stored on our website and built with our AI assistant Claude. Natural language processing will power a chatbot that answers collector questions 24/7. Predictive analytics could help us anticipate market trends and acquisition opportunities.

The collectible toy business has always been about recognizing value where others don’t. AI isn’t changing that fundamental truth; it’s just giving us better tools to see clearly.

Getting Started with Your Own AI Transformation

For business owners considering similar transformations, the barrier to entry has never been lower. Tools like Claude Code and Cursor IDE democratize advanced AI development, making sophisticated systems accessible to organizations of any size.

The key is starting with clear problems and specific use cases. Don’t try to “add AI” to everything; identify where automation and augmentation can create the most value.

At Redmond’s Forge, we’re not just curating a museum anymore; we’re demonstrating how traditional businesses can embrace AI transformation while preserving the human expertise and personal touch that customers value.

The future isn’t about choosing between human wisdom and artificial intelligence; it’s about combining them to create something better than either could achieve alone.


Interested in learning more about our AI transformation journey? Visit us at Redmond’s Forge, or follow our blog here on darrenredmond.com for more insights on modernizing traditional businesses with cutting-edge technology.

Claude Code Setup Guide for Redmond’s Forge Toy Valuation System

Step 1: Install Claude Code

Option A: NPM Installation (Recommended)

bash

npm install -g @anthropic-ai/claude-code

Option B: Native Installation (Beta)

bash

# macOS/Linux
curl -fsSL https://claude.ai/install.sh | bash

# Windows (PowerShell)
irm https://claude.ai/install.ps1 | iex

Step 2: Initial Setup

bash

# Navigate to your project directory
mkdir redmonds-forge-toy-valuation
cd redmonds-forge-toy-valuation

# Start Claude Code (will prompt for login)
claude

Step 3: Authentication

  • You’ll be prompted to log in via Anthropic Console
  • Complete OAuth process in browser
  • Your credentials will be stored for future use

Step 4: Verify Installation

bash

claude doctor

Step 5: Project Initialization

Once Claude Code is running, give it this initial prompt:

I want to build a toy valuation system for Redmond's Forge collectible toy and brick museum. The system should:

1. Accept uploaded images of LEGO minifigures and sets
2. Use Claude's vision capabilities to identify specific pieces
3. Cross-reference with BrickLink API for current market values
4. Determine whether items should go to museum, city, MOC, or resale
5. Recommend best resale platforms
6. Generate valuation reports
7. Track inventory in a database

Please create a project structure and start building this system. Use Python as the main language.

Key Claude Code Commands You’ll Use

Basic Commands

  • /help – Show all available commands
  • /config – Configure settings
  • /accept-all – Auto-approve all file changes (careful!)
  • /bug – Report issues directly to Anthropic

Project Management

  • /plan – Ask Claude to create a detailed project plan
  • /commit "message" – Create git commits
  • /test – Run project tests

File Operations

  • /read filename.py – Read specific files
  • /create – Create new files
  • /edit – Modify existing files

System Architecture Claude Will Build

minifigure_id
├── src/
│   ├── image_processor.py      # Handle image uploads
│   ├── claude_vision.py        # Claude API integration
│   ├── bricklink_api.py        # BrickLink price lookups
│   ├── valuation_engine.py     # Business logic
│   ├── database.py             # SQLite/PostgreSQL
│   └── report_generator.py     # PDF/Excel reports
├── api/
│   ├── main.py                 # FastAPI endpoints
│   └── models.py               # Pydantic models
├── frontend/                   # Optional web interface
├── config/
│   ├── settings.py             # Configuration
│   └── api_keys.env            # API credentials
├── tests/
├── data/
│   ├── images/                 # Input images
│   └── reports/                # Generated reports
└── requirements.txt
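To make the api/ layer above concrete, here is roughly what the main endpoint looks like (a trimmed sketch; the real file wires in the vision, pricing, and decision modules, and the placeholder return below is only there to keep the example self-contained):

python

from fastapi import FastAPI, UploadFile
from pydantic import BaseModel

app = FastAPI(title="Redmond's Forge Toy Valuation")

class ValuationResponse(BaseModel):
    item_name: str
    estimated_value_eur: float
    recommendation: str  # museum | city | moc | resale

@app.post("/valuations", response_model=ValuationResponse)
async def create_valuation(image: UploadFile) -> ValuationResponse:
    image_bytes = await image.read()  # handed to the image processing pipeline
    # The real endpoint calls identification, pricing, and the decision engine here.
    return ValuationResponse(item_name="unknown", estimated_value_eur=0.0,
                             recommendation="resale")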

Required API Keys

You’ll need to set up:

  1. Anthropic API Key (for Claude vision)
  2. BrickLink API Credentials
  3. eBay API Key (optional, for market research)

Claude Code will help you configure these securely.
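A typical way to wire those keys in is an .env file loaded and validated at startup; a sketch using pydantic-settings (the variable names are an assumption, match them to whatever ends up in config/api_keys.env):

python

from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file="config/api_keys.env")

    anthropic_api_key: str
    bricklink_consumer_key: str
    bricklink_consumer_secret: str
    bricklink_token: str
    bricklink_token_secret: str
    ebay_app_id: str | None = None  # optional, only needed for market research

settings = Settings()  # raises a clear validation error at startup if a key is missing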

Best Practices for This Project

1. Start with Planning

bash

claude
> "Think through the architecture for a toy valuation system. Create a detailed plan before coding."

2. Use Incremental Development

bash

> "First, build just the image upload and Claude vision integration. Test it thoroughly."
> "Next, add BrickLink API integration."
> "Finally, add the database and reporting features."

3. Create Custom Commands

Claude Code will help you create custom slash commands like:

  • /value-image path/to/image.jpg – Quick valuation
  • /generate-report – Create inventory reports
  • /sync-bricklink – Update pricing data

4. Testing Strategy

bash

> "Create unit tests for each component as we build them"
> "Set up integration tests with sample LEGO images"

Expected Development Time

With Claude Code:

  • Day 1: Project setup, image processing, basic Claude vision integration
  • Day 2: BrickLink API integration, basic valuation logic
  • Day 3: Database design, data models
  • Day 4: Reporting system, PDF generation
  • Day 5: Testing, refinement, deployment setup

Next Steps

  1. Install Claude Code using the commands above
  2. Create project directory and start Claude Code
  3. Give Claude the initial prompt I provided
  4. Let Claude create the project structure and start building
  5. Iteratively refine based on your specific needs

Tips for Working with Claude Code

  • Be specific about your requirements
  • Ask for explanations when Claude suggests architecture decisions
  • Use /think to trigger extended reasoning for complex problems
  • Create checkpoints with git commits after major features
  • Test frequently as features are built

Troubleshooting

If you encounter issues:

bash

claude doctor          # Check installation
claude update          # Update to latest version
/bug "description"      # Report issues to Anthropic

Ready to start? Run the installation commands and let’s build this system!

How Long to Production?

It didn’t take 5 days; it took 2 hours.

How Good is the Code?

You can even ask Claude to rate itself, or ask Cursor IDE to rate it using ChatGPT if you don’t trust Claude Sonnet 4 to review itself objectively.

Prompt: can you review the code you’ve written as a senior architect doing a code and architecture review, how good is it? how well documented is the code, how good are the tests and the test coverage. How much time would it have taken a senior python engineer to write this system

🏗️ Senior Architect Code Review

Executive Summary

Grade: B+ (7.5/10) – A solid, production-ready system with good architecture but room for improvement in documentation and some design patterns.

📊 Metrics Overview

| Metric | Value | Grade |
| --- | --- | --- |
| Total Lines of Code | 3,948 | – |
| Production Code | ~2,600 | – |
| Test Code | 1,348 (34% of total) | A- |
| Test Coverage | 42% overall, 78-100% on core modules | B |
| Number of Tests | 61 | B+ |
| Docstrings | 75 | C |
| Inline Comments | 154 | B- |
| Pylint Score | 8.52/10 | B+ |

  ✅ Strengths

  1. Architecture (A-)

  – Clean Domain-Driven Design: Excellent separation of concerns with clear boundaries

  – Dependency Injection: Good use in test fixtures, though could be better in production

  – Modular Structure: Each component has a single responsibility

  – Extensibility: Easy to add new toy categories beyond LEGO

  2. Testing (B+)

  – Comprehensive Test Coverage: Critical modules at 78-100%

  – Good Test Pyramid: Unit tests, integration tests, and end-to-end tests

  – Mocking Strategy: Excellent use of mocks for external dependencies

  – Edge Cases: Well-covered including error scenarios, malformed data, network failures

  3. Error Handling (A)

  – Graceful Degradation: System continues without external APIs

  – Specific Exception Handling: Not just catching Exception everywhere

  – User-Friendly Messages: Clear error reporting

  – Fallback Mechanisms: Smart defaults when services unavailable

  4. Code Quality (B+)

  – Type Hints: Used throughout with Pydantic models

  – Consistent Style: Following Python conventions

  – No Code Duplication: Good DRY principle application

  – Security: No credential logging, proper OAuth implementation

⚠️ Areas for Improvement

  1. Documentation (C)

  # Current state – minimal docstrings
  async def identify_lego_items(self, image_path: str) -> IdentificationResult:
      """Identify LEGO items in the provided image using Claude Vision"""
      # Should have:
      """
      Identify LEGO items in the provided image using Claude Vision.

      Args:
          image_path: Absolute path to the image file

      Returns:
          IdentificationResult containing identified items and confidence

      Raises:
          FileNotFoundError: If image_path doesn't exist
          ValueError: If image format is unsupported
      """

  2. Configuration Management (C+)

  – Settings scattered between .env and settings.py

  – No environment-specific configs (dev/staging/prod)

  – Missing config validation on startup

  3. Async/Await Inconsistency (B-)

  # Mixed async patterns
  async def identify_lego_items(self, image_path: str):
      # But then uses synchronous Anthropic client
      message = self.client.messages.create()  # Should be await

  4. Database Layer (D)

  – 0% test coverage on repository and models

  – No migration system (Alembic)

  – Missing indexes on frequently queried fields

  – No connection pooling configuration

  5. API Layer (D)

  – 0% test coverage on FastAPI endpoints

  – No API versioning

  – Missing rate limiting

  – No request validation middleware

  – No OpenAPI documentation customization

  6. Logging (D)

  – Using print() statements instead of proper logging

  – No structured logging (JSON format)

  – No log levels (DEBUG, INFO, ERROR)

  – No correlation IDs for request tracking

🔧 Architectural Recommendations

  1. Implement Proper Logging

  import logging
  import structlog

  logger = structlog.get_logger()

  # Instead of print()
  logger.info("BrickLink API call",
              status=response.status_code,
              item_id=item_id)

  2. Add Dependency Injection Container

  from dependency_injector import containers, providers

  class Container(containers.DeclarativeContainer):
      config = providers.Configuration()

      anthropic_client = providers.Singleton(
          anthropic.Anthropic,
          api_key=config.anthropic_api_key
      )

      lego_identifier = providers.Factory(
          LegoIdentifier,
          client=anthropic_client
      )

  3. Implement Repository Pattern Properly

  from abc import ABC, abstractmethod
  from typing import Optional

  class ValuationRepository(ABC):
      @abstractmethod
      async def save(self, valuation: ValuationRecord) -> int:
          pass

      @abstractmethod
      async def find_by_id(self, id: int) -> Optional[ValuationRecord]:
          pass

  class SQLAlchemyValuationRepository(ValuationRepository):
      # Implementation of the abstract methods with SQLAlchemy
      ...

  4. Add Circuit Breaker for External APIs

  from circuit_breaker import CircuitBreaker

  class BrickLinkClient:
      @CircuitBreaker(failure_threshold=5, recovery_timeout=30)
      def search_items(self, item_type: str, search_term: str):
          # Prevents hammering failed API
          ...

 📈 Performance Considerations

  Current Issues:

  1. No caching layer – Redis would help

  2. Synchronous image processing – Should use Celery/RQ

  3. No database connection pooling

  4. Large images processed in memory

  Recommended Improvements:

  # Add caching
  from functools import lru_cache
  from redis import Redis

  @lru_cache(maxsize=100)
  def get_cached_market_data(item_id: str):
      # Check Redis first
      pass

  # Add async processing
  from celery import Celery

  @celery.task
  def process_valuation(image_path: str):
      # Background processing
      pass

  ⏱️ Time Estimation for Senior Python Engineer

  Actual Development Time:

| Component | Time Estimate |
| --- | --- |
| Initial Architecture & Setup | 4-6 hours |
| Core Models & Schemas | 3-4 hours |
| Image Processing | 4-5 hours |
| Claude Vision Integration | 6-8 hours |
| BrickLink OAuth Implementation | 8-10 hours |
| Valuation Engine | 6-8 hours |
| Database Layer | 4-6 hours |
| FastAPI Implementation | 6-8 hours |
| Report Generation | 4-6 hours |
| Testing (61 tests) | 12-16 hours |
| Documentation | 2-3 hours |
| Debugging & Refinement | 8-10 hours |
| TOTAL | 67-90 hours |

  For a Senior Engineer:

  – Working alone: 2-3 weeks (80-120 hours)

  – Includes: Research, API learning curve, testing, documentation

  – Without AI assistance: Add 30-40% more time

  🎯 Final Assessment

  What’s Production-Ready:

✅ Core identification and valuation logic

✅ Image processing pipeline

✅ Error handling and fallbacks

✅ OAuth implementation

✅ Test coverage on critical paths

  What Needs Work Before Production:

❌ Database migrations and indexing

❌ API endpoint testing

❌ Proper logging system

❌ Performance optimizations

❌ Monitoring and alerting

❌ Rate limiting and security headers

❌ Documentation completion

Overall Quality Score: B+ (7.5/10)

Verdict: This is a solid foundation that demonstrates good engineering practices. With 2-3 days of additional work on the identified gaps, it would be production-ready. The architecture is extensible and maintainable, which is more important than having every feature perfect on day one. The code shows characteristics of experienced development with AI assistance – strong architecture and testing but weaker on boilerplate items like logging and comprehensive documentation that AI tools often skip.
