There are some tools in a developer’s life that feel a bit like an old guitar in a honky-tonk bar: the shine may have faded and the strings might creak, but every time you pick it up, it plays just right. For me, that’s SonarQube. I’ve seen frameworks come and go, languages rise and fall, and AI assistants promise the moon, but when it comes to keeping code honest, SonarQube has been the steady companion at the heart of the band. Call it my codebase cowboy: the one that helps me wrangle bugs, tame duplications, and keep the project riding true down the trail.
Setting up the stage (yesterday’s gig)
Yesterday felt like tuning up a brand-new steel guitar. I pulled down the latest version of SonarQube, dusted off the dashboard, and got it running on my Mac. Then I hitched up our AI-powered LEGO Valuation System, the one built in record time with Claude Code and Cursor, and sent it out on stage under SonarQube’s spotlight.
What happened next was pure harmony. The AI-generated code strutted its stuff, showing off its clever tricks and lightning-fast riffs, but SonarQube kept the beat steady. It was less a battle of egos and more like a band finding its groove.
The trio that plays in tune
If Claude Code is the hotshot guitarist and Cursor is the piano player laying down the chords, then SonarQube is the drummer with perfect timing. Together they sound like a Kenny Rogers, Kris Kristofferson, and Dolly Parton masterpiece, each one brilliant on their own, but unforgettable when they play together.
- Claude & Cursor: generating 10,000+ lines of clean, tested code in a single day. 5000+ lines of tests, >80% code coverage.
- SonarQube: stepping in to say, “That solo was fine, but let’s tighten up that rhythm, fix that missed note, and make sure the song can last a whole tour.”
- The Result: a system that’s not just flashy, but solid, reliable enough to stand the test of time.
The honest friend
SonarQube doesn’t sugarcoat. Yesterday it gave me an A in security (a clean bill of health), but also a D in reliability (21 open bugs that needed wrangling). It reminded me that even when the AI writes code faster than I can blink, somebody’s got to check if the fences are mended, the gates are locked, and the herd’s not about to run wild.
And that’s the thing: AI can generate code like a fiddle on fire, but SonarQube keeps it honest. It calls out the duplications, shows where the coverage dips, and makes sure we don’t ride into production with a busted wheel.
Why I still love it
After all these years, SonarQube is still the bandmate I wouldn’t tour without. It’s not the flashiest, it doesn’t grab headlines like AI coding companions, but it gives me something they can’t: confidence that the show will go on without a hitch.
It’s the steel string that never breaks, the backbeat that never falters, the quiet reassurance that when the lights go up and the crowd leans in, we’ll be ready to play.
Final chorus
So here’s my country chorus:
- AI can give us speed like we’ve never seen.
- But SonarQube gives us the discipline to make it last.
- Together, they don’t just write code, they write songs worth singing.
And that’s why, after all of these years, I still love SonarQube.
🎶 “You picked a fine time to leave me, all of this loose code, but now it’s time to work out the technical load…”
The SonarQube Report

What this says at a glance
- Security: A (0 issues, 0 hotspots outstanding) → ship-safe once auth/CORS are handled in app config.
- Reliability: D (21 open “bug” issues) → these are correctness risks (crashes, leaks, logic errors). This is the red flag.
- Maintainability: A (858 code smells) → lots of low-impact smells, but the technical-debt ratio is still low vs. code size.
- Coverage: 80.5% on 4.4k lines → solid, but make sure new code stays ≥90%.
- Duplications: 7% on 12k lines → higher than ideal; will slow future changes.
Immediate priorities (in order)
- Fix the 21 Reliability issues → unblock the D rating.
- Cap duplication on new code to ≤3% and gradually refactor legacy duplication.
- Hold new-code coverage at ≥90–95% (overall will float up from 80.5%).
- Chip away at top-impact maintainability smells that touch hot paths (async I/O, valuation engine, API handlers).
Playbook to clear the 21 Reliability issues
Open Sonar → Issues → Reliability → Severity ≥ Major and work top-down. Typical Python “bug” patterns and fixes:
- Resource leaks (files/sockets not closed) → wrap in `with` blocks; use aiofiles for async file I/O.
- Potential NPE/attribute errors → add guards, `Optional` typing, pydantic validation at boundaries.
- Mismatched equals/hash / dataclass pitfalls → use `@dataclass(eq=True, frozen=True)` where appropriate.
- Dangerous default args (`def f(x=[]):`) → replace with `None` + create in-body.
- Regex/timeouts → compile patterns once; set explicit timeouts on HTTP calls (BrickLink/OpenAI).
- Error swallowing (`except Exception: pass`) → narrow exceptions; log + rethrow or handle.
Aim: zero “Bug” issues; you should pop to Reliability B or A immediately.
Bring duplication down from 7% (practical steps)
- Find the clusters (Issues → Duplications → by file). You’ll likely see repeated:
- BrickLink API request/parse blocks
- Report rendering helpers
- Image pre/post-processing snippets
- Extract shared utilities (`src/shared/…`) and keep functions small & pure.
- Template inheritance for PDF/HTML reports (Jinja) instead of copy-paste.
- Strategy pattern for multi-modal ID: keep shared pre/post hooks in a base class (sketched below).
First pass target: ≤5% in 2–3 days without risky refactors. Longer-term: ~3%.
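For the strategy-pattern point, here’s roughly what the shared base class could look like. This is a sketch only; the class and method names are hypothetical, not the project’s actual code:

from abc import ABC, abstractmethod

class IdentificationStrategy(ABC):
    """Hypothetical base class: shared pre/post hooks live here once,
    so concrete strategies only implement the model-specific step."""

    def identify(self, image_bytes: bytes) -> dict:
        prepared = self._preprocess(image_bytes)   # shared, no copy-paste
        raw = self._run_model(prepared)            # strategy-specific
        return self._postprocess(raw)              # shared, no copy-paste

    def _preprocess(self, image_bytes: bytes) -> bytes:
        # resize/normalize in exactly one place
        return image_bytes

    def _postprocess(self, raw: dict) -> dict:
        # common validation and confidence thresholds
        return raw

    @abstractmethod
    def _run_model(self, prepared: bytes) -> dict:
        ...

class VisionAPIStrategy(IdentificationStrategy):
    def _run_model(self, prepared: bytes) -> dict:
        # call the external vision model here
        return {"set_id": "unknown", "confidence": 0.0}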
Testing/coverage guardrails
Keep overall ~80% but enforce high standards on new code:
Quality Gate (suggested)
- No new Bugs or Vulnerabilities
- New code coverage ≥ 90%
- New code duplication ≤ 3%
- New code maintainability rating A
- Security Hotspots reviewed = 100%
Example `sonar-project.properties` additions:
sonar.coverage.exclusions=**/tests/**,**/scripts/**,**/migrations/**
sonar.cpd.exclusions=**/schemas/**,**/generated/**
sonar.python.version=3.11
(Only exclude generated/boilerplate; don’t blanket-exclude core logic.)
One-week “clean sweep” plan (realistic)
Day 1: Reliability bugs (21 items)
- Fix resource handling, default args, exception misuse, timeouts
- Add/adjust unit tests to reproduce + lock in fixes
Day 2: Duplication hotspots
- Extract BrickLink client helpers; factor report templating base
- Quick wins in image/valuation utilities
Day 3-4: Async/perf correctness
- Move blocking I/O off the event loop; add `asyncio.gather` where safe (see the async sketch after this plan)
- Add DB pagination/index hints where Sonar flags “complexity/loops”
Day 5: Quality Gate + CI
- Add Sonar step to CI (PR decoration) + coverage threshold fail @ 90% new code
- Document suppression policy (see below)
Effort: ~5–7 person-days (senior), yielding Reliability A/B, duplication ~5–6%, and a locked quality gate for future PRs.
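For the Day 3-4 items, the usual shape of the fix looks something like this. It’s a sketch assuming httpx and a standard asyncio event loop; the function names and URL are invented:

import asyncio
import httpx

async def fetch_listing(client: httpx.AsyncClient, item_id: str) -> dict:
    # Explicit timeout so a slow upstream can't stall the whole batch.
    response = await client.get(f"https://example.invalid/items/{item_id}", timeout=10.0)
    response.raise_for_status()
    return response.json()

async def fetch_listings(item_ids: list[str]) -> list[dict]:
    async with httpx.AsyncClient() as client:
        # Fan out independent requests instead of awaiting them one by one.
        return await asyncio.gather(*(fetch_listing(client, i) for i in item_ids))

def _read_text(path: str) -> str:
    with open(path, encoding="utf-8") as fh:
        return fh.read()

async def load_report_template(path: str) -> str:
    # Keep blocking file I/O off the event loop (or use aiofiles).
    return await asyncio.to_thread(_read_text, path)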
Suppression policy (use sparingly)
- Only suppress with a Jira/GitHub issue ID and short rationale:
# sonarignore: FP-123 – library requires broad except to normalize HTTP client errors
- Prefer code changes over suppression. Periodic audit of suppressed rules.
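If you want the suppression to live next to the code as well as in the tracker, SonarQube’s NOSONAR line marker is one option; pair it with the ticket ID and rationale. A tiny illustrative example (the health-probe function is hypothetical):

import httpx

def service_is_up(url: str) -> bool:
    # Hypothetical health probe, shown only to illustrate a documented suppression.
    try:
        return httpx.get(url, timeout=5.0).status_code < 500
    except Exception:  # NOSONAR FP-123: probe must never raise; all transport errors mean "down"
        return False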
30-day hardening (optional but wise)
- Security in app (outside Sonar): JWT/RBAC, strict CORS, rate limits (a strict-CORS sketch follows this list).
- PostgreSQL migration + indices for any JSON or LIKE queries.
- Centralized logging/metrics (Sentry/OTel); error budgets per endpoint.
- Load test typical batches (e.g., 50–200 images) and set perf budgets.
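For the strict-CORS item, a minimal sketch using FastAPI’s CORSMiddleware (the allowed origin is a placeholder; lock it down to your real frontend domains):

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Allow only known origins, methods, and headers instead of "*".
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://app.example.invalid"],  # placeholder origin
    allow_methods=["GET", "POST"],
    allow_headers=["Authorization", "Content-Type"],
    allow_credentials=True,
)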
Quick checklist you can paste into Jira / Asana / Notion / GitHub (a small sketch of the validation and retry items follows the list)
- Reduce Reliability (Bug) issues from 21 → 0
- Refactor duplicate blocks in BrickLink client & reports (dup ≤5%)
- Enforce PR Quality Gate (no new Bugs/Vulns; new code cov ≥90%; dup ≤3%)
- Replace blocking file ops with async equivalents in API paths
- Add HTTP client timeouts + retries with jitter; log 4xx/5xx metrics
- Add parameter validation in API endpoints (pydantic models)
- Add DB pagination to list endpoints; create missing indexes
- Write tests for fixed bug rules (one test per rule cluster)
- Document suppression policy and tag any suppressions with ticket IDs
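Two of those items, parameter validation and retries with jitter, tend to look roughly like this. A sketch assuming FastAPI, pydantic v2, and httpx; the endpoint, fields, and URL are placeholders:

import asyncio
import random

import httpx
from fastapi import FastAPI
from pydantic import BaseModel, Field

app = FastAPI()

class ValuationRequest(BaseModel):
    # pydantic rejects bad input at the boundary instead of deep inside the engine.
    set_number: str = Field(min_length=3, max_length=20)
    condition: str = Field(pattern="^(new|used)$")

@app.post("/valuations")
async def create_valuation(req: ValuationRequest) -> dict:
    price = await fetch_price_with_retry(req.set_number)
    return {"set_number": req.set_number, "price": price}

async def fetch_price_with_retry(set_number: str, attempts: int = 3) -> float:
    async with httpx.AsyncClient(timeout=10.0) as client:
        for attempt in range(attempts):
            try:
                resp = await client.get(f"https://example.invalid/price/{set_number}")
                resp.raise_for_status()
                return resp.json()["price"]
            except httpx.HTTPError:
                if attempt == attempts - 1:
                    raise
                # Exponential backoff with jitter so retries don't stampede the API.
                await asyncio.sleep(2 ** attempt + random.uniform(0, 0.5))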
The Setup
Here’s a practical guide for running SonarQube locally on a Mac:
1. Install prerequisites
- Java JDK (17 or 21) → SonarQube requires a supported JDK.
brew install openjdk@21
Then add it to your path (on Apple Silicon, Homebrew installs under /opt/homebrew/opt/openjdk@21 instead of /usr/local/opt):
export PATH="/usr/local/opt/openjdk@21/bin:$PATH"
- Database (optional for dev/test): SonarQube Community Edition ships with an embedded H2 DB. For production you’d use PostgreSQL.
- Homebrew → recommended for package management.
2. Download SonarQube
Get the latest Community Edition from SonarQube Downloads.
Unzip it into a directory, e.g.:
cd ~/Downloads
unzip sonarqube-<version>.zip -d ~/sonarqube
3. Start SonarQube
Go into the extracted folder and start the server:
cd ~/sonarqube/sonarqube-<version>/bin/macosx-universal-64
./sonar.sh console
👉 This starts SonarQube on http://localhost:9000.
Default login = admin / admin. <- change it immediately, obviously 🙂
Note: Ensure you have more than 10% free disk space available, or the embedded Elasticsearch (ES) part of this step will not install and start up correctly.
4. Install SonarScanner
You need SonarScanner to analyze your code.
brew install sonar-scanner
Verify installation:
sonar-scanner -v
5. Run an analysis on your project
In your project root, create a `sonar-project.properties` file:
sonar.projectKey=my_project
sonar.projectName=My Project
sonar.projectVersion=1.0
sonar.sources=src
Then run:
sonar-scanner
This will push analysis results to the SonarQube server at http://localhost:9000.
6. Optional: Run via Docker (faster start)
If you prefer Docker:
docker run -d --name sonarqube -p 9000:9000 sonarqube:lts-community
Then install SonarScanner as above and run scans.
✅ After this, you’ll be able to view quality reports, code smells, vulnerabilities, and maintainability scores in your browser.
7. Quick mental model
- SonarQube server: runs at http://localhost:9000 (or your team server).
- SonarScanner: CLI that sends your project’s metrics + reports to the server.
- Reports you generate: coverage (`coverage.xml`) and test results (`junit.xml`).
- Quality Gate: rules that can pass/fail builds/PRs.
8. Add Python test + coverage outputs
In your repo (assuming `src/` and `tests/` layout):
# Add dev deps
pip install pytest pytest-cov coverage
Standard commands to produce machine-readable reports Sonar understands:
# JUnit XML test results
pytest --junitxml=reports/junit.xml
# Coverage XML (branch coverage recommended)
pytest --cov=src --cov-report=xml:reports/coverage.xml --cov-branch
Make sure the `reports/` folder is git-ignored or created in CI before tests run.
Optional (but recommended for richer issues):
pip install pylint bandit mypy
pylint src > reports/pylint.txt || true
bandit -r src -f xml -o reports/bandit.xml || true
mypy src --sqlite-cache --no-error-summary --junit-xml reports/mypy.xml || true
9. Create `sonar-project.properties` at repo root
Tailored to your structure and LEGO project:
# ----- Identity
sonar.projectKey=lego-valuation
sonar.projectName=LEGO Valuation System
sonar.projectVersion=1.0
# ----- Where is the code?
sonar.sources=src
sonar.tests=tests
sonar.python.version=3.11
# ----- Coverage & tests
sonar.python.coverage.reportPaths=reports/coverage.xml
sonar.junit.reportPaths=reports/junit.xml
# If you generate Bandit/Mypy reports (optional):
sonar.python.bandit.reportPaths=reports/bandit.xml
sonar.python.mypy.reportPaths=reports/mypy.xml
# ----- Exclusions (be conservative)
sonar.exclusions=**/migrations/**,**/schemas/generated/**,**/build/**,**/dist/**
sonar.test.exclusions=**/integration/**/data/**
# ----- Duplication: don’t exclude core app code or you’ll hide issues
# sonar.cpd.exclusions=**/schemas/**
# ----- Encoding
sonar.sourceEncoding=UTF-8
Do not exclude big chunks of `src/`; you’ll hide problems and skew metrics.
10. Run analysis locally
Export the token and host URL, then run the scanner:
export SONAR_HOST_URL=http://localhost:9000
export SONAR_TOKEN=<your_generated_token>
# 1) generate reports
pytest --junitxml=reports/junit.xml
pytest --cov=src --cov-report=xml:reports/coverage.xml --cov-branch
# 2) push to SonarQube
sonar-scanner
Open your project in SonarQube and you’ll see:
- Reliability/Maintainability/Security issues
- Coverage from `coverage.xml`
- Duplications
- Quality Gate status
11. Wire it into CI (GitHub Actions example)
Self-hosted SonarQube (not SonarCloud). Add `.github/workflows/sonar.yml`:
name: SonarQube
on:
  pull_request:
  push:
    branches: [ main ]
jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: Install deps
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install pytest pytest-cov
      - name: Test & coverage
        run: |
          mkdir -p reports
          pytest --junitxml=reports/junit.xml
          pytest --cov=src --cov-report=xml:reports/coverage.xml --cov-branch
      - name: SonarQube Scan
        env:
          SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }} # e.g. http://your-ci-sonar:9000
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
        run: |
          brew install sonar-scanner || true
          sonar-scanner \
            -Dsonar.projectKey=lego-valuation \
            -Dsonar.projectName="LEGO Valuation System"
Put `SONAR_HOST_URL` and `SONAR_TOKEN` in GitHub Secrets.
For runners without Homebrew, download the scanner ZIP and run `./bin/sonar-scanner`.
12. Add a Quality Gate that blocks bad PRs
In SonarQube UI → Quality Gates → create/enforce rules like:
- No new Bugs or Vulnerabilities
- New code coverage ≥ 90%
- New code duplication ≤ 3%
- New maintainability rating = A
Set this gate as default, or bind only your LEGO project.
13. Nice extras (high ROI)
- Pre-commit hooks to catch issues fast:
pip install pre-commit
pre-commit install
`.pre-commit-config.yaml`:
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files
      - id: check-merge-conflict
      - id: debug-statements
      - id: check-docstring-first
  - repo: https://github.com/psf/black
    rev: 23.11.0
    hooks:
      - id: black
        language_version: python3
  - repo: https://github.com/pycqa/isort
    rev: 5.12.0
    hooks:
      - id: isort
        args: ["--profile", "black"]
  - repo: https://github.com/pycqa/flake8
    rev: 6.1.0
    hooks:
      - id: flake8
        args: ["--max-line-length=120", "--extend-ignore=E203,E501,F401,F841,E722,F541,E402,F821,F811,E712"]
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.7.1
    hooks:
      - id: mypy
        additional_dependencies: [types-requests, types-Pillow]
        args: [--ignore-missing-imports]
  - repo: https://github.com/pycqa/bandit
    rev: 1.7.7
    hooks:
      - id: bandit
        additional_dependencies: ['pbr>=6.1.1']
        args: [-r, src/]
        exclude: tests/
        pass_filenames: false
  - repo: https://github.com/Lucas-C/pre-commit-hooks-safety
    rev: v1.3.2
    hooks:
      - id: python-safety-dependencies-check
        files: requirements\.txt$
  - repo: local
    hooks:
      - id: pytest-check
        name: pytest-check
        entry: python -m pytest tests/ --tb=short
        language: system
        pass_filenames: false
        always_run: true
        stages: [pre-commit]
- Pylint in CI (Surface extra issues in Sonar via generic issue import if desired).
- Bandit for security smells:
bandit -r src -f xml -o reports/bandit.xml || true
- Split test suites: quick unit tests on every PR; heavier integration tests nightly.
14. Minimal “do this now” checklist
- Add `sonar-project.properties` (above).
- Add pytest coverage + junit commands to your Makefile/CI.
- Create `reports/` in CI and publish `coverage.xml` + `junit.xml`.
- Set Quality Gate (no bugs, ≥90% new-code coverage, ≤3% new-code dup).
- Run `sonar-scanner` locally once to verify end-to-end.
The Country & Western Song
🎶 Why I Still Love SonarQube After All of the Years 🎶
Verse 1
I fired up my Mac just the other day,
Pulled down SonarQube, let it start to play.
Claude wrote the code and Cursor kept time,
But Sonar rode in, made the numbers align.
Chorus
’Cause Claude strums the six-string, Cursor plays the keys,
Sonar keeps it honest, brings the code to its knees.
Like Kenny and Kris with sweet Dolly in tune,
They keep my project singin’ from sun up to moon.
Verse 2
AI builds it faster than a runaway train,
Ten thousand lines written before I could explain.
But Sonar looks deeper, says, “Son, not so fast,
Fix those bugs and leaks if you want it to last.”
Chorus
Claude strums the six-string, Cursor plays the keys,
Sonar keeps it honest, brings the code to its knees.
Like Kenny and Kris with sweet Dolly in tune,
They keep my project singin’ from sun up to moon.
Bridge
Reliability’s a heartbreak, D on the chart,
But maintainability’s still strong at the heart.
Security’s golden, coverage holds near,
That’s why I still love SonarQube after all of the years.
Final Chorus
Claude strums the six-string, Cursor plays the keys,
Sonar keeps it honest, brings the code to its knees.
Together they’re a trio, a masterpiece sound,
With Sonar to guide me, my code’s solid ground.
Outro (spoken like a country balladeer)
AI might write the verses, but SonarQube writes the truth —
That’s why I still love it, after all of the years. 🎶