Live snapshot · Reporting period: 2026-05-04 — 2026-05-05

Maintaining The Hub
across 4 codebases
and 119,675 lines of code.

A real, in-production case from S·PRO — a multi-year engagement covering mobile, web, admin and backend. AI is woven through code review, documentation, regression analysis and reporting. Below is what that actually produces.

0
Vulnerabilities
across all 4 repositories
A/A/A/A
Maintainability
SonarQube rating, all repos
95.7%
Regression pass rate
excl. blocked, last 3 runs
287
Test cases mapped
in Testomatio · eBoard
The Hub stack: React Native + Expo SDK · Angular Webapp · Node Backend · Admin Panel · Telegraph (Chat)
01 · The spine of how we work

AI-driven engineering, end-to-end

Every artifact you see below — code review notes, gap analysis, regression summaries — was either produced or accelerated by AI. Not as a gimmick: as the spine of a long-term maintenance contract.

1
Plan & Spec
Confluence + AI doc-gap analysis
Feature specs validated against the test plan; missing pages flagged before code is written.
2
Implement
Claude Code · in-IDE pair
Engineers ship features with an AI pair: skeleton generation, refactor suggestions, security hints.
3
Review
SonarQube + AI explainers
Every MR is scanned. Each issue is opened with a code excerpt, plain-language explanation and a suggested fix.
4
QA & Regression
Claude QA runs · Testomatio
Reports are aggregated and triaged by AI: pass / fail / blocked, with risk-ranked sub-suites.
5
Report
Executive summaries
Stakeholders get a one-page picture every week — generated, not hand-typed.

Reports as a primitive

The aggregated regression summary on this page comes from three locally-saved HTML test reports from prior Claude QA runs — the AI does the bookkeeping so QA can stay on judgment.

Source: eBoard Summary, 2026-05-04
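As a rough sketch of that bookkeeping step — the `CaseResult` shape and outcome labels below are assumptions for illustration; the real Claude QA artifacts are HTML reports:

```typescript
// Illustrative shapes — the actual Claude QA report format differs.
type Outcome = "pass" | "fail" | "blocked";

interface CaseResult {
  suite: string;   // sub-suite name, e.g. "Date-Based View"
  title: string;   // acceptance criterion / test case title
  outcome: Outcome;
}

// Fold several parsed runs into one pass/fail/blocked tally per sub-suite,
// so the aggregated summary can be regenerated instead of hand-typed.
function aggregate(runs: CaseResult[][]): Map<string, Record<Outcome, number>> {
  const summary = new Map<string, Record<Outcome, number>>();
  for (const run of runs) {
    for (const c of run) {
      const bucket = summary.get(c.suite) ?? { pass: 0, fail: 0, blocked: 0 };
      bucket[c.outcome] += 1;
      summary.set(c.suite, bucket);
    }
  }
  return summary;
}
```

The same tally then feeds the risk-ranked sub-suite view: a suite with any `blocked` count surfaces its missing pre-conditions instead of a raw failure number.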

Issues with explanations

SonarQube Community 25.9 is wired into CI (runs on every MR and push to develop/master). Each finding lands with a fix recipe — not a stack trace.

Source: HUB SonarQube Report, 2026-05-05

Gap analysis you can act on

AI reads the Confluence spec and the Testomatio test set side-by-side and tells you exactly where the spec lags the tests, or vice versa.

Source: eBoard Gap Analysis, 2026-05-05
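A minimal sketch of that reconciliation, assuming both sides can be keyed by a shared requirement ID — the `gapAnalysis` name and the ID scheme are illustrative, not the actual implementation:

```typescript
// Hypothetical inputs: requirement IDs documented in Confluence pages
// vs. requirement IDs referenced by Testomatio test cases.
function gapAnalysis(
  specReqs: Set<string>,
  testedReqs: Set<string>
): { docGaps: string[]; testGaps: string[] } {
  // Tests verifying behaviour no spec page documents → doc gap.
  const docGaps = Array.from(testedReqs).filter((r) => !specReqs.has(r));
  // Documented requirements no test exercises → test gap.
  const testGaps = Array.from(specReqs).filter((r) => !testedReqs.has(r));
  return { docGaps, testGaps };
}
```

Run per page, this yields exactly the two gap classes the report distinguishes: "Doc gap" (tests without spec) and "Test gap" (spec without tests).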
0
AI-generated reports
feeding this dashboard
287
Test cases automatically
reconciled against spec
11
Cross-cutting topics surfaced
by AI doc-gap analysis
2,383
Code-smell findings, each
with an AI-suggested fix
02 · Documentation state

We document the spec the way we run the tests — page by page.

For the eBoard feature alone, 7 Confluence pages cover 287 Testomatio tests. AI reconciles both sides every reporting cycle and flags drift before it ships.

287
Total tests
(documented + cross-cutting)
7
Confluence pages
in the eBoard set
0/0
v1.0 / v1.1 distribution
of mapped tests
0%
Undocumented tests
flagged for spec follow-up

Coverage of documented requirements per page

eBoard · v1.0 + v1.1
Confluence page | Ver | Spec items | Tests | Coverage | Verdict
Create a Custom Board | v1.0 | 22 | 67 | 64% | MEDIUM
View Board – Web | v1.0 | 10 | 5 | 50% | MEDIUM
View Board – TV / Fullscreen | v1.0 | 9 | 17 | 33% | HIGH
View Board – Mobile | v1.0 | 7 | 0 | 0% | CRITICAL
Date-Based View | v1.1 | 20 | 103 | 90% | LOW
Locked Columns | v1.1 | 20 | 143 | 90% | LOW
Fullscreen Table Formatting | v1.1 | 0 | 41 | n/a | CRITICAL · spec missing
Coverage = test items directly verifying a documented requirement. Two pages flagged CRITICAL by AI gap-analysis are now in the action plan with named owners.
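The verdict column appears to follow coverage thresholds; the bands below are inferred from the table, not confirmed by the source, so treat the exact cut-offs as an assumption:

```typescript
// Verdict bands inferred from the coverage table — thresholds are a guess
// that reproduces the published verdicts, not the AI's actual rule set.
function verdict(specItems: number, coveredItems: number): string {
  if (specItems === 0) return "CRITICAL · spec missing"; // page has no spec at all
  const coverage = coveredItems / specItems;
  if (coverage === 0) return "CRITICAL"; // documented but entirely untested
  if (coverage < 0.5) return "HIGH";
  if (coverage < 0.8) return "MEDIUM";
  return "LOW";
}
```

For example, a page with 9 spec items and 3 verified would land at 33% and rank HIGH, matching the TV / Fullscreen row.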

Top 5 gaps · severity-ranked

AI-prioritised
  1. "Fullscreen Table Formatting" page does not exist
     41 tests · 0 pages · Doc gap
  2. View Board – Mobile: 0 tests
     0 / 7 reqs (0%) · Test gap
  3. v1.1 cross-cutting topics undocumented
     11 topics · audit, perms, network, a11y… · Doc gap
  4. View Board – TV: missing negative tests
     ~5 missing reqs · Test gap
  5. Create Board: Shared Mode + UI/UX boundaries
     ~6 missing reqs · Test gap

Action plan · in flight

Owners assigned
  • CRITICAL · BA · Create "Fullscreen Table Formatting" Confluence page
  • CRITICAL · QA · Write View Board – Mobile suite (~25–30 cases)
  • HIGH · BA · Add cross-cutting sections to v1.1 pages (11 topics)
  • HIGH · QA · TV/Fullscreen negative tests + View Mode Toggle (~6)
  • MEDIUM · BA · Unify "All" filter spec across pages
  • MEDIUM · QA · v1.0 boundary + Shared Mode + step indicator (~10)
  • LOW · QA · Web View: lazy-load, last-updated, collapse (~5)
03 · Code health · SonarQube

119,675 lines of code, scanned on every push.

SonarQube Community 25.9 runs on every merge request and push to develop/master across all four repositories. Zero open vulnerabilities. All four rated A on Security and Maintainability.

hub-v2-telegraph · Chat
0 LoC
Maint A · Rel E · Sec A
  • Bugs: 2
  • Vulnerabilities: 0
  • Code smells: 26
hub-v2-backend · Node
0 LoC
Maint A · Rel D · Sec A
  • Bugs: 43
  • Vulnerabilities: 0
  • Code smells: 278
hub-v2-admin · Angular
0 LoC
Maint A · Rel E · Sec A
  • Bugs: 26
  • Vulnerabilities: 0
  • Code smells: 490
hub-v3-webapp · Angular
0 LoC
Maint A · Rel C · Sec A
  • Bugs: 167
  • Vulnerabilities: 0
  • Code smells: 1,589

Issues by repository

Bugs · Code smells
Vulnerabilities are intentionally absent — the count is 0 across the board. Bugs and code smells are concentrated in the legacy webapp; new-code mode is on, so the trend is one-way down.

Lines of code distribution

119,675 total
Four codebases, one product. The dashboard you're reading collapses all four into a single picture every time SonarQube finishes a run.

Continuous integration

Source: sonarqube.s-pro.io
100%
of MRs scanned
0
open vulnerabilities
A
on Maintainability · all 4 repos
A
on Security · all 4 repos
25.9
SonarQube Community edition
04 · Regression coverage

Three Claude QA sessions across six weeks — reconciled into one report.

59 acceptance criteria executed across three AI-assisted sessions on the eBoard feature. Every result reconciled against the master plan in Testomatio (287 cases, 8 sub-suites).

Run outcomes

59 ACs · 3 runs
76.3% Pass rate (raw)
95.7% Pass rate (excl. blocked)
3.4% Defect density

Sub-suite execution

5 of 8 touched · 62.5%
Date-Based View
100%
Editing & Filtering
100%
Lock Column Data
60%
Editing Locked Column
blocked
Auto-Pop & Read-Only
40%
Interaction · DBV
blocked
Fullscreen Mode
100%
Fullscreen Table Formatting
not run
45
Passed ACs
2
Failed · isolated to one sub-suite
12
Blocked · pre-conditions queued
0
Bugs filed and tracked

What this means in practice

Translated for stakeholders
When tests can run
95.7% pass on the first attempt — a strong signal that recent code changes didn't regress core flows.
When tests are blocked
20.3% block rate flags missing fixtures, not bad code. AI summary names the exact pre-conditions to fix.
What's still untested
3 sub-suites untouched in the last cycle — already scheduled, with named owners and ETAs.
05 · The S·PRO approach

Three commitments we make on every long-term project.

01

Visibility, by default

Code health, doc state and regression coverage are first-class stakeholder data — generated continuously, never on request.

02

Automation as the floor

Every MR is scanned. Every report is regenerated. Quality rituals don't depend on a single engineer remembering to run them.

03

AI as a teammate

AI explains issues, drafts gap analyses, summarises runs and ranks risk — so humans spend their time on judgment, not bookkeeping.

Considering a long-term partner?

Let's run your project assurance the same way.

Fixed cadence, real numbers, AI throughout. The picture above is what your status review can look like — not a once-a-quarter PDF.