# Evidence.dev: NHS Quickstart
Evidence turns SQL + Markdown into a static site. That means auditable queries, a small attack surface, and simple hosting (S3/CloudFront, Azure Static Web Apps, or intranet IIS). It's perfect when analysts live in SQL and you want reports, not apps.

Great for: BI Analyst · Data Engineer · IG-friendly reporting.
## 10-minute install

```bash
npm create evidence@latest nhs-evidence
cd nhs-evidence
npm install
```
Keep data local at first (CSV/Parquet) and switch to a managed source later.
## "Hello NHS" report (3 options)
- A) Parquet/CSV (quickest)
- B) DuckDB database
- C) SQL Server via safe extracts
### Option A: Parquet/CSV (quickest)

Create a Parquet file at `data/kpi.parquet` (see the SQL or Python pages for export examples), then create `src/pages/kpi.md`:
````markdown
---
title: NHS KPI
---

```sql kpi
SELECT practice_id, total_appointments, attendance_rate
FROM read_parquet('data/kpi.parquet')
ORDER BY total_appointments DESC
```

# Appointments by Practice

<Table data={kpi} />
````
Run the dev server:

```bash
npm run dev
```
### Option B: DuckDB database

Add a DuckDB file at `data/local.duckdb` containing an `appointments` table, then create `src/pages/kpi.md`:
````markdown
---
title: NHS KPI (DuckDB)
---

```sql kpi
SELECT practice_id, COUNT(*) AS total_appointments
FROM appointments
GROUP BY practice_id
ORDER BY total_appointments DESC
```

# Appointments by Practice

<Table data={kpi} />
````
### Option C: SQL Server via safe extracts

The best NHS pattern: nightly read-only extract → Parquet → Evidence.

- Use a scheduled script (see the Python example below) to write `data/kpi.parquet`.
- Point your page query at `read_parquet('data/kpi.parquet')` (as in option A).
```python
# Nightly extract: SQL Server (read-only view) -> data/kpi.parquet.
import os
import urllib.parse

import pandas as pd
from dotenv import load_dotenv
from sqlalchemy import create_engine, text

load_dotenv()  # reads SQLSERVER_* settings from .env (never commit it)

# Windows-auth ODBC connection string, URL-encoded for SQLAlchemy.
params = urllib.parse.quote_plus(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    f"SERVER={os.getenv('SQLSERVER_SERVER')};"
    f"DATABASE={os.getenv('SQLSERVER_DATABASE')};"
    "Trusted_Connection=Yes;Encrypt=Yes;"
)
engine = create_engine(f"mssql+pyodbc:///?odbc_connect={params}")

sql = text("""
    SELECT practice_id, total_appointments, attendance_rate
    FROM dbo.vw_PracticeKPI
""")

df = pd.read_sql(sql, engine)
os.makedirs("data", exist_ok=True)
df.to_parquet("data/kpi.parquet", index=False)  # requires pyarrow or fastparquet
print("Wrote data/kpi.parquet", df.shape)
```
## Project structure you'll keep

```text
nhs-evidence/
  data/                 # CSV/Parquet or DuckDB file
  src/
    pages/              # Markdown + SQL pages
      kpi.md
  evidence.config.ts    # data source config (optional)
  .env                  # secrets for local dev (never commit)
```
## Secrets & data sources
- Start with Parquet/CSV to avoid storing DB creds in Evidence.
- If you must connect directly to a DB, load creds from environment variables (never commit).
- Prefer read-only roles and restricted schemas/views for reporting.
See: Secrets & .env, SQL.
## Publish (static)

Build a static site:

```bash
npm run build
```

You'll get a `build/` folder you can host almost anywhere:

- AWS: S3 + CloudFront; add basic auth/WAF as needed.
- Azure: Static Web Apps or Storage Static Website + Front Door.
- Intranet: serve `build/` via IIS/NGINX behind Trust SSO.
Add a footer component with a "Data last updated" timestamp (derived from your extract job) and a link to KPI definitions.
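One way to drive that footer is to have the extract job write a small metadata file alongside the data, then surface it in a page query or component. A minimal sketch — the file name and shape are assumptions, not an Evidence convention:

```python
# Sketch: write a "data last updated" stamp at the end of the extract job.
# The path data/last_updated.json is an assumed convention for this project.
import json
import os
from datetime import datetime, timezone

os.makedirs("data", exist_ok=True)
stamp = {"last_updated": datetime.now(timezone.utc).isoformat(timespec="seconds")}
with open("data/last_updated.json", "w") as f:
    json.dump(stamp, f)
```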
## Useful patterns

- Glossary page for metric definitions and owners.
- Page-per-KPI with the SQL visible and a compact chart + table.
- Evidence + DuckDB for ad-hoc QA: inline `read_parquet()` queries.
- Staging file convention: `data/_staging/*.parquet` → validate → `data/published/`.
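The staging convention above can be sketched as "validate, then publish atomically". This minimal version checks only the Parquet magic bytes; a real gate would also inspect schema and row counts (e.g. with duckdb or pyarrow):

```python
# Sketch of the staging convention: validate a staged Parquet file, then
# publish it atomically. Parquet files start and end with the bytes b"PAR1".
import os


def publish_if_valid(staged: str, published_dir: str) -> bool:
    with open(staged, "rb") as f:
        head = f.read(4)
        f.seek(-4, os.SEEK_END)
        tail = f.read(4)
    if head != b"PAR1" or tail != b"PAR1":
        return False  # leave the bad file in staging for inspection
    os.makedirs(published_dir, exist_ok=True)
    target = os.path.join(published_dir, os.path.basename(staged))
    os.replace(staged, target)  # atomic on the same filesystem
    return True
```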
## IG & safety checklist

- Use synthetic/de-identified data for demos.
- Apply small-number suppression in SQL before rendering.
- Keep secrets out of the repo; inject via environment variables.
- Make KPI SQL read-only and document owners and refresh cadence.
- Host behind SSO; turn on access logging at the reverse proxy.
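Small-number suppression is usually done in the extract SQL, but the logic is easy to sketch in plain Python. A threshold of 5 is a common NHS disclosure-control convention, but confirm your local IG policy; the row data below is illustrative:

```python
# Sketch: suppress small counts before data reaches a report.
# Threshold 5 follows a common NHS convention; check local IG policy.
def suppress_small_numbers(rows, count_key, threshold=5):
    out = []
    for row in rows:
        row = dict(row)  # copy so the caller's data is untouched
        if row[count_key] is not None and row[count_key] < threshold:
            row[count_key] = None  # render as "<5" in the report layer
        out.append(row)
    return out


kpi = [
    {"practice_id": "P001", "dna_count": 3},
    {"practice_id": "P002", "dna_count": 27},
]
safe = suppress_small_numbers(kpi, "dna_count")
```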
## Measuring impact

- Latency: extract finished → report updated.
- Stability: build success rate; count of broken pages.
- Adoption: unique viewers; teams using Evidence pages.
- Auditability: % of pages showing the "SQL used" plus a last-updated stamp.
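The latency metric is plain timestamp arithmetic once you have the two events; a minimal sketch with illustrative timestamps (in practice, read them from your job logs):

```python
# Sketch: extract-finished -> report-updated latency in minutes.
# Timestamps are illustrative; source them from scheduler/job logs.
from datetime import datetime, timezone

extract_finished = datetime(2025, 1, 6, 2, 0, tzinfo=timezone.utc)
report_updated = datetime(2025, 1, 6, 2, 12, tzinfo=timezone.utc)
latency_minutes = (report_updated - extract_finished).total_seconds() / 60
```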
## See also

**What's next?** You've completed the Learn → Evidence.dev stage. Keep momentum: