# Model Context Protocol server
The MCP server exposes the active Digital Twin to LLM clients through tools, resources, and prompts. It sits directly on top of the same `DigitalTwinService` layer used by the desktop application and the HTTP API.
## How to run it

```
python python/mcp_server.py
```

To verify the setup, run the smoke test:

```
python python/tools/mcp_smoke_test.py
```

The server uses stdio JSON-RPC. The repository root should be the working directory so that relative profile and documentation paths resolve correctly.
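Since the transport is newline-delimited JSON-RPC over stdio, a client can drive the server with plain serialized messages. The sketch below shows how an `initialize` request line might be built; the `protocolVersion` string and client-info fields are illustrative assumptions, not values this server is known to require.

```python
import json

def jsonrpc_request(method: str, params: dict, msg_id: int) -> str:
    """Serialize one JSON-RPC 2.0 request as a single line for the stdio transport."""
    return json.dumps({"jsonrpc": "2.0", "id": msg_id, "method": method, "params": params})

# An MCP session begins with an `initialize` handshake. The version string
# below is illustrative, not necessarily what this server expects.
init_line = jsonrpc_request(
    "initialize",
    {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1"},
    },
    msg_id=1,
)
print(init_line)
```

The line would be written to the server's stdin followed by a newline, and the matching response read from its stdout.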
## Tools
The current server advertises status/config tools, precision and optimisation workflows, diagnostics/data tools, session helpers, and instrument-profile drafting/finalisation tools.
| Tool | Purpose |
|---|---|
| `get_status` | Backend readiness, active effective mode, and data availability. |
| `get_config` / `update_config` | Read or patch the active configuration. |
| `run_precision` | Run one-dimensional theory and optional Monte Carlo precision analysis. |
| `run_optimisation` | Run the integrated optimisation workflow. |
| `simulate_basic` / `simulate_advanced` | Generate and fit synthetic datasets. |
| `get_data_summary`, `get_results_snapshot`, `get_tau_map`, `get_phasor_map`, `get_pixel_analysis`, `get_diagnostics_snapshot`, `get_theoretical_locus` | Inspect current backend outputs. |
| `save_session` / `load_session` | Persist and restore sessions. |
| `list_vendor_sources`, `read_vendor_source`, `get_instrument_profile_schema`, `draft_instrument_profile`, `finalize_instrument_profile` | LLM-guided instrument-profile creation workflow. |
| `ingest_instrument_profile_source` | Single-source profile extraction helper for PDFs, pages, or local files. |
`run_precision` mirrors the Python service payload: it always returns the ideal reference and the raw detector-limited theory; it can add a `deadtime_correction` companion block when a correction method is enabled; and it keeps the raw Monte Carlo baseline separate from any corrected companion Monte Carlo statistics. The user-facing dead-time methods currently exposed are Isbaner-lite, Rapp (MCPDF-lite), Rapp (MCPDF-full), Rapp (MCHC-lite), and Rapp (MCHC-full). Rapp (MCPDF-full) is the promoted stationary detected-histogram implementation and Rapp (MCHC-full) is the promoted stationary histogram-correction implementation; the lite methods remain surrogate variants rather than paper-faithful stationary-chain implementations.
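As a sketch of how a client might consume that payload, the always-present theory blocks can be separated from the optional corrected companions. The key names below are assumptions for illustration, not the server's documented schema.

```python
# Illustrative only: the key names here are assumed, not taken from the
# server's actual schema.
def split_precision_payload(payload: dict) -> dict:
    """Separate the always-present theory blocks from the optional
    dead-time-corrected companions in a run_precision-style result."""
    correction = payload.get("deadtime_correction")  # present only when enabled
    return {
        "ideal": payload.get("ideal"),
        "detector_theory": payload.get("detector_theory"),
        "deadtime_correction": correction,
        # The raw Monte Carlo baseline stays separate from any corrected
        # companion Monte Carlo statistics:
        "monte_carlo_raw": payload.get("monte_carlo"),
        "monte_carlo_corrected": (correction or {}).get("monte_carlo"),
    }

# Minimal fabricated example with no correction method enabled:
example = {
    "ideal": {"sigma_tau": 0.01},
    "detector_theory": {"sigma_tau": 0.02},
    "monte_carlo": {"sigma_tau": 0.021},
}
parts = split_precision_payload(example)
```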
## Resources
| Resource | Meaning |
|---|---|
| `hilight://status` | Backend status and effective-mode summary. |
| `hilight://config` | Current full configuration. |
| `hilight://gui-schema` | Machine-readable controller/workspace schema. |
| `hilight://data-summary` | Map availability and warnings. |
| `hilight://results` | Current result snapshot. |
| `hilight://tau-map` | Current tau map. |
| `hilight://theory-locus` | Current theoretical phasor locus. |
| `hilight://diagnostics` | Time vector, gates, excitation profile, and reference PDF. |
| `hilight://instrument-profile-schema` | JSON structure expected by the profile tools. |
| `hilight://vendors-info/index` and `hilight://vendors-info/<file>` | Vendor-document resources from `docs/vendors_info/`. |
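Client code often needs to route these URIs. A small client-side helper (a convenience sketch, not part of the server) can split a `hilight://` URI into its resource name and optional sub-path:

```python
from typing import Optional, Tuple
from urllib.parse import urlparse

def parse_hilight_uri(uri: str) -> Tuple[str, Optional[str]]:
    """Split a hilight:// resource URI into its resource name and an
    optional sub-path (used by the vendors-info resources)."""
    parts = urlparse(uri)
    if parts.scheme != "hilight":
        raise ValueError(f"not a hilight resource: {uri}")
    sub = parts.path.lstrip("/") or None
    return parts.netloc, sub

print(parse_hilight_uri("hilight://status"))              # → ('status', None)
print(parse_hilight_uri("hilight://vendors-info/index"))  # → ('vendors-info', 'index')
```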
## Prompts
The server exposes prompt templates that steer an LLM toward repeatable workflows. These are intended as entry prompts, not as a substitute for the tools.
| Prompt | Purpose |
|---|---|
| `precision-audit` | Review theory, Monte Carlo, and confidence-interval consistency for the active precision sweep. |
| `instrument-design` | Design or refine excitation, detection, and gating settings for a target regime. |
| `data-inspection` | Inspect summaries, maps, diagnostics, phasors, and representative pixels. |
| `workspace-navigation` | Guide a user or agent through the desktop using GUI-schema identifiers. |
| `instrument-profile-builder` | Use vendor documentation and MCP tools to build an instrument-definition JSON. |
| `optimisation-loop` | Run and interpret the integrated optimisation workflow. |
## Instrument-profile workflow for LLMs

### Plain English

The current MCP path supports a multi-step profile-building conversation. An LLM can read vendor documents, draft a profile, identify missing assumptions, ask the user for confirmation, and then save and apply the final instrument definition.
### For specialists

The preferred workflow is no longer limited to the single-call `ingest_instrument_profile_source` helper. The current structured flow is:

`list_vendor_sources` -> `read_vendor_source` -> `get_instrument_profile_schema` -> `draft_instrument_profile` -> user confirmation -> `finalize_instrument_profile`
### Example LLM workflow

1. `list_vendor_sources`
2. `read_vendor_source {"source_name": "HydraHarp500.pdf"}`
3. `read_vendor_source {"source_name": "PMA-40.pdf"}`
4. `get_instrument_profile_schema`
5. `draft_instrument_profile` with:

   ```json
   {
     "profile_name": "HydraHarp500 + PMA-40",
     "sources": ["HydraHarp500.pdf", "PMA-40.pdf"],
     "component_names": ["HydraHarp500", "PMA-40"]
   }
   ```

6. Ask the user to confirm any missing assumptions.
7. `finalize_instrument_profile` with:

   ```json
   {
     "profile_name": "HydraHarp500 + PMA-40",
     "profile_json": { ... },
     "save_profile": true,
     "apply_profile": true
   }
   ```
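The sequence above can be sketched as a thin driver over a generic `call_tool` callable, a hypothetical stand-in for whatever MCP client API is in use; the stubbed usage below only records call order rather than talking to a real server.

```python
def build_profile(call_tool, source_names, profile_name, confirm):
    """Drive the draft -> confirm -> finalize profile workflow.

    `call_tool(name, arguments)` is a hypothetical client function.
    `confirm` receives the draft and returns the user-approved profile JSON.
    """
    call_tool("list_vendor_sources", {})
    for name in source_names:
        call_tool("read_vendor_source", {"source_name": name})
    call_tool("get_instrument_profile_schema", {})
    draft = call_tool("draft_instrument_profile", {
        "profile_name": profile_name,
        "sources": source_names,
        # Derive component names by stripping file extensions (assumption):
        "component_names": [n.rsplit(".", 1)[0] for n in source_names],
    })
    approved = confirm(draft)  # surface missing assumptions to the user here
    return call_tool("finalize_instrument_profile", {
        "profile_name": profile_name,
        "profile_json": approved,
        "save_profile": True,
        "apply_profile": True,
    })

# Stubbed usage: record the call order instead of contacting a server.
calls = []
result = build_profile(
    lambda name, args: calls.append(name) or {"tool": name},
    ["HydraHarp500.pdf", "PMA-40.pdf"],
    "HydraHarp500 + PMA-40",
    confirm=lambda draft: {"approved": True},
)
```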
This is the intended workflow for vendor-document-driven instrument creation and the one the manual should now point users toward.