# MCP

Model Context Protocol server

The MCP server exposes the active Digital Twin to LLM clients through tools, resources, and prompts. It sits directly on top of the same DigitalTwinService layer used by the desktop and HTTP API.

## How to run it

```shell
# Start the MCP server (stdio transport)
python python/mcp_server.py

# Quick end-to-end check
python python/tools/mcp_smoke_test.py
```

The server uses stdio JSON-RPC. The repository root should be the working directory so relative profile and documentation paths resolve correctly.
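
As a concrete sketch, the stdio transport carries newline-delimited JSON-RPC 2.0 messages; the helper below builds one such request line. The framing follows the standard MCP stdio transport, and the `get_status` call uses a tool name this server advertises, but the exact arguments are illustrative:

```python
import json

def jsonrpc_request(request_id, method, params=None):
    """Build one newline-delimited JSON-RPC 2.0 request line (MCP stdio framing)."""
    msg = {"jsonrpc": "2.0", "id": request_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg) + "\n"

# Call the get_status tool advertised by the server.
line = jsonrpc_request(1, "tools/call", {"name": "get_status", "arguments": {}})
print(line, end="")
```

A client writes lines like this to the server's stdin and reads one JSON response line per request from its stdout.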

## Tools

The current server advertises status/config tools, precision and optimisation workflows, diagnostics/data tools, session helpers, and instrument-profile drafting/finalisation tools.

| Tool | Purpose |
| --- | --- |
| `get_status` | Backend readiness, active effective mode, and data availability. |
| `get_config` / `update_config` | Read or patch the active configuration. |
| `run_precision` | Run one-dimensional theory and optional Monte Carlo precision analysis. |
| `run_optimisation` | Run the integrated optimisation workflow. |
| `simulate_basic` / `simulate_advanced` | Generate and fit synthetic datasets. |
| `get_data_summary`, `get_results_snapshot`, `get_tau_map`, `get_phasor_map`, `get_pixel_analysis`, `get_diagnostics_snapshot`, `get_theoretical_locus` | Inspect current backend outputs. |
| `save_session` / `load_session` | Persist and restore sessions. |
| `list_vendor_sources`, `read_vendor_source`, `get_instrument_profile_schema`, `draft_instrument_profile`, `finalize_instrument_profile` | LLM-guided instrument-profile creation workflow. |
| `ingest_instrument_profile_source` | Single-source profile extraction helper for PDFs, pages, or local files. |

`run_precision` mirrors the Python service payload: it always returns the ideal reference and the raw detector-limited theory, can add a `deadtime_correction` companion block when a correction method is enabled, and keeps the raw Monte Carlo baseline separate from any corrected companion Monte Carlo statistics. The user-facing dead-time methods currently exposed are Isbaner-lite, Rapp (MCPDF-lite), Rapp (MCPDF-full), Rapp (MCHC-lite), and Rapp (MCHC-full). Rapp (MCPDF-full) is the promoted stationary detected-histogram implementation, Rapp (MCHC-full) is the promoted stationary histogram-correction implementation, and the lite methods remain surrogate variants rather than paper-faithful stationary-chain implementations.
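
As an illustration, a client can branch on whether the companion block is present. The field names below (`deadtime_correction`, `monte_carlo`, and the nested keys) follow the description above but are an assumption, not the service's exact schema:

```python
# Hypothetical run_precision payload; field names are illustrative.
payload = {
    "ideal_reference": {"cv_tau": 0.012},
    "detector_limited_theory": {"cv_tau": 0.019},
    "deadtime_correction": {"method": "Rapp (MCPDF-full)", "cv_tau": 0.015},
    "monte_carlo": {
        "raw": {"cv_tau": 0.020},        # uncorrected Monte Carlo baseline
        "corrected": {"cv_tau": 0.016},  # companion statistics, if enabled
    },
}

def correction_enabled(payload):
    """True when the server attached the companion correction block."""
    return "deadtime_correction" in payload
```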

## Resources

| Resource | Meaning |
| --- | --- |
| `hilight://status` | Backend status and effective-mode summary. |
| `hilight://config` | Current full configuration. |
| `hilight://gui-schema` | Machine-readable controller/workspace schema. |
| `hilight://data-summary` | Map availability and warnings. |
| `hilight://results` | Current result snapshot. |
| `hilight://tau-map` | Current tau map. |
| `hilight://theory-locus` | Current theoretical phasor locus. |
| `hilight://diagnostics` | Time vector, gates, excitation profile, and reference PDF. |
| `hilight://instrument-profile-schema` | JSON structure expected by the profile tools. |
| `hilight://vendors-info/index` and `hilight://vendors-info/<file>` | Vendor-document resources from `docs/vendors_info/`. |
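
Resources are fetched with a standard MCP `resources/read` request; a minimal sketch of the message, using one of the URIs listed above:

```python
import json

# Build a resources/read request for the status resource.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "resources/read",
    "params": {"uri": "hilight://status"},
}
wire = json.dumps(request) + "\n"  # newline-delimited stdio framing
```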

## Prompts

The server exposes prompt templates that steer an LLM toward repeatable workflows. These are intended as entry prompts, not as a substitute for the tools.

| Prompt | Purpose |
| --- | --- |
| `precision-audit` | Review theory, Monte Carlo, and confidence-interval consistency for the active precision sweep. |
| `instrument-design` | Design or refine excitation, detection, and gating settings for a target regime. |
| `data-inspection` | Inspect summaries, maps, diagnostics, phasors, and representative pixels. |
| `workspace-navigation` | Guide a user or agent through the desktop using GUI-schema identifiers. |
| `instrument-profile-builder` | Use vendor documentation and MCP tools to build an instrument-definition JSON. |
| `optimisation-loop` | Run and interpret the integrated optimisation workflow. |

## Instrument-profile workflow for LLMs

### Plain English

The current MCP path supports a multi-step profile-building conversation. An LLM can read vendor documents, draft a profile, identify missing assumptions, ask the user for confirmation, then save/apply the final instrument definition.

### For specialists

The preferred workflow is no longer limited to a single `ingest_instrument_profile_source` call. The current structured flow is:

`list_vendor_sources` -> `read_vendor_source` -> `get_instrument_profile_schema` -> `draft_instrument_profile` -> user confirmation -> `finalize_instrument_profile`.

### Example LLM workflow

1. list_vendor_sources
2. read_vendor_source { "source_name": "HydraHarp500.pdf" }
3. read_vendor_source { "source_name": "PMA-40.pdf" }
4. get_instrument_profile_schema
5. draft_instrument_profile {
     "profile_name": "HydraHarp500 + PMA-40",
     "sources": ["HydraHarp500.pdf", "PMA-40.pdf"],
     "component_names": ["HydraHarp500", "PMA-40"]
   }
6. Ask the user to confirm any missing assumptions
7. finalize_instrument_profile {
     "profile_name": "HydraHarp500 + PMA-40",
     "profile_json": { ... },
     "save_profile": true,
     "apply_profile": true
   }
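
The same sequence can be scripted. The `call_tool` callable below is a hypothetical MCP-client helper, not part of the repository, and the argument shapes are simplified relative to the example above:

```python
def build_profile(call_tool, source_names, profile_name):
    """Drive the structured profile workflow via an MCP client callable.

    call_tool(name, arguments) is assumed to return the tool's JSON result.
    """
    # Read each vendor document so the agent has the raw material on hand.
    sources = [call_tool("read_vendor_source", {"source_name": n}) for n in source_names]
    # The schema guides which fields the draft must populate.
    schema = call_tool("get_instrument_profile_schema", {})
    draft = call_tool("draft_instrument_profile", {
        "profile_name": profile_name,
        "sources": source_names,
    })
    # A real agent would pause here and ask the user to confirm missing assumptions.
    return call_tool("finalize_instrument_profile", {
        "profile_name": profile_name,
        "profile_json": draft,
        "save_profile": True,
        "apply_profile": True,
    })
```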

This is the intended workflow for vendor-document-driven instrument creation and the one the manual should now point users toward.