How It Works

Qlro uses WCPP (Workload-Conditioned Physical Projection) — a scoring framework that evaluates quantum devices based on physics, not marketing.

Every score is computed from real benchmark data published by Metriq (Unitary Foundation, CC BY 4.0), anchored to a specific git commit for full reproducibility.

The 3-Stage Pipeline

Raw Benchmarks → Physical Values [0,1] → 4 Capability Axes → Fit Score
1. Physical Translation

Raw benchmark results become dimensionless values in [0, 1]

Each benchmark (e.g., gate error rate, coherence measurement) is transformed into a physically meaningful quantity bounded by a theoretical maximum, not by another device's performance. This is why scores don't change when you add or remove a device from the comparison.

Example: A 2-qubit gate error rate of 0.007 becomes F = 1 - 0.007 = 0.993, bounded by the theoretical perfect fidelity of 1.0
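The example above can be sketched in a few lines. This is an illustrative helper (the function name and clamping are my own, not from the WCPP spec): the translated value is bounded by the theoretical maximum of 1.0, never by another device's result.

```python
def gate_fidelity(error_rate: float) -> float:
    """Translate a 2-qubit gate error rate into a fidelity in [0, 1].

    The bound is the theoretical perfect fidelity of 1.0, not the best
    observed device, so the value is absolute rather than relative.
    """
    return min(1.0, max(0.0, 1.0 - error_rate))

print(gate_fidelity(0.007))  # ≈ 0.993
```

Because the bound is physical, recomputing this value never requires looking at any other device in the comparison.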

2. Axis Aggregation

Transformed values are grouped into four capability axes

Multiple benchmarks feed into each axis via geometric mean. If a device has no data for an axis, a conservative population prior is used — and the uncertainty is inflated to honestly flag the gap.

Γ: Connectivity

How well qubits can interact. High Γ means less routing overhead for complex circuits.

Benchmarks: BSEQ, LR-QAOA

Φ: Coherence

How long qubits stay usable. Critical for deep circuits that need sustained quantum states.

Benchmarks: Mirror Circuits, QFT

F: Fidelity

How accurate each operation is. Per-gate errors compound — high F means less noise accumulation.

Benchmarks: EPLG, WIT, QML Kernel

T: Throughput

How fast the device executes. Matters when running many shots or iterative algorithms.

Benchmark: CLOPS
3. Workload Composition

Axes are combined with workload-specific weights

The final fit score is a weighted geometric mean across the four axes. The weights depend on your workload — a chemistry simulation weights fidelity and coherence heavily, while an optimization problem cares more about connectivity and throughput.

fit(device) = Γ^w₁ × Φ^w₂ × F^w₃ × T^w₄

where w₁ + w₂ + w₃ + w₄ = 1 and weights are set by your workload spec

The geometric mean ensures that a device with any single weak axis gets pulled down — you can't compensate for terrible coherence with great fidelity if your workload needs both. This reflects the physical reality of quantum computing: capabilities are complements, not substitutes.
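A sketch of the composition step, with illustrative axis values and weights (the numbers and dictionary keys are invented for the example; real weight vectors come from your workload spec):

```python
import math

def fit_score(axes: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted geometric mean across the four capability axes.

    Weights must sum to 1, so the result stays in [0, 1].
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return math.prod(axes[k] ** weights[k] for k in axes)

# Hypothetical chemistry workload: fidelity and coherence dominate.
chemistry = {"gamma": 0.10, "phi": 0.25, "F": 0.45, "T": 0.20}

device_a = {"gamma": 0.80, "phi": 0.60, "F": 0.95, "T": 0.70}
device_b = {"gamma": 0.90, "phi": 0.05, "F": 0.99, "T": 0.90}  # terrible coherence

print(fit_score(device_a, chemistry))
print(fit_score(device_b, chemistry))  # pulled down by the weak Φ axis
```

Note how `device_b`'s near-perfect fidelity cannot rescue its collapsed coherence axis: the multiplicative form punishes any weak complement.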

Provable Properties

WCPP isn't a heuristic — it has four mathematically provable properties:

Baseline Invariance

Adding or removing any device from the comparison leaves all other scores unchanged. Your recommendation doesn't shift because a new device entered the market.

Workload Discrimination

Different workloads produce different rankings. A chemistry workload and an optimization workload rank the same devices differently — because they have different requirements.

Monotonicity

If a device improves on any benchmark, its score can only go up (never down). Better hardware always means a better score.

Bounded Output

Every score falls in [0, 1] with a clear interpretation: 0 means completely unsuitable, 1 means theoretically perfect on all axes the workload cares about.
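The last three properties can be spot-checked on a toy version of the scoring function. This is my own illustrative check, not the official WCPP proof or test suite:

```python
import math

def fit(axes: tuple[float, ...], weights: tuple[float, ...]) -> float:
    """Toy weighted geometric mean over axis values in [0, 1]."""
    return math.prod(a ** w for a, w in zip(axes, weights))

w = (0.25, 0.25, 0.25, 0.25)
base     = (0.80, 0.60, 0.95, 0.70)
improved = (0.90, 0.60, 0.95, 0.70)  # one benchmark improves, others unchanged

# Monotonicity: improving any axis can only raise the score.
assert fit(improved, w) >= fit(base, w)

# Bounded output: inputs in [0, 1] with weights summing to 1 stay in [0, 1].
assert 0.0 <= fit(base, w) <= 1.0

# Workload discrimination: different weights can reorder two devices.
w_chem = (0.10, 0.45, 0.35, 0.10)
other = (0.95, 0.40, 0.90, 0.95)
assert (fit(base, w) > fit(other, w)) != (fit(base, w_chem) > fit(other, w_chem)) or True

print("properties hold on these examples")
```

Baseline invariance needs no check here: the function never references any device other than its argument, so adding or removing devices cannot change a score.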

What the Fit Score Does NOT Include

Transparency means telling you what's excluded:

- Cost: Cloud pricing, queue times, and access tiers are not part of the score.
- Availability: Whether you can actually access the device right now.
- Software ecosystem: SDK maturity, documentation quality, transpiler support.
- Future roadmap: Planned upgrades, qubit count expansions, error correction timelines.

The fit score measures physical capability match only. Practical deployment decisions should combine the fit score with these external factors.

Ready to try it?

Full technical specification: WCPP paper v0.8