Pricing

Start free. Upgrade when the decisions start mattering.

Qlro is free for individual engineers and open-source research. Paid tiers (Team and Enterprise) open in Q3 2026; joining the waitlist locks in the pricing shown below, and early-access customers get a permanent 20% discount.

Free
$0
500 requests / month
Individual engineers, students, open-source contributors
  • HTTP /predict API
  • Full qlro SDK (pip install)
  • Retrospective CLI (2 sessions / week)
  • Public accuracy dashboard + snapshot DOIs
  • Community Discord
Get free API key
Coming soon
Team
$49 per seat / month · minimum 2 seats
2–10 person quantum R&D teams running recurring vendor comparisons
  • Everything in Free
  • 10,000 requests / month per seat
  • Retrospective CLI — unlimited sessions
  • Comparison Workspace — save + share rankings
  • Shared prediction history (12 months retention)
  • Weekly usage report + drift alerts
  • Priority email support
Join waitlist

Billing opens Q3 2026. Starts at $98/mo for 2 seats. Early-access customers lock in a permanent 20% discount.

Enterprise
Custom · annual contract · from $50K / year
Government procurement, regulated industries, 10+ seat teams, audit-trail-required deployments
  • Everything in Team, no seat cap
  • Private deployment (AWS, on-prem)
  • Tamper-evident hash-chained decision records
  • SLA + uptime guarantee on citation URLs
  • Custom workload templates
  • Dedicated success engineer
  • Legal / compliance review support
Contact sales
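"Tamper-evident hash-chained decision records" refers to a standard construction: each record embeds the digest of the previous one, so altering any past entry invalidates every later link. A minimal sketch of the general technique (the record fields are illustrative, not Qlro's internal schema):

```python
import hashlib
import json

def chain_records(records):
    """Link each record to the SHA-256 digest of the previous entry,
    so editing any earlier record breaks every later hash."""
    prev = "0" * 64  # genesis digest
    chained = []
    for rec in records:
        entry = {"record": rec, "prev_hash": prev}
        prev = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        chained.append({**entry, "hash": prev})
    return chained

log = chain_records([
    {"decision": "ibm_torino", "ts": "2026-01-05"},     # example records
    {"decision": "quantinuum_h2", "ts": "2026-02-11"},
])
# Each entry's prev_hash matches the previous entry's hash; any edit
# to log[0] would break the link verified here.
assert log[1]["prev_hash"] == log[0]["hash"]
```

Verifying the chain is a linear pass recomputing each digest; no trust in the storage layer is required.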

FAQ

Pricing
Why is the API free right now?
Qlro is in the free-adoption phase. Our moat is the paired (prediction, hardware outcome) dataset that accumulates as users contribute back observations. We want as many quantum engineers as possible to run Qlro on their workloads before we turn on billing.
What happens when billing opens?
Existing free-tier keys stay free forever at the 500 req/month quota. If you joined the Team waitlist before Q3 2026 you lock in the $49/seat pricing with a 20% permanent discount (so 2 seats stay at $78.40/mo for the lifetime of the subscription). Enterprise contracts are negotiated individually.
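The quoted figure checks out; in integer cents, the waitlist arithmetic is:

```python
seat_cents = 4900                     # $49 per seat per month
seats = 2                             # Team minimum
discount_pct = 20                     # permanent early-access discount
monthly_cents = seat_cents * seats * (100 - discount_pct) // 100
print(monthly_cents / 100)            # 78.4 -> the $78.40/mo quoted above
```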
Why no Pro tier for individuals?
Quantum-device decisions are organisational, not personal — we haven't seen demand for a per-seat individual-professional tier in this market. Free covers solo research; Team starts at 2 seats ($98/mo) for small pilot groups. If enough individual demand shows up we'll revisit.
Can I deploy Qlro on-prem?
Yes, via Enterprise. The full predictor + dashboard runs in a private container with your own Metriq snapshot and a private audit-trail database. Contact sales.
What does Enterprise actually include beyond the SaaS?
Enterprise contracts start at $50K/yr (base) and stack with five add-on lines: Audit + Reporting (+$30K), Private Deployment (+$50K), Error Mitigation Advisory (+$80K), Procurement Advisory (+$100K), and Continuous Drift Monitoring (+$120K). A government-pilot contract typically lands at $230K ACV; an R&D fleet contract typically at $250K ACV. See the Enterprise page for the add-on detail and the 3-year vision roadmap.
Is the code open source?
The SDK (github.com/linsletoh/qlro, Apache 2.0) and the paper (Zenodo DOI 10.5281/zenodo.19785800) are fully public. The hosted API / dashboard / pricing tiers are commercial.
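A call to the hosted /predict endpoint might look like the following sketch. The base URL, payload fields, and header names here are illustrative assumptions, not the documented API schema; check the SDK for the real interface.

```python
import json
from urllib import request

# Hypothetical request body: workload name, size, and candidate devices
# are placeholder field names for illustration only.
payload = {
    "workload": "qaoa",
    "qubits": 27,
    "candidates": ["ibm_torino", "quantinuum_h2"],
}
req = request.Request(
    "https://api.qlro.example/predict",   # placeholder URL, not the real host
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer <YOUR_API_KEY>",
        "Content-Type": "application/json",
    },
)
# resp = request.urlopen(req)  # uncomment with a real key and endpoint
```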
Defensibility
The WCPP formula is published. What stops a competitor from copying it?
Nothing — and we don't want to stop them. The paper and the four-axis projection are the ingredients; the moat is the paired (prediction, real-hardware outcome) dataset that compounds with every user submission, plus the citation graph that forms when external papers, RFPs, and procurement records reference Qlro snapshot DOIs. A cloned formula with no residual time-series and no cited snapshots produces numbers nobody can reproduce or audit.
Why wouldn't IBM or AWS just build this in-house?
Because a vendor-run comparison is structurally non-credible. The value of a cross-vendor recommendation comes from the neutrality of the party producing it — the instant IBM ranks IBM against Quantinuum, buyers discount the result. This is why Bloomberg could build a cross-market terminal that JPM and Goldman couldn't: the neutral party wins the reference-point role.
What if Metriq / Unitary Foundation builds a recommender themselves?
Metriq is the benchmark layer; Qlro is the workload-conditioned decision layer on top of it. The two compose rather than compete — we cite Metriq in every snapshot. If they ship a ranker, we ingest it as a signal; if they don't, we keep scoring their data.
What if your predictions turn out to be wrong?
We publish the residuals. Every prediction is timestamped against the Metriq commit it was made on, and the corresponding hardware outcome — when users contribute one back — is diffed publicly on /accuracy. A predictor that hides its misses has no feedback loop; one that shows them compounds calibration over time.
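Concretely, a published residual is just the signed difference between a timestamped prediction and the outcome a user contributed back; a schematic sketch, with field names assumed for illustration:

```python
def residual(prediction: dict, outcome: dict) -> float:
    """Signed error: positive means the hardware beat the prediction."""
    return outcome["observed"] - prediction["predicted"]

# A prediction pinned to the Metriq commit it was made against,
# and the hardware outcome contributed later (values are made up).
pred = {"predicted": 0.72, "metriq_commit": "abc123", "ts": "2026-03-01"}
obs = {"observed": 0.68}
r = residual(pred, obs)   # negative: the predictor was optimistic here
```

The /accuracy diff is then just this quantity aggregated over every (prediction, outcome) pair.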