Methodology
How Plinth thinks
A decision system that makes confidence boundaries explicit: what the evidence supports now, what it doesn't yet, and what would increase certainty.
Decision credibility over AI novelty.
What Plinth looks at
Pricing and packaging signals
Product docs and API surfaces
Changelog/release velocity
Reviews and community signals
Security/compliance posture
Public positioning and proof points
How signals become opportunities
Collect public market evidence
We gather evidence from publicly accessible sources across your competitors: marketing sites, documentation, announcements, and reviews.
Normalize into comparable claims
Evidence is organized into structured claims so sources can agree or conflict clearly. This makes patterns visible across competitors.
Synthesize opportunities with explicit assumptions and citations
Opportunities are synthesized from evidence patterns, with clear citations and explicit assumptions. You see what supports the recommendation and what doesn't.
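Plinth's internals aren't public, so as an illustration only, the collect → normalize → synthesize flow above could be sketched roughly like this. All names and the grouping rule are hypothetical, not Plinth's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """One piece of public market evidence (hypothetical schema)."""
    source: str      # e.g. a pricing page or changelog URL
    competitor: str
    text: str

@dataclass
class Claim:
    """Evidence normalized into a comparable statement."""
    statement: str
    supporting: list = field(default_factory=list)   # evidence that agrees
    conflicting: list = field(default_factory=list)  # evidence that disagrees

@dataclass
class Opportunity:
    """A recommendation with its assumptions and citations made explicit."""
    summary: str
    assumptions: list  # what the evidence does NOT yet establish
    citations: list    # the Claims it rests on

def normalize(evidence: list) -> list:
    """Toy normalization: group raw evidence into claims by statement text.
    A real system would match semantically, not by exact string."""
    claims = {}
    for ev in evidence:
        claims.setdefault(ev.text, Claim(statement=ev.text)).supporting.append(ev)
    return list(claims.values())
```

The point of the structure is the last step: an Opportunity carries its assumptions and citations as first-class fields, so the reader sees what supports it and what doesn't.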
Confidence boundaries
The organizing principle: confidence is one of three explicit states, not modes to toggle between:
Exploratory
Safe to explore
Signals are early or uneven. Worth testing, but not yet safe to commit significant resources.
Directional
Safe to prioritize discovery
Multiple signals align, but key assumptions remain. Safe to begin scoping and validation work.
Investment-ready
Safe to invest
Evidence converges; risks are explicit. Safe to commit resources with clear understanding of assumptions.
Plinth won't pretend uncertainty is resolved.
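The three states above can be modeled as a small enum. The thresholds below are invented for illustration; Plinth's actual criteria are qualitative, not a signal count:

```python
from enum import Enum

class Confidence(Enum):
    """Confidence boundary states, named as in the methodology."""
    EXPLORATORY = "safe to explore"
    DIRECTIONAL = "safe to prioritize discovery"
    INVESTMENT_READY = "safe to invest"

def classify(aligned_signals: int, open_assumptions: int) -> Confidence:
    """Toy classification: converging evidence with no open assumptions
    is investment-ready; some alignment is directional; else exploratory."""
    if aligned_signals >= 3 and open_assumptions == 0:
        return Confidence.INVESTMENT_READY
    if aligned_signals >= 2:
        return Confidence.DIRECTIONAL
    return Confidence.EXPLORATORY
```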
Why Plinth fails closed
If evidence is thin, Plinth returns fewer (or no) opportunities.
"No output" is a trust feature, not a bug.
When the evidence doesn't support confident recommendations, Plinth tells you that. It's better to know you don't have enough signal than to get recommendations based on thin evidence.
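Failing closed is a filter, not a fallback. A minimal sketch, assuming a hypothetical citation-count threshold (Plinth's real gate is richer than this):

```python
def recommend(opportunities: list, min_citations: int = 2) -> list:
    """Fail closed: drop anything under-supported rather than pad the list.
    An empty result is a valid, informative answer."""
    return [o for o in opportunities if len(o["citations"]) >= min_citations]
```

With thin evidence the function returns an empty list instead of lowering the bar, which is exactly the "no output is a trust feature" behavior described above.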
What Plinth is / is not
Plinth is
- Evidence-bound
- Conservative
- Explainable
Plinth is not
- A dashboard
- A brainstorm toy
- A replacement for judgment