Disclaimer: This site is an independent, amateur project. It tracks the empirical accuracy of public statements for entertainment purposes only. It does not constitute investment advice, and does not reflect total portfolio returns or professional performance.
Predictions are sourced from public media appearances. If a prediction has been misattributed or misinterpreted, use the flag feature to request a review.
Prediction Leaderboard
12 forecasters · 732 predictions tracked · Last updated 19 Apr 2026
| # | Forecaster | Resolved | Weighted % |
|---|---|---|---|
| 1 | Michael Howell | 23 | 80.0% |
| 2 | Lyn Alden | 18 | 76.6% |
| 3 | Jordi Visser | 11 * | 68.2% |
| 4 | Jim Cramer | 21 | 52.8% |
| 5 | Luke Gromen | 14 * | 47.2% |
| 6 | Jim Bianco | 23 | 46.0% |
| 7 | Stanley Druckenmiller | 18 | 44.8% |
| 8 | Michael Burry | 17 | 44.4% |
| 9 | Arthur Hayes | 17 | 34.9% |
| 10 | Jeff Snider | 14 * | 13.8% |
| 11 | Raoul Pal | 14 * | 13.2% |
| 12 | Peter Schiff | 17 | 11.9% |
Methodology
How predictions are selected, scored, and resolved.
Inclusion Criteria
A prediction must pass three gates before it enters the database:
1. Clear resolution oracle — a specific, identifiable data source that can objectively confirm or deny the claim (price feed, economic release, etc.).
2. Meaningful condition — not just a directional opinion; a specific threshold or event that can be verified true or false.
3. Stated timeframe — a resolution date or window must be present or clearly implied by the forecaster's statement.
Predictions that fail any gate are excluded entirely — not held as pending.
Specificity Score (1–4)
Each prediction earns one point per criterion it satisfies. This score becomes its weight in the weighted accuracy calculation.
- +1 Exact numeric threshold (e.g. "Bitcoin above $100,000")
- +1 Exact date, not just a quarter or year
- +1 Named, locked oracle (e.g. "BTC-USD on Yahoo Finance")
- +1 Unconditional — no "if X then Y" clause
A score of 4 is the maximum. Most predictions score 1–2. A bold, precise, unconditional call with a locked oracle scores 4.
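The four criteria above can be sketched as a simple checklist sum (the field names here are hypothetical, not the site's actual schema):

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    has_numeric_threshold: bool  # e.g. "Bitcoin above $100,000"
    has_exact_date: bool         # exact date, not just a quarter or year
    has_locked_oracle: bool      # e.g. "BTC-USD on Yahoo Finance"
    is_unconditional: bool       # no "if X then Y" clause

def specificity_score(p: Prediction) -> int:
    """One point per criterion satisfied; 4 is the maximum."""
    return sum([p.has_numeric_threshold, p.has_exact_date,
                p.has_locked_oracle, p.is_unconditional])

# A bold, precise, unconditional call with a locked oracle:
specificity_score(Prediction(True, True, True, True))  # → 4
```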
Weighted vs Simple Accuracy
Simple accuracy counts correct predictions as a share of total resolved — one prediction, one vote.
Weighted accuracy weights each prediction by its specificity score: a correct score-4 prediction contributes four times as much as a correct score-1 prediction, and a miss on a score-4 call drags the average down four times as much.
The Δ column shows the difference (weighted minus simple). A positive Δ means precision pays off for that forecaster: their bold, specific calls land more often than their hedged ones.
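A minimal illustration of the two metrics and their difference (not the site's actual code — each resolved prediction is reduced to a correct/incorrect flag plus its specificity weight):

```python
def simple_accuracy(results):
    """results: list of (correct, weight) pairs; one prediction, one vote."""
    return sum(1 for correct, _ in results if correct) / len(results)

def weighted_accuracy(results):
    """Each prediction counts `weight` times, so a score-4 hit or miss
    moves the figure four times as much as a score-1 one."""
    return sum(w for correct, w in results if correct) / sum(w for _, w in results)

# One bold hit (score 4), two hedged misses, one hedged hit:
results = [(True, 4), (False, 1), (False, 1), (True, 1)]
simple = simple_accuracy(results)      # 2/4 = 0.50
weighted = weighted_accuracy(results)  # 5/7 ≈ 0.714
delta = weighted - simple              # positive: the bold call paid off
```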
Data Sourcing
Predictions are sourced from public statements: interviews, newsletters, podcasts, Twitter, and YouTube. Each prediction links to the original source.
Oracle-automated resolutions use live price feeds (Yahoo Finance) and economic data (FRED API). Manual resolutions include a cited source and evidence URL. All resolutions are factual records — no editorial judgement is applied.
Lookback Period
The leaderboard tracks predictions from 2018 to the present to ensure relevance to current market regimes.
A small number of earlier predictions exist in the database as historical records but are not the focus of this project. Data coverage and consistency improve significantly from 2018 onwards.