What You'll Learn
By the end of this article, you will know how to verify traction by measuring the forecast error of a single GTM loop, so you can avoid allocating scarce runway to high-variance motions you cannot yet control.
Why This Matters
In seed-stage ventures, runway is not just cash. It is a finite count of controlled attempts. Most founders mistake a spike in interest for traction. That is a category error.
Scaling an unstable acquisition loop amplifies noise (not signal). If customer acquisition depends on founder heroics, timing luck, or one-off distribution anomalies, the apparent unit economics are a narrative, not a system.
This matters because scaling decisions create irreversible costs. Hiring, spend, and roadmap commitments increase coordination overhead. If the loop cannot predict its own output under controlled inputs, adding fuel does not create growth. It creates burn with better charts.
A seed-stage company does not need “more growth” first. It needs a loop that behaves consistently enough to be steered.
Signals That Change the Decision
At low volume, outcomes are dominated by variance. When you are running fewer than ~50 meaningful attempts per week, you can “win” a week by luck and lose the next week by the same mechanism.
So the signal is not raw output. The signal is forecastability under control.
Use these operational signals to decide whether a loop is traction or noise:
Forecast error on the primary output
If you hold inputs constant and output swings wildly, you do not have traction. You have an unmodeled system.
Stage-to-stage variance
If the same level of input produces inconsistent next-step rates (reply rate, call-booking rate, qualification rate, pilot-conversion rate), the loop is not ready for scaling decisions.
Surprise frequency
If your team regularly says “this week was weird” without identifying a controllable variable that changed, you are not operating a system. You are observing weather.
A practical boundary for “under control” at seed stage is a forecast error below ±30% on the primary output of the loop. This is not a benchmark to brag about. It is a gating threshold: above it, scaling decisions are mostly gambling.
The Non-Obvious Claim
Traction is the absence of surprise (not the presence of customers).
Traction exists in a GTM loop only when next week’s primary output is forecastable from this week’s controlled inputs within a tight error band. If you can forecast, you can allocate. If you cannot forecast, scaling is not growth. It is multiplying uncertainty.
This is falsifiable: pick one loop, freeze inputs, and measure forecast error for five working days. If your output remains unpredictable, you do not need “more leads.” You need repeatability.
A loop with flat output and low forecast error is more valuable than a loop with rising output and high forecast error. The first can be tuned and scaled. The second is how founders accidentally annualize a spike and call it progress.
The 3-Step Protocol
Step 1: Isolate the primary loop (~45 minutes)
Select the single GTM channel you ran most in the past 7 days (outbound email, LinkedIn outbound, referrals, partnerships). Commit to running only this loop for the next five working days.
Define:
- Three controllable inputs (example: new prospects sourced/day, messages sent/day, follow-ups/day).
- One primary output that matters (example: qualified replies/day, qualified calls booked/day).
Constraints:
- You are prohibited from changing the script, the offer, or the targeting during this measurement week.
- You may fix obvious data hygiene issues (duplicates, wrong emails), but you may not change the loop design.
Deliverable (end of Day 1): a one-page “Loop Card” containing the 3 inputs, the single output, and your daily target counts.
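The Loop Card can be sketched as a small data structure. This is a hypothetical illustration, not a prescribed format; the field names and the example outbound-email numbers are assumptions:

```python
from dataclasses import dataclass


@dataclass
class LoopCard:
    """One-page Loop Card: 3 controllable inputs, 1 primary output, daily targets."""
    loop_name: str
    inputs: dict[str, int]   # controllable input -> daily target count (exactly three)
    primary_output: str      # the single output that matters

    def __post_init__(self):
        # Enforce the protocol: exactly three controllable inputs, no more, no fewer.
        if len(self.inputs) != 3:
            raise ValueError("A Loop Card needs exactly three controllable inputs")


# Example: an outbound email loop (illustrative targets)
card = LoopCard(
    loop_name="outbound_email",
    inputs={"prospects_sourced": 30, "messages_sent": 50, "follow_ups": 20},
    primary_output="qualified_replies_per_day",
)
```

Anything that does not fit on this card is, by definition, not part of the measurement week.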
Step 2: Measure the five-day forecast error (~15 minutes per day, ~15 minutes to compute)
Run the isolated loop for five working days using the same daily input levels.
Daily log (10 lines total is enough):
- Inputs executed (three numbers)
- Output observed (one number)
On Day 6, compute a simple five-day Mean Absolute Percentage Error (MAPE) versus last week’s daily average output:
- Expected Output (E) = last week’s average daily output for this same loop.
- Daily error = |E − Actual| ÷ E
- Forecast error (MAPE) = the average of the five daily errors.
Important guardrail:
- If E is 0 or very small (0–2), percentage error becomes misleading. In that case, use absolute error instead: |E − Actual|, and set a concrete tolerance (example: “within ±1 qualified reply/day”).
Deliverable (end of Day 6): one number (your forecast error) plus the five-day log.
Step 3: Execute the allocation decision memo (~30 minutes)
Write a one-page “Traction Decision Memo” for next week based on the error.
Rules:
- If forecast error is ≤ 30% (or within your absolute tolerance), increase inputs by 20% next week and change nothing else.
- If forecast error is > 30%, keep inputs flat and change exactly one variable for the next week (choose one: list quality, ICP slice, or script). Do not change more than one variable, or you will not know what reduced variance.
Constraint:
- Do not increase spend or headcount until the loop is under control.
Deliverable: a signed memo that dictates the team’s focus for the next seven days.
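The decision rules above are deterministic, so the memo's core decision reduces to a small function. This is a sketch; the 20% increase, the flat-inputs rule, and the one-variable constraint are taken directly from the rules above, while the dictionary shape is an assumption:

```python
def allocation_decision(forecast_error: float, threshold: float = 0.30) -> dict:
    """Translate the measured forecast error into next week's memo decision."""
    if forecast_error <= threshold:
        # Under control: increase inputs by 20%, change nothing else.
        return {"inputs_multiplier": 1.20, "variables_to_change": 0}
    # Not under control: keep inputs flat and change exactly one variable
    # (list quality, ICP slice, or script) so you know what reduced variance.
    return {"inputs_multiplier": 1.00, "variables_to_change": 1}


# Example: the 12% error from the measurement week clears the gate
decision = allocation_decision(0.12)
# -> {"inputs_multiplier": 1.20, "variables_to_change": 0}
```

The point of writing it this way is that there is no room for narrative: the number decides, not the week's mood.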
The Rule That Matters
Scale comes after control.
What to Do Next
This week, freeze one GTM loop, measure its five-day forecast error, and refuse all scaling decisions until the error is below 30%.
References & Further Readings
- Andreessen Horowitz. (2015). 16 Startup Metrics.
- Andreessen Horowitz Speedrun. (2025). Make ARR Useful Again.
- Eisenhardt, K. M. (1989). Making Fast Strategic Decisions in High-Velocity Environments. Academy of Management Journal.
- First Round Review. (2023). The Most Common Go-to-Market Questions from Early Founders (including forecasting and measurement).
- Hyndman, R. J., & Koehler, A. B. (2006). Another look at measures of forecast accuracy. International Journal of Forecasting.
- Makridakis, S., & Hibon, M. (2000). The M3-Competition: results, conclusions and implications. International Journal of Forecasting.
- Sequoia Capital. (n.d.). The Arc Product-Market Fit Framework.
- Sequoia Capital. (n.d.). Measuring Product Health.
- Y Combinator. (2018). How to Measure Your Product.
- Y Combinator. (2023). Key Startup Metrics.