What to Measure in a GTM Stack Audit
Most GTM stack audits start with a spreadsheet of tools and a column for monthly cost. That is necessary but not sufficient. Cost alone tells you what you are spending; it does not tell you what you are getting, what is broken, or what your team has quietly stopped using. This framework adds seven additional metrics that collectively tell a complete story about stack health.
A full stack audit using this framework takes 3–4 hours for a stack of 10–15 tools. The output is a scored card for each tool and a prioritized action list: tools to consolidate, tools to renegotiate, tools to eliminate, and integrations to fix.
The 8 Metrics
Metric 1: Monthly Cost Per FTE
Calculation: (total monthly GTM tool spend) / (total FTEs on GTM team). GTM team includes sales, marketing, RevOps, and any CS roles that use these tools.
Benchmarks: Under $300/FTE/month = lean (may be under-tooled). $300–$800/FTE/month = normal for Series A–B. Above $800/FTE/month = investigate for redundancy and zombie tools. Above $1,500/FTE/month = almost certainly over-stacked.
Rubric: 5 = under $300/FTE. 4 = $300–500/FTE. 3 = $500–800/FTE. 2 = $800–1,200/FTE. 1 = above $1,200/FTE.
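The Metric 1 arithmetic and rubric banding can be sketched as a small helper; the band edges follow the rubric above, and the function names and inputs are illustrative, not from any vendor API:

```python
def cost_per_fte(total_monthly_spend: float, gtm_ftes: int) -> float:
    """Total monthly GTM tool spend divided by GTM FTE count."""
    return total_monthly_spend / gtm_ftes

def score_cost_per_fte(cost: float) -> int:
    """Map cost/FTE to the 1-5 rubric: 5 = under $300, 1 = above $1,200."""
    if cost < 300:
        return 5
    if cost < 500:
        return 4
    if cost < 800:
        return 3
    if cost <= 1200:
        return 2
    return 1

# Example: $9,600/month across a 16-person GTM team -> $600/FTE -> score 3.
print(score_cost_per_fte(cost_per_fte(9600, 16)))  # 3
```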
Metric 2: Integration Health Rate
Calculation: for each tool that claims to integrate with your CRM, verify that data is actually flowing correctly. Integration health rate = (tools with verified, current data flowing to/from CRM) / (tools that should be integrated with CRM) x 100.
How to measure: pull a sample of 10 recent events from each tool (email opens, meeting bookings, form submissions, enrichment updates) and verify that those events appear in your CRM within the expected latency window. A tool that “integrates with HubSpot” but whose data arrives 48 hours late, drops 40% of events, or writes to the wrong field has a broken integration, regardless of what the vendor says.
Rubric: 5 = 90%+ of tools passing integration health check. 4 = 75–90%. 3 = 50–75%. 2 = 25–50%. 1 = below 25%.
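The spot-check can be sketched as follows, assuming you can export recent events from each tool and look up the matching CRM record by event ID; the data shapes here are assumptions, not a real vendor schema:

```python
from datetime import datetime, timedelta

def events_in_crm(tool_events, crm_events_by_id, max_latency=timedelta(hours=24)):
    """Fraction of sampled tool events that appeared in the CRM in time."""
    passed = 0
    for ev in tool_events:
        crm_ev = crm_events_by_id.get(ev["id"])
        if crm_ev and crm_ev["synced_at"] - ev["occurred_at"] <= max_latency:
            passed += 1
    return passed / len(tool_events)

def integration_health_rate(pass_fractions, pass_threshold=0.9):
    """Percent of tools whose sampled events met the latency check."""
    passing = sum(1 for frac in pass_fractions.values() if frac >= pass_threshold)
    return passing / len(pass_fractions) * 100

# Example: two sampled events, one synced within 2 hours, one never synced.
t0 = datetime(2025, 1, 1, 9, 0)
sample = [{"id": "e1", "occurred_at": t0}, {"id": "e2", "occurred_at": t0}]
crm = {"e1": {"synced_at": t0 + timedelta(hours=2)}}
print(events_in_crm(sample, crm))  # 0.5
```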
Metric 3: Redundancy Overlap Percentage
Calculation: (number of tool pairs that do meaningfully overlapping work) / (total tools) x 100. A tool pair overlaps if more than 30% of their core features are duplicated and you are using both.
Common overlaps to check: enrichment (Apollo AND Clay AND ZoomInfo all enriching the same contacts). Email sequencing (Outreach AND HubSpot Sequences both being used by the same team). Scheduling (Calendly AND HubSpot Meetings both connected to the same calendar). Intent data (Bombora AND 6sense both running on the same domain list). Video prospecting (Loom AND Vidyard AND Hippo Video all licensed).
Rubric: 5 = 0% overlap (no redundant tools). 4 = 1–10% (one redundant pair, in consolidation process). 3 = 10–25% (2–3 redundant pairs). 2 = 25–40%. 1 = above 40%.
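One plausible way to operationalize the 30% rule in code, reading “more than 30% of their core features are duplicated” as the shared share of the smaller tool's feature set; the feature sets in the example are illustrative:

```python
from itertools import combinations

def redundancy_overlap_pct(feature_sets, threshold=0.30):
    """(overlapping tool pairs) / (total tools) x 100. A pair overlaps when
    more than `threshold` of the smaller tool's core features are shared."""
    overlapping = 0
    for a, b in combinations(feature_sets, 2):
        fa, fb = feature_sets[a], feature_sets[b]
        if len(fa & fb) / min(len(fa), len(fb)) > threshold:
            overlapping += 1
    return overlapping / len(feature_sets) * 100

# Example: Apollo and Clay both enrich; Calendly overlaps with neither.
stack = {
    "Apollo": {"enrichment", "sequencing", "dialer"},
    "Clay": {"enrichment", "waterfalls"},
    "Calendly": {"scheduling"},
}
print(round(redundancy_overlap_pct(stack), 1))  # 33.3
```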
Metric 4: Team Activation Rate
Calculation: (seats with at least one meaningful action logged in the last 7 days) / (total purchased seats) x 100. A “meaningful action” is tool-specific: for a sequencer, it means at least one email sent or sequence modified. For a CRM, it means at least one deal updated or activity logged. For an enrichment tool, at least one enrichment run.
Most tool dashboards report login-based usage, which is misleading: people log in to check something and do nothing. Pull from the tool’s activity API or a raw export where available. HubSpot’s Users report, Outreach’s User Activity report, and Clay’s workspace analytics are sources for this data.
Rubric: 5 = 80%+ of seats active weekly. 4 = 60–80%. 3 = 40–60%. 2 = 20–40%. 1 = below 20% (this tool is a zombie — team has stopped using it).
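Assuming you can pull a last-meaningful-action timestamp per seat from an activity API or export, the rate reduces to this sketch (seat emails and dates are made up for the example):

```python
from datetime import datetime, timedelta

def activation_rate(last_action_by_seat, now, window_days=7):
    """(seats with a meaningful action inside the window) / (seats) x 100.
    Values are last meaningful-action timestamps; None means no activity."""
    cutoff = now - timedelta(days=window_days)
    active = sum(1 for ts in last_action_by_seat.values() if ts and ts >= cutoff)
    return active / len(last_action_by_seat) * 100

# Example: four purchased seats, two active in the last 7 days.
now = datetime(2025, 1, 10)
seats = {
    "ae1@co.com": datetime(2025, 1, 9),   # active this week
    "ae2@co.com": datetime(2024, 12, 1),  # lapsed
    "sdr1@co.com": None,                  # never used the seat
    "sdr2@co.com": datetime(2025, 1, 4),  # active this week
}
print(activation_rate(seats, now))  # 50.0
```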
Metric 5: Data Freshness
Calculation: average days since the last enrichment or update across your contact and company records in the CRM. Pull a sample of 100 contacts from your active pipeline (contacts associated with open deals) and check when each was last enriched.
Why it matters: job titles change at a 20–25% annual rate in B2B. A contact enriched 18 months ago has a 35–40% probability of having a different title, company, or both. Outbound to stale data generates higher bounce rates, lower reply rates, and wasted AE time.
Rubric: 5 = average staleness under 30 days. 4 = 30–60 days. 3 = 60–90 days. 2 = 90–180 days. 1 = above 180 days (your data is significantly stale).
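The freshness calculation over a sampled list of last-enriched dates is one line of arithmetic; a minimal sketch:

```python
from datetime import date

def avg_staleness_days(last_enriched_dates, as_of):
    """Average days since last enrichment across the sampled contacts."""
    return sum((as_of - d).days for d in last_enriched_dates) / len(last_enriched_dates)

# Example: one contact enriched 30 days ago, one 90 days ago.
sample = [date(2025, 5, 2), date(2025, 3, 3)]
print(avg_staleness_days(sample, date(2025, 6, 1)))  # 60.0 -> rubric score 3
```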
Metric 6: Tool Sprawl Per FTE
Calculation: (total GTM tools in stack) / (total GTM FTEs). This measures cognitive overhead — how many tools does each person on your team need to be fluent in?
Benchmarks: under 2 tools/FTE = lean, possibly under-tooled. 2–4 tools/FTE = normal. 4–6 tools/FTE = approaching sprawl — context-switching costs are real. Above 6 tools/FTE = sprawl is degrading productivity; consolidation is urgent.
Rubric: 5 = under 2. 4 = 2–3. 3 = 3–4. 2 = 4–6. 1 = above 6.
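A sketch of the ratio and its rubric mapping; since adjacent rubric bands share endpoints, ties at a boundary go to the higher score here, which is an assumption:

```python
def sprawl_score(tool_count, gtm_ftes):
    """Tools per GTM FTE, mapped to the 1-5 rubric: 5 = under 2, 1 = above 6."""
    ratio = tool_count / gtm_ftes
    if ratio < 2:
        return 5
    if ratio <= 3:
        return 4
    if ratio <= 4:
        return 3
    if ratio <= 6:
        return 2
    return 1

# Example: 14 tools across a 5-person GTM team -> 2.8 tools/FTE -> score 4.
print(sprawl_score(14, 5))  # 4
```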
Metric 7: ROI Per Tool
Calculation: this is the hardest metric to measure precisely, so use a simplified version: (estimated revenue influenced by this tool in the last 90 days) / (cost of the tool in the last 90 days). Revenue influenced = sum of deal ACV for deals where this tool was used in the sales process.
For tools where direct attribution is impossible (intent data, enrichment), use a proxy: (meetings booked from the motion this tool enables) x (average ACV) x (average win rate) / (90-day tool cost). This is directional, not exact. The goal is to identify tools where the ROI math is clearly negative — not to achieve perfect attribution.
Rubric: 5 = ROI multiple above 10x. 4 = 5–10x. 3 = 2–5x. 2 = 1–2x (break-even territory — needs watching). 1 = below 1x (negative ROI — immediate review required).
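The proxy formula for non-attributable tools reduces to a single expression; an illustrative sketch with the rubric banding alongside (the example numbers are made up):

```python
def roi_proxy(meetings_booked, avg_acv, win_rate, tool_cost_90d):
    """Directional ROI multiple: (meetings x ACV x win rate) / 90-day cost."""
    return meetings_booked * avg_acv * win_rate / tool_cost_90d

def score_roi(multiple):
    """Map the ROI multiple to the 1-5 rubric: 5 = above 10x, 1 = below 1x."""
    if multiple > 10:
        return 5
    if multiple >= 5:
        return 4
    if multiple >= 2:
        return 3
    if multiple >= 1:
        return 2
    return 1

# Example: 12 meetings, $20,000 ACV, 20% win rate, $3,000 per quarter.
print(roi_proxy(12, 20_000, 0.2, 3_000))  # 16.0 -> score 5
```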
Metric 8: Switching Cost Estimate
Calculation: for each tool, estimate the hours required to migrate data out, retrain the team, rebuild integrations, and replace the functionality with a new tool. Multiply by your internal hourly cost rate. This metric prevents over-indexing on poor-ROI tools that would cost more to switch away from than to keep.
Low switching cost (under 20 hours): standalone tools with clean export APIs, no deeply embedded workflows. Medium switching cost (20–80 hours): tools with moderate CRM integration and some team habit formation. High switching cost (80+ hours): tools that are the backbone of your outbound or inbound motion, deeply integrated with 3+ other systems, or that hold historical data you cannot easily export.
Rubric: 5 = switching cost under 10 hours. 4 = 10–20 hours. 3 = 20–50 hours. 2 = 50–100 hours. 1 = above 100 hours (switching this tool is a project, not a decision).
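The estimate is a sum of four hour buckets times your loaded hourly rate, then banded; a sketch in which the $75/hour rate and the hour figures are assumptions for the example:

```python
def switching_cost_estimate(migrate_h, retrain_h, rebuild_h, replace_h, hourly_rate):
    """Total exit hours and the dollar estimate at the internal hourly rate."""
    hours = migrate_h + retrain_h + rebuild_h + replace_h
    return hours, hours * hourly_rate

def score_switching_cost(hours):
    """Map exit hours to the 1-5 rubric: 5 = under 10h, 1 = above 100h."""
    if hours < 10:
        return 5
    if hours <= 20:
        return 4
    if hours <= 50:
        return 3
    if hours <= 100:
        return 2
    return 1

# Example: 10h migration + 20h retraining + 15h integrations + 15h replacement.
hours, dollars = switching_cost_estimate(10, 20, 15, 15, hourly_rate=75)
print(hours, dollars, score_switching_cost(hours))  # 60 4500 2
```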
The 1-Page Audit Template
GTM Stack Audit Template
Audit date: [DATE]
Auditor: [NAME]
Stack size: [NUMBER] tools
Total monthly spend: $[AMOUNT]
GTM FTE count: [NUMBER]
| Tool Name | Monthly Cost | M1: Cost/FTE | M2: Integration Health | M3: Redundancy | M4: Activation Rate | M5: Data Freshness | M6: Sprawl | M7: ROI Multiple | M8: Switch Cost | Score /40 | Action |
|-----------|-------------|--------------|------------------------|----------------|---------------------|---------------------|------------|-----------------|-----------------|-----------|--------|
| [Tool 1] | $[amount] | [score 1-5] | [score 1-5] | [score 1-5] | [score 1-5] | [score 1-5] | [score 1-5]| [score 1-5] | [score 1-5] | [sum] | [Keep/Consolidate/Eliminate/Renegotiate] |
Action thresholds:
Score 32–40: Keep — healthy tool, no action needed
Score 24–31: Monitor — one or two weak metrics; set a 90-day review
Score 16–23: Renegotiate or consolidate — the tool has real problems; find fixes or alternatives
Score below 16: Eliminate — the cost to keep this tool exceeds the value; begin migration planning
Top 3 consolidation opportunities identified:
1.
2.
3.
Top 3 integration health issues to fix:
1.
2.
3.
Next audit date: [90 days from today]
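The per-tool scoring and action mapping in the template can be sketched as a small helper (names illustrative):

```python
def audit_action(metric_scores):
    """Sum the eight 1-5 metric scores and map to the action thresholds."""
    total = sum(metric_scores)
    if total >= 32:
        action = "Keep"
    elif total >= 24:
        action = "Monitor"
    elif total >= 16:
        action = "Renegotiate or consolidate"
    else:
        action = "Eliminate"
    return total, action

# Example: a tool scoring [5, 4, 3, 4, 5, 4, 3, 4] across the eight metrics.
print(audit_action([5, 4, 3, 4, 5, 4, 3, 4]))  # (32, 'Keep')
```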
Run this audit quarterly. Stack composition changes faster than most teams realize — new tools get added during a pilot, old tools get forgotten when the champion who bought them leaves, integrations break silently when an API version is deprecated. A quarterly audit prevents the slow accumulation of technical and budget debt that makes stack rationalization feel like a painful big-bang project rather than ongoing hygiene.
Related reading: For a framework on vetting new tools before they enter your stack, see How to Vet a GTM Tool in 30 Minutes. For CRM-specific audit considerations, see Choosing Your CRM at Seed Stage. The GTMLens comparison index covers tool-by-tool alternatives for each category in a typical GTM stack.