
The Hiring Signal Is Broken: Stop Betting on Self‑Reported Evidence for GTM Roles


Hiring used to be hard because the right candidates were rare. Now it’s hard because the inputs are easy to manufacture. When applications can be generated at scale—and recruiters are being overwhelmed by AI‑assisted submissions—polish stops being a proxy for competence.

That matters most in GTM roles, where “looking good on paper” is already correlated with the job (selling is persuasion). In a noisy market, it’s increasingly possible to select a candidate who is exceptional at self‑presentation and mediocre at creating outcomes.

This isn’t about how to run interviews. It’s about what happens before an interview: how to make evidence-led decisions about who deserves a conversation in the first place—especially now that application answers have become less trustworthy signals.

The core problem: most pre-interview screening is self-reported

Resumes are self‑reported. LinkedIn profiles are self‑reported. “Impact bullets” are self‑reported. Even many case-study narratives are written to be impressive rather than verifiable. That used to be manageable when application volume was lower and when “effort” signaled seriousness. But when application volume spikes and AI can generate keyword-perfect “fit,” self-reported data becomes a weaker filter.

So the new discipline for scale-ups is not “better interviewing.” It’s better upstream selection: moving from self‑reported signals to observable, attributable evidence when choosing whom to interview.

What “evidence-led” means in candidate selection (pre-interview)

Evidence-led selection means treating a resume like a marketing asset and asking one question before you grant an interview slot:

What can we verify—outside of the candidate’s own claims—that suggests they can perform this job here?

This framing is consistent with decades of research in personnel selection that emphasizes predictive validity—methods that better predict future job performance are more useful than those that merely “sound convincing.”

It also aligns with compliance reality: selection procedures are broadly defined and can be effective, but they should be job-related and consistently applied; “informal” or inconsistent screens can create risk.

A practical “evidence hierarchy” for deciding whom to interview

Bring a hierarchy to candidate triage—without adding process overhead.

  • Tier 1 evidence: Direct, attributable work product (best signal). This is work that the candidate can point to publicly or provide (with appropriate redactions) that maps tightly to the role.

The point isn’t the format. It’s attribution: can the skill or achievement be verified independently? This pushes selection away from storytelling and toward observable proof, which aligns with the predictive-validity mindset in selection research.

  • Tier 2 evidence: Third‑party validation tied to outcomes (strong signal). This is external evidence that the candidate’s work created outcomes.

This matters because self-reported claims are easy to polish, but durable third-party signals are harder to fake at scale—especially when application volume is high.

  • Tier 3 evidence: Reputation signals. Previous employers or internships are not meaningless—but they’re weaker than most hiring teams admit. In the current environment, they’re also easier to game (network signaling, keyword optimization, title inflation). When you rely heavily on Tier 3, you’re back to betting on narrative.

The takeaway: Tier 3 can get someone noticed. Tier 1 and Tier 2 should determine who gets interviewed.

The shift to make: from resume screening to evidence-based matching

Most scale-ups still screen like this:

  • “Does the resume match the JD?”
  • “Do they have the right logos/titles?”
  • “Do they sound credible in a quick call?”

Evidence-led selection screens like this instead:

  • “Do we see work that maps to the role?”
  • “Can we validate outcomes beyond self-report?”
  • “Is there proof of operating in constraints similar to ours (speed, ambiguity, cross-functional friction)?”

This is the upstream fix for the same problem Semafor surfaced: when application volume and AI-assisted submissions surge, conventional screening becomes less reliable.

One more reason this matters now: GTM roles are evolving fast

Even aside from application noise, GTM roles (especially in AI-enabled companies) are blending technical and commercial responsibilities. The job market itself is signaling that hybrid profiles are rising fast—so old “title-based” screens are increasingly brittle. When role boundaries shift, self-reported titles become even less informative. Evidence—what someone actually built, shipped, and operationalized—stays relevant.

Treat self-report as a hypothesis, not a signal

In 2026, “impressive on paper” is not a meaningful filter—because paper is easy to generate. What scale-ups need is a selection posture: Resumes and LinkedIn are hypotheses. Evidence is signal.

That’s the only way to protect interview time for candidates who are likely to create outcomes—and to keep “precision GTM hiring” from turning into a charisma contest.

Sources reviewed for this insight: LinkedIn, Financial Times, NIST.gov, The Carbon Cut, a16z.com (Fast Company/Semafor), Optif.ai
