When the numbers can't be trusted
Bad survey data doesn't just sit there. It makes decisions for you.
The trouble with unreliable or invalid data isn't that it looks suspicious. It looks fine. It moves through the deck, into the strategy, into next quarter's budget. Only later does anyone realize the conclusion was built on noise.
Scenario one
The "engagement is up" report
An HR team announces a six-point lift in employee engagement. The CEO praises the wellness initiative. Six months later, the best engineers leave anyway.
The survey was measuring how comfortable people felt being honest, not how engaged they actually were.
The cost: a year of investment in the wrong intervention, and a leadership team that no longer believes the people data.
Scenario two
The brand tracker that swings
A marketing team reports a 14-point NPS jump after a campaign. Leadership doubles the budget. The next quarter, NPS drops 19 points with no change to the campaign.
The instrument bounced because the question wording was tweaked between waves and the sample mix shifted. Nothing in the brand actually changed.
The cost: ad spend chasing a phantom signal, and a board that stops trusting any tracker number.
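The sample-mix problem is easy to demonstrate. The sketch below uses hypothetical numbers: two customer segments whose own NPS never moves, yet the headline score swings by nearly 100 points purely because the waves sample the segments in different proportions.

```python
# Illustrative sketch with made-up scores: same segments, same sentiment,
# different sample mix between waves -> a large "change" in headline NPS.

def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

# Two segments with stable, very different sentiment.
enterprise = [9, 10, 9, 10, 8]   # mostly promoters
smb        = [6, 5, 7, 4, 6]     # mostly detractors

# Wave 1 over-samples enterprise; wave 2 over-samples SMB.
wave1 = enterprise * 4 + smb * 1
wave2 = enterprise * 1 + smb * 4

print(nps(enterprise), nps(smb))  # per-segment scores: 80.0 and -80.0, unchanged
print(nps(wave1), nps(wave2))     # headline: 48.0 then -48.0, pure mix effect
```

Neither segment moved, yet a tracker reporting only the headline would show a 96-point collapse. Weighting waves to a fixed segment mix is the standard defense.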
Scenario three
The accreditation rejection
A program submits course-evaluation data to its accreditor. The reviewer asks a single question: "What's the reliability of this instrument?" Nobody knows.
The submission goes back for another year of data collection, this time with proper psychometric reporting.
The cost: a delayed accreditation cycle, faculty re-doing surveys they thought were done, and a director's hardest year on the job.
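The reviewer's question has a standard, computable answer. A minimal sketch of the usual internal-consistency statistic, Cronbach's alpha, for a multi-item scale; the response matrix is hypothetical (rows are respondents, columns are survey items):

```python
# Cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
# Hypothetical 1-5 Likert data; real reporting would use the actual instrument.

from statistics import pvariance

def cronbach_alpha(rows):
    """Internal-consistency reliability for a k-item scale."""
    k = len(rows[0])                               # number of items
    items = list(zip(*rows))                       # transpose to per-item columns
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(r) for r in rows])  # variance of respondents' totals
    return k / (k - 1) * (1 - item_var / total_var)

responses = [  # 6 respondents x 4 items
    [4, 4, 5, 4],
    [3, 3, 3, 2],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [1, 2, 2, 1],
]

print(round(cronbach_alpha(responses), 2))  # prints 0.97; 0.7 is the common floor
```

Reporting this one number per scale, per wave, is most of what the accreditor was asking for.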