Evaluating Statistical Claims Pattern - Spot the Traps
Digital SAT® Math — Evaluating Statistical Claims
Spotting common distractors like subsets, supersets, or wrong populations
These questions describe a study where the sampling method is flawed — and your job is to identify what's wrong. The most common setup: an organization wants to know what an entire population thinks, but it only surveys a biased subgroup.
The Core Pattern — Biased Sampling
A study claims to represent Group X but only samples from a subset of X that is likely biased. For example:
- Wants: all residents' opinions on a gym → Surveys: only gym members
- Wants: all commuters' views on toll lanes → Surveys: only toll-road users
- Wants: all residents' views on hunting → Surveys: only licensed hunters
In every case, the sample is drawn from people who already have a strong interest in the topic, making the results unrepresentative.
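A quick simulation makes the pattern concrete. The numbers below are made up for illustration (a hypothetical town where gym members favor the gym far more than non-members); the point is that a sample drawn only from the interested subgroup badly overstates support, while a random sample from the whole population does not.

```python
import random

random.seed(0)

# Hypothetical population of 10,000 residents: 1,500 gym members, 8,500 non-members.
# Assume members favor the gym 90% of the time, non-members only 30% (made-up rates).
population = ([("member", random.random() < 0.9) for _ in range(1500)] +
              [("non-member", random.random() < 0.3) for _ in range(8500)])

def pct_favor(sample):
    """Percent of people in the sample who favor the gym."""
    return 100 * sum(favor for _, favor in sample) / len(sample)

# Biased sample: 400 gym members only (the trap setup from the examples above).
members = [p for p in population if p[0] == "member"]
biased = random.sample(members, 400)

# Random sample: 400 residents drawn from the whole population (the correct fix).
rand = random.sample(population, 400)

true_rate = pct_favor(population)
print(f"True support:  {true_rate:.1f}%")
print(f"Biased sample: {pct_favor(biased):.1f}%")  # far above the true rate
print(f"Random sample: {pct_favor(rand):.1f}%")    # close to the true rate
```

Note that making the biased sample larger would not help: every extra respondent still comes from the pro-gym subgroup, so the estimate stays inflated.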
Example — Biased Subgroup
A university wanted to assess all students' opinions about building an esports arena. It surveyed 400 students who are members of the campus gaming club. The majority favored the arena. Which is true about this survey?
A) It shows that the majority of all university students favor the arena.
B) The sample is biased because it is not representative of all students.
C) The sample should have consisted entirely of non-gamers.
D) The sample should have included more gaming-club members.

The target population is all students, but only gaming-club members were surveyed. Gamers are more likely to favor an esports arena.
Answer: B
The Four Classic Traps
- Overgeneralization (Choice A): Claims the biased result applies to the whole population. Wrong — a biased sample cannot support conclusions about the full group.
- Sample the opposite group (Choice C): Suggests surveying only non-gamers instead. This would be equally biased, just in the other direction.
- Add more of the same biased group (Choice D): A larger biased sample is still biased. Size doesn't fix a fundamental sampling flaw.
- The 50/50 misconception: In questions about self-selected polls, one wrong answer typically claims the results are invalid because they aren't split 50/50. In fact, a valid survey can show any distribution of responses.
Example — Self-Selected Poll
A radio host asked listeners to vote via an app: "Do you support the transit funding plan?" Results: 34% Yes, 65% No. Why are these results unlikely to represent all residents?
A) The percentages don't add to 100%.
B) There weren't 50% Yes and 50% No.
C) Respondents were not a random sample of residents.
D) The poll was open for only 15 minutes.

This is a voluntary response poll — only listeners who felt strongly enough to open the app responded. They're self-selected, not randomly sampled.
Answer: C
Choice A is wrong because percentages can fail to sum to 100% due to rounding or a third option. Choice B reflects a common misconception that fair polls must be 50/50. Choice D is a procedural complaint that doesn't address the core issue.
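A small simulation (with made-up numbers) shows how a voluntary response poll can invert the true majority: if opponents feel strongly and are assumed to respond at three times the rate of supporters, "No" can win the poll even when most residents support the plan.

```python
import random

random.seed(1)

# Hypothetical town: 60% of 10,000 residents actually support the plan.
# Assume supporters open the app and vote 5% of the time, while opponents,
# who feel strongly, vote 15% of the time (both rates are made up).
yes_votes = sum(1 for _ in range(6000) if random.random() < 0.05)
no_votes = sum(1 for _ in range(4000) if random.random() < 0.15)
total = yes_votes + no_votes

print(f"Poll result: {100 * yes_votes / total:.0f}% Yes, "
      f"{100 * no_votes / total:.0f}% No")
# Despite majority support in the population, the self-selected
# respondents make "No" appear to win by a wide margin.
```

The flaw here is exactly the one Choice C names: respondents chose to participate, so the sample is not random, and no percentage from it can be trusted to describe all residents.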
What to Do on Test Day
- Ask: "Who was the target population, and who was actually sampled?" If they don't match, the sample is biased.
- The fix for bias is always random sampling from the target population — not surveying a different biased group and not surveying more of the same biased group.
- For self-selected polls (website, app, QR code at an event), the problem is always that respondents chose to participate rather than being randomly selected.
- "Percentages don't add to 100%" is never the right answer. Rounding or additional response options can cause this.
- "Results aren't 50/50" is never the right answer. Valid polls can show any distribution.