Murray believes that even reporters who attend the most prestigious colleges often write stories with statistics that are incomplete, misleading, or just plain wrong.
One of the most common journalistic errors is to compare two sets of percentages that are not actually comparable because they are calculated from different base numbers. Such comparisons are completely misleading when the figures are placed side by side in a story without explanation.
For example, consider the following paragraph from a story published in The New York Times on November 8 ("Racial Imbalance Persists at Elite Public Schools"):
"Among the 21,490 public school students who last year took the exam, the single gateway to eight high schools, 6 percent of blacks and 7 percent of Hispanics were offered admission, compared with 35 percent of Asians and 31 percent of white students. The disparities were the worst at Stuyvesant, where 2 percent of blacks, 3 percent of Hispanics, 24 percent of whites and 72 percent of Asians were accepted. (Over all, 1 in 5 test-takers is offered a spot; racial data is not available on private school students.)"
But is that really what happened? Did Stuyvesant actually admit 72 percent of its Asian applicants? No. The two sets of percentages use different base numbers. The first set gives the acceptance rate for each ethnic group: the percentage of that group's test-takers offered admission to any of the eight schools. Notice that these four percentages need not add up to (or even approximate) 100%. Although essentially all of the admitted students come from the four groups (those are the racial categories the City uses), the acceptance rate for any one group can run anywhere up to 100%, regardless of the rates for the other groups.
The second set of percentages describes the composition of the group offered admission to Stuyvesant: 72% Asian, 24% white, and so on. Those figures must add up to roughly 100%. (Here they total 101%, likely because a few students listed more than one ethnicity.) But that does not mean that 72% of Asian test-takers were accepted at Stuyvesant, as the reporter wrote.
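To make the distinction concrete, here is a minimal sketch with made-up numbers (not the Times's actual figures): suppose 1,000 Asian students took the exam, 350 of them received an offer from some school, and Stuyvesant's admitted class of 900 included 650 Asian students.

```python
# Hypothetical numbers for illustration only; not the actual New York City figures.
asian_test_takers = 1000
asian_offers_anywhere = 350        # offers to Asian students across all eight schools
stuyvesant_class_size = 900        # total students offered admission to Stuyvesant
asian_in_stuyvesant_class = 650    # Asian students among Stuyvesant's offers

# First kind of percentage: the group's acceptance rate (base = Asian test-takers).
acceptance_rate = asian_offers_anywhere / asian_test_takers

# Second kind of percentage: the group's share of one school's admitted class
# (base = everyone offered admission to Stuyvesant).
share_of_class = asian_in_stuyvesant_class / stuyvesant_class_size

print(f"Acceptance rate for the group: {acceptance_rate:.0%}")        # 35%
print(f"Share of Stuyvesant's admitted class: {share_of_class:.0%}")  # 72%
```

Both figures can be true at the same time because they answer different questions; only the second kind of percentage has to sum to roughly 100% across the four groups.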
The paragraph's final figure, that 1 in 5 test-takers is offered a spot, uses the same base as the first set of percentages, which confuses things further.
How, then, would someone reading the story realize that the author was jumbling percentages, and make sense of the story for herself? In two ways. First, if the reader notices that the two sets of percentages behave differently (one totals approximately 100% while the other does not), she would likely recognize that an apples-to-oranges comparison was being made.
Second, a person reading the entire article should realize that it makes no sense for 72% of Asian test-takers to have been accepted at Stuyvesant when only 35% of Asians were accepted overall, especially since Stuyvesant has the highest admissions standards of the eight schools. In fact, Stuyvesant would have accepted well below 35% of its Asian applicants, even though that group made up 72% of the students it accepted.
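A rough back-of-the-envelope check makes the same point. The sketch below uses the article's 21,490 test-takers and the 1-in-5 offer rate, but the size of Stuyvesant's admitted class and the Asian share of test-takers are assumptions chosen for illustration, not reported figures.

```python
# Figures reported in the article.
test_takers = 21_490
total_offers = test_takers // 5        # "1 in 5 test-takers is offered a spot" -> ~4,298 offers

# Assumptions for illustration only (not reported in the article).
stuyvesant_offers = 950                # assumed size of Stuyvesant's admitted class
asian_share_of_test_takers = 0.30      # assumed fraction of test-takers who are Asian

asian_test_takers = asian_share_of_test_takers * test_takers      # ~6,447
asian_stuyvesant_offers = 0.72 * stuyvesant_offers                # 72% of Stuyvesant's offers -> ~684

# Stuyvesant's acceptance rate for Asian test-takers under these assumptions.
rate = asian_stuyvesant_offers / asian_test_takers
print(f"{rate:.0%}")                   # about 11%: far below 35%, and nowhere near 72%
```

Under any plausible assumptions the result is the same: a single school handing out only a fraction of the roughly 4,300 total offers cannot admit anywhere near 35% of a group's test-takers, let alone 72%.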
We must all become more wary consumers of statistical information and insist that our media do a better job vetting it.