"And the precision of this measure is very much tied to the size of the population that you are studying and the number of exposed people," she explained.
"In other words," she said, "if we were to go into a large population and do the same study a hundred times, how many times out of a hundred would we find the same exact answer?"
"It is similar to tossing a coin," she noted. "If you are looking at the proportion of heads and tails in a coin toss, and you toss that coin a thousand times ... you are going to come up with that 50/50 proportion pretty much all the time."
"That's a very precise answer," she pointed out.
"So if you think about it that way," Kramer said, "the larger the sample size, the larger the number of people that you study, the more precise your study estimate of that relative risk is."
"And we estimate the precision of this relative risk by calculating something called confidence interval," she told the jury. "If you were to repeat this study, let's say 95 times out of a hundred, what would that range be?"
For instance, if a study finds a relative risk of 2 and the researchers calculate a 95 percent confidence interval running from 1.5 to 2.5, the true relative risk would be expected to fall somewhere in that range. That "means 95 trials out of a hundred would generate results in this range," Kramer stated.
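Again as an illustration rather than part of the testimony, the usual large-sample method computes such an interval on the log of the relative risk. The sketch below uses hypothetical counts chosen so the relative risk comes out at 2; the function name and inputs are assumptions for the example, not figures from the case.

```python
import math

def relative_risk_ci(exposed_cases, exposed_total,
                     unexposed_cases, unexposed_total, z=1.96):
    """Point estimate and 95% CI for a relative risk, log-scale approximation."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    rr = risk_exposed / risk_unexposed
    # Standard error of ln(RR) under the large-sample normal approximation
    se = math.sqrt(1 / exposed_cases - 1 / exposed_total
                   + 1 / unexposed_cases - 1 / unexposed_total)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Hypothetical counts: 60 cases among 1,000 exposed, 30 among 1,000 unexposed
print(relative_risk_ci(60, 1000, 30, 1000))  # RR = 2.0, CI roughly (1.3, 3.1)
```

With ten times as many people and the same underlying risks, the interval around 2 would be far narrower, which is Kramer's point about sample size and precision.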
A finding that is not statistically significant should not be discarded, she said. The "practice of statistical significance testing has been very much rejected in epidemiology because it was never developed really to study health or biomedical or human health problems."