Tuesday, 31 July 2012

Retraction

An interesting blog that reports on flawed papers retracted from journals. I wonder what proportion of published bad science actually ends up retracted. It must be quite small, given that detecting flaws is not easy and the authors are surely not interested in retracting their own work...

Paying Survey Respondents

Here is an interesting article about surveys in the US and whether or not respondents should be paid.

Most of the time respondents are paid in marketing research, and there has long been a concern about whether or not paying respondents will bias the surveys. In official surveys, respondents are truly selected at random, so they do not choose to participate; they spend their time filling in a survey they did not ask for, and therefore it seems to make sense that they should be compensated. The argument goes that this is also fair: all taxpayers pay for these official surveys, and the unlucky ones who are selected, the real contributors, should be rewarded, for they are the ones who make the official survey results possible.

Now, in marketing research, and even in opinion polls, things are different. The sample is usually online and respondents choose to participate, often for the reward offered. If the folks attracted by these rewards are somehow different from the others, then we may have a bias in the survey, one which is quite complicated to quantify. While the reward may bring selection bias, it seems fair to assume that offering no reward would not be fair either, and the surveys would hardly be possible. Besides, it also seems that, in any case, paid respondents will answer surveys more reliably than unpaid ones, as receiving money gives them a sense of commitment.
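
To make the selection-bias worry concrete, here is a minimal simulation sketch with entirely made-up numbers (the 30% share, the opinion means, and the opt-in rates are all hypothetical): if the people attracted by the reward both hold different opinions and opt in at much higher rates, a large self-selected panel can still miss the population mean.

```python
# Minimal sketch of selection bias in a self-selected online panel (hypothetical numbers).
import numpy as np

rng = np.random.default_rng(2012)
N = 100_000
reward_motivated = rng.random(N) < 0.30            # assume 30% would join mainly for the reward

# Hypothetical opinion score: reward-motivated people answer differently on average.
opinion = np.where(reward_motivated,
                   rng.normal(6.0, 1.5, N),
                   rng.normal(5.0, 1.5, N))

true_mean = opinion.mean()

# Online panel: reward-motivated people are far more likely to opt in.
p_join = np.where(reward_motivated, 0.20, 0.02)
joined = rng.random(N) < p_join
panel_mean = opinion[joined].mean()

print(f"population mean:          {true_mean:.2f}")
print(f"self-selected panel mean: {panel_mean:.2f}  (bias: {panel_mean - true_mean:+.2f})")
```

With these numbers the panel overstates the average opinion by about half a point, no matter how many respondents it recruits, because the bias comes from who joins, not from how many.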

As the online era takes over and changes sampling practice, we statisticians need to be more and more creative to handle the new challenges involved in releasing reliable results.

Saturday, 21 July 2012

Observed Power

I want to comment quickly on an interesting paper about observed power published in The American Statistician in 2001.

Observed power is calculated after the fact, when the sample is being analyzed and the results are at hand. It is calculated just as one would usually calculate power, but using the observed difference in means (considering a test of means) and the observed variability. Usually the observed power is calculated when the null hypothesis fails to be rejected, likely because the researcher wants to have an idea of whether s/he can interpret the result as evidence of the truthfulness of the null hypothesis. In these cases, the higher the observed power, the more one would take the failure to reject as acceptance. As the paper well advises, this type of power calculation is nonsense simply because it is directly tied to the p-value: the lower the p-value, the higher the observed power. Therefore, if two tests fail to reject the null, the one with the lower p-value (more evidence against the null) will have the higher observed power (more evidence in favour of the null, according to the usual interpretation above). So this type of power calculation is not only useless but leads to misleading conclusions.
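
To see the relationship concretely, here is a minimal sketch of my own (not taken from the paper) for a one-sided z-test: "observed power" is obtained by plugging the observed standardized effect in as if it were the true effect, so it ends up being just a monotone transformation of the p-value.

```python
# Observed power as a monotone function of the p-value (one-sided z-test sketch).
from scipy.stats import norm

alpha = 0.05
z_alpha = norm.ppf(1 - alpha)          # critical value for the one-sided test

def p_value(z_obs):
    # one-sided p-value for the observed standardized effect
    return 1 - norm.cdf(z_obs)

def observed_power(z_obs):
    # power computed by treating the observed effect as if it were the true effect
    return norm.cdf(z_obs - z_alpha)

for z in (0.5, 1.0, 1.5):              # all three fail to reject at alpha = 0.05
    print(f"z = {z:.1f}  p-value = {p_value(z):.3f}  observed power = {observed_power(z):.3f}")
```

Running this shows the p-value falling and the observed power rising together: the two numbers carry the same information, so the observed power cannot add independent support for the null.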

I have lately involved myself in some debates about the role of statisticians, especially in teaching statistics, spreading the power of the tools we use, and correcting the many misuses of statistics, whether by statisticians or not. I believe this is the sort of information where we need to make a difference; this is the sort of information that distinguishes those who press buttons from those who do real statistics.