Originally Posted by
Jonathon Sullivan
This is a very important observation, and one I have been known to bitch about. The finding of statistical significance does not necessarily mean clinical or practical significance.
But, perhaps because I'm statistically challenged, I'm under the very strong impression that this phenomenon (stat sig in the absence of pract sig) does not require large sample sizes to occur.
For example, if I test two exercise protocols, A and B, with a small cohort, say 10 bros per sample, I may very well find that A bros report DOMS pain of 4.3 on a VAS, while B bros report DOMS pain of 4.7. If I look over the data, I may find that this difference is not driven by outliers, that my data are normally distributed, and that p < .05. In other words, statistically speaking, there's a real but tiny difference in the data from my sample (which might very well revert toward the mean with a larger sample). The difference is statistically significant. But definitely not practically significant.
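Just to make the point concrete: here's a quick sketch (stdlib Python, with made-up VAS scores I invented to match the numbers above) showing how a 0.4-point difference between two n = 10 groups can clear the p < .05 bar when the within-group spread is small. The t-test here is a plain pooled two-sample t computed by hand, not anything from a real study.

```python
# Hypothetical sketch: hand-rolled two-sample t-test on invented DOMS scores.
from statistics import mean, variance

# Made-up data: n = 10 per group, tightly clustered around the group means.
a = [4.2, 4.3, 4.4, 4.3, 4.2, 4.4, 4.3, 4.3, 4.2, 4.4]  # protocol A, mean 4.3
b = [4.6, 4.7, 4.8, 4.7, 4.6, 4.8, 4.7, 4.7, 4.6, 4.8]  # protocol B, mean 4.7

n = len(a)
# Standard error of the difference for two equal-sized samples
se = ((variance(a) + variance(b)) / n) ** 0.5
t = (mean(b) - mean(a)) / se

# Two-tailed critical t for df = 18 at alpha = .05 is about 2.101.
print(round(t, 2))                   # far above 2.101, so p < .05
print(round(mean(b) - mean(a), 2))   # yet the raw difference is only 0.4 VAS points
```

So the "significance" comes entirely from the tiny spread, not from a difference anyone would care about clinically. With noisier (more realistic) scores, the same 0.4-point gap at n = 10 would usually fail to reach significance.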
But please correct me if I'm wrong. I talk a good game, but my grasp of statistics, at the end of the day, is pretty rudimentary. When I did research, I always had to farm that shit out.
All of which is neither here nor there, I guess.