
Thread: Hang clean vs power clean - RFD and GRFs

  1. #11
    Join Date
    Jun 2010
    Location
    Yesler's Palace, Seattle, WA
    Posts
    13,992

    Quote Originally Posted by Sullydog View Post
    Well, yeah, not to mention that the endpoints studied had nothing to do with actual athletic performance. Which version actually helps these "21-year-old elite rugby players" play rugby better? That's the real question. No answers here.

    Also, look at the confidence intervals; they're huge. This means that the actual mean midthigh clean could have been as low as about 10,000 N/s and the actual mean standard power clean could have been as high as 11,750 N/s. And what's the practical difference of, say, 100-150 N in Fz for a rugby player? I'd guess not much, but I don't know. And they don't tell us. And without reading the paper, I don't know something else: did they calculate RFD off the entire movement? Because, you know, a clean from the floor takes longer than a hang clean, and since t is the denominator, and small, that's a big deal. I'd like to think the investigators wouldn't overlook such a glaring bias, but I've seen far more heinous things in the literature, and it wouldn't surprise me. Not to mention that from the abstract we don't know if these kids were even doing proper cleans.
    It also appears to me that they might have only achieved statistical significance by pooling the data groups in that particular way... and Bonferroni analysis is not the best way to perform a post hoc test after ANOVA, by a long shot.
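    For reference, if they ran all six pairwise comparisons among the four variations (my assumption; the abstract doesn't say how many comparisons were made), Bonferroni forces each individual comparison to clear a much stricter threshold:

    Code:
        # Rough sketch of what a Bonferroni correction does to alpha here.
        # The six pairwise comparisons among four lift variations is my guess,
        # not something stated in the abstract.
        n_conditions = 4
        n_comparisons = n_conditions * (n_conditions - 1) // 2   # = 6
        alpha = 0.05
        print(f"per-comparison alpha: {alpha / n_comparisons:.4f}")   # ~0.0083

    That's the sense in which it's conservative: with a small sample, demanding p < 0.0083 on every comparison throws away a lot of power.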

  2. #12
    Join Date
    Feb 2011
    Location
    Farmington Hills, MI
    Posts
    4,689


    Good observation, tertius.

  3. #13
    Join Date
    Apr 2010
    Location
    Orlando
    Posts
    2,933


    Quote Originally Posted by Sullydog View Post
    Also, look at the confidence intervals; they're huge. This means that the actual mean midthigh clean could have been as low as about 10,000 N/s and the actual mean standard power clean could have been as high as 11,750 N/s.
    Confidence intervals were not reported (in the abstract). We can discuss the merits of the findings, but there is no need to speculate about what the actual mean of the power clean was; it was stated in the abstract (8,839.7 ± 2,940.4 N·s⁻¹).

  4. #14
    Join Date
    Feb 2011
    Location
    Farmington Hills, MI
    Posts
    4,689


    The mean is clearly reported, Limie, as you say. But the mean doesn't tell us the whole story. When the authors report (8,839.7 ± 2,940.4 N·s⁻¹), I assume that they're reporting the mean +/- either (a) a 95% confidence interval (since alpha was set to p < .05 in this study) or (b) an error such as the standard dev or the standard error. But I've just pulled the paper, and they don't actually specify what error statistic they're using. The data are presented as bar graphs (not box-and-whisker plots, which would have given a better picture) with some sort of error bar, but the legends do not specify whether the bars indicate std dev, std err, CI, or whatever. So I'll give up and concede that it may very well not be CI. But they are reporting a range of values around the mean, however they derived that range, and the ranges they reported look pretty huge to me. Help me out here if you can; I'm statistically challenged.

    Also, in regard to my other critique re RFD, here is the sum total of the methods described in the paper:

    Instantaneous RFD was determined by dividing the difference in consecutive vertical force readings by the time interval between the readings.

    This is outside my field of study, but it seems pretty skimpy. Unless I missed something, I don't see any indication that the investigators corrected for the fact that t would be longer for a pull from the floor than for a midthigh or hang clean. Wouldn't they have to, to obtain a meaningful comparison?
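    Just to illustrate how much the choice of t matters, here is a toy calculation with a completely made-up force trace (none of these numbers come from the paper); the "instantaneous" value and the whole-pull average land in very different places:

    Code:
        # Toy force-time trace (invented numbers) to show how the RFD you get
        # depends on which delta-t you divide by.
        import numpy as np

        dt = 0.001                                      # pretend 1000 Hz force plate
        t = np.arange(0.0, 0.5, dt)                     # a made-up 0.5 s pull
        fz = 800 + 1400 * np.sin(np.pi * t / 0.5) ** 2  # fake vertical force, N

        inst_rfd = np.diff(fz) / dt                     # consecutive readings, per the paper's wording
        avg_rfd = (fz.max() - fz[0]) / t[fz.argmax()]   # whole-movement average

        print(f"peak instantaneous RFD: {inst_rfd.max():,.0f} N/s")   # ~8,800
        print(f"whole-pull average RFD: {avg_rfd:,.0f} N/s")          # ~5,600

    A pull from the floor stretches the denominator in the second calculation but not the first, which is exactly why I want to know which one they used.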

  5. #15
    Join Date
    Nov 2009
    Posts
    1,130


    Quote Originally Posted by Sullydog View Post
    The mean is clearly reported, Limie, as you say. But the mean doesn't tell us the whole story. When the authors report (8,839.7 ± 2,940.4 N·s⁻¹), I assume that they're reporting the mean +/- either (a) a 95% confidence interval (since alpha was set to p < .05 in this study) or (b) an error such as the standard dev or the standard error. But I've just pulled the paper, and they don't actually specify what error statistic they're using. The data are presented as bar graphs (not box-and-whisker plots, which would have given a better picture) with some sort of error bar, but the legends do not specify whether the bars indicate std dev, std err, CI, or whatever. So I'll give up and concede that it may very well not be CI. But they are reporting a range of values around the mean, however they derived that range, and the ranges they reported look pretty huge to me. Help me out here if you can; I'm statistically challenged.
    Pretty sure those are 95% confidence intervals. In fact:
    No significant (p > 0.05) differences were found when comparing the RFD between either the midthigh power clean (14,655.8 ± 4,535.1 N·s⁻¹) and the midthigh clean pull (15,320.6 ± 3,533.3 N·s⁻¹), and between the hang-power clean (9,768.9 ± 4,012.4 N·s⁻¹) compared to the power clean (8,839.7 ± 2,940.4 N·s⁻¹)
    By the way, they define 'hang power clean' as a power clean from the knees, and 'mid-thigh power clean' as a clean from the jumping position; no stretch reflex.

  6. #16
    Join Date
    Apr 2010
    Location
    Orlando
    Posts
    2,933


    Quote Originally Posted by Sullydog View Post
    The mean is clearly reported, Limie, as you say. But the mean doesn't tell us the whole story. When the authors report (8,839.7 ± 2,940.4 N·s⁻¹), I assume that they're reporting the mean +/- either (a) a 95% confidence interval (since alpha was set to p < .05 in this study) or (b) an error such as the standard dev or the standard error. But I've just pulled the paper, and they don't actually specify what error statistic they're using. The data are presented as bar graphs (not box-and-whisker plots, which would have given a better picture) with some sort of error bar, but the legends do not specify whether the bars indicate std dev, std err, CI, or whatever. So I'll give up and concede that it may very well not be CI. But they are reporting a range of values around the mean, however they derived that range, and the ranges they reported look pretty huge to me. Help me out here if you can; I'm statistically challenged.
    There are a fair few reasons to question the validity and/or applicability of this paper, but I don't understand the issue you've taken with it. You complained about not knowing the actual mean of the groups, yet the mean was unambiguously reported. You stated the confidence intervals were huge, yet the variability around the mean is clearly not reported as a CI.

    I don't mean to get on your case especially, but sometimes on here the criticism of anything considered mainstream in health or exercise gets out of hand. Rip has done a good job of bringing up things that should be considered when evaluating the usefulness of the information presented by this paper. Without having thought about it too much, I'd say your point about the difference in t might also be a decent one (although, not having read the paper, I suspect that in this instance t does not refer to the entire duration of the pull but to a predefined interval between successive measurements taken by the plate).

    As for the variability issue, I don't know enough about force plates and their use to be familiar with the measures reported. The spread is roughly 25-40% of the mean, depending on the group, which could indeed be large, but for some measures it wouldn't necessarily be. If someone with more knowledge about this than either of us could say either way, that might be another avenue of criticism of these findings to explore. Such a thing could be suggestive of poor testing procedures. However, the main worry with excess variability is type II error, which by definition is not a problem in this report, as they still had sufficient statistical power to observe differences among the groups (the same could be said about the use of the overly conservative Bonferroni correction for multiple comparisons).
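    For what it's worth, here is that spread as a fraction of the mean for each group, using the numbers quoted from the abstract above (quick arithmetic only; we still don't know what the ± figure actually represents):

    Code:
        # Reported mean and +/- figure for each group, from the abstract quoted above.
        groups = {
            "midthigh power clean": (14655.8, 4535.1),
            "midthigh clean pull":  (15320.6, 3533.3),
            "hang power clean":     (9768.9, 4012.4),
            "power clean":          (8839.7, 2940.4),
        }
        for name, (mean, spread) in groups.items():
            print(f"{name:22s} {100 * spread / mean:3.0f}% of the mean")
        # roughly 31%, 23%, 41%, and 33% respectively -- not small, whatever it is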

  7. #17
    Join Date
    Feb 2011
    Location
    Farmington Hills, MI
    Posts
    4,689


    You complained about not knowing the actual mean of the groups, yet the mean was unambiguously reported. You stated the confidence intervals were huge, yet the variability around the mean is clearly not reported as a CI.
    The mean is reported as the mean plus/minus a range of values. What is that range? For a number of reasons, I assumed it was CI even though it wasn't labeled as such; you called me on it. Fair enough; I think you were right to do so, because my assumption was just that. So I checked. And they don't tell us what that range of values represents, nor do they explain the error bars on their graphical presentation of the data. That's just bullshit.

    I suspect in this instance t does not refer to the entire duration of the pull, but probably a predefined interval between successive measurements taken by the plate
    Except they don't tell us that. This would have been easy; even easier and more elegant would have been to include representative force-time curves for each group as figures and show us where they cut off t. But on my reading they don't tell us how they defined and normalized the interval at all. And that's just bullshit.

    And my primary objection to the paper remains that the endpoint is a laboratory endpoint, not a performance endpoint. Not actually bullshit, but it means the paper just isn't that interesting. Really, what are we to do with this information, especially since it's unclear how they obtained it and reported it?

    Limie, I completely understand what you're saying about people piling on over the literature and the medical establishment. And sometimes Rip's bias against my profession stings; I'll admit it. But the bitter truth is that the biomedical literature is 90% shit by weight. I knew that long before I stumbled into this community.

  8. #18
    Join Date
    Jun 2010
    Location
    Yesler's Palace, Seattle, WA
    Posts
    13,992

    Limie, your points are fairish.

    However, I cannot take seriously a researcher who doesn't properly label their goddamn axes, and doesn't tell me what they're reporting when reporting intervals.
    I would have assumed that those were mean and std dev, but it could just as easily be standard error. Same with the whiskers on the graphs. Not to mention that N here = 11, which is, I assume, why they used the particular post hoc test they did.
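    Just to show how much it matters which one it is, here is what the power clean ± would look like under each reading, assuming (and it is only an assumption) that the reported 2,940.4 is a standard deviation with N = 11:

    Code:
        # Sketch only: treats the reported +/- 2,940.4 as an SD with N = 11,
        # which the paper never actually confirms.
        from math import sqrt
        from scipy import stats

        sd, n = 2940.4, 11                          # reported +/- figure, sample size
        sem = sd / sqrt(n)                          # standard error of the mean
        ci_half = stats.t.ppf(0.975, n - 1) * sem   # half-width of a 95% CI

        print(f"as SD:       +/- {sd:,.0f} N/s")       # 2,940
        print(f"as SEM:      +/- {sem:,.0f} N/s")      # ~890
        print(f"as a 95% CI: +/- {ci_half:,.0f} N/s")  # ~1,980

    Three very different pictures from the same pair of numbers, and they don't tell us which one we're looking at.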

    And Sully's point about the curves is excellent. That would have been a far better way to present the comparisons, particularly since we'd get to see the area under the curve as well as instantaneous force. Again, Rip has picked apart why it's a bad paper in many ways, but seriously... stuff like poorly labeled figures (and no tables? What the shit? All those means and intervals broken across lines are just unreadable and difficult to compare. It's like it's awful on purpose) shouldn't get past reviewers.

    It doesn't speak well of the state of a research community when this is the case.
