
Thread: Mark - bad science

  1. #51
    Join Date
    May 2016
    Posts
    357

    Default

    Quote Originally Posted by cfreetenor View Post
    I was thinking the trajectory of progress for the phase one 5s and 3s and the phase two 5s and 3s would be interesting to observe.

    It would only be two groups, but four sets of data. But scientifically, I can see how the deconditioning period would introduce an unnecessary layer of variance even if the trial is randomized.

    So maybe take out the deconditioning period, and observe the second phase for interest. And have the groups split into four, with one from each continuing on their current programming. If one group progresses faster than the others in that phase in a statistically significant way, I think that would be scientifically significant. Same for the first phase, honestly. But the real core issue brought up is the relationship between rep volume and strength - repeated submaximal loads and one-rep maximums. I think what is called for before progress like that can continue is a faithful and confident average derivation of, say, a 5RM and 3RM. Does one exist with any real integrity?
    Yes, you can have all kinds of designs depending on what you're interested in researching or what hasn't been researched sufficiently: washout periods (of arbitrary length) before and/or after mid-term (X - 3 - X - 5), the four-group design I posted, even more groups with three intervention blocks (3-3-3; 3-3-5; 3-5-5, etc.), a control group with higher reps, and so on. There is no fundamental theoretical impossibility in studying something - every field of science has its obstacles, and yes, I know that, surprisingly, whatever field you pick, people think that, in their favorite hobby of all places, science "is not able to investigate our field because..." (you can go to hi-fi audio forums, running forums, automobile forums - the same arguments), but that's more about the uneasiness of one's hobby being disturbed by cold, evil science. So yes... strength training doesn't preclude itself from being studied either.

    The thing is that knowledge is accumulated across multiple studies; that's why you don't need to manipulate EVERY factor in EVERY study - a reproach which is often made: "Why didn't they have females/younger/older/more experienced/less experienced/SS-program style/rest-pause schemes/high volume/blue-eyed/African American/different sexual leanings in the study? They didn't - I don't buy it, and the findings surely don't apply to me!" Well, one factor after another, and then look at all the evidence.

    Re: acupuncture

    Quote Originally Posted by Jonathon Sullivan View Post
    There is a quite glaring problem with this approach.
    An interesting resource: the search results for "acupuncture" at Science-Based Medicine

  2. #52
    Join Date
    Apr 2010
    Location
    Orlando
    Posts
    2,933

    Default

    Quote Originally Posted by Marenghi View Post

    The thing is that knowledge is accumulated across multiple studies; that's why you don't need to manipulate EVERY factor in EVERY study
    This is the opposite of what I said. Someone who claims the above simply doesn't understand what it is like to conduct a study. The issue I raised is that in resistance training and performance research you cannot manipulate just one factor. That does not mean that it is a field that is precluded from scientific investigation, but that the results of said investigations are so lacking in generalizability as to be practically (as in, to put into practice) worthless.

  3. #53
    Join Date
    May 2016
    Posts
    357

    Default

    You made good points.

    Quote Originally Posted by mgilchrest View Post
    How many people are involved in said hypothetical study? In order to get good convergence on a problem with this level of random variation across multiple dimensions, N will be rather large.
    Indeed, that's why I also wrote:
    The thing is that knowledge is accumulated across multiple studies; that's why you don't need to manipulate EVERY factor in EVERY study.
    BTW, I don't know if you have a statistical background, but people without one often overestimate the N that is needed: someone even told me that you needed to have hundreds of participants... These days, a power analysis for estimating N should be standard, and this analysis should be reported in the actual study.
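
    To make that concrete, here is a minimal sketch of such a power analysis in Python (statsmodels), assuming a hypothetical medium effect size (Cohen's d = 0.5) for an independent-groups comparison; the numbers are illustrative only, not from any particular study:

    Code:
    # Minimal power-analysis sketch with illustrative numbers: how many
    # participants per group are needed to detect an assumed medium
    # between-group difference (d = 0.5) with 80% power at alpha = 0.05?
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(
        effect_size=0.5,   # assumed standardized mean difference (Cohen's d)
        alpha=0.05,        # two-sided significance level
        power=0.80,        # desired probability of detecting the effect
        ratio=1.0,         # equal group sizes
    )
    print(f"required n per group: {n_per_group:.0f}")  # roughly 64, not hundreds

    For a medium effect that comes out around 64 per group; you only get into the hundreds when you are chasing small effects.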


    Quote Originally Posted by mgilchrest View Post
    This introduces another factor: compliance.
    One other thing to consider is that compliance could indicate a bias regarding predisposition to being strong.
    Compliance itself is not that much of a problem in these studies: training is not something strength trainees dislike - quite the contrary. In many studies they do better than before, due to better training, higher motivation, etc. Bear in mind that they often train on their own on a rather mediocre program - motivated and educated people like in this forum are a minority. So it's rare that the less gifted drop out, because they often experience quite good progress.

    Your second remark is interesting and I agree: there's always selection and self-selection at work. The people who wander into a gym/consult a coach/take part in a study can be biased towards strong, towards weak, towards little potential, towards great potential. It depends a lot on the gym/coach/study/institution. Good coaches often have a bimodal client population: very good clients, and very bad/desperate clients who come to the coach as "a last try". I'm not that knowledgeable about internet forums, but I'd agree with you that there's a clear self-selection for gifted and motivated people.

    Back to studies: we can't avoid self-selection and dropouts. We can't obtain a true general-population sample, because you'd need a totalitarian state for that (company/health-insurance programs in European countries are the closest you can get with voluntary participation).
    For example, some researchers have investigated the variance of hypertrophy responses. Most dedicated strength trainees strongly contest the (very similar) findings, because they simply can't believe there are such unsuccessful people (and they blame the training programs for the lacking results). So you're absolutely right that self-selection changes perceptions and results.

    But we can look closer at the dropouts: Why did they drop out? Injuries? Bad results (unlikely, see above)? Too little time? Program too demanding physically/mentally? We can also simply ask them. Who were they? Baseline strength values, age, gender, training age, body composition?

    That is done in every good study - not only in sports science.

    This is a good discussion. I just don't always have the time to address all arguments, as these points come up very often and I've addressed them many times in (offline) discussions. So if you or other guys are interested in the pitfalls (and solutions) of study designs, I recommend reading a boring, plain old textbook. This one is easy to digest: Statistics for Sport and Exercise Studies: An Introduction by Peter O'Donoghue (ISBN 9780415595575), as is this one: Statistical Methods for Psychology by David C. Howell (ISBN 9780495597841), and this: Statistics for Sports and Exercise Science: A Practical Approach by John Newell, Tom Aitchison, and Stanley Grant (ISBN 9780132042543).

    Quote Originally Posted by LimieJosh View Post
    This is the opposite of what I said. Someone who claims the above simply doesn't understand what it is like to conduct a study. The issue I raised is that in resistance training and performance research you cannot manipulate just one factor. That does not mean that it is a field that is precluded from scientific investigation, but that the results of said investigations are so lacking in generalizability as to be practically (as in, to put into practice) worthless.
    Because I know how to conduct a study, I also know why I told you this: if you can't avoid manipulating two or more factors inadvertently (one independent variable being the ideal picture of an experiment), you still have the background of several other studies that tell you which effects this inadvertently manipulated factor has, so you can adjust for it. Let's say you compare BFAs but happen to have both women and men in your study, and of different body weights. You know how BFA differs between the sexes and how body weight influences relative strength gains, etc., so you can interpret, or even statistically control for, these factors.
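
    As a concrete sketch of that kind of statistical control (hypothetical column names, not any real data set): an ANCOVA-style regression in Python that estimates the program effect while adjusting for sex and body weight.

    Code:
    # Sketch of covariate adjustment with a hypothetical data set: estimate the
    # training-program effect on strength gain while adjusting for sex and body
    # weight, which were not experimentally controlled.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("training_study.csv")  # hypothetical columns: gain, program, sex, bodyweight
    model = smf.ols("gain ~ C(program) + C(sex) + bodyweight", data=df).fit()
    print(model.summary())  # the C(program) coefficient is the adjusted program effect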

    Crossover designs and multiple-group designs also allow you to explore interaction effects and to control for multiple factors.
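
    A similarly minimal sketch of a two-period crossover analysis (again with made-up column names), where every subject does both protocols in randomized order and so serves as their own control:

    Code:
    # Crossover sketch with hypothetical data: each subject completes both
    # protocols in randomized order; a period term absorbs order effects, and
    # subject is the grouping factor for the within-subject comparison.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("crossover_study.csv")  # hypothetical columns: subject, period, protocol, gain
    model = smf.mixedlm("gain ~ C(protocol) + C(period)", data=df,
                        groups=df["subject"]).fit()
    print(model.summary())  # C(protocol) coefficient: the within-subject protocol effect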

    So there are lots of applicable results, and as a scientific field progresses beyond the fundamentals, they become more and more applicable. Sports science, and especially strength training research, is a relatively young field, and that's exactly what is happening.

  4. #54
    Join Date
    Feb 2011
    Location
    Farmington Hills, MI
    Posts
    4,689

    Default

    Quote Originally Posted by Marenghi View Post

    Quote Originally Posted by Jonathon Sullivan View Post
    There is a quite glaring problem with this approach.
    Re: acupuncture

    An interesting resource: the search results for "acupuncture" at Science-Based Medicine
    Yeah... I wasn't talking about acupuncture. I'm talking about meta-analysis. I think that pooling data sets obtained from studies designed to address hypothesis x and using the combined data to test hypothesis y is problematic. Extremely. This approach might have some minimal value from an observational perspective, and for hypothesis generation, but I would regard any conclusions proceeding from such an enterprise with a very jaundiced eye.

    Here is a crude analogy I picked up somewhere (can't remember where). I think of any particular data set arising out of an experiment as a sort of n-dimensional mathematical object. The study was hopefully designed to generate this data set to be analyzed in a specific manner to address a specific question. The data set-object is designed to be "sliced" or analyzed in that pre-determined, correct and statistically appropriate manner, established during study design and before data collection. When we "slice" our data set-object along that predetermined "plane," the resulting "shape" of the data, if it has the power, either supports or fails to support the tested hypothesis.

    All well and good. The problem is that, if we want to inappropriately exploit the data, be sloppy, or just plain lie, we can retrospectively slice the data set in any way we choose, along any number of other "planes," hacking at it with all kinds of post-hoc analyses, statistical legerdemain, manipulation of groups, "data-mining," you name it, and get a different family of distorted "shapes" for the data--some of which, not incidentally, might serve our purposes (publication of a paper or renewal of a grant, say) better than the orthogonal projection of the data anticipated by the original study design. Again, if conducted transparently and with acknowledgement of the limitations, this might have some minimal value for hypothesis generation, but I believe it will actually mislead us most of the time.
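
    To put a rough number on that, here is a deliberately simple simulation (null data, no real study; a dozen independent post-hoc comparisons, whereas real "slices" of one data set are correlated, but the inflation is the same in kind) showing how often at least one cut comes up "significant" by pure chance:

    Code:
    # Illustrative simulation: with no true effect anywhere, running a dozen
    # post-hoc comparisons makes a spurious p < .05 far more likely than the
    # nominal 5% would suggest.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    n_trials, n_per_group, n_tests = 2000, 40, 12
    runs_with_false_positive = 0

    for _ in range(n_trials):
        a = rng.normal(size=(n_per_group, n_tests))  # "treatment", no true effect
        b = rng.normal(size=(n_per_group, n_tests))  # "control", no true effect
        pvals = [ttest_ind(a[:, j], b[:, j]).pvalue for j in range(n_tests)]
        if min(pvals) < 0.05:
            runs_with_false_positive += 1

    print(f"at least one spurious p < .05 in {runs_with_false_positive / n_trials:.0%} of runs")
    # with 12 independent tests this is roughly 1 - 0.95**12, i.e. about 46%

    The more planes you cut along after the fact, the more certain you are to find one that "works."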

    And doing that to a bunch of different data set-objects from a bunch of different studies and then mixing them together to address a question none of them were specifically designed to answer? I think that's a recipe for shit on a shingle.

  5. #55
    Join Date
    Aug 2014
    Posts
    1,077

    Default

    Quote Originally Posted by Jonathon Sullivan View Post
    Yeah... I wasn't talking about acupuncture. I'm talking about meta-analysis. I think that pooling data sets obtained from studies designed to address hypothesis x and using the combined data to test hypothesis y is problematic. Extremely. This approach might have some minimal value from an observational perspective, and for hypothesis generation, but I would regard any conclusions proceeding from such an enterprise with a very jaundiced eye. [...]
    And yet, this (the meta-analysis) is what is presented as the pinnacle of scientific evidence. And probably the majority of the time, it's not even done in good faith to begin with. GIGO.

  6. #56
    Join Date
    Jun 2017
    Posts
    181

    Default

    Quote Originally Posted by Jonathon Sullivan View Post
    Yeah... I wasn't talking about acupuncture. I'm talking about meta-analysis. I think that pooling data sets obtained from studies designed to address hypothesis x and using the combined data to test hypothesis y is problematic. Extremely. This approach might have some minimal value from an observational perspective, and for hypothesis generation, but I would regard any conclusions proceeding from such an enterprise with a very jaundiced eye. [...]
    I don't understand the problem, assuming you have full access to the methodology. It seems that, if you really wanted to, it would be easier to produce a favorable outcome in a study you engineered yourself - assuming you have the operational bias to data-mine in the first place.

  7. #57
    Join Date
    May 2016
    Posts
    357

    Default

    Quote Originally Posted by Pluripotent View Post
    And yet, this (the meta-analysis) is what is presented as the pinnacle of scientific evidence. And probably the majority of the time, it's not even done in good faith to begin with. GIGO.
    Yes, you can stab a man with a knife - or use it to cut your bread. I agree with both of you, Pluripotent and Sully, about the potential misuse of meta-analysis. A proper MA shouldn't be done this way - and most aren't.

    Just be aware, Pluripotent, that your obvious negative bias against science, well... biases the way you perceive everything. A statement like
    "And probably the majority of the time, it's not even done in good faith to begin with."
    is not based on facts about methodology, and is probably heavily influenced by your experience with pharmacology studies. Thank god that is not what we're talking about.

    So, assume good faith.

  8. #58
    Join Date
    Jan 2016
    Posts
    398

    Default

    Quote Originally Posted by Jonathon Sullivan View Post
    Yeah... I wasn't talking about acupuncture. I'm talking about meta-analysis. I think that pooling data sets obtained from studies designed to address hypothesis x and using the combined data to test hypothesis y is problematic. Extremely. This approach might have some minimal value from an observational perspective, and for hypothesis generation, but I would regard any conclusions proceeding from such an enterprise with a very jaundiced eye. [...]
    I can see what you mean. But what if the point being analyzed was one of the variables measured during the study, but not necessarily part of what the study was directly designed to analyze?

    The reason I ask is that there is a meta-analysis evaluating the risk of a cardiovascular event from different delivery vehicles of testosterone. That study refutes the mainstream thought that the therapy increases the risk, and it also distinguishes between delivery methods. I'm not saying it proves anything, but it at least raises the question of whether the prevailing wisdom about one of the risks of this therapy is correct.

  9. #59
    Join Date
    Feb 2011
    Location
    Farmington Hills, MI
    Posts
    4,689

    Default

    Quote Originally Posted by cfreetenor View Post
    I don't understand the problem, assuming you have full access to the methodology.
    Okay. We'll chalk that up to me being a lousy communicator.

    Quote Originally Posted by Pluripotent View Post
    And yet, this (the meta-analysis) is what is presented as the pinnacle of scientific evidence. And probably the majority of the time, it's not even done in good faith to begin with. GIGO.
    It's a tool that, when used and interpreted properly, can be useful. "Used and interpreted properly" is the critical issue.

  10. #60
    Join Date
    Jul 2007
    Location
    North Texas
    Posts
    53,685

    Default

    Quote Originally Posted by Marenghi View Post
    So, assume good faith.
    That would be rather naive.

