starting strength gym

Thread: Training with high blood pressure

  1. #81
    Join Date
    Jul 2007
    Location
    North Texas
    Posts
    53,669

    Default

    Quote Originally Posted by Subsistence View Post
    just expressing surprise that people are so eager to call it garbage despite that expert acclaim. Probably because it doesn't fit the narrative, but maybe I'm reading into it too much.
    Perhaps they are recognizing a pattern.

  2. #82
    Join Date
    Feb 2010
    Posts
    815

    Default

    Quote Originally Posted by Subsistence View Post
    I am saying that it seemed weird that an opinion "commonly accepted" by experts seemed held to a higher standard than opinions with much less support. I'm not saying it is true because it is commonly accepted, just expressing surprise that people are so eager to call it garbage despite that expert acclaim. Probably because it doesn't fit the narrative, but maybe I'm reading into it too much.
    People are not eager to call it garbage despite expert acclaim. They are calling it garbage because the evidence suggests that it is garbage and it is still acclaimed by experts.

    Did you actually read the responses to the studies you posted? This is spot on:

    Quote Originally Posted by dmworking View Post
    Statistical significance does not mean the null hypothesis is true, it just means the null hypothesis is likely to be true. And with a larger alpha (0.15 instead of 0.05), that means the researchers are okay with a 15% likelihood of error rather than a 5% likelihood of error. I wouldn't say that's "extremely unlikely."

    Your second point is telling as well-- given a 15% likelihood of error (or even 5%, like what most studies use), as you perform regression analysis with more and more variables, it starts to be more and more likely that SOMETHING will pop up as being "significant." Clinical trials are built on the idea that if you measure enough things in a study of a few hundred people, then you'll be able to show statistical significance with something.
    Assuming the hypothesized effect even exists, which you would not just assume with a p-value so high that it is not even a trend in my book, the effect size would be ridiculously low considering that you had such a large sample and still did not get below the 5% threshold. Add to that the fact that 15 variables were controlled for in the process of arriving at this number, and the conclusion of the authors, that the current practice is justified by data, becomes completely indefensible.
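    A minimal simulation sketch of the measure-enough-things point above, assuming 15 unrelated variables each compared between two groups at a loose alpha of 0.15 (the 15 and the 0.15 are taken from the discussion; the independent t-test setup, group sizes, and normal data are invented purely for illustration and are not the study's actual regression model):

    # Monte Carlo sketch: chance of at least one false positive when 15
    # unrelated variables are each tested at alpha = 0.15. The null is true
    # for every variable, so any "significant" result is pure noise.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_subjects, n_vars, alpha, n_sims = 300, 15, 0.15, 2000

    runs_with_false_positive = 0
    for _ in range(n_sims):
        # Both groups drawn from the same distribution: no real effect exists.
        group_a = rng.normal(size=(n_subjects, n_vars))
        group_b = rng.normal(size=(n_subjects, n_vars))
        p_values = stats.ttest_ind(group_a, group_b, axis=0).pvalue
        if (p_values < alpha).any():
            runs_with_false_positive += 1

    print(f"Share of runs with >=1 'significant' variable: "
          f"{runs_with_false_positive / n_sims:.2f}")
    # Roughly 1 - 0.85**15, i.e. about 0.91, under independence.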

  3. #83
    Join Date
    Jun 2010
    Location
    Yesler's Palace, Seattle, WA
    Posts
    13,992

    Default

    Quote Originally Posted by steven-miller View Post
    Assuming the hypothesized effect even exists, which you would not just assume with a p-value so high that it is not even a trend in my book, the effect size would be ridiculously low considering that you had such a large sample and still did not get below the 5% threshold. Add to that the fact that 15 variables were controlled for in the process of arriving at this number, and the conclusion of the authors, that the current practice is justified by data, becomes completely indefensible.
    Right. And the study is of "prehypertensive" people.

    Very few people here are saying that the sodium-blood pressure relationship is bunko if you're a member of the population that has an actual, legit high blood pressure issue (perhaps related to the ability to excrete sodium).

    What's been said is that the relationship probably doesn't hold much water for those outside of that population, and that it hasn't been well demonstrated that the OP is a member of that population.

    Just as you shouldn't say that, because eating a donut might kill a diabetic, no one should eat donuts.

  4. #84
    Join Date
    Jul 2010
    Posts
    1,946

    Default

    Quote Originally Posted by steven-miller View Post
    People are not eager to call it garbage despite expert acclaim. They are calling it garbage because the evidence suggests that it is garbage and it is still acclaimed by experts.

    Did you actually read the responses to the studies you posted? This is spot on:



    Assuming the hypothesized effect even exists, which you would not just assume with a p-value so high that it is not even a trend in my book, the effect size would be ridiculously low considering that you had such a large sample and still did not get below the 5% threshold. Add to that the fact that 15 variables were controlled for in the process of arriving at this number, and the conclusion of the authors, that the current practice is justified by data, becomes completely indefensible.
    I was under the impression that statistical significance is generally intended to reject the null, not accept it. The null hypothesis would be no link between salt and BP. The alternate hypothesis would be that salt increases BP. We try to reject the null.

    We have to control for variables. If they didn't, you would complain that correlation =/= causation. That argument has already come up in this thread.

    But I haven't even been able to read the study and my crit appraisal is rusty and undertrained, so I'm not trying to claim the thing is amazing. I asked for the evidence showing no link between salt and BP, people asked for the evidence showing a link, and I pulled the first couple that came up.

  5. #85
    Join Date
    Sep 2010
    Posts
    10,199

    Default

    I pulled the paper -- there are many shortcomings, none of which really matter to the argument at hand and, truth be told, I don't have time to get into this.

    Statistical significance doesn't intend to do anything. The results of the study can either support the null hypothesis or reject it (and thus suggest that the alternative hypothesis is correct). The null hypothesis does not always have to be "there is no link," although it usually is by convention. Some would say that the null hypothesis should be the opposite of whatever you're trying to show, if you have a known bias.

    That many variables cannot be controlled for in these types of studies unless the study population is housed/fed/monitored in house, but they are "corrected" for using various algorithms. In any event, to say that high salt intake = high BP is a bit misleading; it would be quite an overstatement. There are many moving parts in this story, things that cannot be "corrected" for via an algorithm (see insulin sensitivity/glucose intake/weight change/genetics).
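    For what that "corrected for using various algorithms" step usually looks like in its simplest form, here is a minimal sketch of regression adjustment on simulated data (all variable names and numbers are made up for illustration; the actual paper's model is considerably more involved):

    # Sketch of "correcting" for covariates via ordinary least squares.
    # Every number here is simulated; the point is only that the sodium
    # coefficient is estimated while the other columns stay in the model.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    sodium = rng.normal(3.5, 1.0, n)    # g/day, invented
    weight = rng.normal(85, 15, n)      # kg, invented confounder
    age = rng.normal(50, 10, n)         # years, invented confounder
    # Simulated systolic BP driven mostly by weight and age, barely by sodium.
    bp = 90 + 0.3 * weight + 0.4 * age + 0.5 * sodium + rng.normal(0, 10, n)

    # Design matrix: intercept, sodium, and the covariates being "corrected" for.
    X = np.column_stack([np.ones(n), sodium, weight, age])
    coef, *_ = np.linalg.lstsq(X, bp, rcond=None)
    print(f"Adjusted sodium coefficient: {coef[1]:.2f} mmHg per g/day")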

    And finally, a shitty paper showing high salt intake with no increase in BP:

    http://www.ncbi.nlm.nih.gov/pubmed/8510085

  6. #86
    Join Date
    Mar 2013
    Location
    Fairbanks, Alaska
    Posts
    1,933

    Default

    Quote Originally Posted by Subsistence View Post
    I was under the impression that statistical significance is generally intended to reject the null, not accept it. Null hypothesis would be no link between salt and BP. Alternate hypothesis would be that salt increases BP. We try to reject the null.
    Correct. 'Statistically significant' means the effect (in this case the difference in means between treatments) is extreme enough that the probability of it occurring, given a true null hypothesis, is equal to or less than a pre-determined threshold, typically 0.05. That threshold balances rejecting lots of actual real-world effects, if an alpha of something like 0.01 or 0.001 is used, against calling a bunch of expected random variation between samples actual effects, when you use an alpha of, say, 0.2, which is common among the social, ahem, sciences.
    The null hypothesis is the "no effect" hypothesis. The status quo, if you will. It is that the effect that you are testing for does not exist. It is also a great place to look for arguments against hypotheses, fwiw.
    There is another effect to be careful of, in addition to repeated tests on the same sample: with large enough samples, you can find actual, statistically significant effects that have no practical significance. A <=5-pt mean improvement in systolic BP after reducing daily sodium intake by 2,400 mg probably falls into that category. Significant? Yes. Meaningful? Seems unlikely.
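    A quick numerical sketch of that last point (the 2 mmHg difference and the 15 mmHg standard deviation are assumed figures, not numbers from any study in this thread): the effect never changes, but the p-value collapses as the groups get bigger.

    # The same small mean difference tested at increasing sample sizes.
    from scipy import stats

    mean_control, mean_low_salt, sd = 130.0, 128.0, 15.0  # mmHg, assumed

    for n in (25, 250, 2500):
        result = stats.ttest_ind_from_stats(mean_control, sd, n,
                                            mean_low_salt, sd, n)
        print(f"n per group = {n:5d}   p = {result.pvalue:.4f}")
    # A 2 mmHg difference is "significant" at large n but means little clinically.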

  7. #87
    Join Date
    Feb 2011
    Location
    Farmington Hills, MI
    Posts
    4,689

    Default

    Quote Originally Posted by paterfamilias View Post
    There is another effect to be careful of, in addition to repeated tests on the same sample: with large enough samples, you can find actual, statistically significant effects that have no practical significance. A <=5-pt mean improvement in systolic BP after reducing daily sodium intake by 2,400 mg probably falls into that category. Significant? Yes. Meaningful? Seems unlikely.
    This is a very important observation, and one I have been known to bitch about. The finding of statistical significance does not necessarily mean clinical or practical significance.

    But, perhaps since I'm statistically challenged, I'm under the very strong impression that this phenomenon (stat sig in the absence of pract sig) does not require large sample sizes to occur.

    For example, if I test two exercise protocols, A and B, with a small cohort, say 10 bros for each sample, I may very well find that A bros report DOMS pain of 4.3 on a VAS, while B bros report DOMS pain of 4.7. If I look over the data, I may find that this variance is not due to any outliers, that my data is normally distributed, and that p < .05. In other words, statistically speaking, there's really a tiny difference there in the data from my sample (which might very well revert to the mean with a larger sample). The difference is statistically significant. But definitely not practically significant.

    But please correct me if I'm wrong. I talk a good game, but my grasp of statistics, at the end of the day, is pretty rudimentary. When I did research, I always had to farm that shit out.

    All of which is neither here nor there, I guess.

  8. #88
    Join Date
    Mar 2013
    Location
    Fairbanks, Alaska
    Posts
    1,933

    Default

    Quote Originally Posted by Jonathon Sullivan View Post
    This is a very important observation, and one I have been known to bitch about. The finding of statistical significance does not necessarily mean clinical or practical significance.

    But, perhaps since I'm statistically challenged, I'm under the very strong impression that this phenomenon (stat sig in the absence of pract sig) does not require large sample sizes to occur.

    For example, if I test two exercise protocols, A and B, with a small cohort, say 10 bros for each sample, I may very well find that A bros report DOMS pain of 4.3 on a VAS, while B bros report DOMS pain of 4.7. If I look over the data, I may find that this variance is not due to any outliers, that my data is normally distributed, and that p < .05. In other words, statistically speaking, there's really a tiny difference there in the data from my sample (which might very well revert to the mean with a larger sample). The difference is statistically significant. But definitely not practically significant.

    But please correct me if I'm wrong. I talk a good game, but my grasp of statistics, at the end of the day, is pretty rudimentary. When I did research, I always had to farm that shit out.

    All of which is neither here nor there, I guess.
    With sample sizes that small, you would need standard deviations on the order of 0.3 for each sample to get significance in a t-test at alpha=0.05, which is pretty small for that kind of thing. Assuming normality, that means that more than two-thirds of your data values would be within 0.3 of their respective means, and no one else could be more than a few tenths farther. In that case, you'd have a bunch of bros in one group reporting mostly between 4.0 and 4.6, and a bunch in the other reporting mostly between 4.4 and 5.0. Pretty unlikely. But if you take 3,200 bros, 4.4 and 4.45 might be significantly different.
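    For anyone who wants to check those numbers, a quick sketch from summary statistics alone (the 4.3 vs 4.7 means, n = 10, the SD of about 0.3, and the 3,200-bro / 4.4 vs 4.45 scenario come from the posts above; the SD of 0.5 in the large-sample case is an assumption):

    # Two-sample t-tests from summary statistics for the two scenarios above.
    from scipy import stats

    # 10 bros per group, means 4.3 vs 4.7: an SD of about 0.3 clears alpha = 0.05.
    small = stats.ttest_ind_from_stats(4.3, 0.3, 10, 4.7, 0.3, 10)
    print(f"n=10 per group,   SD=0.3: p = {small.pvalue:.3f}")

    # 3,200 bros per group, means 4.40 vs 4.45, assumed SD of 0.5: a difference
    # nobody would feel on a pain scale still comes out statistically significant.
    big = stats.ttest_ind_from_stats(4.40, 0.5, 3200, 4.45, 0.5, 3200)
    print(f"n=3200 per group, SD=0.5: p = {big.pvalue:.4f}")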

  9. #89
    Join Date
    Feb 2010
    Posts
    815

    Default

    Quote Originally Posted by Subsistence View Post
    I was under the impression that statistical significance is generally intended to reject the null, not accept it. Null hypothesis would be no link between salt and BP. Alternate hypothesis would be that salt increases BP. We try to reject the null.
    As Jordan already said, statistical significance is not intended to reject either hypothesis. It is a threshold, determined by convention, used to decide whether the null hypothesis can or cannot be rejected. Notice that in the study you linked, the usual convention in the human sciences of a 5% alpha-level was not being used. Instead, a p-value of .13 apparently led to the rejection of the null hypothesis, something that is not done except for very good reason. A good reason to loosen the alpha-level would be exploration in new fields of research, to generate hypotheses that can then be tested against observation. Here, however, we are talking about "established" phenomena that apparently cannot be shown to hold true when this is done.

    Quote Originally Posted by Subsistence View Post
    We have to control for variables. If they didn't you would complain that correlation =/= causation. That argument has already come up in this thread.
    I am not against correcting (thanks, Jordan, for pointing out the difference) for certain variables. My argument is that failing to demonstrate an already accepted-as-true effect by the conventional standard, after doing A LOT of that correcting, is pretty impressive. It also shows that the effect, the existence of which the data does not support so far, would be very, very tiny.

    Quote Originally Posted by Subsistence View Post
    But I haven't even been able to read the study and my crit appraisal is rusty and undertrained so I'm not trying to claim the thing is amazing. I asked for the evidence showing no link between salt and BP, people asked the evidence showing a link, I pulled the first couple that came up.
    You cannot prove that something does not exist. Therefore, in science one assumes something does not exist unless proven otherwise, and so producing the evidence would be on you.

    If you were just asking for data points suggesting no relationship, then this is easy. Conveniently, you linked to one such study yourself, and Jordan linked another.

  10. #90
    Join Date
    Jul 2010
    Posts
    1,946

    Default

    Quote Originally Posted by tertius View Post
    Right. And the study is of "prehypertensive" people.

    Very few people here are saying that the sodium-blood pressure relationship is bunko if you're a member of the population that has an actual, legit high blood pressure issue (perhaps related to the ability to excrete sodium).

    What's been said is that the relationship probably doesn't hold much water for those outside of that population, and that it hasn't been well demonstrated that the OP is a member of that population.

    Just as you shouldn't say that, because eating a donut might kill a diabetic, no one should eat donuts.
    If this is the case, I don't have much beef. I was under the impression that the popular opinion being expressed here was that a hypertensive patient should not be advised to avoid sodium as a treatment, or adjunctive treatment, for their blood pressure.

