
Thread: COVID19 Factors We Should Consider/Current Events

  1. #621
    Join Date
    Dec 2019
    Posts
    177

    Default

    Well, just confirmed... the coronavirus is RACIST! Dr. Fauci just said the Chinese are seeing a spike in cases after relaxing restrictions on incoming visitors and returning citizens, and he warned us to lock it down.

    So, the racist virus is causing us to shut down ALL the borders! That bastard...

  2. #622
    Join Date
    Jul 2007
    Location
    North Texas
    Posts
    53,559

    Default

    Quote Originally Posted by MWM View Post
    Yet more doubt cast on the death statistics, via the exact problem I mentioned earlier:

    Italy: Only 12% of “Covid19 deaths” list Covid19 as cause
    From the article:

    The president of the Italian Civil Protection Service actually went out of his way to remind people of the nature of Italy’s fatality figures in a morning briefing on 20/03:

    "I want you to remember these people died WITH the coronavirus and not FROM the coronavirus”

    What does this actually mean?

    It means that the Italian death toll figures could have been artificially inflated by up to 88%. If true, this would mean the total number of Italians who have actually died of Covid19 could be as low as ~700. Which would bring Italy, currently a statistical outlier in terms of Covid19 fatalities, well in line with the rest of the world.

  3. #623
    Join Date
    Jan 2011
    Posts
    1,123

    Default

    Quote Originally Posted by lazygun37 View Post
    It is the same key point that I've now pointed out several times, to you and others: the type of statistical random errors for which it makes sense to quote confidence intervals are not the dominant source of uncertainty here. Instead, the dominant source are systematics -- i.e. exactly the sort of problems with the data that everybody here, and especially you, are concerned about. The "bounds" these guys used represent an effort to take the worst of these into account -- namely under-reporting.
    I still think you’re not getting my point, so I’ll try again. Let’s start with something we can probably agree on: systematic error is consistent and repeatable. Random error is unpredictable and cannot be replicated.

    Let’s try another thing we can hopefully agree on: systematic errors do not exist when you take a direct measurement of the quantity you are interested in, i.e. I counted the number of people admitted to the ICU and there were 121.
    Where are the systematic errors in that measurement?

    By the way, missing data (what happened in the paper you quoted) and underreporting (what you typed above) are not the same thing. Underreporting can be a source of systematic error. Missing data is just… a smaller data set than you’d like. You shouldn't just make guesses about (or "correct for") data you don't have. I'm sure you knew that too though.
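    As a rough illustration of that distinction (a minimal sketch with made-up numbers, not anything from the paper being discussed), the snippet below averages repeated noisy measurements: the random error shrinks as the sample grows, while a constant bias never averages away.

    import random

    # Minimal sketch of random vs. systematic error (hypothetical numbers).
    TRUE_VALUE = 121.0  # the quantity we are trying to measure

    def average_measurement(n, bias=0.0, noise=5.0):
        """Average of n readings with random noise and an optional constant bias."""
        readings = [TRUE_VALUE + bias + random.gauss(0, noise) for _ in range(n)]
        return sum(readings) / n

    for n in (10, 1_000, 100_000):
        unbiased = average_measurement(n)            # random error shrinks as n grows
        biased = average_measurement(n, bias=10.0)   # a consistent +10 offset never averages away
        print(f"n={n:>7}  unbiased: {unbiased:7.2f}   biased: {biased:7.2f}")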

  4. #624
    Join Date
    Nov 2012
    Location
    Toronto, Ontario
    Posts
    1,003

    Default

    Quote Originally Posted by MWM View Post
    Yet more doubt cast on the death statistics, via the exact problem I mentioned earlier:

    Italy: Only 12% of “Covid19 deaths” list Covid19 as cause

    If the death-coding practices in Italy do indeed differ from other countries' death-coding practices, then this is useful information.

    It also brings to light how difficult it is to establish causality. There will always be a fraction of patients who would have died even had they not been infected with COVID-19. And there is also a fraction of patients who would have died even had they not had these comorbidities (i.e. COVID-19 was the "sole" cause).

    The question is how to classify cause of death in a pandemic. I would hope there are guidelines to at least ensure consistency.

    Consistent or not, however, I don't think it's a coincidence that COVID-19 struck at the same time we see this rapid rise in ICU demand.

    Spain suffered 738 deaths over the last day, so perhaps they're also using the same death-coding practices as Italy?

    One technique to tease apart the causal influences would be to look at the rate of overall deaths-due-to-illness in these countries, from year to year, and compare that with the current rate. There'd be uncertainty around this estimate, to be sure, but it would be valuable data.
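    A back-of-the-envelope version of that comparison might look like the sketch below. The weekly figures are placeholders invented for illustration, not real data for any country; real inputs would come from national death registries.

    # Hypothetical "excess mortality" comparison, as suggested above (numbers are made up).
    baseline_deaths_per_week = [12000, 11800, 12100, 11900, 12050]  # average of the same weeks in prior years
    observed_deaths_per_week = [12100, 12900, 14500, 16800, 18200]  # the current year

    excess = [obs - base for obs, base in zip(observed_deaths_per_week, baseline_deaths_per_week)]
    print("Weekly excess deaths:", excess)
    print("Total excess over the period:", sum(excess))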

  5. #625
    Join Date
    Jan 2019
    Posts
    660

    Default

    Italy is coding deaths in the same way as every other country. That is the standard for infectious disease. The same happens with the flu, even though the flu, in and of itself, rarely kills people.

    Does this make a difference to the long-term impact of the disease? Yeah. (I am of the opinion that COVID19 has a low mortality rate, and that after a few months it will do little more than nag at society.) Does this change the course of treatment, the necessary equipment, or somehow invalidate the (very real) shortages that are occurring? No. If the 70-year-old patient in front of you has COVID19 and heart disease, guess what - he still has an infectious disease, and you still need an N95.

  6. #626
    Join Date
    Mar 2020
    Posts
    311

    Default

    Quote Originally Posted by lazygun37 View Post
    Thanks for confirming, and you're right that I shouldn't have said "irrational" in an absolute sense. What's rational or not depends on the value one attaches to various things or outcomes. Your position is rational from your perspective/value system, almost by definition.

    My point was just that you and I are at an impasse where we simply have to agree to disagree, because our value systems are so different. Knowing that clarifies things and allows people to move on.

    By contrast, what I find frustrating is when people who, I suspect, simply share your value system make incredible logical contortions to avoid having to say that. Rip, for example, has two or three times now asked people to consider data that *he* chose. Yet when those people did, and he didn't like the results, he just moved on to the next justification, without even acknowledging that he was wrong (at least on whatever narrow point was being argued at the time). That's unhelpful.
    These are just weasel words. There's no such thing as agreeing to disagree, your truth, my truth, etc. There's only right or wrong, true or false. Leave post-modernism in Philosophy 101, where it belongs.

    This is not solely a data-driven, quantitative discussion. As many have pointed out (and check that latest Ioannidis paper that was just posted), the numbers will always give an incomplete picture. At best, numbers give a very foggy picture of what's going on. At worst, they are used to serve whatever political purpose a person finds convenient. There is more to making decisions than just looking at numbers. That kind of stuff flies in a lab or a classroom, but not when lives and livelihoods are at stake.

    As Rip et al have said, this is a power grab by politicians.

  7. #627
    Join Date
    Jan 2014
    Posts
    95

    Default

    Quote Originally Posted by Rob Waskis View Post
    I still think you’re not getting my point, so I’ll try again. Let’s start with something we can probably agree on: systematic error is consistent and repeatable. Random error is unpredictable and cannot be replicated.

    Let’s try another thing we can hopefully agree on: systematic errors do not exist when you take a direct measurement of the quantity you are interested in, i.e. I counted the number of people admitted to the ICU and there were 121.
    Where are the systematic errors in that measurement?

    By the way, missing data (what happened in the paper you quoted) and underreporting (what you typed above) are not the same thing. Underreporting can be a source of systematic error. Missing data is just… a smaller data set than you’d like. You shouldn't just make guesses about (or "correct for") data you don't have. I'm sure you knew that too though.

    I appreciate the civil tone of your response, up to the final sentence, anyway. Incidentally, since I'd really like to stay married, this time I'm actually going to stay away from this thread for at least a few weeks. At that point we shouldn't have to argue about where we're headed anymore.

    OK, then... I am pretty sure we disagree on this because one of us is misunderstanding what the numbers in their table actually mean. Here is what I think they mean, and I am pretty certain I am correct:

    • The sample analysed in the table includes 2449 cases (i.e. people with confirmed infections) for which they knew the age.
    • However, they did *not* have information about the *outcomes* (hospitalization, ICU admission or death) for all of these 2449 cases.

    Do you agree with this? If you carefully read column 2 on page 1, as well as the footnote to the table, hopefully you will. If not, I'd certainly be interested to hear why (though I apologize in advance that I won't post on this again either way).

    So, how could you deal with this missing outcome data? One important point here is that "missing data" does *not* just mean "smaller sample size". You can already see this here -- the *sample* is still 2449, but different parts of this sample have one or more kinds of data missing. This is precisely the type of data quality problem you are -- and should be -- concerned about.

    Just to get us on the same page, let's consider a more numerically convenient hypothetical example as an illustration: suppose the total sample was 1000 patients, and we know that 50 of those definitely ended up in the ICU. However, let's also say that for 200 of them, we don't know whether they ended up in the ICU or not. How should we try to estimate the percentage of patients ending up in the ICU? If you think about it, there are two main ways you could reasonably approach this:
    • You could say that there is probably nothing special about those 200 patients for which you don't know the outcome. I.e. they were just as likely or unlikely to end up in the ICU as those 800 people for which you know this.
      • In this case, your estimate would be 50/800 = 0.0625, i.e. 6.25%.
    • Alternatively, you might take the view -- and I think this is the view you and Rip, for example, would lean towards -- that it's far more likely that nothing was reported for those 200 patients because there was nothing to report. I.e. none of them ended up in the ICU.
      • In that case, your estimate would be 50/1000 = 0.05, i.e. 5.00%

    Neither of those estimates is "right" or "wrong". They represent two different but reasonable assumptions about something we don't know (i.e. the outcomes for 200 cases). So the unknown outcomes *are* (or, rather, produce) a systematic uncertainty on the percentage we want to know. But that clearly doesn't mean we don't know anything at all and should give up! Instead, the best way to honestly account for this uncertainty is to consider *both* assumptions; see the sketch below. That will give us a solid idea of how large this uncertainty actually is. In our actual case, it turns out that this systematic uncertainty is far larger than the random statistical uncertainty. In cases like this, people often don't even bother quoting the statistical error, since it's just a small perturbation on the much larger and more concerning systematics (which we *did* account for).
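    To make the arithmetic in that hypothetical concrete, here is a minimal sketch of the two bounding estimates, using the same made-up numbers as above (1000 cases, 50 known ICU admissions, 200 unknown outcomes):

    # Bounding an ICU-admission rate when some outcomes are missing (hypothetical numbers from above).
    total_cases = 1000
    known_icu = 50
    unknown_outcomes = 200
    known_outcomes = total_cases - unknown_outcomes  # 800 cases with a recorded outcome

    # Assumption 1: the 200 missing-outcome cases behave like the 800 known ones.
    upper_estimate = known_icu / known_outcomes   # 50/800  = 6.25%

    # Assumption 2: nothing was recorded because there was nothing to record (none went to the ICU).
    lower_estimate = known_icu / total_cases      # 50/1000 = 5.00%

    print(f"Upper estimate: {upper_estimate:.2%}")
    print(f"Lower estimate: {lower_estimate:.2%}")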

    So this is exactly what's going on in this table. For example, the 20.7% number in the "total" row and "hospitalization" column is calculated as the number of people who were hospitalized, divided by 2449. By contrast, the 31.4% is calculated as the number of people who were hospitalized, divided by the number of people in the sample for which we *know* whether they were hospitalized (which is smaller than 2449).

    Does that make sense? Assuming we can agree on this, I hope we can also agree that this really *is* a perfectly reasonable way to provide the reader with a sense of the uncertainty. It would be *far* worse to just quote a pure confidence interval and pretend this much larger systematic uncertainty didn't exist.

    Incidentally, and excuse the long post (I'm going all out because it's my last one) -- note that, in principle, there is another extreme possibility. I'll use my simpler hypothetical example to illustrate this:
    • Theoretically, it's possible that *all* of those 200 cases for which I don't know the outcome actually ended up in the ICU. That is, the under-reporting *was* biased, but in the opposite direction from that assumed in the second case above. This doesn't really make much sense -- why the hell would we have better reporting about asymptomatic cases than ICU ones? -- and I imagine neither you nor Rip would disagree with that. Still, in the interest of completeness, and for my simple numerical example:
      • In this completely unrealistic case, my estimate would be 250/1000 = 0.25, i.e. 25%. But I think nobody in their right mind would do such a thing. (Certainly they didn't, and neither did I.)

    Anyway, I'll leave you all to it now. I sincerely hope you are correct, and I'm wrong. It certainly looks like we're going to find out, especially since the President appears to be on the same page as you.

  8. #628
    Join Date
    Jul 2007
    Location
    North Texas
    Posts
    53,559

    Default

    You continue to behave as though you think this is only about sick people. It's amazing to watch you post like this, over and over.

  9. #629
    Join Date
    Jun 2013
    Posts
    1,110

    Default

    From the previously quoted article, 12 Experts Questioning the Coronavirus Panic:
    …there is a very good example that we all forget: the swine flu in 2009. That was a virus that reached the world from Mexico and until today there is no vaccination against it. But what? At that time there was no Facebook or there maybe was but it was still in its infancy. The coronavirus, in contrast, is a virus with public relations.
    The video is long for this discussion group... the link below skips forward to the clips of "Event 201" from October 2019 in NYC... a pandemic exercise six weeks before the real one:

    Meet this Pandemic's Public Relations team...Viral Marketing - Amazing Polly

    Do NOT miss "flood the zone" at 15:20. That is what we've seen... they have flooded the zone with their message at all levels. Let's be clear: they have spared no expense deploying resistance to Rip's perspective ("misinformation"). Their control/reach is truly frightening and very effective.

    Is this a good thing or not?

  10. #630
    Join Date
    Aug 2017
    Posts
    35

    Default

    Someone will have to explain to me how the CFR for seasonal flu is calculated to be roughly 0.1%, as bandied about. Surely it varies; this says that the 2009 H1N1 variant had a CFR of 0.45%.

    How many people die from influenza?

    I wonder how many people present to emergency with pneumonia complications from an influenza virus but never get tested for the influenza virus, and then die. What cause of death is recorded?
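    For what it's worth, a headline CFR is just deaths divided by some choice of case count, and (as I understand it) the oft-quoted ~0.1% flu figure uses an *estimated* total of symptomatic illnesses as the denominator rather than lab-confirmed cases, which is exactly why the denominator choice matters so much. A sketch with made-up round numbers:

    # Why the denominator choice drives the headline fatality rate.
    # All figures are made-up round numbers, purely for illustration.
    deaths = 35_000
    estimated_symptomatic_illnesses = 35_000_000  # modelled estimate of everyone who got sick
    lab_confirmed_cases = 250_000                 # only the people who were actually tested

    print(f"Deaths / estimated illnesses: {deaths / estimated_symptomatic_illnesses:.2%}")  # ~0.10%
    print(f"Deaths / confirmed cases:     {deaths / lab_confirmed_cases:.2%}")              # 14.00%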

    The more I read the less convinced I am. Particularly in relation to Italy.
