Suppose that at a given time there was data suggesting that, in the general population, one in a thousand asymptomatic people was likely to have the virus. Such data was never confirmed, of course, and the infection rate varied over time, but I will use it to explain the Bayesian reasoning.
If I take a PCR test and it’s positive but I’m not feeling ill, I would want to know whether I really have the virus. In other words, if I am one of the one-in-a-thousand people who is asymptomatic but carrying the virus, I’d want to know the accuracy of the test. There weren’t any reliable studies of the PCR test’s accuracy, but the public was assured these tests were very accurate. Let’s suppose there was only a one-in-a-hundred chance that someone who doesn’t have the virus will test positive, that is, a 1% false positive rate. Equivalently, if I don’t have the virus, there’s a 99% chance of a negative test. With this information most people assumed that if you tested positive you almost certainly had COVID. But that is not the case.
Think about a group of 10,000 asymptomatic people getting tested. Because we are assuming one in a thousand asymptomatic people have the virus, about 10 of the 10,000 really have it. Let’s also assume all of these genuinely infected people test positive. That leaves just under 10,000 people, 9,990, who do not have the virus. But a PCR test with even a 1% false positive rate means that about 100 of these people would falsely test positive. So in total about 110 people test positive, of whom only 10 actually carry the virus. The actual probability that you have the virus, given a positive test, is therefore 10 out of 110, roughly 9%. This means that, under reasonable assumptions about the underlying infection rate and test accuracy, the PCR test used as a standard for life-impacting decisions and mandates produced false positives roughly 90% of the time among asymptomatic people who tested positive.
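The counting argument above is just Bayes’ theorem in disguise. A minimal sketch of the calculation, using the assumed one-in-a-thousand prevalence, a perfect detection rate for true carriers, and the assumed 1% false positive rate:

```python
# Bayes' theorem applied to the worked example.
# All three inputs are the article's assumptions, not measured values.
prevalence = 1 / 1000       # assumed share of asymptomatic people carrying the virus
sensitivity = 1.0           # assume every true carrier tests positive
false_positive_rate = 0.01  # assumed chance a non-carrier tests positive

# Total probability of a positive test:
# P(pos) = P(pos | virus) * P(virus) + P(pos | no virus) * P(no virus)
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# Bayes' theorem: P(virus | pos) = P(pos | virus) * P(virus) / P(pos)
p_virus_given_positive = sensitivity * prevalence / p_positive

print(f"P(virus | positive test) = {p_virus_given_positive:.1%}")
```

This prints a probability of roughly 9%, matching the 10-out-of-110 count: the posterior probability of infection stays low because true carriers are so rare that false positives outnumber them.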