Joseph Candelora in comments pointed to this updated report on the Santa Clara study we discussed last week.
The new report is an improvement on the first version. Here’s what I noticed in a quick look:
1. The summary conclusion, “The estimated population prevalence of SARS-CoV-2 antibodies in Santa Clara County implies that the infection may be much more widespread than indicated by the number of confirmed cases,” is much more moderate. Indeed, given that there had been no widespread testing program, it would’ve been surprising if the infection were not much more widespread than indicated by the number of confirmed cases. Still, it’s good to get data.
2. They added more tests of known samples. Before, their reported specificity was 399/401; now it’s 3308/3324. If you’re willing to treat these as independent samples with a common probability, then this is good evidence that the specificity is more than 99.2%. I can do the full Bayesian analysis to be sure, but, roughly, under the assumption of independent sampling, we can now say with confidence that the true infection rate was more than 0.5%. (They report a lower bound for the 95% confidence interval of 0.7%, which seems too high, but I haven’t accounted for the sensitivity yet in this quick calculation. Anyway, 0.5% or 0.7% is not so different.)
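To make that rough calculation concrete, here’s a minimal sketch in Python (the back-of-the-envelope version, not the full Bayesian analysis). It assumes the pooled specificity samples are independent draws with a common probability, puts a flat prior on specificity, and takes the survey’s raw positive rate to be about 1.5% (roughly 50 of 3330); those inputs are my reading of the report, not outputs of this calculation.

```python
from scipy import stats

# Specificity data from the updated report: 3308 of 3324 known-negative
# samples tested negative.
neg_correct, neg_total = 3308, 3324

# Posterior for specificity under a flat Beta(1, 1) prior, treating the
# pooled samples as independent draws with a common probability.
post = stats.beta(1 + neg_correct, 1 + neg_total - neg_correct)

# Assumed raw positive rate in the survey (about 1.5%, roughly 50 of 3330).
raw_rate = 0.015

for q in (0.025, 0.001):
    spec_lo = post.ppf(q)  # lower posterior quantile for specificity
    # At most (1 - spec_lo) of the sample can be false positives, so the
    # true infection rate is at least raw_rate - (1 - spec_lo).
    floor = raw_rate - (1 - spec_lo)
    print(f"q={q}: specificity >= {spec_lo:.4f}, infection rate >= {floor:.4f}")
```

Depending on how conservative a quantile you pick, the floor lands somewhere around 0.6% to 0.7%, consistent with the 0.5%-or-0.7% range above; this still ignores sensitivity.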
3. The section on recruitment says, “We posted our advertisements targeting two populations: ads aimed at a representative population of the county by zip code, and specially targeted ads to balance our sample for under-represented zip codes.” This description seems incomplete, as it does not mention the email sent by the last author’s wife to a listserv for parents at a Los Altos middle school. This email is mentioned in the appendix of the report, though.
4. They provide more details on the antibody tests. I don’t know anything about antibody tests; I’ll leave this to the experts.
5. On page 20 of their report, they give an incorrect response to the criticism that the data in their earlier report were consistent with zero true positives. They write, “suggestions that the prevalence estimates may plausibly include 0% are hard to reconcile with documented data from Santa Clara…” This misses the point. Nobody was claiming that the infection rate was truly zero! The claim was that the data were not sufficient to detect a nonzero infection rate. They’re making the usual confusion between evidence and truth. It does not help that they refer to concerns expressed by “several people” but do not cite any of these concerns. In academic work, or even online, you cite or link to people who disagree with you.
They also say, “for 0 true positives to be a possibility, one needs not only the sample prevalence to be less than (1 – specificity) . . . but also to have no false negatives (100% sensitivity).” I have no idea why they are saying this, as it makes no sense to me. For all the observed positives to be false positives, it is enough that the false positive rate, 1 – specificity, be as large as the raw positive rate in the sample; sensitivity determines how many true infections get missed, not whether the observed positives could all be false.
The most important part of their response, though, is the additional specificity data. I’m in no particular position to judge these data, but this is the real point. They also do a bootstrap computation, but that is neither here nor there. What’s important here is the data, not the specific method used to capture uncertainty (as long as the method is reasonable).
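To spell out the earlier criticism that they’re mangling here: the standard correction for test error (the Rogan-Gladen estimator) is prevalence = (raw rate + specificity − 1) / (sensitivity + specificity − 1), and this hits zero exactly when the raw rate equals 1 − specificity, whatever the sensitivity. A minimal sketch, assuming a raw positive rate of about 1.5% and plugging in a specificity of 0.985, a value that was still plausible under the earlier 399/401 data:

```python
def corrected_prevalence(raw_rate, sensitivity, specificity):
    # Rogan-Gladen correction: invert raw_rate = prev*sens + (1 - prev)*(1 - spec).
    return (raw_rate + specificity - 1) / (sensitivity + specificity - 1)

raw_rate = 0.015  # assumed raw positive rate, roughly 50 of 3330

# At specificity 0.985, the corrected prevalence is essentially zero no
# matter what the sensitivity is -- no "100% sensitivity" assumption needed.
for sens in (0.7, 0.8, 0.9):
    print(sens, corrected_prevalence(raw_rate, sens, 0.985))  # ~0.0 each time
```

So zero true positives was a live possibility under the old specificity data without any assumption about sensitivity; it’s the new 3308/3324 specificity data that rule it out.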
6. I remain suspicious of the weighting they used to match sample to population: the zip code thing bothers me because it is so noisy, and they didn’t adjust for age. But they now present all their unweighted numbers too, so the weighting is less of a concern.
7. They now share some information on symptoms. The previous version of the article said that they’d collected data on symptoms, but no information on symptoms was presented in that earlier report. Here’s what they found: among respondents who reported cough and fever in the past two months, 2% tested positive. Among respondents who did not report cough and fever in the past two months, only 1.3% tested positive. That’s 2% of 692 compared to 1.3% of 2638. That’s a difference of 0.007 with standard error sqrt(0.02*0.98/692 + 0.013*0.987/2638) = 0.006. (A quick check of this arithmetic appears in the code sketch after this item.) OK, it’s a noisy estimate, but at least it goes in the right direction; it’s supportive of the hypothesis that the positive test results represent a real signal.
Table 3 of their appendix gives more detail, showing that people reporting loss of smell and loss of taste were much more likely to test positive. Of the 60 people reporting loss of smell in the two weeks prior to the study, 13 tested positive. Of the 188 reporting loss of smell in the two months prior to the study, 21 tested positive. Subtracting, we find that of the 128 reporting loss of smell in the two months prior but not the two weeks prior, 8 tested positive. That’s interesting: 22% of the people with that symptom in the past two weeks tested positive, but only 6% tested positive among people who had the symptom in the previous two months but not the previous two weeks. That makes sense: I guess you have fewer antibodies if the infection has already passed through you. Or maybe those people with the symptoms one or two months ago didn’t have COVID-19; they just had the regular flu?
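Here’s the quick check of the arithmetic in this item, using the counts quoted above from the report and its Table 3:

```python
from math import sqrt

# Difference in positive-test rates: cough-and-fever vs. not.
p1, n1 = 0.02, 692    # reported cough and fever in the past two months
p2, n2 = 0.013, 2638  # did not
se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
print(f"difference = {p1 - p2:.3f}, standard error = {se:.3f}")  # 0.007, 0.006

# Loss of smell: past two weeks vs. past two months but not past two weeks.
pos_2wk, n_2wk = 13, 60
pos_2mo, n_2mo = 21, 188  # two-months counts include the two-weeks group
print(pos_2wk / n_2wk)                        # 13/60, about 0.22
print((pos_2mo - pos_2wk) / (n_2mo - n_2wk))  # 8/128, about 0.06
```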
8. Still no data and no code.
Overall, the new report is stronger than the old, because it includes more data summaries and more evidence regarding the all-important specificity number.
Ultimately, this is just one survey. Whether the infection rate in Santa Clara County in early April was 0.7% or 1.4% or 2.8% or higher, we’ll soon be getting lots more information from all over, so what’s important is for us to learn from what’s worked and what hasn’t in the studies we’ve done so far.