Posted on 5/4/20 at 1:56 pm to texridder
quote:
Who says it is, other than Abbott.

Abbott is probably accurate in its estimate, but it is Abbott's estimate -- a 100% sensitivity rating and 99.5% specificity. And Abbott is deploying the test while FDA approval is pending.
Re: IgM -- as I understand it, the main advantage of IgM measurement is not test accuracy; IgM shows up in the bloodstream earlier, so it can give verification of immunity more proximal to the disease.
The Roche Ab test just attained approval. I believe it is IgG/IgM, with a 100% sensitivity rating and 99.8% specificity.
Both are an improvement.
This post was edited on 5/4/20 at 1:58 pm
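A quick sketch (Python, for illustration) of what those headline numbers mean for the chance a given positive result is real. The sensitivity/specificity figures are the ones claimed above; the 2.5% prevalence is an assumed example, not from either company:

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value via Bayes' rule: P(infected | positive)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Abbott's claimed figures (100% sensitivity, 99.5% specificity):
print(round(ppv(0.025, 1.00, 0.995), 3))  # 0.837

# Roche's claimed figures (100% sensitivity, 99.8% specificity):
print(round(ppv(0.025, 1.00, 0.998), 3))  # 0.928
```

Even near-perfect specificity leaves a noticeable share of false positives at low prevalence, which is why the small specificity difference between the two tests matters.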
Posted on 5/4/20 at 2:30 pm to texridder
quote:
^More phony double-talk from our Big Pharma shill

Do you think the test the Stanford Drs. gave was developed by Stanford? Nope. It was developed by Hangzhou Biotest Biotech.
Posted on 5/4/20 at 2:47 pm to texridder
quote:
It was developed by Hangzhou Biotest Biotech.

Indeed. And although Stanford evaluated the tests for accuracy, Denmark arrived at dramatically different (poorer) results for the same test. Takeaway: there may have been significantly overstated AB positives in Stanford's study.
Posted on 5/4/20 at 4:26 pm to meansonny
quote:
1,045 tests. 35 true positives and 10 false positives. The false positives are equivalent to 28% of the true positives. This is too much of an error to base policy decisions off of.
Your example uses an awfully small sample of 1,045 people.
Ramp that up to 1 million. 2.5% sick would give 25,000 sick. From that if 99% accurate, you would get 24,750 true positives and 250 false positives. This would seem reasonable to base decisions from.
This is what I was trying to tell texrider, but he failed to grasp the concept.
Posted on 5/4/20 at 5:03 pm to IslandBuckeye
quote:
IslandBuckeye
I don't think he's coming back.
Posted on 5/4/20 at 5:21 pm to meansonny
quote:
a 99% accuracy

Doubtful I'm posting anything you do not know -- in fact, quite the opposite. But the key element in a low-incidence environment is specificity; sensitivity error is less important. Normally, "accuracy" is a combination of specificity and sensitivity.
The Stanford study likely overstated the specificity of its Chinese test. Minor errors there could lead to significant errors in outcome.
Posted on 5/4/20 at 5:35 pm to NC_Tigah
Using terms like sensitivity and specificity will really get texridder upset.
Best be careful, he is one bad hombre.
The other issue I read from opponents of Stanford was sampling. I guess they used Facebook and it "may" have introduced bias. While this may be true, I look forward to more testing to jack up the sample size. There will always be naysayers, but I hope they can lock down their methodologies and power up the statistics. I am most anxious to see the resulting CFR drawn down to a more realistic number.
Posted on 5/4/20 at 6:10 pm to IslandBuckeye
quote:
Your example uses an awfully small sample of 1,045 people.
Ramp that up to 1 million. 2.5% sick would give 25,000 sick. From that if 99% accurate, you would get 24,750 true positives and 250 false positives. This would seem reasonable to base decisions from.

No. As long as the sample is representative of that population (and ideally random), the results will approximate the population with a relatively small amount of sampling error (law of large numbers). In fact, it’s better to have a random, representative sample of 1,000 than a non-random, non-representative sample of 1,000,000.
Regardless, if the test is 99% accurate (both specificity and sensitivity) and there is a 2.5% infection rate, you’ll get the same distribution of false positives and true positives that meansonny showed, even with a sample of a million.
For example, with 1,000,000 people tested and a 2.5% infection rate, there will be 25,000 infected and 975,000 non-infected.
Of the 25,000 infected, the test will correctly identify 24,750 true positives (25,000 × 0.99); of the 975,000 non-infected, it will flag 9,750 false positives (975,000 × 0.01), for a total of 34,500 positives.
And just like in meansonny’s example, the false positives will represent about 28% of the total positives (9,750/34,500).
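The arithmetic in the post above can be checked with a short Python sketch (the 99% accuracy and 2.5% prevalence are the figures assumed in the thread):

```python
def positive_breakdown(n, prevalence, sensitivity, specificity):
    """Split a tested population into true and false positives."""
    infected = n * prevalence
    healthy = n - infected
    true_pos = infected * sensitivity        # sick people correctly flagged
    false_pos = healthy * (1 - specificity)  # healthy people wrongly flagged
    return true_pos, false_pos

tp, fp = positive_breakdown(1_000_000, 0.025, 0.99, 0.99)
print(round(tp), round(fp))       # 24750 9750
print(round(fp / (tp + fp), 3))   # 0.283 -- false positives are ~28% of all positives
```

The ratio is identical at any sample size, because both the true-positive and false-positive counts scale linearly with n; only a higher prevalence or a better specificity shrinks it.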
Posted on 5/4/20 at 6:57 pm to IslandBuckeye
quote:
There will always be naysayers but I hope they can lock down their methodologies and power up the statistics. I am most anxious to see the resulting CFR drawn down into a more realistic number.

Well, this is why the New York data is by far the best antibody data we have thus far: (a) there is much more randomization (people randomly selected at grocery stores) that is absent in the Stanford study (targeted Facebook ads; people sharing it; self-selection to volunteer), which should minimize the potential bias (although those grocery shopping may be more likely to be infected).
In addition, besides the overall accuracy of the test, the biggest statistical problem with these types of studies is the base rate of infections. As that gets larger, even a less accurate test is far less problematic. Because New York (especially NYC) clearly has a significant portion infected, the results are much more reliable, and the uncertainty around the test accuracy has only a marginal impact, relatively speaking.
Finally, after reading the updated Stanford study, besides the potential bias AND the questions surrounding their representation of the test accuracy, there is the impact of their decisions regarding demographic adjustments. One individual’s result (Hispanic, male, 18-64) may be weighted as the equivalent of 4.36 individuals in the sample, or 2,524 individuals extrapolated to the county data, while another individual’s result (white, female, 65+) may count as only 0.39 individuals in the sample, or 227 individuals extrapolated to the county data.
In addition, the groups adjusted upward by a significant amount (Hispanics: to 24.9% from 8%) have a significantly higher crude rate (4.9%), while the groups adjusted downward by a significant amount (whites: to 35.4% from 64.1%) have a significantly lower crude rate (1%). And if they’re adjusting for test accuracy at the group level, this will only amplify the differences, which could just be amplifying group biases or sampling/measurement error.
In other words, their entire study, especially without the raw data, has so many layers of uncertainty and potential bias, built on top of one another, that it’s impossible to know the approximate infection rate of the sample itself, let alone how it extrapolates to the county population. In all likelihood it’s far lower than their results indicate: it’s so incompatible with the far better NY data (and another decent study from Germany) that it’s obvious the authors aren’t trying to falsify their hypothesis and are merely confirming the biases they had been presenting for weeks before the study even started.
This post was edited on 5/4/20 at 7:03 pm
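The demographic reweighting described above can be sketched in Python. The ethnicity shares are the ones quoted in the post; the study's actual 4.36 and 0.39 per-person weights also fold in sex and age strata, so these numbers show only the ethnicity component:

```python
# Post-stratification weight: how many people in the target population
# each sampled person "stands for", relative to a representative sample.
def strat_weight(population_share, sample_share):
    return population_share / sample_share

# Hispanics: 24.9% of the county vs 8% of the sample -> upweighted
print(round(strat_weight(0.249, 0.080), 2))  # 3.11

# Whites: 35.4% of the county vs 64.1% of the sample -> downweighted
print(round(strat_weight(0.354, 0.641), 2))  # 0.55
```

The concern in the post follows directly: a heavily upweighted group with a high crude positive rate drags the population estimate up, and any sampling or test error within that small group is amplified by the same factor.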
Posted on 5/4/20 at 7:09 pm to NC_Tigah
quote:
The Stanford study likely overstated the specificity of its Chinese test. Minor errors there could lead to significant errors in outcome.

Buckeye Jeaux is not going to like that one bit, since he pretty much makes most of his arguments using the Stanford studies.
Posted on 5/4/20 at 7:11 pm to NC_Tigah
The same test was used in the Miami test.
Posted on 5/4/20 at 7:17 pm to texridder
quote:
The same test was used in the Miami test.

The Miami study used a test from BioMedomics, which according to their clinical analysis has an 88.66% sensitivity and a 90.63% specificity.
Test Data
However, the fact that the Miami study found a ~6% infection result across 2 separate weeks makes me think the positives were all, or nearly all, false positives (the test's false-positive rate alone is about 9.4%), especially since the second week should have been considerably higher than the first.
All of these studies make me wonder why these researchers are willing to rush to complete them, knowing that the tests will get more accurate as time progresses AND the infection base rates will rise enough to offset some of the accuracy issues. Instead, other than the NY study and some European studies, the results are basically worthless because they were rushed.
This post was edited on 5/4/20 at 7:22 pm
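The worry about the BioMedomics figures can be made concrete with a sketch (sensitivity/specificity are the 88.66%/90.63% quoted above; the prevalence values are illustrative assumptions):

```python
def expected_positive_rate(prevalence, sensitivity, specificity):
    """Fraction of all tests expected to come back positive."""
    return prevalence * sensitivity + (1 - prevalence) * (1 - specificity)

# Even a population with ZERO infections is expected to test ~9.4% positive:
print(round(expected_positive_rate(0.00, 0.8866, 0.9063), 3))  # 0.094

# With a modest 2% true prevalence the expected rate is ~11%:
print(round(expected_positive_rate(0.02, 0.8866, 0.9063), 2))  # 0.11
```

A measured ~6% positive rate sits below what false positives alone would produce for this test, which is consistent with the post's suspicion that the Miami result says little about actual infections.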
Posted on 5/4/20 at 10:34 pm to buckeye_vol
quote:
The Miami study used a test from biomedomics

BioMedomics is the U.S. distributor for Hangzhou Biotest Biotech, which actually makes the test.
Posted on 5/4/20 at 10:47 pm to IslandBuckeye
quote:
Using terms like sensitivity and specificity will really get texridder upset.
You're a fool. Tell me, why did you apologize to Wednesday and me when you didn't know (or even consider) the significance that false positives would have on her decision to rely on an antibody test to clear the way for visiting her elderly father?
Now you're embarrassed having been such a dumbass that you post a dig about me almost every post to deflect from your ignorance.
Posted on 5/4/20 at 11:02 pm to Korkstand
quote:
I doubt he is clueless about the study. The giggling emoji is probably because that study recruited via Facebook ads and used inaccurate tests. The results are worthless.
Are you suggesting Stanford docs are running bad studies?
Posted on 5/4/20 at 11:36 pm to texridder
quote:
BioMedomics is the U.S. distributor for Hangzhou Biotest Biotech, which actually makes the test.

I don’t believe this is true. Their own website says they manufacture it and another company distributes it.
quote:
The new test, developed and manufactured by BioMedomics, will be available through BD and distributed exclusively by Henry Schein, Inc. to health care providers throughout the United States.

And the two companies that were distributing the unapproved Chinese tests (Premier’s was used in the Stanford and USC studies) were distributing them BEFORE BioMedomics’ test was first released anyway.
quote:
Two U.S. companies — Premier Biotech of Minneapolis and Aytu Bioscience of Colorado — have been distributing the tests from unapproved Chinese manufacturers, according to health officials, FDA filings and a spokesman for one of the Chinese manufacturers.