
re: Antibodies ...why are they not getting any love?

Posted by moneyg
Member since Jun 2006
62951 posts
Posted on 5/4/20 at 1:50 pm to
quote:

But if you want to use 3.5%


Stop right there. We seem to keep getting stuck on your first premise.

How did you get to 3.5%

Show your work.
Posted by NC_Tigah
Make Orwell Fiction Again
Member since Sep 2003
138714 posts
Posted on 5/4/20 at 1:56 pm to
quote:

Who says it is, other than Abbott.
Abbott is probably accurate in its estimate, but it is Abbott's estimate -- 100% sensitivity rating and 99.5% specificity. Abbott is deploying the test while FDA approval is pending.

Re: IgM, as I understand it, the main advantage of IgM measurement is not test accuracy; IgM shows up in the bloodstream earlier, so it can give verification of immunity more proximal to the disease.

The Roche Ab test just attained approval. I believe it is IgG/IgM, with a 100% sensitivity rating and 99.8% specificity.

Both are an improvement.
This post was edited on 5/4/20 at 1:58 pm
Posted by texridder
The Woodlands, TX
Member since Oct 2017
14944 posts
Posted on 5/4/20 at 2:30 pm to
quote:

^More phony double-talk from our Big Pharma shill
Do you think the test the Stanford Drs. gave was developed by Stanford?

Nope. It was developed by Hangzhou Biotest Biotech.
Posted by Jbird
Shoot the tires out!
Member since Oct 2012
90499 posts
Posted on 5/4/20 at 2:38 pm to
Gropin
J
o
e
Biden
Posted by NC_Tigah
Make Orwell Fiction Again
Member since Sep 2003
138714 posts
Posted on 5/4/20 at 2:47 pm to
quote:

It was developed by Hangzhou Biotest Biotech.
Indeed. And although Stanford evaluated the tests for accuracy, Denmark arrived at dramatically different (poorer) results for the same test. Takeaway: Stanford's study may have significantly overstated AB positives.
Posted by Jbird
Shoot the tires out!
Member since Oct 2012
90499 posts
Posted on 5/4/20 at 2:50 pm to
DixRider is gone brah.
Posted by IslandBuckeye
Boca Chica, Panama
Member since Apr 2018
10067 posts
Posted on 5/4/20 at 4:26 pm to
quote:

1,045 tests. 35 true positives and 10 false positives. The false positives are equivalent to 28% of the true positives. This is too much of an error to base policy decisions off of.



Your example uses an awfully small sample of 1,045 people.

Ramp that up to 1 million. 2.5% sick would give 25,000 sick. From that if 99% accurate, you would get 24,750 true positives and 250 false positives. This would seem reasonable to base decisions from.

This is what I was trying to tell texrider, but he failed to grasp the concept.
Posted by moneyg
Member since Jun 2006
62951 posts
Posted on 5/4/20 at 5:03 pm to
quote:

IslandBuckeye



I don't think he's coming back.
Posted by NC_Tigah
Make Orwell Fiction Again
Member since Sep 2003
138714 posts
Posted on 5/4/20 at 5:21 pm to
quote:

a 99% accuracy
Doubtful I'm posting anything you do not know; in fact, quite the opposite. But the key element in a low-incidence environment is specificity. Sensitivity error is less important. Normally, "accuracy" is a combination of specificity and sensitivity.

The Stanford study likely overstated the specificity of its Chinese test. Minor errors there could lead to significant errors in outcome.
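The point about specificity dominating in a low-incidence environment can be sketched numerically. The prevalence, sensitivity, and specificity figures below are illustrative assumptions, not numbers from the Stanford study:

```python
# Positive predictive value: the share of positive results that are
# true positives, for a given prevalence, sensitivity, and specificity.
def positive_predictive_value(prevalence, sensitivity, specificity):
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# At 1.5% prevalence, a one-point drop in specificity roughly halves
# the share of positives that are real.
print(round(positive_predictive_value(0.015, 0.95, 0.995), 3))  # 0.743
print(round(positive_predictive_value(0.015, 0.95, 0.985), 3))  # 0.491
```

At low prevalence the pool of uninfected people is so much larger than the infected pool that even a small false-positive rate produces a flood of false positives, which is why "minor errors" in specificity swing the outcome so hard.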
Posted by IslandBuckeye
Boca Chica, Panama
Member since Apr 2018
10067 posts
Posted on 5/4/20 at 5:35 pm to
Using terms like sensitivity and specificity will really get texridder upset.

Best be careful, he is one bad hombre.

The other issue I read from opponents of Stanford was sampling. I guess they used Facebook and it "may" have introduced bias. While this may be true, I look forward to more testing to jack up the sample size. There will always be naysayers, but I hope they can lock down their methodologies and power up the statistics. I am most anxious to see the resulting CFR drawn down into a more realistic number.
Posted by buckeye_vol
Member since Jul 2014
35379 posts
Posted on 5/4/20 at 6:10 pm to
quote:

Your example uses an awfully small sample of 1,045 people.

Ramp that up to 1 million. 2.5% sick would give 25,000 sick. From that if 99% accurate, you would get 24,750 true positives and 250 false positives. This would seem reasonable to base decisions from
No. As long as the sample is representative of that population (and ideally random), the results will approximate the population values with a relatively small amount of sampling error (e.g., by the law of large numbers). In fact, it's better to have a random, representative sample of 1,000 than a non-random, non-representative sample of 1,000,000.

Regardless, if the test is 99% accurate (both specificity and sensitivity) and there is a 2.5% infection rate, you'll get the same distribution of false positives and true positives that meansonny showed, even with a sample of a million.

For example, with 1,000,000 people tested, and a 2.5% infection rate, there will be 25,000 infected and 975,000 non-infected.

Of those 25,000, the test will correctly identify 24,750 true positives (25,000*0.99); it will also flag 9,750 false positives among the 975,000 non-infected (975,000*0.01), for a total of 34,500 positives.

And just like in meansonny’s example, the false positives will represent about 28% of the total positives (9,750/34,500).
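The arithmetic above can be checked in a few lines, using the same assumed figures (99% sensitivity and specificity, 2.5% infection rate, 1,000,000 tested):

```python
# Reproducing the worked example from the post above.
population = 1_000_000
infected = int(population * 0.025)        # 25,000 infected
non_infected = population - infected      # 975,000 non-infected

true_positives = int(infected * 0.99)          # 24,750
false_positives = int(non_infected * 0.01)     # 9,750
total_positives = true_positives + false_positives  # 34,500

# Share of all positives that are false.
print(round(false_positives / total_positives, 3))  # 0.283 — about 28%
```

Note the false-positive share is identical to the 1,045-person example, which is the point: scaling the sample up does not dilute the false positives, because they scale with it.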
Posted by buckeye_vol
Member since Jul 2014
35379 posts
Posted on 5/4/20 at 6:57 pm to
quote:

There will always be naysayers but I hope they can lock down their methodologies and power up the statistics. I am most anxious to see the resulting CFR drawn down into a more realistic number.
Well, this is why the New York data is by far the best antibody data we have thus far: there is much more randomization (people randomly selected at grocery stores) than in the Stanford study (targeted Facebook ads, people sharing the link, and self-selection to volunteer), which should minimize the potential bias (although those out grocery shopping may be more likely to be infected).

In addition, besides the overall accuracy of the test, the biggest statistical problem with these types of studies is the base rate of infections. As that gets larger, even a less accurate test is far less problematic. Because New York (especially NYC) clearly has a significant portion infected, the results are much more reliable and the uncertainty around the test accuracy has only marginal impact relatively speaking.
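The base-rate point can be illustrated with the same kind of arithmetic. The 95% sensitivity/specificity figure here is an assumption for illustration, not any study's reported number:

```python
# Share of positives that are real, for the same assumed test accuracy,
# at a low base rate vs. a NYC-like base rate.
def ppv(prevalence, sensitivity=0.95, specificity=0.95):
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

print(round(ppv(0.02), 2))  # 0.28 — low prevalence: most positives are noise
print(round(ppv(0.20), 2))  # 0.83 — high prevalence: most positives are real
```

Same test, tenfold difference in base rate, and the meaning of a positive result flips from "probably noise" to "probably real" — which is why the uncertainty around test accuracy matters far less in NYC-level prevalence.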

Finally, after reading the updated Stanford study, besides the potential bias AND the questions surrounding their representation of the test accuracy, there is the impact of their decisions regarding demographic adjustments. It's possible the adjustment for one individual's result (Hispanic, male, 18-64) will be equivalent to 4.36 individuals in the sample, or 2,524 individuals extrapolated to the county data, while another individual's result (white, female, 65+) will be equal to 0.39 individuals in the sample, or 227 individuals extrapolated to the county data.

In addition, the groups adjusted upward by a significant amount (Hispanics: to 24.9% from 8%) have a significantly higher crude rate (4.9%), while the groups adjusted downward by a significant amount (whites: to 35.4% from 64.1%) have a significantly lower crude rate (1%). And if they're adjusting for test accuracy at the group level, this will only amplify the differences, which could just be amplifying group biases or sampling/measurement error.

In other words, their entire study, especially without the raw data, has so many layers of uncertainty and potential bias, built on top of one another, that it's impossible to actually know the approximate infection rate of the sample itself, let alone how that extrapolates to the county population. But in all likelihood it's far lower than their results indicate; it's so incompatible with the far better NY data (and another decent study from Germany) that it's obvious the authors aren't trying to falsify their hypothesis and are merely aiming to confirm the biases they had been presenting for weeks before they even started it.
This post was edited on 5/4/20 at 7:03 pm
Posted by Diamondawg
Mississippi
Member since Oct 2006
38326 posts
Posted on 5/4/20 at 7:09 pm to
quote:

The Stanford study likely overstated the specificity of its Chinese test. Minor errors there could lead to significant errors in outcome.

Buckeye Jeaux is not going to like that one bit since he pretty much makes most of his arguments using the Stanford studies.
Posted by texridder
The Woodlands, TX
Member since Oct 2017
14944 posts
Posted on 5/4/20 at 7:11 pm to
The same test was used in the Miami test.
Posted by buckeye_vol
Member since Jul 2014
35379 posts
Posted on 5/4/20 at 7:17 pm to
quote:

The same test was used in the Miami test.
The Miami study used a test from BioMedomics, which, according to their clinical analysis, has an 88.66% sensitivity and a 90.63% specificity.

Test Data

However, the fact that the Miami study found about a 6% infection rate across 2 separate weeks makes me think the positives were all false positives, or pretty close to it, since that figure is near the test's false-positive rate and the second week should have been considerably higher than the first.
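One way to sanity-check that worry: with the clinical figures quoted above (88.66% sensitivity, 90.63% specificity), even a completely uninfected population would return roughly 9% raw positives, so a ~6% result is hard to separate from false-positive noise. A minimal sketch:

```python
# Raw positive rate a test would report, given a true prevalence and the
# clinical sensitivity/specificity figures quoted above.
SENSITIVITY = 0.8866
SPECIFICITY = 0.9063

def raw_positive_rate(true_prevalence):
    true_pos = true_prevalence * SENSITIVITY
    false_pos = (1 - true_prevalence) * (1 - SPECIFICITY)
    return true_pos + false_pos

# Even with nobody infected, ~9.4% of results come back positive.
print(round(raw_positive_rate(0.0), 3))  # 0.094
```

That a 6% observed rate sits below the test's own false-positive floor suggests either the field specificity was better than the clinical figure or the sampling was off, but either way the result carries little usable signal about true prevalence.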

All of these studies make me wonder why these researchers are willing to rush to complete them, knowing that the tests will get more accurate as time progresses AND that infection base rates will rise enough to offset some of the accuracy issues. Instead, other than the NY study and some European studies, the results are basically worthless since they rushed them.
This post was edited on 5/4/20 at 7:22 pm
Posted by texridder
The Woodlands, TX
Member since Oct 2017
14944 posts
Posted on 5/4/20 at 7:25 pm to
(no message)
Posted by texridder
The Woodlands, TX
Member since Oct 2017
14944 posts
Posted on 5/4/20 at 10:34 pm to
quote:

The Miami study used a test from biomedomics
BioMedomics is the U.S. distributor for Hangzhou Biotest Biotech, which actually makes the test.
Posted by texridder
The Woodlands, TX
Member since Oct 2017
14944 posts
Posted on 5/4/20 at 10:47 pm to
quote:

Using terms like sensitivity and specificity will really get texridder upset.

You're a fool. Tell me, why did you apologize to Wednesday and me when you didn't know (or even consider) the significance that false positives would have on her decision to rely on an antibody test to clear the way for her to visit her elderly father?

Now you're embarrassed having been such a dumbass that you post a dig about me almost every post to deflect from your ignorance.



Posted by chateaublanc
Member since Apr 2020
1118 posts
Posted on 5/4/20 at 11:02 pm to
quote:

I doubt he is clueless about the study. The giggling emoji is probably because that study recruited via Facebook ads and used inaccurate tests. The results are worthless.


Are you suggesting Stanford docs are running bad studies?
Posted by buckeye_vol
Member since Jul 2014
35379 posts
Posted on 5/4/20 at 11:36 pm to
quote:

BioMedomics is the U.S. distributor for Hangzhou Biotest Biotech, which actually makes the test.
I don’t believe this is true. Their own website says they manufacture it and another company distributes it.
quote:

The new test, developed and manufactured by BioMedomics, will be available through BD and distributed exclusively by Henry Schein, Inc. to health care providers throughout the United States.
And the two companies that were distributing the unapproved Chinese tests (Premier's was used in the Stanford and USC studies) were distributing them BEFORE BioMedomics' test was first released anyway.
quote:

Two U.S. companies — Premier Biotech of Minneapolis and Aytu Bioscience of Colorado — have been distributing the tests from unapproved Chinese manufacturers, according to health officials, FDA filings and a spokesman for one of the Chinese manufacturers.