BrainDen.com - Brain Teasers
  • 0


Guest

Question

Assume that 1% of the members of a particular sports league use a particular proscribed drug. It is decided to test all of the members of the league to find these villains out. The test used has only a 1% chance of failing to identify an actual drug user and also has only a 1% chance of misidentifying a non-drug user as a drug user. Tom tests positive -- what are the chances that, despite the results of this highly accurate drug test, he is innocent?


15 answers to this question

Recommended Posts

  • 0

1%


  • 0

Probability of a positive result = the proportion of users times the probability of correctly identifying a "villain", plus the proportion of non-users times the probability that the test falsely identifies a "non-villain" as a "villain":

Probability(positive) = (1/100 * 99/100) + (99/100 * 1/100)

Probability of being a non-villain is 99/100, since only 1/100 are users.

Probability(non-villain) = 99/100

The probability of a positive test and being a non user is

Probability(positive) * Probability(non-villian)

So, I think the answer is .0198 * .99 which equals 1.96%


  • 0

If 100 test positive, 99 will be users of the drug and 1 will be a false positive, which would mean there's a 99 percent chance he used it

Edit: and therefore a 1 percent chance that he is innocent (which is what the question was)

Edited by GIJeff

  • 0

My probability and statistics teacher would kill me if he saw my post above... :(

A = positive test

B = innocent

P(A and B) = P(A) * P(B|A)

so P(A and B) / P(A) = P(B|A)

P(A and B) = 1%

P(A) = 99/100 * 1/100 + 1/100 * 99/100

so the probability of being innocent given a positive test is:

P(B|A) = .01/.0198 = 50.5%

Heck, I might've messed up again, but I always thought it was pretty cool how 1% chance of bad results can really screw the pooch on the reliability of the whole test.

And remember, people, that there is a 99% chance going in that he was innocent.

Edited by toddpeak

  • 0

Let's see how rusty my Bayesian reasoning is....

1% of the population takes the drug

P(!drug) = .99

P(drug) = .01

1% error on tests

P(+|drug) = .99

P(-|drug) = .01

P(+|!drug) = .01

P(-|!drug) = .99

P(!drug|+) = P(+|!drug)P(!drug)/P(+)

= P(+|!drug)P(!drug) / ( P(+|!drug)P(!drug) + P(+|drug)P(drug) )

= .01 * .99 / ( .01*.99 + .99*.01 )

= .5

So, given that his test came out positive, there is a 50% chance that Tom is innocent -- and, equally, a 50% chance that he is doping.
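
For anyone who wants to check the arithmetic, here is a small Python sketch of the same Bayes calculation (not part of the original post; the variable names are mine):

p_drug = 0.01               # P(drug): 1% of the league uses the drug
p_clean = 1 - p_drug        # P(!drug)
p_pos_given_drug = 0.99     # P(+|drug): the test catches 99% of users
p_pos_given_clean = 0.01    # P(+|!drug): 1% false positive rate

# Total probability of a positive test (law of total probability)
p_pos = p_pos_given_drug * p_drug + p_pos_given_clean * p_clean

# Bayes' theorem: probability of being innocent given a positive test
p_clean_given_pos = p_pos_given_clean * p_clean / p_pos

print(p_pos)              # 0.0198
print(p_clean_given_pos)  # 0.5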


  • 0

Very close, toddpeak (P(A & B) = 0.99%, not 1%; it's P(A|B) that's 1%).

EventHorizon has it right. A less notation-based version of the answer, for those without probability training and for those who have forgotten it:

Imagine that the (very big) league has 10000 members.

Of those, 1%, or 100, are users. Of those users, the test will identify 99.

There are 9900 non-users. Of them, 1% will mistakenly test positive, i.e., 99.

The number of users who test positive is equal to the number of non-users who test positive. Half of those who test positive will be non-users, so it is equally likely that an individual who tests positive will be a user or a non-user.

So the probability that Tom is innocent is 50%.

And yes -- this has real world application.
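
The head count above is easy to reproduce in a few lines of Python; this is just a sketch of the 10000-member example, not anything from the original post:

members = 10_000
users = members // 100                 # 1% of the league: 100 users
non_users = members - users            # 9900 non-users

true_positives = users * 99 // 100     # the test catches 99 of the 100 users
false_positives = non_users // 100     # 1% of 9900 non-users: 99 false alarms

total_positives = true_positives + false_positives
print(true_positives, false_positives)        # 99 99
print(false_positives / total_positives)      # 0.5 -- a positive is a coin toss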


  • 0

Dang, I knew I missed a step. I needed to calculate P(A and B) from P(A|B). What a shame... :(


  • 0

I love probability

Here U = user and D = non-user.

P(+|U)=.99 P(-|U)=.01

P(+|D)=.01 P(-|D)=.99

P(U)=.01 P(D)=.99

P(+)=.99*.01+.01*.99=.0198

P(U&+)=.99*.01=.0099

P(U|+)=.0099/.0198=.5


  • 0

Suppose one million of these players are tested.

10 thousand, a priori, are users, and 99% of them will test positive = 9900 true positives.

990 thousand, a priori, are nonusers, and 1% of them will test positive = 9900 false positives.

Tom tests positive.

What are the odds he's in the false positive group?

Intuitive enough?

The league should retest all the positives.
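
If the expected counts feel too neat, a quick Monte Carlo run lands in the same place (a sketch, assuming the puzzle's 1% figures; the exact counts wobble a little from run to run):

import random

random.seed(1)                      # fixed seed so the run is repeatable
true_pos = false_pos = 0
for _ in range(1_000_000):          # one million simulated players
    is_user = random.random() < 0.01
    if is_user:
        tests_positive = random.random() < 0.99   # 1% false negative rate
    else:
        tests_positive = random.random() < 0.01   # 1% false positive rate
    if tests_positive:
        if is_user:
            true_pos += 1
        else:
            false_pos += 1

# Roughly 9900 of each, so about half of all positives are false.
print(true_pos, false_pos, false_pos / (true_pos + false_pos))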


  • 0

"The league should retest all the positives."

That assumes the retest is independent (that a false positive on the initial test doesn't mean a false positive is likely on the retest) and that there isn't a cost to the individual for testing positive in the first place ("He had to give another sample -- obviously he tested positive: the cheat"). This is, of course, exactly what is done in many drug tests for sports.

The problem is that human intuition makes those 1% estimates seem reasonable. The important idea is that unless your test is very, very accurate and specific, and/or the percentage of the condition in the population is high, random or broad screenings are a bad idea. You should find a way of selecting from your overall population so that a substantial proportion of those you run tests on have the "condition" in question (in this case, are drug users). The retest should not be treated as a "confirmation"; rather, the initial test should be seen as a way of enriching the population tested, but meaningless in itself.

Too many people -- politicians and school principals among them -- don't understand this, and neither does the public at large.

Why are some forms of cancer screening only given to "high-risk" populations? The financial cost of a broader screening? Not necessarily. If a false positive means unnecessary, potentially dangerous surgery for a biopsy (the retest), you really want to keep down the false positives.
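
To put rough numbers on the "enrich the population" point, here is a short sketch (my own, using the puzzle's 1% error rates) of how the chance that a positive result is genuine changes with how common the condition is among the people actually tested:

def prob_user_given_positive(prevalence, false_neg=0.01, false_pos=0.01):
    # P(user | positive) by Bayes' theorem, for a given prevalence and error rates
    p_pos = (1 - false_neg) * prevalence + false_pos * (1 - prevalence)
    return (1 - false_neg) * prevalence / p_pos

for prevalence in (0.001, 0.01, 0.1, 0.5):
    print(prevalence, round(prob_user_given_positive(prevalence), 3))

# 0.001 -> 0.09    broad screening: most positives are false
# 0.01  -> 0.5     the puzzle's situation
# 0.1   -> 0.917   a pre-selected, "enriched" group
# 0.5   -> 0.99    half the tested group are users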


  • 0

if he actually uses drugs he's not innocent -- but mathematically

99 percent if the test is wrong

1 percent if it's right

100 percent if he is innocent


  • 0

Let's see...

Total: 10000 members.

Actual drug users (+) (1%) = 100

Users missed by the test (f-) (1% of 100) = 1

Users correctly identified (t+) = 99

Non-drug users (-) = 9900

Non-users wrongly identified (f+) (1% of 9900) = 99

Tom is in the +ve class, so he might be a t+ or an f+... and the chance is 50%.


  • 0
"The league should retest all the positives."

Assuming that the retest is independent (that a false positive on the initial test doesn't mean that a false positive is likely on the retest) and that there isn't a cost to the individual for testing positive in the first place ("He had to give another sample -- obviously he tested positive: the cheat"). This is, of course, exactly what is done in many drug tests for sports.

The problem is that human intuition made those 1% estimates seem reasonable. The important idea is that unless your test is very, very accurate and specific, and/or the percentage of the condition in the population is high random or broad screenings are a bad idea. You should find a way of selecting from your overall population so that a substantial proportion that you run tests on have the "condition" (in this case, are drug users) in question. The retest should not be treated as a "confirmation", rather the initial test should be seen as a way of enriching the population tested but meaningless in itself.

Too many people like politicians or school principals don't understand this, as does the public at large.

Why are some forms of cancer screenings only given to "high-risk" populations? The financial cost of a broader screening? Not necessarily. If a false positive means unnecessary potentially dangerous surgery for a biopsy (the retest) you really want to keep down the false positives.

The OP was a double-header: a puzzle and a consciousness raiser.

Kudos.

The puzzle part teaches that you need a test whose error rate is at least an order of magnitude smaller than the rarity of the needle in the haystack you're looking for -- if, that is, you are going to rely on a single test. The puzzle made them equal [1% offenders and a 1% error rate], so the result ended up a coin toss. Thus it was noted that, if you're working under those statistical conditions, your strategy must include retesting. [Not to quibble over terminology.]

Testing strategies undoubtedly vary among the organizations that employ them, and some do a worse job than others. It's a reach, however, to throw out screenings solely on that basis. As also noted, if you can't reduce the false positive rate, then enriching the population is indicated -- something that retesting accomplishes.

On these points, Major League Baseball's Drug Policy and Prevention Program is an interesting case to examine.

Recognizing that better accuracy may be bought at higher cost, MLB employs a two-tiered test analysis strategy: a more economical and speedier one for screening, followed by a more accurate and costly definitive test on the positive screen results. Here's an excerpt from

Section 6. Laboratories:

If the screening test gives a presumptive positive result, the drug's presence must be confirmed by a second definitive test using the gas chromatography/mass spectrometry (GC/MS) technique.

Regarding the consequences of an initial positive test, MLB does not "rush to judgment," either, as seen from this excerpt from

Section 7. Discipline:

An initial positive test result, the admission of drug use or the identification of drug use through other means will not immediately result in discipline for the player or Baseball personnel involved, other than being required to participate in Baseball's testing program.

Again, enriching the test population.

This strategy could serve as a model for other organizations.

Bottom line: know what you're doing, but then go ahead.

Kudos to Topher for a great post. B))


  • 0

I didn't find this problem until now :P I didn't look at any of the posts other than the top one, so I hope there were no changes made that weren't edited into the OP.

anyway

it has a 1% chance to switch your result whether you are a drug user or a non-drug user; in either case it has a 1% chance to misidentify you

Tom has a 99% chance of being innocent. When the test came up positive, that meant either:

1) he's a drug-user, and the test's 99% chance of accuracy worked, showing him as a drug user

2) he's innocent (99% chance overall) but the test misidentified him as a drug user, which it had a 1% chance of doing

the chance that he's a drug user is 1%

the chance that the test was correct is 99%

the chance that he's NOT a drug user is 99%

the chance that the test was incorrect is 1%

Clearly I don't need to do the math: each case -- innocent and drug-using -- has the same pair of numbers, just in a different order. Therefore his chance of being a drug user is 1/2, and his chance of being innocent is 1/2.


  • 0
"The league should retest all the positives."

Assuming that the retest is independent (that a false positive on the initial test doesn't mean that a false positive is likely on the retest) and that there isn't a cost to the individual for testing positive in the first place ("He had to give another sample -- obviously he tested positive: the cheat"). This is, of course, exactly what is done in many drug tests for sports.

The problem is that human intuition made those 1% estimates seem reasonable. The important idea is that unless your test is very, very accurate and specific, and/or the percentage of the condition in the population is high random or broad screenings are a bad idea. You should find a way of selecting from your overall population so that a substantial proportion that you run tests on have the "condition" (in this case, are drug users) in question. The retest should not be treated as a "confirmation", rather the initial test should be seen as a way of enriching the population tested but meaningless in itself.

Too many people like politicians or school principals don't understand this, as does the public at large.

Why are some forms of cancer screenings only given to "high-risk" populations? The financial cost of a broader screening? Not necessarily. If a false positive means unnecessary potentially dangerous surgery for a biopsy (the retest) you really want to keep down the false positives.

Another problem, of course, is that the tests are not necessarily independent. Since drug detection is time-dependent, multiple samples are collected at the same time and then stored. After all, if the person is using some type of drug, the first drug test serves as a warning, and they will stop using it. Depending on how long the drug lasts in the bloodstream and how long it takes to perform a test, the person could be clean on a retest.

As a result, the same test on multiple samples of the same fluid could yield the same false positives (or false negatives), due to whatever artifact in the bloodstream produced the result in the first place (a high white cell count, a chemical, etc.).

My chemistry teacher's wife would fail a breathalyzer test after drinking milk, due to how her body broke the milk down. If someone took multiple samples of her breath for analysis, found that there was evidence she was "over the limit", and then proved it with the rest of the samples, what type of proof would that be?

So what would the probability of having two false positives in a row be?
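
For what it's worth, if the two tests really were independent (which, as argued above, they may well not be), the rough numbers look like this -- just a sketch using the puzzle's figures:

p_drug, p_clean = 0.01, 0.99
p_pos_given_drug, p_pos_given_clean = 0.99, 0.01

# Chance that an innocent player fails two independent tests: 1 in 10,000
p_two_false_pos = p_pos_given_clean ** 2
print(p_two_false_pos)   # 0.0001

# Chance a player is innocent given two independent positives: back down to about 1%
num = p_clean * p_pos_given_clean ** 2
den = num + p_drug * p_pos_given_drug ** 2
print(num / den)         # 0.01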

