---
output:
  pdf_document: default
  bookdown::gitbook:
    lib_dir: "book_assets"
    includes:
      in_header: google_analytics.html
  html_document: default
---
# Bayesian statistics {#bayesian-statistics}
```{r echo=FALSE,warning=FALSE,message=FALSE}
library(tidyverse)
library(ggplot2)
library(cowplot)
library(boot)
library(MASS)
library(BayesFactor)
library(knitr)

set.seed(123456) # set random seed to exactly replicate results

# load the NHANES data library
library(NHANES)

# drop duplicated IDs within the NHANES dataset
NHANES <-
  NHANES %>%
  dplyr::distinct(ID, .keep_all = TRUE)

NHANES_adult <-
  NHANES %>%
  drop_na(Weight) %>%
  subset(Age >= 18)
```
In this chapter we will take up the approach to statistical modeling and inference that stands in contrast to the null hypothesis testing framework that you encountered in Chapter \@ref(hypothesis-testing). This is known as "Bayesian statistics" after the Reverend Thomas Bayes, whose theorem you have already encountered in Chapter \@ref(probability). In this chapter you will learn how Bayes' theorem provides a way of understanding data that solves many of the conceptual problems that we discussed regarding null hypothesis testing, while also introducing some new challenges.
## Generative models
Say you are walking down the street and a friend of yours walks right by but doesn't say hello. You would probably try to decide why this happened -- Did they not see you? Are they mad at you? Are you suddenly cloaked in a magic invisibility shield? One of the basic ideas behind Bayesian statistics is that we want to infer the details of how the data are being generated, based on the data themselves. In this case, you want to use the data (i.e. the fact that your friend did not say hello) to infer the process that generated the data (e.g. whether or not they actually saw you, how they feel about you, etc).
The idea behind a generative model is that a *latent* (unseen) process generates the data we observe, usually with some amount of randomness in the process. When we take a sample of data from a population and estimate a parameter from the sample, what we are doing in essence is trying to learn the value of a latent variable (the population mean) that gives rise through sampling to the observed data (the sample mean). Figure \@ref(fig:GenerativeModel) shows a schematic of this idea.
```{r GenerativeModel, echo=FALSE,fig.cap="A schematic of the idea of a generative model.",fig.width=6, out.width="80%"}
knitr::include_graphics("images/BayesianInference.png")
```
If we know the value of the latent variable, then it's easy to reconstruct what the observed data should look like. For example, let's say that we are flipping a coin that we know to be fair, such that we would expect it to land on heads 50% of the time. We can describe the coin by a binomial distribution with a value of $P_{heads}=0.5$, and then we could generate random samples from such a distribution in order to see what the observed data should look like. However, in general we are in the opposite situation: We don't know the value of the latent variable of interest, but we have some data that we would like to use to estimate it.
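Before we turn to that inverse problem, here is a minimal sketch of the forward direction just described, simulating coin-flip data from a known value of $P_{heads}$ (the number of flips and the number of simulated experiments are arbitrary choices for illustration):
```{r eval=FALSE}
# simulate flipping a fair coin (P_heads = 0.5) ten times,
# repeated across 1000 simulated experiments
n_flips <- 10
p_heads <- 0.5
simulated_heads <- rbinom(n = 1000, size = n_flips, prob = p_heads)

# distribution of the number of heads across the simulated experiments
table(simulated_heads)
```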
## Bayes' theorem and inverse inference
The reason that Bayesian statistics has its name is because it takes advantage of Bayes' theorem to make inferences from data about the underlying process that generated the data. Let's say that we want to know whether a coin is fair. To test this, we flip the coin 10 times and come up with 7 heads. Before this test we were pretty sure that $P_{heads}=0.5$, but finding 7 heads out of 10 flips would certainly give us pause if we believed that $P_{heads}=0.5$. We already know how to compute the conditional probability that we would flip 7 or more heads out of 10 if the coin is really fair ($P(n\ge7|p_{heads}=0.5)$), using the binomial distribution.
```{r echo=FALSE}
# *TBD: MOTIVATE SWITCH FROM 7 To 7 OR MORE*
```
The resulting probability is `r I(sprintf("%.3f",pbinom(6, 10, .5, lower.tail = FALSE)))`. By itself, however, this number doesn't answer the question that we are asking -- it tells us about the likelihood of 7 or more heads given some particular probability of heads, whereas what we really want to know is the true probability of heads for this particular coin. This should sound familiar, as it's exactly the situation that we were in with null hypothesis testing, which told us about the likelihood of data rather than the likelihood of hypotheses.
Remember that Bayes' theorem provides us with the tool that we need to invert a conditional probability:
$$
P(H|D) = \frac{P(D|H)*P(H)}{P(D)}
$$
We can think of this theorem as having four parts:
- *prior* ($P(Hypothesis)$): Our degree of belief about hypothesis H before seeing the data D
- *likelihood* ($P(Data|Hypothesis)$): How likely are the observed data D under hypothesis H?
- *marginal likelihood* ($P(Data)$): How likely are the observed data, combining over all possible hypotheses?
- *posterior* ($P(Hypothesis|Data)$): Our updated belief about hypothesis H, given the data D
In the case of our coin-flipping example:
- *prior* ($P_{heads}$): Our degree of belief about the likelihood of flipping heads, which was $P_{heads}=0.5$
- *likelihood* ($P(\text{7 or more heads out of 10 flips}|P_{heads}=0.5)$): How likely are 7 or more heads out of 10 flips if $P_{heads}=0.5$?
- *marginal likelihood* ($P(\text{7 or more heads out of 10 flips})$): How likely are we to observe 7 or more heads out of 10 coin flips, in general?
- *posterior* ($P_{heads}|\text{7 or more heads out of 10 coin flips}$): Our updated belief about $P_{heads}$ given the observed coin flips
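To see how these parts fit together, here is a minimal sketch that applies Bayes' theorem to a simplified version of the coin example, comparing just two candidate values of $P_{heads}$ (0.5 versus 0.7); the prior probabilities assigned to the two hypotheses are arbitrary values chosen purely for illustration:
```{r eval=FALSE}
# likelihood of 7 or more heads out of 10 flips under each hypothesis
likelihood_fair   <- pbinom(6, size = 10, prob = 0.5, lower.tail = FALSE)
likelihood_biased <- pbinom(6, size = 10, prob = 0.7, lower.tail = FALSE)

# illustrative priors: we start out fairly sure that the coin is fair
prior_fair   <- 0.9
prior_biased <- 0.1

# marginal likelihood: average of the likelihoods, weighted by the priors
marginal <- likelihood_fair * prior_fair + likelihood_biased * prior_biased

# posterior probability that the coin is fair, given the data
posterior_fair <- (likelihood_fair * prior_fair) / marginal
posterior_fair
```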
Here we see one of the primary differences between frequentist and Bayesian statistics. Frequentists do not believe in the idea of a probability of a hypothesis (i.e. our degree of belief about a hypothesis) -- for them, a hypothesis is either true or it isn't. Another way to say this is that for the frequentist, the hypothesis is fixed and the data are random, which is why frequentist inference focuses on describing the probability of data given a hypothesis (i.e. the p-value). Bayesians, on the other hand, are comfortable making probability statements about both data and hypotheses.
## Doing Bayesian estimation {#doing-bayesian-estimation}
We ultimately want to use Bayesian statistics to make decisions about hypotheses, but before we do that we need to estimate the parameters that are necessary to make the decision. Here we will walk through the process of Bayesian estimation. Let's use another screening example: Airport security screening. If you fly a lot, it's just a matter of time until one of the random explosive screenings comes back positive; I had the particularly unfortunate experience of this happening soon after September 11, 2001, when airport security staff were especially on edge.
What the security staff want to know is the probability that a person is carrying an explosive, given that the machine has given a positive test. Let's walk through how to calculate this value using Bayesian analysis.
### Specifying the prior
To use Bayes' theorem, we first need to specify the prior probability for the hypothesis. In this case, we don't know the real number but we can assume that it's quite small. According to the [FAA](https://www.faa.gov/air_traffic/by_the_numbers/media/Air_Traffic_by_the_Numbers_2018.pdf), there were 971,595,898 air passengers in the U.S. in 2017. Let's say that one of those travelers was carrying an explosive in their bag --- that would give a prior probability of 1 out of 971 million, which is very small! The security personnel may have reasonably held a stronger prior in the months after the 9/11 attack, so let's say that their subjective belief was that one out of every million flyers was carrying an explosive.
```{r echo=FALSE}
# set up a data frame to hold the four parts of Bayes' theorem
bayes_df = data.frame(prior=NA,
                      likelihood=NA,
                      marginal_likelihood=NA,
                      posterior=NA)

# subjective prior: one out of every million flyers carries an explosive
bayes_df$prior <- 1/1000000

nTests <- 3
nPositives <- 3
sensitivity <- 0.99
specificity <- 0.99

# likelihood of 3 positives out of 3 tests if an explosive is present
bayes_df$likelihood <- dbinom(nPositives, nTests, sensitivity)

# marginal likelihood: weighted average of the likelihood under presence
# vs. absence of an explosive, weighted by the prior probability of each
bayes_df$marginal_likelihood <-
  dbinom(
    x = nPositives,
    size = nTests,
    prob = sensitivity
  ) * bayes_df$prior +
  dbinom(
    x = nPositives,
    size = nTests,
    prob = 1 - specificity
  ) *
  (1 - bayes_df$prior)

# posterior: apply Bayes' theorem
bayes_df$posterior <- (bayes_df$likelihood * bayes_df$prior) / bayes_df$marginal_likelihood
```
### Collect some data
The data are composed of the results of the explosive screening test. Let's say that the security staff runs the bag through their testing apparatus `r I(nTests)` times, and it gives a positive reading on `r I(nPositives)` of the `r I(nTests)` tests.
### Computing the likelihood
We want to compute the likelihood of the data under the hypothesis that there is an explosive in the bag. Let's say that we know (from the machine's manufacturer) that the sensitivity of the test is `r I(sprintf('%.2f',sensitivity))` -- that is, when an explosive is present, the test will detect it `r I(sprintf('%.0f%%',sensitivity*100))` of the time. To determine the likelihood of our data under the hypothesis that an explosive is present, we can treat each test as a Bernoulli trial (that is, a trial with an outcome of true or false) with a probability of success of `r I(sprintf('%.2f',sensitivity))`, which we can model using a binomial distribution.
### Computing the marginal likelihood
We also need to know the overall likelihood of the data -- that is, finding `r I(nPositives)` positives out of `r I(nTests)` tests. Computing the marginal likelihood is often one of the most difficult aspects of Bayesian analysis, but for our example it's simple because we can take advantage of the specific form of Bayes' theorem for a binary outcome that we introduced in Section \@ref(bayestheorem):
$$
P(E|T) = \frac{P(T|E)*P(E)}{P(T|E)*P(E) + P(T|\neg E)*P(\neg E)}
$$
where $E$ refers to the presence of explosives, and $T$ refers to a positive test result.
The marginal likelihood in this case is a weighted average of the likelihood of the data under the presence and under the absence of the explosive, with each likelihood weighted by the prior probability of that hypothesis (the prior for presence, and one minus the prior for absence). In this case, let's say that we know (from the manufacturer) that the specificity of the test is `r I(sprintf('%.2f', specificity))`, such that the likelihood of a positive result when there is no explosive ($P(T|\neg E)$) is `r I(sprintf('%.2f', 1 - specificity))`.
### Computing the posterior
We now have all of the parts that we need to compute the posterior probability of an explosive being present, given the observed `r I(nPositives)` positive outcomes out of `r I(nTests)` tests.
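As a minimal sketch, the calculation looks like this, using the values from this example (a prior of one in a million, sensitivity and specificity of 0.99, and three positive results out of three tests):
```{r eval=FALSE}
prior <- 1 / 1000000     # prior probability that the bag contains an explosive
sensitivity <- 0.99      # P(positive test | explosive present)
specificity <- 0.99      # P(negative test | no explosive)

# likelihood of 3 positives out of 3 tests under each hypothesis
likelihood_explosive    <- dbinom(3, size = 3, prob = sensitivity)
likelihood_no_explosive <- dbinom(3, size = 3, prob = 1 - specificity)

# marginal likelihood: weighted average over both hypotheses
marginal <- likelihood_explosive * prior +
  likelihood_no_explosive * (1 - prior)

# posterior probability of an explosive, given the three positive tests
posterior <- (likelihood_explosive * prior) / marginal
posterior
```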
This result shows us that the posterior probability of an explosive in the bag given these positive tests (`r I(sprintf('%.3f', bayes_df$posterior))`) is just under 50%, again highlighting the fact that testing for rare events is almost always liable to produce high numbers of false positives, even when the specificity and sensitivity are very high.
An important aspect of Bayesian analysis is that it can be sequential. Once we have the posterior from one analysis, it can become the prior for the next analysis!
## Estimating posterior distributions {#estimating-posterior-distributions}
In the previous example there were only two possible outcomes -- the explosive is either there or it's not -- and we wanted to know which outcome was most likely given the data. However, in other cases we want to use Bayesian estimation to estimate the numeric value of a parameter. Let's say that we want to know about the effectiveness of a new drug for pain; to test this, we can administer the drug to a group of patients and then ask them whether their pain was improved or not after taking the drug. We can use Bayesian analysis to estimate the proportion of people for whom the drug will be effective using these data.
### Specifying the prior
```{r echo=FALSE}
# *TBD: MH: USE PRIOR BIASED TOWARDS ZERO?*
```
In this case, we don't have any prior information about the effectiveness of the drug, so we will use a *uniform distribution* as our prior, since all values are equally likely under a uniform distribution. In order to simplify the example, we will only look at a subset of 99 possible values of effectiveness (from .01 to .99, in steps of .01). Therefore, each possible value has a prior probability of 1/99.
### Collect some data
```{r echo=FALSE}
# create a table with results
nResponders <- 64
nTested <- 100

drugDf <- tibble(
  outcome = c("improved", "not improved"),
  number = c(nResponders, nTested - nResponders)
)
```
We need some data in order to estimate the effect of the drug. Let's say that we administer the drug to 100 individuals and find that `r I(nResponders)` of them respond positively to it.
### Computing the likelihood
We can compute the likelihood of the observed data under any particular value of the effectiveness parameter using the binomial density function. In Figure \@ref(fig:like2) you can see the likelihood curves over numbers of responders for several different values of $P_{respond}$. Looking at this, it seems that our observed data are relatively more likely under the hypothesis of $P_{respond}=0.7$, somewhat less likely under the hypothesis of $P_{respond}=0.5$, and quite unlikely under the hypothesis of $P_{respond}=0.3$. One of the fundamental ideas of Bayesian inference is that we should upweight our belief in values of our parameter of interest in proportion to how likely the data are under those values, balanced against what we believed about the parameter values before having seen the data (our prior knowledge).
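As a minimal sketch of that computation, the likelihood of observing 64 responders out of 100 under each of those three hypothetical values can be computed directly:
```{r eval=FALSE}
# likelihood of 64 responders out of 100 under three hypothetical values of p(respond)
dbinom(x = 64, size = 100, prob = c(0.3, 0.5, 0.7))
```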
```{r like2,echo=FALSE,fig.cap='Likelihood of each possible number of responders under several different hypotheses (p(respond) = 0.5 (solid), 0.7 (dotted), 0.3 (dashed)). The observed value is shown by the vertical line.',fig.width=4,fig.height=4,out.height='50%'}
likeDf <-
  tibble(resp = seq(1,99,1)) %>%
  mutate(
    presp=resp/100,
    likelihood5 = dbinom(resp,100,.5),
    likelihood7 = dbinom(resp,100,.7),
    likelihood3 = dbinom(resp,100,.3)
  )

ggplot(likeDf,aes(resp,likelihood5)) +
  geom_line() +
  xlab('number of responders') + ylab('likelihood') +
  geom_vline(xintercept = drugDf$number[1],color='blue') +
  geom_line(aes(resp,likelihood7),linetype='dotted') +
  geom_line(aes(resp,likelihood3),linetype='dashed')
```
### Computing the marginal likelihood
In addition to the likelihood of the data under different hypotheses, we need to know the overall likelihood of the data, combining across all hypotheses (i.e., the marginal likelihood). This marginal likelihood is primarily important because it helps to ensure that the posterior values are true probabilities. In this case, our use of a set of discrete possible parameter values makes it easy to compute the marginal likelihood: we simply compute the likelihood of the data under each candidate parameter value, weight it by that value's prior probability, and add up the results.
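A minimal sketch of that sum, using the grid of 99 candidate values and the uniform prior of 1/99 described above:
```{r eval=FALSE}
steps <- seq(0.01, 0.99, by = 0.01)                     # candidate values of p(respond)
uniform_prior <- rep(1 / length(steps), length(steps))  # prior of 1/99 for each value

# weight the likelihood of the observed data (64 of 100) under each candidate
# value by its prior probability, and add the results together
marginal_likelihood <- sum(dbinom(64, size = 100, prob = steps) * uniform_prior)
marginal_likelihood
```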
```{r ,echo=FALSE}
# *MH:*not sure there’s a been clear discussion of the marginal likelihood up this point. it’s a confusing and also very deep construct.. the overall likelihood of the data is the likelihood of the data under each hypothesis, averaged together (weighted by) the prior probability of those hypotheses. it is how likely the data is under your prior beliefs about the hypotheses.
# might be worth thinking of two examples, where the likelihood of the data under that hypothesis of interest is the same, but where the marginal likelihood changes i.e., the hypothesis is pretty good at predicting the data, while other hypothese are bad vs. other hypotheses are always good (perhaps better)
```
```{r echo=FALSE}
# compute marginal likelihood
likeDf <-
  likeDf %>%
  mutate(uniform_prior = array(1 / n()))

# multiply the likelihood under each candidate value by its prior and add them up
marginal_likelihood <-
  sum(
    dbinom(
      x = nResponders, # the number who responded to the drug
      size = 100, # the number tested
      likeDf$presp # the candidate values of p(respond)
    ) * likeDf$uniform_prior
  )
```
### Computing the posterior
We now have all of the parts that we need to compute the posterior probability distribution across all possible values of $p_{respond}$, as shown in Figure \@ref(fig:posteriorDist).
```{r echo=FALSE}
# Create data for use in figure
bayesDf <-
  tibble(
    steps = seq(from = 0.01, to = 0.99, by = 0.01)
  ) %>%
  mutate(
    likelihoods = dbinom(
      x = nResponders,
      size = 100,
      prob = steps
    ),
    priors = dunif(steps) / length(steps),
    posteriors = (likelihoods * priors) / marginal_likelihood
  )
```
```{r posteriorDist,echo=FALSE,fig.cap="Posterior probability distribution (solid line) for the observed data, plotted against the uniform prior distribution (dotted line). The maximum a posteriori (MAP) value is signified by the diamond symbol.",fig.width=4,fig.height=4,out.height='50%'}
# compute MAP estimate
MAP_estimate <-
  bayesDf %>%
  arrange(desc(posteriors)) %>%
  slice(1) %>%
  pull(steps)

# plot the posterior (computed above over the quantized values from .01 to .99
# in steps of 0.01) along with the uniform prior
ggplot(bayesDf,aes(steps,posteriors)) +
  geom_line() +
  geom_line(aes(steps,priors),color='black',linetype='dotted') +
  xlab('p(respond)') + ylab('posterior probability') +
  annotate(
    "point",
    x = MAP_estimate,
    y = max(bayesDf$posteriors), shape=9,
    size = 3
  )
```
### Maximum a posteriori (MAP) estimation
Given our data we would like to obtain an estimate of $p_{respond}$ for our sample. One way to do this is to find the value of $p_{respond}$ for which the posterior probability is the highest, which we refer to as the *maximum a posteriori* (MAP) estimate. We can find this from the data in Figure \@ref(fig:posteriorDist) --- it's the value shown with a marker at the top of the distribution. Note that the result (`r I(MAP_estimate)`) is simply the proportion of responders from our sample -- this occurs because the prior was uniform and thus didn't influence our estimate.
### Credible intervals
Often we would like to know not just a single estimate for the posterior, but an interval in which we are confident that the true parameter falls. We previously discussed the concept of confidence intervals in the context of frequentist inference, and you may remember that their interpretation was particularly convoluted: A confidence interval is an interval that, across repeated samples, will contain the true value of the parameter 95% of the time. What we really want is an interval in which we are confident that the true parameter falls, and Bayesian statistics can give us such an interval, which we call a *credible interval*.
```{r ,echo=FALSE}
# *TBD: USE POSTERIOR FROM ABOVE*
```
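One simple way to read such an interval off the discrete posterior computed above is to accumulate posterior probability across the grid and find the values that bracket the central 95%. This is only a rough sketch (it uses the `bayesDf` grid from the earlier chunk, and there are several conventions for defining the boundaries on a discrete grid):
```{r eval=FALSE}
# cumulative posterior probability across the grid of p(respond) values
cumulative_posterior <- cumsum(bayesDf$posteriors)

# smallest grid value with at least 2.5% of the posterior at or below it,
# and smallest grid value with at least 97.5% at or below it
lower_bound <- bayesDf$steps[min(which(cumulative_posterior >= 0.025))]
upper_bound <- bayesDf$steps[min(which(cumulative_posterior >= 0.975))]
c(lower_bound, upper_bound)
```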
The interpretation of this credible interval is much closer to what we had hoped we could get from a confidence interval (but could not): It tells us that there is a 95% probability that the value of $p_{respond}$ falls between these two values. Importantly, in this case it shows that we have high confidence that $p_{respond} > 0.0$, meaning that the drug seems to have a positive effect.
In some cases the credible interval can be computed *directly* based on a known distribution, but it's more common to generate a credible interval by sampling from the posterior distribution and then computing quantiles of the samples. This is particularly useful when we don't have an easy way to express the posterior distribution analytically, which is often the case in real Bayesian data analysis. One such method (rejection sampling) is explained in more detail in the Appendix at the end of this chapter.
### Effects of different priors
In the previous example we used a *flat prior*, meaning that we didn't have any reason to believe that any particular value of $p_{respond}$ was more or less likely. However, let's say that we had instead started with some previous data: In a previous study, researchers had tested 20 people and found that 10 of them had responded positively. This would have led us to start with a prior belief that the treatment has an effect in 50% of people. We can do the same computation as above, but using the information from our previous study to inform our prior (see panel A in Figure \@ref(fig:posteriorDistPrior)).
```{r ,echo=FALSE}
# *MH:* i wonder what you’re doing here: is this the same thing as doing a bayesian inference assuming 10 / 20 data and using the posterior from that as the prior for this analysis? that is what woud normally be the straightfoward thing to do.
```
```{r echo=FALSE}
# compute likelihoods for the data under all values of p(respond)
# using a flat or empirical prior.
# here we use the quantized values from .01 to .99 in steps of 0.01

df <-
  tibble(
    steps = seq(from = 0.01, to = 0.99, by = 0.01)
  ) %>%
  mutate(
    likelihoods = dbinom(nResponders, 100, steps),
    priors_flat = dunif(steps) / sum(dunif(steps)),
    priors_empirical = dbinom(10, 20, steps) / sum(dbinom(10, 20, steps))
  )

marginal_likelihood_flat <-
  sum(dbinom(nResponders, 100, df$steps) * df$priors_flat)

marginal_likelihood_empirical <-
  sum(dbinom(nResponders, 100, df$steps) * df$priors_empirical)

df <-
  df %>%
  mutate(
    posteriors_flat =
      (likelihoods * priors_flat) / marginal_likelihood_flat,
    posteriors_empirical =
      (likelihoods * priors_empirical) / marginal_likelihood_empirical
  )

p1 <- ggplot(df, aes(steps, posteriors_flat)) +
  geom_line(color = "blue") +
  xlab("p(respond)") + ylab("Posterior probability") +
  geom_line(aes(steps, posteriors_empirical), color = "red") +
  geom_line(aes(steps, priors_empirical), linetype = "dotted")
```
Note that the likelihood of the data did not change -- only the prior (and, as a result, the marginal likelihood) changed. The effect of the change in prior was to pull the posterior closer to the mass of the new prior, which is centered at 0.5.
Now let's see what happens if we come to the analysis with an even stronger prior belief. Let's say that instead of having previously observed 10 responders out of 20 people, the prior study had instead tested 500 people and found 250 responders. This should in principle give us a much stronger prior, and as we see in panel B of Figure \@ref(fig:posteriorDistPrior), that's what happens: The prior is much more concentrated around 0.5, and the posterior is also much closer to the prior. The general idea is that Bayesian inference combines the information from the prior and the likelihood, weighting the relative strength of each.
```{r echo=FALSE}
# compute likelihoods for the data under all values of p(respond) using the strong prior.
df <-
  df %>%
  mutate(
    priors_strong = dbinom(250, 500, steps) / sum(dbinom(250, 500, steps))
  )

marginal_likelihood_strong <-
  sum(dbinom(nResponders, 100, df$steps) * df$priors_strong)

df <-
  df %>%
  mutate(
    posteriors_strongprior = (likelihoods * priors_strong) / marginal_likelihood_strong
  )

p2 <- ggplot(df,aes(steps,posteriors_empirical)) +
  geom_line(color='blue') +
  xlab('p(respond)') + ylab('Posterior probability') +
  geom_line(aes(steps,posteriors_strongprior),color='red') +
  geom_line(aes(steps,priors_strong),linetype='dotted')
```
This example also highlights the sequential nature of Bayesian analysis -- the posterior from one analysis can become the prior for the next analysis.
Finally, it is important to realize that if the priors are strong enough, they can completely overwhelm the data. Let's say that you have an absolute prior that $p_{respond}$ is 0.8 or greater, such that you set the prior probability of all other values to zero. What happens if we then compute the posterior?
```{r echo=FALSE}
# compute likelihoods for data under all values of p(respond) using absolute prior.
df <-
  df %>%
  mutate(
    priors_absolute = array(data = 0, dim = length(steps)),
    priors_absolute = if_else(
      steps >= 0.8,
      1, priors_absolute
    ),
    priors_absolute = priors_absolute / sum(priors_absolute)
  )

marginal_likelihood_absolute <-
  sum(dbinom(nResponders, 100, df$steps) * df$priors_absolute)

df <-
  df %>%
  mutate(
    posteriors_absolute =
      (likelihoods * priors_absolute) / marginal_likelihood_absolute
  )
```
```{r posteriorDistPrior,echo=FALSE,fig.cap="A: Effects of priors on the posterior distribution. The original posterior distribution based on a flat prior is plotted in blue. The prior based on the observation of 10 responders out of 20 people is plotted in the dotted black line, and the posterior using this prior is plotted in red. B: Effects of the strength of the prior on the posterior distribution. The blue line shows the posterior obtained using the prior based on 10 responders out of 20 people. The dotted black line shows the prior based on 250 responders out of 500 people, and the red line shows the posterior based on that prior. C: Effects of an absolute prior on the posterior distribution. The blue line shows the posterior obtained using an absolute prior which states that p(respond) is 0.8 or greater. The prior is shown in the dotted black line.",fig.width=8,fig.height=8,out.width='80%'}
p3 <- ggplot(df,aes(steps,posteriors_absolute)) +
  geom_line(color='blue') +
  xlab('p(respond)') +
  ylab('Posterior probability') +
  ylim(0,max(df$posteriors_absolute)*1.1) +
  geom_line(aes(steps,
                priors_absolute*max(df$posteriors_absolute)*20),
            linetype='dotted',
            size=1)

plot_grid(p1, p2, p3, labels='AUTO')
```
In panel C of Figure \@ref(fig:posteriorDistPrior) we see that there is zero density in the posterior for any of the values where the prior was set to zero - the data are overwhelmed by the absolute prior.
## Choosing a prior
The impact of priors on the resulting inferences is the most controversial aspect of Bayesian statistics. What is the right prior to use? If the choice of prior determines the results (i.e., the posterior), how can you be sure your results are trustworthy? These are difficult questions, but we should not back away just because we are faced with hard questions. As we discussed previously, Bayesian analyses give us interpretable results (credible intervals, etc.). This alone should inspire us to think hard about these questions so that we can arrive at results that are reasonable and interpretable.
There are various ways to choose one's priors, which (as we saw above) can impact the resulting inferences. Sometimes we have a very specific prior, as in the case where we expected our coin to land heads 50% of the time, but in many cases we don't have such a strong starting point. *Uninformative priors* attempt to influence the resulting posterior as little as possible, as we saw in the example of the uniform prior above. It's also common to use *weakly informative priors* (or *default priors*), which influence the result only very slightly. For example, if we had used a binomial distribution based on one head out of two coin flips, the prior would have been centered around 0.5 but fairly flat, influencing the posterior only slightly. It is also possible to use priors based on the scientific literature or pre-existing data, which we would call *empirical priors*. In general, however, we will stick to the use of uninformative/weakly informative priors, since they raise the least concern about influencing our results.
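As a small illustration of the difference, here is a sketch comparing a uniform prior to the weakly informative prior just described (one head out of two coin flips), over a grid of candidate values; the grid itself is just an illustrative choice:
```{r eval=FALSE}
prior_df <- tibble(
  steps = seq(0.01, 0.99, by = 0.01)
) %>%
  mutate(
    # uninformative prior: every candidate value equally likely
    prior_uniform = 1 / n(),
    # weakly informative prior based on one head out of two flips:
    # centered on 0.5 but still fairly flat
    prior_weak = dbinom(1, size = 2, prob = steps) / sum(dbinom(1, size = 2, prob = steps))
  )

ggplot(prior_df, aes(steps, prior_weak)) +
  geom_line() +
  geom_line(aes(steps, prior_uniform), linetype = "dotted") +
  xlab("p(heads)") + ylab("prior probability")
```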
## Bayesian hypothesis testing
Having learned how to perform Bayesian estimation, we now turn to the use of Bayesian methods for hypothesis testing. Let's say that there are two politicians who differ in their beliefs about whether the public is in favor of an extra tax to support the national parks. Senator Smith thinks that only 40% of people are in favor of the tax, whereas Senator Jones thinks that 60% of people are in favor. They arrange to have a poll done to test this, which asks 1000 randomly selected people whether they support such a tax. The results are that 490 of the people in the polled sample were in favor of the tax. Based on these data, we would like to know: Do the data support the claims of one senator over the other, and by how much? We can test this using a concept known as the [Bayes factor](https://bayesfactor.blogspot.com/2014/02/the-bayesfactor-package-this-blog-is.html), which quantifies which hypothesis is better by comparing how well each predicts the observed data.
### Bayes factors {#Bayes-factors}
```{r echo=FALSE}
# compute Bayes factor for Smith vs. Jones
bf <-
  dbinom(
    x = 490,
    size = 1000,
    prob = 0.4 # Smith's hypothesis
  ) / dbinom(
    x = 490,
    size = 1000,
    prob = 0.6 # Jones' hypothesis
  )
```
The Bayes factor characterizes the relative likelihood of the data under two different hypotheses. It is defined as:
$$
BF = \frac{p(data|H_1)}{p(data|H_2)}
$$
for two hypotheses $H_1$ and $H_2$. In the case of our two senators, we know how to compute the likelihood of the data under each hypothesis using the binomial distribution; let's assume for the moment that our prior probability for each senator being correct is the same ($P_{H_1} = P_{H_2} = 0.5$). We will put Senator Smith in the numerator and Senator Jones in the denominator, so that a value greater than one will reflect greater evidence for Senator Smith, and a value less than one will reflect greater evidence for Senator Jones. The resulting Bayes factor (`r I(bf)`) provides a measure of the evidence that the data provide regarding the two hypotheses -- in this case, it tells us that the data support Senator Smith more than 3000 times more strongly than they support Senator Jones.
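For reference, the calculation behind this Bayes factor is simply the ratio of the two binomial likelihoods; a minimal sketch of the computation that produced the value reported above:
```{r eval=FALSE}
# likelihood of 490 supporters out of 1000 under each senator's hypothesis
bf_smith_vs_jones <-
  dbinom(490, size = 1000, prob = 0.4) /  # Senator Smith: 40% in favor
  dbinom(490, size = 1000, prob = 0.6)    # Senator Jones: 60% in favor
bf_smith_vs_jones
```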
### Bayes factors for statistical hypotheses
In the previous example we had specific predictions from each senator, whose likelihood we could quantify using the binomial distribution. In addition, our prior probability for the two hypotheses was equal. However, in real data analysis we generally must deal with uncertainty about our parameters, which complicates the Bayes factor, because we need to compute the marginal likelihood (that is, an integrated average of the likelihoods over all possible model parameters, weighted by their prior probabilities). However, in exchange we gain the ability to quantify the relative amount of evidence in favor of the null versus alternative hypotheses.
Let's say that we are a medical researcher performing a clinical trial for the treatment of diabetes, and we wish to know whether a particular drug reduces blood glucose compared to placebo. We recruit a set of volunteers and randomly assign them to either drug or placebo group, and we measure the change in hemoglobin A1C (a marker for blood glucose levels) in each group over the period in which the drug or placebo was administered. What we want to know is: Is there a difference between the drug and placebo?
First, let's generate some data and analyze them using null hypothesis testing (see Figure \@ref(fig:bayesTesting)). Then let's perform an independent-samples t-test, which shows that there is a significant difference between the groups:
```{r echo=FALSE}
# create simulated data for drug trial example
set.seed(1234567)
nsubs <- 40
effect_size <- 0.6

# randomize individuals to drug (1) or placebo (0)
drugDf <-
  tibble(
    group = as.integer(runif(nsubs) > 0.5)
  ) %>%
  mutate(
    hbchange = rnorm(nsubs) - group * effect_size
  )
```
```{r bayesTesting,echo=FALSE,fig.cap="Box plots showing data for drug and placebo groups.",fig.width=4,fig.height=4,out.height='50%'}
drugDf %>%
  mutate(
    group = as.factor(
      recode(
        group,
        "1" = "Drug",
        "0" = "Placebo"
      )
    )
  ) %>%
  ggplot(aes(group, hbchange)) +
  geom_boxplot() +
  annotate("segment", x = 0.5, xend = 2.5, y = 0, yend = 0, linetype = "dotted") +
  labs(
    x = "",
    y = "Change in hemoglobin A1C"
  )
```
```{r echo=FALSE}
# compute t-test for drug example
drugTT <- t.test(hbchange ~ group, alternative = "greater", data = drugDf)
print(drugTT)
```
This test tells us that there is a significant difference between the groups, but it doesn't quantify how strongly the evidence supports the null versus alternative hypotheses. To measure that, we can compute a Bayes factor using the `ttestBF` function from the BayesFactor package in R:
```{r echo=FALSE, message=FALSE,warning=FALSE}
# compute Bayes factor for drug data
bf_drug <- ttestBF(
  formula = hbchange ~ group, data = drugDf,
  nullInterval = c(0, Inf)
)

bf_drug
```
We are particularly interested in the Bayes Factor for an effect greater than zero, which is listed in the line marked "[1]" in the report. The Bayes factor here tells us that the alternative hypothesis (i.e. that the difference is greater than zero) is about 3 times more likely than the point null hypothesis (i.e. a mean difference of exactly zero) given the data. Thus, while the effect is significant, the amount of evidence it provides us in favor of the alternative hypothesis is rather weak.
#### One-sided tests
We generally are less interested in testing against the null hypothesis of a specific point value (e.g. mean difference = 0) than we are in testing against a directional null hypothesis (e.g. that the difference is less than or equal to zero). We can also perform a directional (or *one-sided*) test using the results from the `ttestBF` analysis, since it provides two Bayes factors: one for the alternative hypothesis that the mean difference is greater than zero, and one for the alternative hypothesis that the mean difference is less than zero. If we want to assess the relative evidence for a positive effect, we can compute a Bayes factor comparing the relative evidence for a positive versus a negative effect by simply dividing the two Bayes factors returned by the function:
```{r echo=FALSE}
bf_drug[1]/bf_drug[2]
```
Now we see that the Bayes factor for a positive effect versus a negative effect is substantially larger (almost 30).
#### Interpreting Bayes Factors
How do we know whether a Bayes factor of 2 or 20 is good or bad? There is a general guideline for the interpretation of Bayes factors suggested by [Kass & Raftery (1995)](https://www.andrew.cmu.edu/user/kk3n/simplicity/KassRaftery1995.pdf):
|BF| Strength of evidence|
|---------|---------------------|
|1 to 3 | not worth more than a bare mention|
|3 to 20| positive|
|20 to 150| strong|
|>150 | very strong|
Based on this, even though the statistical result is significant, the amount of evidence in favor of the alternative vs. the point null hypothesis is weak enough that it's hardly worth even mentioning, whereas the evidence for the directional hypothesis is relatively strong.
### Assessing evidence for the null hypothesis
Because the Bayes factor is comparing evidence for two hypotheses, it also allows us to assess whether there is evidence in favor of the null hypothesis, which we couldn't do with standard null hypothesis testing (because it starts with the assumption that the null is true). This can be very useful for determining whether a non-significant result really provides strong evidence that there is no effect, or instead just reflects weak evidence overall.
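As a sketch of what this looks like in practice (the simulated data here are made up purely for illustration), we can simulate two groups drawn from the same distribution and then express the Bayes factor in the direction of the null hypothesis by taking its reciprocal:
```{r eval=FALSE}
set.seed(12345)  # arbitrary seed for the illustration

# simulate two groups with no true difference between them
group1 <- rnorm(40)
group2 <- rnorm(40)

# Bayes factor for the alternative hypothesis versus the point null
bf_null_example <- ttestBF(x = group1, y = group2)

# the reciprocal expresses the evidence for the null relative to the alternative
1 / bf_null_example
```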
## Learning objectives
After reading this chapter, you should be able to:
* Describe the main differences between Bayesian analysis and null hypothesis testing
* Describe and perform the steps in a Bayesian analysis
* Describe the effects of different priors, and the considerations that go into choosing a prior
* Describe the difference in interpretation between a confidence interval and a Bayesian credible interval
## Suggested readings
- *The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy*, by Sharon Bertsch McGrayne
- *Doing Bayesian Data Analysis: A Tutorial Introduction with R*, by John K. Kruschke
## Appendix:
### Rejection sampling
We will generate samples from our posterior distribution using a simple algorithm known as [*rejection sampling*](https://am207.github.io/2017/wiki/rejectionsampling.html). The idea is that we choose a random value of x (in this case, a candidate value of $p_{respond}$) and a random value of y, each from a uniform distribution. We then only accept the sample if $y < f(x)$ -- in this case, if the randomly selected value of y is less than the height of the (unnormalized) posterior distribution at that value of x. Figure \@ref(fig:rejectionSampling) shows the distribution of samples obtained using rejection sampling, along with the 95% credible interval obtained using this method (with the values presented in Table \@ref(tab:credInt)).
```{r credInt, echo=FALSE}
# Compute credible intervals for example
nsamples <- 100000
# create random uniform variates for x and y
x <- runif(nsamples)
y <- runif(nsamples)
# create f(x)
fx <- dbinom(x = nResponders, size = 100, prob = x)
# accept samples where y < f(x)
accept <- which(y < fx)
accepted_samples <- x[accept]
credible_interval <- quantile(x = accepted_samples,
                              probs = c(0.025, 0.975))
kable(credible_interval)
```
```{r rejectionSampling,echo=FALSE,fig.cap="Rejection sampling example. The black line shows the density of all possible values of p(respond); the blue lines show the 2.5th and 97.5th percentiles of the distribution, which represent the 95 percent credible interval for the estimate of p(respond).",fig.width=4,fig.height=4,out.height='50%'}
# plot the density of the accepted samples
p = ggplot(data.frame(samples=accepted_samples), aes(samples)) +
  geom_density()

# add vertical lines at the boundaries of the 95% credible interval
for (i in 1:2) {
  p = p + annotate('segment', x=credible_interval[i], xend=credible_interval[i],
                   y=0, yend=2, col='blue', lwd=1)
}
print(p)
```