# Meta-analysis of binary data with baggr

#### 2022-01-04

This vignette is written for baggr users who want to analyse binary data. For a more general introduction to meta-analysis concepts and the baggr workflow, please read vignette("baggr").

Here, we will show how to:

• Run meta-analysis of summary odds ratio (OR) or risk ratio (RR) data.
• Prepare standard outputs for the above.
• Convert individual-level data to summary-level and correct for rare events.
• Run meta-analysis directly on individual-level data.
• Model the impact of covariates.

## Other questions you should consider

There are certain aspects of modelling binary data that are not yet covered by baggr but may be important for your data.

# Basic meta-analysis of binary data

Typically, a meta-analysis of binary data is done on summary statistics such as $$\log(OR)$$ or $$\log(RR)$$. The reason for this is two-fold: 1) they are the statistics most commonly reported by studies and 2) they are approximately normally distributed. The second assumption needs to be treated carefully, as we will show later.

For the first example we will use simple summary data based on (part of) Table 6 in Yusuf et al. (1985), a famous study of the impact of beta blockers on the occurrence of strokes and mortality.

df_yusuf <- read.table(text="
trial  a n1i  c n2i
Balcon 14  56 15  58
Clausen 18  66 19  64
Multicentre 15 100 12  95
Barber 10  52 12  47
Norris 21 226 24 228
Kahler  3  38  6  31
Ledwich  2  20  3  20
", header=TRUE)

In typical notation, a (c) is the number of events in the treatment (control) group, while n1 (n2) is the total number of patients in the treatment (control) group. We can also calculate $$b = n_1 - a$$ and $$d = n_2 - c$$, i.e. the numbers of “non-events” in the treatment and control groups respectively.

In our examples we will use the OR metric; $$\log(OR)$$ and its SE can easily be calculated from these values:

# This is a calculation we could do by hand:
# df <- df_yusuf
# df$b <- df$n1i - df$a
# df$d <- df$n2i - df$c
# df$tau <- log((df$a*df$d)/(df$b*df$c))
# df$se <- sqrt(1/df$a + 1/df$b + 1/df$c + 1/df$d)

# But prepare_ma() automates these operations:
df_ma <- prepare_ma(df_yusuf, group = "trial", effect = "logOR")
df_ma
#>         group  a  n1  c  n2   b   d         tau        se
#> 1      Balcon 14  56 15  58  42  43 -0.04546237 0.4303029
#> 2      Barber 10  52 12  47  42  35 -0.36464311 0.4855042
#> 3     Clausen 18  66 19  64  48  45 -0.11860574 0.3888993
#> 4      Kahler  3  38  6  31  35  25 -1.02961942 0.7540368
#> 5     Ledwich  2  20  3  20  18  17 -0.46262352 0.9735052
#> 6 Multicentre 15 100 12  95  85  83  0.19933290 0.4169087
#> 7      Norris 21 226 24 228 205 204 -0.13842138 0.3147471

We use tau and se notation for our effect, the same as we would for analysing continuous data with baggr(). In fact, the model we use for logOR is the same default “Rubin” model (Rubin (1974)) with partial pooling. Once $$\log(OR)$$ and its SE have been calculated, there are no differences between this process and the analysis of continuous quantities in baggr.

bg_model_agg <- baggr(df_ma, effect = "logarithm of odds ratio")

The argument we specified, effect, does not impact the results; it only makes the outputs nicer, as we will see in the next section.

As with continuous quantities, the prior is chosen automatically. However, it is important to review the automatic prior choice when analysing quantities on log scale, because the priors may be needlessly diffuse (in other words, when using the command above baggr does not “know” that data are logged, although it does try to adjust the scale of priors).

# Choice of summary statistic

We summarised our data as log(OR). Alternatively, we could work with log(RR). A risk difference model is also simple and could be useful in some cases. Each model makes different assumptions about how the treatment of interest works. How do we choose the appropriate model for our data?

The basic tool that can help us choose is the L’Abbé plot (introduced by L’Abbé, Detsky, and O’Rourke (1987)), which is built into baggr:

labbe(df_ma, plot_model = TRUE, shade_se = "or")
#> Warning: There were 4 divergent transitions after warmup. See
#> http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
#> to find out why this is a problem and how to eliminate them.
#> Warning: Examine the pairs() plot to diagnose sampling problems
#> Warning: There were 8 divergent transitions after warmup. See
#> http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
#> to find out why this is a problem and how to eliminate them.
#> Warning: Examine the pairs() plot to diagnose sampling problems
#> Warning: Removed 1 row(s) containing missing values (geom_path).

When we make the plot, two Bayesian models are run, one for OR and one for RR. The mean treatment effect from these two models is then used to plot the dotted/dashed lines corresponding to the OR and RR estimates. In our case we can see that there is no meaningful difference between the two models, given the size of the effect and the estimates.

## Using risk ratios instead of odds ratios

The logarithm of the risk ratio is also approximately normally distributed, and RR can easily be calculated from contingency tables (i.e. the a, b, c, d columns). The analysis flow would be identical in that case, with the exception of the interpretation of the treatment effect, of course.

Before we present the results for the OR model, a reminder on how risk and odds ratios compare. If an event is rare (rule of thumb: up to 10%), OR and RR will be similar. For really rare events there is practically no difference. The higher the event rate, the larger the discrepancy, e.g.:

a <- 9; b <- 1; c <- 99; d <- 1
cat("Risk ratio is", (a/(a+b))/(c/(c+d)), "\n" )
#> Risk ratio is 0.9090909
cat("Odds ratio is", a*d/(b*c), "\n")
#> Odds ratio is 0.09090909
a <- 10; b <- 20; c <- 100; d <- 100
cat("Risk ratio is", (a/(a+b))/(c/(c+d)), "\n" )
#> Risk ratio is 0.6666667
cat("Odds ratio is", a*d/(b*c), "\n")
#> Odds ratio is 0.5

We can illustrate the similarities and differences of OR and RR with one example:

par(mfrow = c(2,3), oma = rep(2,4))
for(es in c(1, .9, .8, .5, .25, .1)){
  p_bsl    <- seq(0, 1, length = 100)
  p_trt_rr <- es * p_bsl
  odds_trt <- es * (p_bsl / (1 - p_bsl))
  p_trt_or <- odds_trt / (1 + odds_trt)
  plot(p_trt_or ~ p_bsl, type = "l",
       xlab = "control event rate", ylab = "treatment event rate",
       main = paste0("RR=OR=", es))
  lines(p_trt_rr ~ p_bsl, lty = "dashed")
}
title(outer = TRUE, "Compare RR (dashed) and OR (solid) of the same magnitude")

A very good discussion of these choices and intuitive plots are provided by Deeks (2002) (see especially Figures 1-3).

# Visualising and criticising model results

All of the outputs are explained in more detail in the “main” package vignette, vignette("baggr"). Here, I simply illustrate their behaviour.

bg_model_agg
#> Model type: Rubin model with aggregate data
#> Pooling of effects: partial
#>
#> Aggregate treatment effect (on logarithm of odds ratio):
#> Hypermean (tau) =  -0.16 with 95% interval -0.57 to 0.24
#> Hyper-SD (sigma_tau) = 0.2449 with 95% interval 0.0067 to 0.8717
#> Total pooling (1 - I^2) = 0.83 with 95% interval 0.31 to 1.00
#>
#> Treatment effects on logarithm of odds ratio:
#>               mean   sd pooling
#> Balcon      -0.130 0.26    0.77
#> Barber      -0.201 0.28    0.80
#> Clausen     -0.151 0.24    0.74
#> Kahler      -0.263 0.33    0.88
#> Ledwich     -0.194 0.33    0.92
#> Multicentre -0.074 0.26    0.76
#> Norris      -0.156 0.22    0.68

The default plot (plot(bg_model_agg)) will show pooled group results. If you prefer a typical forest plot style, you can use forest_plot, which can also plot a comparison with the source data (via the print argument):

forest_plot(bg_model_agg, show = "both", print = "inputs")

A hypothetical treatment effect in a new trial is obtained through effect_plot:

effect_plot(bg_model_agg)

We can transform printed and plotted outputs to show exponents (or other transforms); for example:

gridExtra::grid.arrange(
plot(bg_model_agg, transform = exp) + xlab("Effect on OR"),
effect_plot(bg_model_agg, transform = exp) + xlim(0, 3) + xlab("Effect on OR"),
ncol = 2)

## Model comparison

We can compare the partial pooling model with no-pooling and full-pooling models using baggr_compare:

# Instead of writing...
# bg1 <- baggr(df_ma, pooling = "none")
# bg2 <- baggr(df_ma, pooling = "partial")
# bg3 <- baggr(df_ma, pooling = "full")

# ...we use this one-liner
bg_c <- baggr_compare(df_ma, effect = "logarithm of odds ratio")
plot(bg_c)

We can also examine the compared models directly, by accessing them via $models. This is useful e.g. for effect_plot:

effect_plot(
  "Partial pooling, default prior" = bg_c$models$partial,
  "Full pooling, default prior" = bg_c$models$full) +
  theme(legend.position = "bottom")

You can use leave-one-out cross-validation to compare the models quantitatively. See the documentation of ?loocv for more details of the calculation.

a <- loocv(df_ma, pooling = "partial")
b <- loocv(df_ma, pooling = "full")
# a; b; # you can print out individual loocv() calculations
loo_compare(a, b) # ...but typically we compare them to each other

# Model for individual-level binary data

Let’s assume that you have access to the underlying individual-level data. In our example above we do not, but since we know the contingency table, we can create a data.frame with individual-level data. In practice this is not necessary, as baggr will do the appropriate conversions behind the scenes, but for our example we will convert the data by hand. Fortunately, a built-in function, binary_to_individual, can do the conversion for us:

df_ind <- binary_to_individual(df_yusuf, group = "trial")
head(df_ind)
#>    group treatment outcome
#> 1 Balcon         1       1
#> 2 Balcon         1       1
#> 3 Balcon         1       1
#> 4 Balcon         1       1
#> 5 Balcon         1       1
#> 6 Balcon         1       1

We can now use a logistic regression model:

bg_model_ind <- baggr(df_ind, model = "logit", effect = "logarithm of odds ratio")

Note: as in other cases, baggr() will detect the model appropriately (in this case by noting that outcome has only 0’s and 1’s) and notify the user, so in everyday use it is not necessary to set model = "logit". Alternatively, if we wrote baggr(df_yusuf, model = "logit"), the conversion to individual-level data would be done automatically behind the scenes.

> Note: The results of this vignette hosted on CRAN have very short run time (iter = 500, chains = 2) due to server constraints. We recommend replicating this example yourself.
If we denote the binary outcome as $$y$$ and treatment as $$z$$ (both indexed over individual units by $$i$$), under partial pooling the logistic regression model assumes that

$$y_i \sim \text{Bernoulli}(\text{logit}^{-1}[\mu_{\text{group}(i)} + \tau_{\text{group}(i)}z_i])$$

where the $$\mu_k$$’s and $$\tau_k$$’s are parameters to estimate. The former are the group-specific mean log odds of an event in untreated units. We are primarily interested in the latter, the group-specific treatment effects.

Under this formulation, the logarithm of the odds ratio of an event between the treatment and non-treatment arms is given by $$\tau_k$$. This means the modelling assumption is the same as in the model of summary data we saw above. Moreover, baggr’s default prior for $$\tau_k$$ is set in the same way for summary and individual-level data (you can see it as bg_model_ind$formatted_prior). One difference is that the $$\mu_k$$ parameters also have to be estimated. However, unless we are dealing with extreme values of $$\mu_k$$, i.e. with rare or common events (which we discuss below), this should not impact the modelled result.
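The equivalence between $$\tau_k$$ and $$\log(OR)$$ can be checked by hand in base R: the difference in log odds between arms equals the log odds ratio computed from the contingency table. A quick sketch using the Balcon counts from above:

```r
# Balcon trial: 14/56 events under treatment, 15/58 under control
p_trt  <- 14 / 56
p_ctrl <- 15 / 58

# Difference in log odds between arms (what tau_k measures in the logit model)...
tau <- qlogis(p_trt) - qlogis(p_ctrl)

# ...equals log(OR) computed from the cells, log((a*d)/(b*c)):
log_or <- log((14 * (58 - 15)) / ((56 - 14) * 15))

round(tau, 8) == round(log_or, 8)  # TRUE
```

Both expressions give the tau value of about -0.045 reported for Balcon in the prepare_ma() output above.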

Therefore, the result we get from the two models will be close (albeit with some numerical variation):

baggr_compare(bg_model_agg, bg_model_ind)
#>
#> Mean treatment effects:
#>              2.5%      mean    97.5%    median       sd
#> Model 1 -0.567656 -0.162055 0.242171 -0.164686 0.207575
#> Model 2 -0.533575 -0.154800 0.272966 -0.156448 0.216460
#>
#> SD for treatment effects:
#>               2.5%     mean    97.5%   median       sd
#> Model 1 0.00670192 0.244933 0.871726 0.181972 0.233034
#> Model 2 0.00641880 0.289877 1.472960 0.184668 0.344787
#>
#> Posterior predictive effects:
#>              2.5%      mean    97.5%    median       sd
#> Model 1 -1.015570 -0.169170 0.622956 -0.166887 0.393235
#> Model 2 -0.992551 -0.156065 0.725285 -0.154934 0.425051

There will be a difference in speed: in our example the logistic model has to work with 1101 rows of data, while the summary-level model only had 7 data points. Therefore, in “standard” cases you are better off summarising your data and using the faster aggregate data model.

## When to use summary and when to use individual-level data

To sum up:

• If you have no covariates and no particular priors on proportion of events in control groups, you are better off using a summary-level model. We will show how you can summarise your data in the next section.
• If your data includes covariates or events are rare, consider the logistic model. We will show why below.

Note: as shown above, you can only create input data for the logistic model from the a, b/n1, c, d/n2 columns. You cannot do it from OR data alone, as the probability of an event in the untreated group is then unknown.
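To see why, consider a quick sketch (with hypothetical counts, not from the Yusuf data): two very different contingency tables can produce exactly the same OR, so the OR alone cannot recover the individual-level outcomes:

```r
# odds ratio from a 2x2 contingency table
or <- function(a, b, c, d) (a * d) / (b * c)

# 10% baseline event rate vs 50% baseline event rate...
or(1, 9, 1, 9)  # treated 1/10 events, control 1/10 events
or(5, 5, 5, 5)  # treated 5/10 events, control 5/10 events

# ...both give OR = 1, yet imply very different event probabilities
or(1, 9, 1, 9) == or(5, 5, 5, 5)  # TRUE
```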

## How to summarise individual-level data

In the previous example we analysed individual-level data and suggested that typically it is sufficient to work with summarised data, e.g. to speed up modelling. Summarising is also essential as it allows us to better understand the inputs and use diagnostics such as L’Abbe plot.

The generic function for doing this in baggr is prepare_ma. It can be used with both continuous and binary data. In our case we can summarise df_ind to obtain either odds ratios or risk ratios:

prepare_ma(df_ind, effect = "logOR")
#>         group  a  n1  c  n2   b   d         tau        se
#> 1      Balcon 14  56 15  58  42  43 -0.04546237 0.4303029
#> 2      Barber 10  52 12  47  42  35 -0.36464311 0.4855042
#> 3     Clausen 18  66 19  64  48  45 -0.11860574 0.3888993
#> 4      Kahler  3  38  6  31  35  25 -1.02961942 0.7540368
#> 5     Ledwich  2  20  3  20  18  17 -0.46262352 0.9735052
#> 6 Multicentre 15 100 12  95  85  83  0.19933290 0.4169087
#> 7      Norris 21 226 24 228 205 204 -0.13842138 0.3147471
prepare_ma(df_ind, effect = "logRR")
#>         group  a  n1  c  n2   b   d         tau        se
#> 1      Balcon 14  56 15  58  42  43 -0.03390155 0.3209310
#> 2      Barber 10  52 12  47  42  35 -0.28341767 0.3779232
#> 3     Clausen 18  66 19  64  48  45 -0.08483888 0.2782276
#> 4      Kahler  3  38  6  31  35  25 -0.89674614 0.6643991
#> 5     Ledwich  2  20  3  20  18  17 -0.40546511 0.8563488
#> 6 Multicentre 15 100 12  95  85  83  0.17185026 0.3598245
#> 7      Norris 21 226 24 228 205 204 -0.12472076 0.2836811

In both cases the effect (OR or RR) and its SE were renamed to tau and se, so that the resulting data frame can be used directly as input to baggr().

For already-summarised data, you can use the same function to move between different effect measures. For example, you can take logOR measures, a <- prepare_ma(df_ind, effect = "logOR"), and convert them to logRR by using prepare_ma(a, effect = "logRR").
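As a sanity check, the logRR values above follow directly from the standard formulas $$\log(RR) = \log\frac{a/n_1}{c/n_2}$$ and $$SE = \sqrt{1/a - 1/n_1 + 1/c - 1/n_2}$$. A sketch in base R for the Balcon counts:

```r
a <- 14; n1 <- 56   # treatment arm: events, total
c <- 15; n2 <- 58   # control arm: events, total

# log risk ratio and its approximate standard error
tau <- log((a / n1) / (c / n2))
se  <- sqrt(1/a - 1/n1 + 1/c - 1/n2)

tau  # approx -0.0339, matching the prepare_ma(df_ind, effect = "logRR") row
se   # approx  0.3209
```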

# Rare events

Consider data on five fictional studies:

df_rare <- data.frame(group = paste("Study", LETTERS[1:5]),
a = c(0, 2, 1, 3, 1), c = c(2, 2, 3, 3, 5),
n1i = c(120, 300, 110, 250, 95),
n2i = c(120, 300, 110, 250, 95))

df_rare
#>     group a c n1i n2i
#> 1 Study A 0 2 120 120
#> 2 Study B 2 2 300 300
#> 3 Study C 1 3 110 110
#> 4 Study D 3 3 250 250
#> 5 Study E 1 5  95  95

In Study A you can see that no events occurred in the a column (the treatment arm).

We have shown above how prepare_ma() can be used to summarise the rates (if we work with individual-level data) and calculate/convert between log OR’s and log RR’s. It can also be used to apply corrections to event rates, which it does automatically:

df_rare_logor <- prepare_ma(df_rare, effect = "logOR")
#> Applied default rare event correction (0.25) in 1 study
# (the same call would also work on individual-level data)
df_rare_logor
#>     group    a     n1    c     n2      b      d       tau        se
#> 1 Study A 0.25 120.25 2.25 120.25 120.25 118.25 -2.213996 2.1121593
#> 2 Study B 2.00 300.00 2.00 300.00 298.00 298.00  0.000000 1.0033501
#> 3 Study C 1.00 110.00 3.00 110.00 109.00 107.00 -1.117131 1.1626923
#> 4 Study D 3.00 250.00 3.00 250.00 247.00 247.00  0.000000 0.8214401
#> 5 Study E 1.00  95.00 5.00  95.00  94.00  90.00 -1.652923 1.1053277

Note how the output of prepare_ma now differs from the original df_rare for “Study A”: a (default) value of 0.25 was added to each cell, because there were no events in the treatment arm, which would mean $$\log(OR)=\log(0)=-\infty$$. It is typical to correct for rare events when analysing summary-level data. A great overview of the subject, and of how different meta-analysis methods perform, is provided by Bradburn et al. (2007). You can modify the amount of correction by setting the rare_event_correction argument.
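The corrected values for “Study A” can be reproduced by hand. The exact bookkeeping is handled by prepare_ma(); the sketch below simply adds 0.25 to each of the four cells and applies the usual logOR formulas:

```r
# Study A after the default 0.25 correction to each cell:
a <- 0 + 0.25;   b <- 120 + 0.25   # treatment arm: events / non-events
c <- 2 + 0.25;   d <- 118 + 0.25   # control arm: events / non-events

tau <- log((a * d) / (b * c))      # log odds ratio, now finite
se  <- sqrt(1/a + 1/b + 1/c + 1/d)

tau  # approx -2.2140, as in the prepare_ma() output above
se   # approx  2.1122
```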

pma01 <- prepare_ma(df_rare, effect = "logOR",
rare_event_correction = 0.1)
pma1 <- prepare_ma(df_rare, effect = "logOR",
rare_event_correction = 1)
pma01
#>     group   a    n1   c    n2     b     d       tau        se
#> 1 Study A 0.1 120.1 2.1 120.1 120.1 118.1 -3.061315 3.2392876
#> 2 Study B 2.0 300.0 2.0 300.0 298.0 298.0  0.000000 1.0033501
#> 3 Study C 1.0 110.0 3.0 110.0 109.0 107.0 -1.117131 1.1626923
#> 4 Study D 3.0 250.0 3.0 250.0 247.0 247.0  0.000000 0.8214401
#> 5 Study E 1.0  95.0 5.0  95.0  94.0  90.0 -1.652923 1.1053277

It is important to compare models fitted with different rare_event_correction values:

bg_correction01 <- baggr(pma01, effect = "logOR")
bg_correction025 <- baggr(df_rare_logor, effect = "logOR")
bg_correction1 <- baggr(pma1, effect = "logOR")
bg_rare_ind <- baggr(df_rare, model = "logit", effect = "logOR")
bgc <- baggr_compare(
"Correct by .10" = bg_correction01,
"Correct by .25" = bg_correction025,
"Correct by 1.0" = bg_correction1,
"Individual data" = bg_rare_ind
)
bgc
#>
#> Mean treatment effects:
#>                     2.5%      mean    97.5%    median       sd
#> Correct by .10  -2.59871 -0.748196 1.678020 -0.761186 0.909677
#> Correct by .25  -2.55474 -0.750149 0.839365 -0.735306 0.809545
#> Correct by 1.0  -2.11097 -0.696460 0.661786 -0.695602 0.699410
#> Individual data -4.51523 -1.206720 0.881381 -1.011330 1.250510
#>
#> SD for treatment effects:
#>                      2.5%    mean   97.5%   median       sd
#> Correct by .10  0.0413799 1.23176 4.66633 0.820396 1.285520
#> Correct by .25  0.0373834 1.20315 5.93600 0.812855 1.409770
#> Correct by 1.0  0.0408039 0.81795 2.88117 0.608827 0.756367
#> Individual data 0.0533372 1.60195 7.10507 1.052940 1.740660
#>
#> Posterior predictive effects:
#>                     2.5%      mean   97.5%    median      sd
#> Correct by .10  -5.46942 -0.780607 3.22903 -0.691027 2.09810
#> Correct by .25  -4.77831 -0.638029 3.95780 -0.671797 2.17416
#> Correct by 1.0  -3.25777 -0.696030 1.78709 -0.693897 1.30541
#> Individual data -7.55936 -1.174400 3.44243 -0.954160 2.58096
plot(bgc) + theme(legend.position = "right")

> Note: The results of this vignette hosted on CRAN have very short run time (iter = 500, chains = 2) due to server constraints. We recommend replicating this example yourself.

Note in the result above that:

1. The heterogeneity estimate (“SD for treatment effects”) is impacted by the corrections
2. Mean treatment effect estimates are less prone to the correction, but there is a sizeable difference between the logistic model and the aggregate data models
3. As corrections tend to 0, the estimates of $$\log(OR)$$ for Study A are more diffuse, but there is still a big difference between the logistic model and the aggregate data model with correction of 0.1.

We will explore the last point in more detail soon. First, however, let us note that the issue with modelling rare events is not limited to zero events. As mentioned, $$\log(OR)$$ is only approximately Gaussian. The quality of the approximation will depend on the probabilities in all cells of the contingency table (which we estimate through a, b, c, d). Therefore, treating $$\log(OR)$$ as Gaussian might lead to different results in the individual-level vs summary-level models when events are rare. With the low counts in our example this will certainly be the case.

Let us generate similar data as above but now without the 0 cell:

df_rare <- data.frame(group = paste("Study", LETTERS[1:5]),
a = c(1, 2, 1, 3, 1), c = c(2, 2, 3, 3, 5),
n1i = c(120, 300, 110, 250, 95),
n2i = c(120, 300, 110, 250, 95))

df_rare_logor <- prepare_ma(df_rare, effect = "logOR")
bg_rare_agg <- baggr(df_rare_logor, effect = "logOR")
bg_rare_ind <- baggr(df_rare, effect = "logOR", model = "logit")

Let’s compare again, both on group level but also in terms of hypothetical treatment effect:

bgc <- baggr_compare(
"Summary-level (Rubin model on logOR)" = bg_rare_agg,
"Individual-level (logistic model)"    = bg_rare_ind
)
bgc
#>
#> Mean treatment effects:
#>                                          2.5%      mean    97.5%    median
#> Summary-level (Rubin model on logOR) -1.89549 -0.566422 0.866587 -0.591754
#> Individual-level (logistic model)    -1.94211 -0.640037 0.528789 -0.629074
#>                                            sd
#> Summary-level (Rubin model on logOR) 0.665582
#> Individual-level (logistic model)    0.660901
#>
#> SD for treatment effects:
#>                                           2.5%     mean   97.5%   median
#> Summary-level (Rubin model on logOR) 0.0280888 0.833691 3.06108 0.607280
#> Individual-level (logistic model)    0.0569244 0.850082 2.98758 0.629803
#>                                            sd
#> Summary-level (Rubin model on logOR) 0.831368
#> Individual-level (logistic model)    0.804258
#>
#> Posterior predictive effects:
#>                                          2.5%      mean   97.5%    median
#> Summary-level (Rubin model on logOR) -3.31014 -0.563576 2.29733 -0.582140
#> Individual-level (logistic model)    -2.71191 -0.598662 2.51609 -0.586422
#>                                           sd
#> Summary-level (Rubin model on logOR) 1.39320
#> Individual-level (logistic model)    1.24492
plot(bgc)

The results are still a bit different.

# Pooling baseline event rates

So far we’ve been using the logistic model with default priors (which are zero-centered and scaled to the data). Our justification was that the priors on treatment effects are the same for both the individual-level and aggregate-level data models. However, recall that the logistic model includes $$K$$ additional parameters: the log odds of an event in the control arms. (For simplicity we refer to them as baselines.) There are different ways in which we can assign them priors:

1. Default independent prior for each baseline, which is scaled to input data
2. Custom independent prior for baseline event rates, specified by using the prior_control argument when calling baggr, using the same syntax as for other priors, e.g. normal(0, 5).
3. Assuming hierarchical structure on baselines and specifying a prior using prior_control (which is now interpreted as hypermean, rather than $$K$$ independent priors) and prior_control_sd (hyper-SD for baseline parameters). To enable the hierarchical structure of baselines we need to specify pooling_control = "partial" (analogously to specifying pooling argument for treatment effects).

Choosing between the three is especially important in the case of rare events, as this will have a potentially large impact on the results. Let’s continue with the example from previous section and consider three models:

• default settings (already fitted above, bg_rare_ind),
• a strongly informative independent prior on baselines, centered on a 1% event rate: $$\mathcal{N}(-4.59, 2^2)$$, which translates to 95% prior probability that the baseline event rate is between about 0.02% and 34%,
• hierarchical prior on baselines (mean $$\mathcal{N}(-4.59, 1)$$, SD $$\mathcal{N}(0, 2^2)$$).
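The centering at a 1% event rate can be verified directly in base R, since $$\text{logit}(0.01) \approx -4.59$$:

```r
# inverse logit of the prior mean recovers the intended baseline rate
plogis(-4.59)  # approx 0.01, i.e. a 1% event rate

# and qlogis() gives the logit-scale value for any target rate
qlogis(0.01)   # approx -4.595
```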

We exaggerate our prior choices on purpose, to best illustrate the differences.

bg_rare_pool_bsl <- baggr(df_rare, effect = "logOR", model = "logit",
pooling_control = "partial",
prior_control = normal(-4.59, 1), prior_control_sd = normal(0, 2))
bg_rare_strong_prior <- baggr(df_rare, effect = "logOR", model = "logit",
prior_control = normal(-4.59, 10))
bgc <- baggr_compare(
"Rubin model" = bg_rare_agg,
"Independent N(0,10^2)" = bg_rare_ind,
# "Prior N(-4.59, 2^2)" = bg_rare_prior2,
"Hierarchical prior" = bg_rare_pool_bsl,
"Independent N(-4.59, 10^2)" = bg_rare_strong_prior
)

bgc
#>
#> Mean treatment effects:
#>                                2.5%      mean    97.5%    median       sd
#> Rubin model                -1.89549 -0.566422 0.866587 -0.591754 0.665582
#> Independent N(0,10^2)      -1.94211 -0.640037 0.528789 -0.629074 0.660901
#> Hierarchical prior         -1.81117 -0.636893 0.917046 -0.658369 0.671343
#> Independent N(-4.59, 10^2) -2.15037 -0.703816 0.501447 -0.670453 0.676682
#>
#> SD for treatment effects:
#>                                 2.5%     mean   97.5%   median       sd
#> Rubin model                0.0280888 0.833691 3.06108 0.607280 0.831368
#> Independent N(0,10^2)      0.0569244 0.850082 2.98758 0.629803 0.804258
#> Hierarchical prior         0.0134100 0.780184 3.18842 0.500430 0.851956
#> Independent N(-4.59, 10^2) 0.0419077 0.898302 3.22814 0.670258 0.822285
#>
#> Posterior predictive effects:
#>                                2.5%      mean   97.5%    median      sd
#> Rubin model                -3.24316 -0.571784 1.95272 -0.563410 1.28231
#> Independent N(0,10^2)      -2.58365 -0.458471 2.38100 -0.533112 1.41790
#> Hierarchical prior         -3.34352 -0.695390 2.08774 -0.648809 1.49058
#> Independent N(-4.59, 10^2) -3.98138 -0.795605 2.23039 -0.761720 1.39184
plot(bgc) + theme(legend.position = "right")

Note: The results of this vignette hosted on CRAN have very short run time (iter = 500, chains = 2) due to server constraints. We recommend replicating this example yourself. In particular for hierarchical prior on baselines, we recommend longer runs.

We can see that

1. The hierarchical prior offers a balance between the rare-event corrections on aggregate data and the logit models without any corrections.
2. The default prior choice can have different behaviour in tails (for specific studies) than the prior that explicitly moved the baseline to 1%.
3. The differences between different logistic models may be smaller than between all logistic models and the aggregate data model, but there are still differences. Remember that all of the logistic models above make exactly the same assumptions on treatment effect and they differ only in their priors concerning baseline probabilities.

Each of the models considered could now be compared using loocv() (see the example earlier in the vignette) and their out-of-sample performance can be assessed.

# Accounting for covariates: meta-regression and mixed models

This section provides only a short example. To learn more about meta-regression see e.g. Baker et al. (2009).

Two types of covariates may be present in your data:

• Covariates that vary at the group level. In that case, the model accounting for the group covariates is a meta-regression model. It can be fitted to summary-level data.
• Covariates that vary at the individual level. In that case, the model can be called a mixed model. It has to be fitted to individual-level data. Note that the first case can also be accounted for by using a mixed model.

In both cases we only need to name one extra argument to baggr: covariates=, followed by a vector of column names in the input data. You should remember that your treatment effect estimate will vary considerably not only with the choice of covariates, but also with the contrasts that you use.

# let's use the data.frame we created from Yusuf et al. earlier
df_ma$study_grouping      <- c(1,1,1,0,0,0,0)
df_ma$different_contrasts <- c(1,1,1,0,0,0,0) - .5
bg_cov1 <- baggr(df_ma, covariates = c("study_grouping"), effect = "logarithm of odds ratio")
baggr_compare("No covariate" = bg_model_agg,
"With covariates, 0-1 coding" = bg_cov1)
#> Compared models use different covariates. Hyperparameters are not directly comparable.
#>
#> Mean treatment effects:
#>                                  2.5%      mean    97.5%    median       sd
#> No covariate                -0.567656 -0.162055 0.242171 -0.164686 0.207575
#> With covariates, 0-1 coding -0.860247 -0.190052 0.394848 -0.171708 0.316325
#>
#> SD for treatment effects:
#>                                   2.5%     mean    97.5%   median       sd
#> No covariate                0.00670192 0.244933 0.871726 0.181972 0.233034
#> With covariates, 0-1 coding 0.00907804 0.309960 1.064040 0.230412 0.286191
#>
#> Posterior predictive effects:
#>                                  2.5%      mean    97.5%    median       sd
#> No covariate                -0.970039 -0.155947 0.634838 -0.159142 0.398915
#> With covariates, 0-1 coding -1.416150 -0.200869 0.722640 -0.174854 0.516518
#>
#> Mean (SD) for covariates:
#>         model       study_grouping
#>  No covariate 0.0373989 (0.464356)

To access the covariate estimates, see fixed_effects() function. They are also printed out by default:

bg_cov1
#> Model type: Rubin model with aggregate data
#> Pooling of effects: partial
#>
#> Aggregate treatment effect (on logarithm of odds ratio):
#> Hypermean (tau) =  -0.19 with 95% interval -0.86 to 0.39
#> Hyper-SD (sigma_tau) = 0.3100 with 95% interval 0.0091 to 1.0640
#> Total pooling (1 - I^2) = 0.78 with 95% interval 0.23 to 1.00
#>
#> Treatment effects on logarithm of odds ratio:
#>               mean   sd pooling
#> Balcon      -0.114 0.31    0.70
#> Barber      -0.202 0.33    0.74
#> Clausen     -0.142 0.29    0.67
#> Kahler      -0.311 0.43    0.84
#> Ledwich     -0.220 0.43    0.88
#> Multicentre -0.056 0.31    0.69
#> Norris      -0.149 0.26    0.61
#>
#> Covariate (fixed) effects on logarithm of odds ratio:
#>       [,1]
#> mean 0.037
#> sd   0.464

Note: in the example above we did not manually set priors for the $$\beta$$ coefficients. Users can do this by passing the argument prior_beta to baggr().

# References

Baker, W. L., C. Michael White, J. C. Cappelleri, J. Kluger, C. I. Coleman, and Health Outcomes, Policy, and Economics (HOPE) Collaborative Group. 2009. “Understanding Heterogeneity in Meta-Analysis: The Role of Meta-Regression.” International Journal of Clinical Practice 63 (10): 1426–34. https://doi.org/10.1111/j.1742-1241.2009.02168.x.
Bradburn, Michael J., Jonathan J. Deeks, Jesse A. Berlin, and A. Russell Localio. 2007. “Much Ado about Nothing: A Comparison of the Performance of Meta-Analytical Methods with Rare Events.” Statistics in Medicine 26 (1): 53–77. https://doi.org/10.1002/sim.2528.
Deeks, Jonathan J. 2002. “Issues in the Selection of a Summary Statistic for Meta-Analysis of Clinical Trials with Binary Outcomes.” Statistics in Medicine 21 (11): 1575–1600. https://doi.org/10.1002/sim.1188.
L’Abbé, K. A., A. S. Detsky, and K. O’Rourke. 1987. “Meta-Analysis in Clinical Research.” Annals of Internal Medicine 107 (2): 224–33. https://doi.org/10.7326/0003-4819-107-2-224.
Rubin, Donald B. 1974. “Estimating Causal Effects of Treatments in Randomized and Nonrandomized Studies.” Journal of Educational Psychology. https://doi.org/10.1037/h0037350.
Yusuf, S., R. Peto, J. Lewis, R. Collins, and P. Sleight. 1985. “Beta Blockade During and After Myocardial Infarction: An Overview of the Randomized Trials.” Progress in Cardiovascular Diseases 27 (5): 335–71. https://doi.org/10.1016/s0033-0620(85)80003-7.