The Coxmos R package includes the following basic analysis blocks:
Cross-validation and Modeling for High-Dimensional Data: The user can use the Coxmos package to select the optimal parameters for survival models in high-dimensional datasets. The package provides tools for estimating the best values for the parameters.
Comparing Classical and High-Dimensional Survival Models: After obtaining multiple survival models, the user can compare them to determine which one gives the best results. The package includes several functions for comparing the models.
Interpreting Results: After selecting the best model or models, the user can interpret the results. The package includes several functions for understanding the impact of the original variables on survival prediction, even when working with (s)PLS methods.
Predicting New Patients: Finally, if a new dataset of patients is available, the user can use the model to make predictions for the new patients and compare the variables against the model coefficients to estimate the patients’ risk of an event.
Coxmos can be installed from GitHub using devtools:

install.packages("devtools")
devtools::install_github("BiostatOmics/Coxmos", build_vignettes = TRUE)
To run the analyses in this vignette, you’ll first need to load Coxmos:
# load Coxmos
library(Coxmos)
In addition, we’ll require some additional packages for data formatting. Most of them are declared as Coxmos dependencies, so they will already be installed on your system.
To generate plots, we make use of the RColorConesa R package. After installing it:
# install.packages("RColorConesa")
library(RColorConesa)
The Coxmos pipeline requires two matrices as input. The first must contain the features under study, and the second must be a matrix with two columns, time and event, holding the survival information.
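As a minimal sketch of the expected input structure (hypothetical patient IDs and values, not the real dataset loaded below), a multi-omic X is a named list with one matrix per block, and Y is a data.frame with the time and event columns:

# Hypothetical illustration of the expected input format (not the real dataset)
X_demo <- list(mirna     = matrix(rnorm(5 * 3), nrow = 5,
                                  dimnames = list(paste0("patient_", 1:5), paste0("miR_", 1:3))),
               proteomic = matrix(rnorm(5 * 2), nrow = 5,
                                  dimnames = list(paste0("patient_", 1:5), paste0("prot_", 1:2))))
Y_demo <- data.frame(time  = c(825, 723, 785, 571, 879),       #follow-up time
                     event = c(TRUE, TRUE, FALSE, TRUE, FALSE), #TRUE/1 = event, FALSE/0 = censored
                     row.names = paste0("patient_", 1:5))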
After quality control, the data contain expression values for miRNAs and protein information. The data can be loaded as follows:
# load dataset
data("X_multiomic", package = "Coxmos")
data("Y_multiomic", package = "Coxmos")
X <- X_multiomic
Y <- Y_multiomic
rm(X_multiomic, Y_multiomic)
These files contain a list of two blocks and a data.frame object. Our toy example has a total of 150 observations, 642 miRNAs and 369 proteins:
|  | hsa-let-7a-2-3p | hsa-let-7a-3p | hsa-let-7a-5p | hsa-let-7b-3p | hsa-let-7b-5p |
|---|---|---|---|---|---|
| TCGA-A2-A0SV-01A | 9.344533 | 228.00660 | 56661.51 | 155.1192 | 57564.19 |
| TCGA-A2-A0YT-01A | 25.031847 | 134.54618 | 56991.26 | 103.2564 | 64478.91 |
| TCGA-BH-A1F0-01A | 34.725664 | 122.98673 | 68110.05 | 195.3319 | 37871.23 |
| TCGA-B6-A0IK-01A | 44.962897 | 163.66495 | 89585.87 | 183.4486 | 121455.58 |
| TCGA-E2-A1LE-01A | 15.392269 | 87.54353 | 188649.57 | 109.6699 | 54892.68 |
|  | 7529 | 7531 | 7534 | 1978 | 7158 |
|---|---|---|---|---|---|
| TCGA-A2-A0SV-01A | 0.025273 | -0.030661 | 0.458350 | 1.369000 | -0.40781 |
| TCGA-A2-A0YT-01A | 0.348210 | 0.347870 | 0.891570 | -0.188540 | -0.13896 |
| TCGA-BH-A1F0-01A | 0.132340 | -0.348210 | 0.078923 | 0.227890 | -0.49063 |
| TCGA-B6-A0IK-01A | 0.352670 | 0.376670 | -0.285750 | -0.380980 | -1.43080 |
| TCGA-E2-A1LE-01A | 0.110140 | 0.094990 | -0.159960 | -0.098148 | 0.40160 |
|  | time | event |
|---|---|---|
| TCGA-A2-A0SV-01A | 825 | TRUE |
| TCGA-A2-A0YT-01A | 723 | TRUE |
| TCGA-BH-A1F0-01A | 785 | TRUE |
| TCGA-B6-A0IK-01A | 571 | TRUE |
| TCGA-E2-A1LE-01A | 879 | TRUE |
As can be observed, clinical variables that were factors have been transformed into binary/dummy variables.
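As an illustration of that transformation (a hypothetical clinical factor, not part of this dataset), base R can expand factor levels into 0/1 indicator columns before they are passed to Coxmos:

# Hypothetical clinical factor converted to 0/1 dummy columns
clinical <- data.frame(stage = factor(c("I", "II", "III", "II", "I")))
clinical_dummies <- model.matrix(~ stage - 1, data = clinical) #one indicator column per level
head(clinical_dummies)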
ggp_density.event <- plot_events(Y = Y,
                                 categories = c("Censored","Death"), #name for FALSE/0 (Censored) and TRUE/1 (Event)
                                 y.text = "Number of observations",
                                 roundTo = 0.5,
                                 max.breaks = 15)

ggp_density.event$plot
After loading the data, it may be of interest for the user to perform a survival analysis in order to examine the relationship between explanatory variables and the outcome. However, traditional methods are only applicable for low-dimensional datasets. To address this issue, we have developed a set of functions that make use of (s)PLS techniques in combination with Cox analysis for the analysis of high-dimensional datasets.
Coxmos provides the following methodologies for multi-omic approaches. More information on each approach can be found in the help section of the corresponding function. The function names for each methodology are:

- Single-block approaches: sb.splsicox(), cv.isb.splsicox(), sb.splsdrcox() and cv.isb.splsdrcox().
- Multi-block approaches: mb.splsdrcox() and mb.splsdacox().

To perform a survival analysis with our example, we will use methodologies that can work with high-dimensional data. These are the methodologies that use sPLS techniques.
The first thing we are going to do is split our data into a train and a test set, using 70% of the data for training and 30% for testing. We will use the createDataPartition function from the R package caret and set a seed for reproducible results.
set.seed(123)
index_train <- caret::createDataPartition(Y$event,
                                          p = .7, #70 %
                                          list = FALSE,
                                          times = 1)

X_train <- list()
X_test <- list()
for(omic in names(X)){
  X_train[[omic]] <- X[[omic]][index_train,,drop=F]
  X_test[[omic]] <- X[[omic]][-index_train,,drop=F]
}

Y_train <- Y[index_train,]
Y_test <- Y[-index_train,]
EPV per block:
EPV <- getEPV.mb(X_train, Y_train)

for(b in names(X_train)){
  message(paste0("EPV = ", round(EPV[[b]], 4), ", for block ", b))
}
#> EPV = 0.0794, for block mirna
#> EPV = 0.1382, for block proteomic
In order to perform survival analysis with our high-dimensional data, we have implemented a series of methods that use techniques such as Cox Elastic Net, which selects a smaller number of features by applying a penalty, or partial least squares (PLS), which reduces the dimensionality of the input data.
To evaluate the performance of these methods, we have implemented cross-validation, which allows us to estimate the optimal parameters for future predictions based on prediction metrics such as AIC, C-INDEX, I.BRIER and AUC. By default, the AUC metric (area under the ROC curve) is used with the “cenROC” evaluator, as it has provided the best results in our tests. However, multiple AUC evaluators can be used: “risksetROC”, “survivalROC”, “cenROC”, “nsROC”, “smoothROCtime_C” and “smoothROCtime_I”.
Furthermore, a mix of multiple metrics can be used to obtain the optimal model. The user has to set a weight for each metric, and all of them will be considered when computing the optimal model (the weights must sum to 1).
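As a sketch of a mixed-metric optimization (using the cv.sb.splsicox() call shown later in this vignette; the weight values are illustrative and omitted parameters keep their defaults), the C-index and AUC can be weighted equally:

# Illustrative only: optimize with equal weight on C-index and AUC (weights must sum to 1)
cv.mixed_metrics <- cv.sb.splsicox(X = X_train, Y = Y_train,
                                   max.ncomp = 2, penalty.list = c(0.5, 0.9),
                                   n_run = 2, k_folds = 5,
                                   w_AIC = 0, w_c.index = 0.5, w_AUC = 0.5, w_BRIER = 0,
                                   pred.method = "cenROC", pred.attr = "mean",
                                   MIN_EPV = 5, PARALLEL = F, verbose = F, seed = 123)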
In addition, we have included options for normalizing data, filtering variables, and setting the minimum EPV, as well as specific parameters for each method, such as the alpha value for Cox Elastic Net and the number of components for PLS models. Overall, our cross-validation methodology allows us to effectively analyze high-dimensional survival data and optimize our model parameters.
As the classical and single-omic PLS approaches cannot be applied to multi-omic data, several multi-omic methods have been designed. The methods are divided into two categories: single-block and multi-block approaches.
But first, we establish the center and scale for each block:
x.center = c(mirna = T, proteomic = T) #if vector, must be named
x.scale = c(mirna = F, proteomic = F) #if vector, must be named
In our pursuit of addressing the challenges posed by high-dimensional multi-omic data, we have developed SB.sPLS-ICOX, an innovative algorithm that combines the strengths of sPLS-ICOX and integrative analysis. SB.sPLS-ICOX employs a single-block approach, applying sPLS-ICOX individually to each omic, resulting in dimensionality reduction within each dataset. By constructing weights based on univariate cox models, we capture survival information during the reduction process. The reduced omic datasets are then integrated to create a comprehensive survival model using their PLS components. Cross-validation is utilized to determine the optimal number of components and the penalty for variable selection, ensuring robust model optimization.
cv.sb.splsicox_res <- cv.sb.splsicox(X = X_train, Y = Y_train,
                                     max.ncomp = 2, penalty.list = c(0.5,0.9),
                                     n_run = 2, k_folds = 5,
                                     x.center = x.center, x.scale = x.scale,
                                     remove_near_zero_variance = T, remove_zero_variance = F, toKeep.zv = NULL,
                                     remove_variance_at_fold_level = F,
                                     remove_non_significant_models = F, alpha = 0.05,
                                     w_AIC = 0, w_c.index = 0, w_AUC = 1, w_BRIER = 0, times = NULL, max_time_points = 15,
                                     MIN_AUC_INCREASE = 0.01, MIN_AUC = 0.8, MIN_COMP_TO_CHECK = 3,
                                     pred.attr = "mean", pred.method = "cenROC", fast_mode = F,
                                     MIN_EPV = 5, return_models = F, remove_non_significant = F, returnData = F,
                                     PARALLEL = F, verbose = F, seed = 123)
cv.sb.splsicox_res
cv.sb.splsicox_res$plot_AUC
We will generate an SB.sPLS-ICOX model with the optimal number of components and penalty based on the results obtained from the cross-validation.
sb.splsicox_model <- sb.splsicox(X = X_train, Y = Y_train,
                                 n.comp = 1, #cv.sb.splsicox_res$opt.comp,
                                 penalty = 0.9, #cv.sb.splsicox_res$opt.penalty,
                                 x.center = x.center, x.scale = x.scale,
                                 remove_near_zero_variance = T, remove_zero_variance = F, toKeep.zv = NULL,
                                 remove_non_significant = F,
                                 alpha = 0.05, MIN_EPV = 5,
                                 returnData = T, verbose = F)
#> As we are working with a multiblock approach with 2 blocks, a maximum of 5 components could be use.

sb.splsicox_model
#> The method used is SB.sPLS-ICOX.
#> Survival model:
#> coef exp(coef) se(coef) robust se z
#> comp_1_mirna 0.1077358 1.1137534 0.01865296 0.01661403 6.4846281
#> comp_1_proteomic -0.1416963 0.8678848 0.23679243 0.24770216 -0.5720431
#> Pr(>|z|)
#> comp_1_mirna 8.895092e-11
#> comp_1_proteomic 5.672928e-01
In case some components get a P-value greater than the significance cutoff, we can drop them with the parameter “remove_non_significant”.
sb.splsicox_model <- sb.splsicox(X = X_train, Y = Y_train,
                                 n.comp = 1, #cv.sb.splsicox_res$opt.comp,
                                 penalty = 0.9, #cv.sb.splsicox_res$opt.penalty,
                                 x.center = x.center, x.scale = x.scale,
                                 remove_near_zero_variance = T, remove_zero_variance = F, toKeep.zv = NULL,
                                 remove_non_significant = T,
                                 alpha = 0.05, MIN_EPV = 5,
                                 returnData = T, verbose = F)
#> As we are working with a multiblock approach with 2 blocks, a maximum of 5 components could be use.

sb.splsicox_model
#> The method used is SB.sPLS-ICOX.
#> A total of 1 variables have been removed due to non-significance filter inside cox model.
#> Survival model:
#> coef exp(coef) se(coef) robust se z Pr(>|z|)
#> comp_1_mirna 0.09954254 1.104665 0.01257348 0.01573434 6.326453 2.508607e-10
In this case, we optimized each omic/block to use the same number of components. However, there is another methodology, “isb.splsicox”, that allows selecting a different number of components per block. Its cross-validation returns the final model automatically.
isb.splsicox_model <- cv.isb.splsicox(X = X_train, Y = Y_train,
                                      max.ncomp = 2, penalty.list = c(0.5, 0.9),
                                      n_run = 2, k_folds = 5,
                                      x.center = x.center, x.scale = x.scale,
                                      remove_near_zero_variance = T, remove_zero_variance = F, toKeep.zv = NULL,
                                      remove_variance_at_fold_level = F,
                                      remove_non_significant_models = F, alpha = 0.05,
                                      w_AIC = 0, w_c.index = 0, w_AUC = 1, w_BRIER = 0, times = NULL, max_time_points = 15,
                                      MIN_AUC_INCREASE = 0.01, MIN_AUC = 0.8, MIN_COMP_TO_CHECK = 3,
                                      pred.attr = "mean", pred.method = "cenROC", fast_mode = F,
                                      MIN_EPV = 5, return_models = F, remove_non_significant = T,
                                      PARALLEL = F, verbose = F, seed = 123)
isb.splsicox_model
We have developed also SB.sPLS-DRCOX, a comprehensive algorithm that employs the individual sPLS-DRCOX approach, executing the algorithm on each omic dataset independently to achieve dimensionality reduction. By integrating the resulting components, a unified survival model is constructed, capturing the collective information from all omics. Similar to SB.sPLS-ICOX, cross-validation is employed to identify the optimal number of components and penalty for variable selection, ensuring robust model optimization.
cv.sb.splsdrcox_res <- cv.sb.splsdrcox(X = X_train, Y = Y_train,
                                       max.ncomp = 2, penalty.list = c(0.5,0.9),
                                       n_run = 2, k_folds = 10,
                                       x.center = x.center, x.scale = x.scale,
                                       #y.center = FALSE, y.scale = FALSE,
                                       remove_near_zero_variance = T, remove_zero_variance = F, toKeep.zv = NULL,
                                       remove_variance_at_fold_level = F,
                                       remove_non_significant_models = F, alpha = 0.05,
                                       w_AIC = 0, w_c.index = 0, w_AUC = 1, w_BRIER = 0, times = NULL, max_time_points = 15,
                                       MIN_AUC_INCREASE = 0.01, MIN_AUC = 0.8, MIN_COMP_TO_CHECK = 3,
                                       pred.attr = "mean", pred.method = "cenROC", fast_mode = F,
                                       MIN_EPV = 5, return_models = F, remove_non_significant = F, returnData = F,
                                       PARALLEL = F, verbose = F, seed = 123)
cv.sb.splsdrcox_res
We will generate an SB.sPLS-DRCOX model with the optimal number of components and penalty based on the results obtained from the cross-validation.
sb.splsdrcox_model <- sb.splsdrcox(X = X_train,
                                   Y = Y_train,
                                   n.comp = 2, #cv.sb.splsdrcox_res$opt.comp,
                                   penalty = 0.5, #cv.sb.splsdrcox_res$opt.penalty,
                                   x.center = x.center, x.scale = x.scale,
                                   remove_near_zero_variance = T, remove_zero_variance = F, toKeep.zv = NULL,
                                   remove_non_significant = T, alpha = 0.05, MIN_EPV = 5,
                                   returnData = T, verbose = F)
#> As we are working with a multiblock approach with 2 blocks, a maximum of 5 components could be use.

sb.splsdrcox_model
#> The method used is SB.sPLS-DRCOX.
#> Survival model:
#> coef exp(coef) se(coef) robust se z
#> comp_2_mirna 1.416264e-06 1.000001 5.232575e-07 4.781115e-07 2.962205
#> comp_1_proteomic 9.238047e-01 2.518856 1.594113e-01 1.432416e-01 6.449275
#> comp_2_proteomic 4.563257e-01 1.578264 1.212955e-01 1.157985e-01 3.940686
#> Pr(>|z|)
#> comp_2_mirna 3.054440e-03
#> comp_1_proteomic 1.123867e-10
#> comp_2_proteomic 8.124890e-05
As before, we optimized each omic/block to use the same number of components. However, there is another methodology, “cv.isb.splsdrcox”, that allows selecting a different number of components and penalty per block. Its cross-validation returns the model automatically.
isb.splsdrcox_model <- cv.isb.splsdrcox(X = X_train, Y = Y_train,
                                        max.ncomp = 2, penalty.list = c(0.5,0.9),
                                        n_run = 2, k_folds = 10,
                                        x.center = x.center, x.scale = x.scale,
                                        remove_near_zero_variance = T, remove_zero_variance = F, toKeep.zv = NULL,
                                        remove_variance_at_fold_level = F,
                                        remove_non_significant_models = F, alpha = 0.05,
                                        w_AIC = 0, w_c.index = 0, w_AUC = 1, w_BRIER = 0, times = NULL, max_time_points = 15,
                                        MIN_AUC_INCREASE = 0.01, MIN_AUC = 0.8, MIN_COMP_TO_CHECK = 3,
                                        pred.attr = "mean", pred.method = "cenROC", fast_mode = F,
                                        MIN_EPV = 5, return_models = F, remove_non_significant = T,
                                        PARALLEL = F, verbose = F, seed = 123)
isb.splsdrcox_model
To enhance the versatility and performance of our survival analysis methods, we have developed full multi-block survival models by combining the sPLS-DRCOX methodology with the multiblock sPLS functions from the mixOmics R package. This allows a full integration of the omic blocks, with components selected across all omics toward the same objective.
In the creation of these methods, we utilized a heuristic variable selection approach along with the mixOmics algorithms for hyperparameter selection. The penalty is determined based on a vector of the number of variables to test. Users have the flexibility to specify a value for selecting a fixed number of variables. Alternatively, by setting the value to NULL, the heuristic process automatically selects the optimal number of variables.
Our method empowers users to define the minimum and maximum number of variables to consider, as well as the number of cutpoints to test between these limits. Through an iterative process, the algorithm identifies the optimal number of variables and further explores the performance of existing cutpoints compared to the selected optimal value.
This integration of sPLS-DRCOX and multiblock sPLS provides researchers with a powerful tool for conducting comprehensive multivariate survival analysis.
cv.mb.splsdrcox_res <- cv.mb.splsdrcox(X = X_train, Y = Y_train,
                                       max.ncomp = 2, vector = NULL, #NULL - autodetection
                                       MIN_NVAR = 10, MAX_NVAR = 1000, n.cut_points = 10, EVAL_METHOD = "AUC",
                                       n_run = 2, k_folds = 4,
                                       x.center = x.center, x.scale = x.scale,
                                       remove_near_zero_variance = T, remove_zero_variance = F, toKeep.zv = NULL,
                                       remove_variance_at_fold_level = F,
                                       remove_non_significant_models = F, alpha = 0.05,
                                       w_AIC = 0, w_c.index = 0, w_AUC = 1, w_BRIER = 0, times = NULL, max_time_points = 15,
                                       MIN_AUC_INCREASE = 0.01, MIN_AUC = 0.8, MIN_COMP_TO_CHECK = 3,
                                       pred.attr = "mean", pred.method = "cenROC", fast_mode = F,
                                       MIN_EPV = 5, return_models = F, remove_non_significant = F, returnData = F,
                                       PARALLEL = F, verbose = F, seed = 123)
cv.mb.splsdrcox_res
After running the cross-validation, the full model can be obtained by passing the optimized number of components and the list of the number of variables per omic.
mb.splsdrcox_model <- mb.splsdrcox(X = X_train, Y = Y_train,
                                   n.comp = 2, #cv.mb.splsdrcox_res$opt.comp,
                                   vector = list("mirna" = 326, "proteomic" = 369), #cv.mb.splsdrcox_res$opt.nvar,
                                   x.center = x.center, x.scale = x.scale,
                                   remove_near_zero_variance = T, remove_zero_variance = T, toKeep.zv = NULL,
                                   remove_non_significant = T, alpha = 0.05,
                                   MIN_AUC_INCREASE = 0.01,
                                   pred.method = "cenROC", max.iter = 200,
                                   times = NULL, max_time_points = 15,
                                   MIN_EPV = 5, returnData = T, verbose = F)
#> As we are working with a multiblock approach with 2 blocks, a maximum of 5 components could be use.

mb.splsdrcox_model
#> The method used is MB.sPLS-DRCOX.
#> A total of 3 variables have been removed due to non-significance filter inside cox model.
#> Survival model:
#> coef exp(coef) se(coef) robust se z
#> comp_2_mirna 3.711288e-07 1 3.431286e-07 3.145653e-07 1.179815
#> Pr(>|z|)
#> comp_2_mirna 0.2380739
In some cases, if no component is significant, the model will keep at least the one with the lowest P-value.
We also have extended our methodological repertoire with the development of MB.sPLS-DACOX. Building upon the foundation of sPLS-DACOX, this novel approach incorporates the powerful MultiBlock functions from the MixOmics R package, further enhancing its capabilities and performance.
# run cv.mb.splsdacox
cv.mb.splsdacox_res <- cv.mb.splsdacox(X = X_train, Y = Y_train,
                                       max.ncomp = 2, vector = NULL, #NULL - autodetection
                                       n_run = 2, k_folds = 4,
                                       x.center = x.center, x.scale = x.scale,
                                       remove_near_zero_variance = T, remove_zero_variance = F, toKeep.zv = NULL,
                                       remove_variance_at_fold_level = F,
                                       remove_non_significant_models = F, alpha = 0.05,
                                       w_AIC = 0, w_c.index = 0, w_AUC = 1, w_BRIER = 0, times = NULL, max_time_points = 15,
                                       MIN_AUC_INCREASE = 0.01, MIN_AUC = 0.8, MIN_COMP_TO_CHECK = 3,
                                       pred.attr = "mean", pred.method = "cenROC", fast_mode = F,
                                       MIN_EPV = 5, return_models = F, remove_non_significant = F, returnData = F,
                                       PARALLEL = F, verbose = F, seed = 123)
cv.mb.splsdacox_res
After running the cross-validation, the full model can be obtained by passing the optimized number of components and the list of the number of variables per omic.
mb.splsdacox_model <- mb.splsdacox(X = X_train, Y = Y_train,
                                   n.comp = 2, #cv.mb.splsdacox_res$opt.comp,
                                   vector = list("mirna" = 326, "proteomic" = 10), #cv.mb.splsdacox_res$opt.nvar,
                                   x.center = x.center, x.scale = x.scale,
                                   remove_near_zero_variance = T, remove_zero_variance = T, toKeep.zv = NULL,
                                   remove_non_significant = T, alpha = 0.05,
                                   MIN_AUC_INCREASE = 0.01,
                                   pred.method = "cenROC", max.iter = 200,
                                   times = NULL, max_time_points = 15,
                                   MIN_EPV = 5, returnData = T, verbose = F)
#> As we are working with a multiblock approach with 2 blocks, a maximum of 5 components could be use.

mb.splsdacox_model
#> The method used is MB.sPLS-DACOX.
#> A total of 3 variables have been removed due to non-significance filter inside cox model.
#> Survival model:
#> coef exp(coef) se(coef) robust se z Pr(>|z|)
#> comp_2_proteomic 0.08603784 1.089848 0.07477169 0.07753615 1.109648 0.2671507
Next, we will analyze the results obtained from the multiple models to see which one obtains the best predictions based on our data. To do this, we will use the test set that has not been used for the training of any model.
Initially, we will compare the area under the curve (AUC) for each of the methods according to the evaluator we want. The function is developed to simultaneously evaluate multiple evaluators; however, we will continue working with a single evaluator, in this case “cenROC”. On the other hand, we must provide a list of the different models as well as the X and Y test sets we want to evaluate.
When evaluating survival model results, we must indicate at which time points we want to perform the evaluation. As we already specified a NULL value for the “times” variable in the cross-validation, we are going to let the algorithm compute them again.
<- list("SB.sPLS-ICOX" = sb.splsicox_model,
lst_models #"iSB.sPLS-ICOX" = isb.splsicox_model,
"SB.sPLS-DRCOX" = sb.splsdrcox_model,
#"iSB.sPLS-DRCOX" = isb.splsdrcox_model,
"MB.sPLS-DRCOX" = mb.splsdrcox_model,
"MB.sPLS-DACOX" = mb.splsdacox_model)
<- eval_Coxmos_models(lst_models = lst_models,
eval_results X_test = X_test, Y_test = Y_test,
pred.method = "cenROC",
pred.attr = "mean",
times = NULL, max_time_points = 15,
PARALLEL = F)
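If you prefer to evaluate at fixed follow-up times rather than letting the algorithm choose them, a vector can be passed to the “times” parameter (a sketch; the time points below, in days, are illustrative):

# Illustrative: evaluate the models at fixed time points (in days) instead of automatic ones
eval_results_fixed_times <- eval_Coxmos_models(lst_models = lst_models,
                                               X_test = X_test, Y_test = Y_test,
                                               pred.method = "cenROC",
                                               pred.attr = "mean",
                                               times = seq(365, 1825, by = 365),
                                               PARALLEL = F)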
In case you prefer to test multiple AUC evaluators, a list of them can be provided by using the library “purrr”.
<- c(cenROC = "cenROC", risksetROC = "risksetROC")
lst_evaluators
<- purrr::map(lst_evaluators, ~eval_Coxmos_models(lst_models = lst_models,
eval_results X_test = X_test, Y_test = Y_test,
pred.method = .,
pred.attr = "mean",
times = NULL,
max_time_points = 15,
PARALLEL = F))
We can print the results in the console, where we can see, for each of the selected methods, the training and evaluation times, as well as their AIC, C-index, Brier score and AUC metrics and the time points at which they were evaluated.
eval_results
#> Evaluation performed for methods: SB.sPLS-ICOX, SB.sPLS-DRCOX, MB.sPLS-DRCOX, MB.sPLS-DACOX.
#> SB.sPLS-ICOX:
#> training.time: 0.1067
#> evaluating.time: 0.0033
#> AIC: 310.0196
#> c.index: 0.8146
#> time: 116, 369.786, 623.572, 877.358, 1131.143, 1384.929, 1638.715, 1892.5, 2146.286, 2400.072, 2653.858, 2907.643, 3161.429, 3415.215, 3669
#> AUC: 0.5267
#> brier_time: 116, 186, 322, 377, 385, 454, 477, 489, 506, 518, 522, 606, 614, 616, 678, 703, 723, 747, 759, 825, 847, 904, 1015, 1048, 1150, 1174, 1189, 1233, 1275, 1324, 1508, 1528, 1611, 1694, 1742, 1793, 1935, 1972, 2632, 2854, 2866, 3001, 3472, 3669
#> I.Brier: 0.24817
#>
#> SB.sPLS-DRCOX:
#> training.time: 0.0323
#> evaluating.time: 0.0032
#> AIC: 323.6618
#> c.index: 0.7997
#> time: 116, 369.786, 623.572, 877.358, 1131.143, 1384.929, 1638.715, 1892.5, 2146.286, 2400.072, 2653.858, 2907.643, 3161.429, 3415.215, 3669
#> AUC: 0.64498
#> brier_time: 116, 186, 322, 377, 385, 454, 477, 489, 506, 518, 522, 606, 614, 616, 678, 703, 723, 747, 759, 825, 847, 904, 1015, 1048, 1150, 1174, 1189, 1233, 1275, 1324, 1508, 1528, 1611, 1694, 1742, 1793, 1935, 1972, 2632, 2854, 2866, 3001, 3472, 3669
#> I.Brier: 0.22888
#>
#> MB.sPLS-DRCOX:
#> training.time: 0.0316
#> evaluating.time: 0.0068
#> AIC: 368.6329
#> c.index: 0.5319
#> time: 116, 369.786, 623.572, 877.358, 1131.143, 1384.929, 1638.715, 1892.5, 2146.286, 2400.072, 2653.858, 2907.643, 3161.429, 3415.215, 3669
#> AUC: 0.48048
#> brier_time: 116, 186, 322, 377, 385, 454, 477, 489, 506, 518, 522, 606, 614, 616, 678, 703, 723, 747, 759, 825, 847, 904, 1015, 1048, 1150, 1174, 1189, 1233, 1275, 1324, 1508, 1528, 1611, 1694, 1742, 1793, 1935, 1972, 2632, 2854, 2866, 3001, 3472, 3669
#> I.Brier: 0.17298
#>
#> MB.sPLS-DACOX:
#> training.time: 0.0407
#> evaluating.time: 0.0036
#> AIC: 368.9653
#> c.index: 0.5307
#> time: 116, 369.786, 623.572, 877.358, 1131.143, 1384.929, 1638.715, 1892.5, 2146.286, 2400.072, 2653.858, 2907.643, 3161.429, 3415.215, 3669
#> AUC: 0.55949
#> brier_time: 116, 186, 322, 377, 385, 454, 477, 489, 506, 518, 522, 606, 614, 616, 678, 703, 723, 747, 759, 825, 847, 904, 1015, 1048, 1150, 1174, 1189, 1233, 1275, 1324, 1508, 1528, 1611, 1694, 1742, 1793, 1935, 1972, 2632, 2854, 2866, 3001, 3472, 3669
#> I.Brier: 0.17008
#>
However, we can also obtain graphical results comparing each method over time, as well as their average scores, using the function “plot_evaluation” (or “plot_evaluation.list” if multiple evaluators have been tested). The user can choose to plot the AUC at the prediction time points or the Brier score. When the Brier score is used, instead of the integrative Brier score, the mean or median is used for the boxplots (a plot_evaluation parameter).
lst_eval_results <- plot_evaluation(eval_results, evaluation = "AUC")
lst_eval_results_brier <- plot_evaluation(eval_results, evaluation = "Brier")
After performing the evaluation, we obtain a list in R that contains two new lists. The first of these refers to the evaluation over time for each of the methods used, as well as a variant where the average value of each of them is shown. On the other hand, we can compare the mean results of each method using: T-test, Wilcoxon test, ANOVA or Kruskal-Wallis.
lst_eval_results$lst_plots$lineplot.mean
lst_eval_results$lst_plot_comparisons$t.test
# lst_eval_results$cenROC$lst_plots$lineplot.mean
# lst_eval_results$cenROC$lst_plot_comparisons$t.test
Another possible comparison is related to the computation times for cross-validation, model creation, and evaluation. In this case, cross-validations and methods are loaded.
lst_models_time <- list(#cv.sb.splsicox_res,
                        sb.splsicox_model,
                        #isb.splsicox_model,
                        #cv.sb.splsdrcox_res,
                        sb.splsdrcox_model,
                        #isb.splsdrcox_model,
                        #cv.mb.splsdrcox_res,
                        mb.splsdrcox_model,
                        #cv.mb.splsdacox_res,
                        mb.splsdacox_model,
                        eval_results)

ggp_time <- plot_time.list(lst_models_time)
ggp_time
Following the evaluation, we have selected the SB.sPLS-DRCOX methodology as the most suitable model for our data. We will now study and interpret its results based on the original variables or the latent variables used. In this case, we will examine some graphs of the model.
A forest plot can be obtained as the first graph, using the survminer R package. However, the function has been restructured so that either a Coxmos class model or a list of Coxmos models can be passed, using the plot_forest() or plot_forest.list() function, respectively.
#lst_forest_plot <- plot_forest.list(lst_models)
lst_forest_plot <- plot_forest(lst_models$`SB.sPLS-DRCOX`)
#lst_forest_plot$`SB.sPLS-DRCOX`
lst_forest_plot
The following graph is related to one of the assumptions of Cox models, the proportional hazards assumption.
In a Cox proportional hazards model, the proportional hazards assumption states that the hazard ratio (the risk of experiencing the event of interest) is constant over time for a given set of predictor variables. This means that the effect of the predictors on the hazard ratio does not change over time. This assumption is important because it allows for the interpretation of the model’s coefficients as measures of the effect of the predictors on the hazard ratio. Violations of the proportional hazards assumption can occur if the effect of the predictors on the hazard ratio changes over time or if there is an interaction between the predictors and time. In these cases, the coefficients of the model may not accurately reflect the effect of the predictors on the hazard ratio and the results of the model may not be reliable.
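For reference, outside of Coxmos this assumption is commonly checked on a standard Cox fit with survival::cox.zph(); a minimal sketch on the built-in lung dataset (not on the Coxmos models of this vignette):

library(survival)
# Proportional-hazards check on a plain Cox model using the built-in `lung` dataset
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
cox.zph(fit) # a significant P-value for a term suggests the assumption is violated for it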
In this way, to visualize and check if the assumption is violated, the function plot_proportionalHazard.list() or plot_proportionalHazard() can be called, depending on whether a list of models or a specific model is to be evaluated.
#lst_ph_ggplot <- plot_proportionalHazard.list(lst_models)
lst_ph_ggplot <- plot_proportionalHazard(lst_models$`SB.sPLS-DRCOX`)
Variables or components with a significant P-Value indicate that the assumption is being violated.
#lst_ph_ggplot$`SB.sPLS-DRCOX`
lst_ph_ggplot
Another type of graph implemented for all models, whether they belong to the classical branch or to PLS-based models, is the visualization of observations by event according to the values predicted by the Cox models.
The coxph function from the R package “survival” allows several types of predictions to be made on a Cox model, which we use in our function (see the sketch after this list). These are:
Linear predictors “lp”: are the expected values of the response variable (in this case, time until the event of interest) for each observation, based on the Cox model. These values can be calculated from the mean of the predictor variable values and the constant term of the model.
Risk of experiencing an event “risk”: is a measure of the probability that an event will occur for each observation, based on the Cox model. The risk value can be calculated from the predictor values and the constant term of the model.
Number of events expected to be experienced over time with these specific individual characteristics “expected”: are the expected number of events that would occur for each observation, based on the Cox model and a specified period of time.
Terms: are the variables included in the Cox model.
Survival probability “survival”: is the probability that an individual will not experience the event of interest during a specified period of time, based on the Cox model. The survival probability can be calculated from the predictor values and the constant term of the model.
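These prediction types correspond to the “type” argument of predict() for coxph models; a minimal sketch on the built-in lung dataset (not on the Coxmos models above):

library(survival)
# The prediction types described above, on a plain coxph model (built-in `lung` data)
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
head(predict(fit, type = "lp"))        #linear predictors
head(predict(fit, type = "risk"))      #relative risk, exp(lp)
head(predict(fit, type = "expected"))  #expected number of events
head(predict(fit, type = "terms"))     #per-variable contributions
head(predict(fit, type = "survival"))  #survival probability at each subject's time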
According to the predicted value, we can classify the observations along their possible values and see their distribution for each of the different models.
#density.plots.lp <- plot_cox.event.list(lst_models, type = "lp")
density.plots.lp <- plot_cox.event(lst_models$`SB.sPLS-DRCOX`, type = "lp")

density.plots.lp$plot.density
density.plots.lp$plot.histogram
For those models based on PLS components, the PLS can be studied in terms of loadings/scores. In order to get the plots, the function plot_PLS_Coxmos() has been developed, where the user can specify whether to see the “scores”, “loadings” or a “biplot” for a pair of latent variables. By default, if no factor is given, samples are grouped by event.
ggp_scores <- plot_PLS_Coxmos(model = lst_models$`SB.sPLS-DRCOX`,
                              comp = c(1,2), mode = "scores")
#> The model has only 1 component

ggp_scores$plot_block
#> $mirna
#>
#> $proteomic
ggp_loadings <- plot_PLS_Coxmos(model = lst_models$`SB.sPLS-DRCOX`,
                                comp = c(1,2), mode = "loadings",
                                top = 10)
#> The model has only 1 component

ggp_loadings$plot_block
#> $mirna
#>
#> $proteomic
ggp_biplot <- plot_PLS_Coxmos(model = lst_models$`SB.sPLS-DRCOX`,
                              comp = c(1,2), mode = "biplot",
                              top = 15,
                              only_top = T)
#> The model has only 1 component

ggp_biplot$plot_block
#> $mirna
#>
#> $proteomic
When a PLS-Cox model is computed, the final survival model is based on the PLS latent variables. Although those new variables can explain the survival, in order to understand the disease, new coefficients can be computed in terms of the original variables.
Coxmos is able to transfer the component beta coefficients to the original variables by decomposing the coefficients using the weights of the variables in those latent variables.
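Conceptually (a sketch only, not the exact Coxmos implementation), each original variable receives the sum, over components, of its weight in that component multiplied by the component's Cox coefficient; all values below are hypothetical:

# Conceptual sketch of the pseudo-beta decomposition (hypothetical numbers)
W <- matrix(c(0.8, 0.1, 0.3,
              0.2, 0.9, 0.4), ncol = 2)   #3 hypothetical variables x 2 latent components
beta_comp <- c(0.92, 0.46)                #hypothetical Cox coefficients of the components
pseudo_beta <- as.vector(W %*% beta_comp) #one coefficient per original variable
pseudo_beta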
However, before studying the original variables, if a PLS model was computed, Coxmos also provides plots showing what percentage of the AUC is explained by each component, in order to see which components or latent variables are more related to the observations’ survival.
The percentage of AUC explained per component can be calculated for TRAIN or TEST data. Train data was used to compute the best model, so the sum of variables/components will give a better LP (linear predictor) performance. However, when test data are used, it could happen that a specific variable or component is a better predictor than the full model.
variable_auc_results <- eval_Coxmos_model_per_variable(model = lst_models$`SB.sPLS-DRCOX`,
                                                       X_test = lst_models$`SB.sPLS-DRCOX`$X_input,
                                                       Y_test = lst_models$`SB.sPLS-DRCOX`$Y_input,
                                                       pred.method = "cenROC", pred.attr = "mean",
                                                       times = NULL, max_time_points = 15,
                                                       PARALLEL = FALSE)

variable_auc_plot_train <- plot_evaluation(variable_auc_results, evaluation = "AUC")

variable_auc_plot_train$lst_plots$lineplot.mean
The plot shows the AUC for the full model (called LP), and then, the AUC per each variable (for classical methods) or components (for PLS methods).
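The same per-component evaluation can be run on the held-out test set instead of the training data (a sketch, reusing the X_test and Y_test objects created earlier):

# Same evaluation, but on the held-out test set instead of the training data
variable_auc_results_test <- eval_Coxmos_model_per_variable(model = lst_models$`SB.sPLS-DRCOX`,
                                                            X_test = X_test, Y_test = Y_test,
                                                            pred.method = "cenROC", pred.attr = "mean",
                                                            times = NULL, max_time_points = 15,
                                                            PARALLEL = FALSE)

variable_auc_plot_test <- plot_evaluation(variable_auc_results_test, evaluation = "AUC")

variable_auc_plot_test$lst_plots$lineplot.mean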
In order to improve the interpretability of a PLS model, a subset of the most influential variables can be selected. In this example, the top 20 variables are chosen. Additionally, non-significant PLS components are excluded by setting the “onlySig” parameter to “TRUE”.
# ggp.simulated_beta <- plot_pseudobeta.list(lst_models = lst_models,
#                                            error.bar = T, onlySig = T, alpha = 0.05,
#                                            zero.rm = T, auto.limits = T, top = 20,
#                                            show_percentage = T, size_percentage = 2, verbose = F)

ggp.simulated_beta <- plot_pseudobeta(model = lst_models$`SB.sPLS-DRCOX`,
                                      error.bar = T, onlySig = T, alpha = 0.05,
                                      zero.rm = T, auto.limits = T, top = 20,
                                      show_percentage = T, size_percentage = 2)
The SB.sPLS-DRCOX model was computed using a total of 2 components. Although these components were classified as dangerous for the observations (hazard ratios greater than one), certain variables within the components may still have a protective effect, depending on their individual weights.
The following plot illustrates the pseudo-beta coefficients for the original variables in the SB.sPLS-DRCOX model. As only the top 20 variables are shown, in some cases the percentage of the total linear predictor value they account for is also displayed. To view the complete model, all variables would need to be included in the plot by assigning the value NULL to the “top” parameter.
ggp.simulated_beta$plot
#> $mirna
#>
#> $proteomic
For a more intuitive understanding of the model, the user can also employ the getAutoKM.list() or getAutoKM() functions to generate Kaplan-Meier curves. These functions allow the user to view the KM curve for the entire model, a specific component of a PLS model, or individual variables.
To run the full Kaplan-Meier model, the “type” parameter must be set to “LP” (linear predictors). This means that the linear predictor value for each patient will be used to divide the groups (in this case, the score value multiplied by the Cox coefficient). The “surv_cutpoint” function from the R package “survminer” is used to determine the optimal cut-point. Other parameters are not used in this specific method.
# LST_KM_RES_LP <- getAutoKM.list(type = "LP",
# lst_models = lst_models,
# comp = 1:4,
# top = 10,
# ori_data = T,
# BREAKTIME = NULL,
# only_sig = T, alpha = 0.05)
LST_KM_RES_LP <- getAutoKM(type = "LP",
                           model = lst_models$`SB.sPLS-DRCOX`,
                           comp = 1:4,
                           top = 10,
                           ori_data = T,
                           BREAKTIME = NULL,
                           only_sig = T, alpha = 0.05)
As a result, the Kaplan-Meier curve can be plotted.
LST_KM_RES_LP$LST_PLOTS$LP
After generating a Kaplan-Meier curve for the model, the cutoff value, which is used to divide the observations into two groups, can be used to evaluate how the test data is classified. The vector of cutoffs for multiple models, when getAutoKM.list() is applied, can be retrieved by using the function getCutoffAutoKM.list() and passing the output of getAutoKM.list() as a parameter.
Once the vector is obtained, the function getTestKM.list() or getTestKM() can be run with the list of models, the X test data, the Y test data, the list of cutoffs or a single value, and the desired number of breaks for the new Kaplan-Meier plot.
A log-rank test will be displayed to determine if the chosen cutoff is an effective way to split the data into groups with higher and lower risk.
# lst_cutoff <- getCutoffAutoKM.list(LST_KM_RES_LP)
# LST_KM_TEST_LP <- getTestKM.list(lst_models = lst_models,
# X_test = X_test, Y_test = Y_test,
# type = "LP",
# BREAKTIME = NULL, n.breaks = 20,
# lst_cutoff = lst_cutoff)
lst_cutoff <- getCutoffAutoKM(LST_KM_RES_LP)

LST_KM_TEST_LP <- getTestKM(model = lst_models$`SB.sPLS-DRCOX`,
                            X_test = X_test, Y_test = Y_test,
                            type = "LP",
                            BREAKTIME = NULL, n.breaks = 20,
                            cutoff = lst_cutoff)
LST_KM_TEST_LP
To generate a Kaplan-Meier curve for a specific component, the “type” parameter must be set to “COMP” (component). This means that the linear predictor is computed using only one component at a time to split the groups. In this case, the “comp” parameter can be used to specify which component should be computed (if multiple components, each one will be computed separately).
# LST_KM_RES_COMP <- getAutoKM.list(type = "COMP",
# lst_models = lst_models,
# comp = 1:4,
# top = 10,
# ori_data = T,
# BREAKTIME = NULL,
# only_sig = T, alpha = 0.05)
LST_KM_RES_COMP <- getAutoKM(type = "COMP",
                             model = lst_models$`SB.sPLS-DRCOX`,
                             comp = 1:4,
                             top = 10,
                             ori_data = T,
                             BREAKTIME = NULL,
                             only_sig = T, alpha = 0.05)
LST_KM_RES_COMP$LST_PLOTS$mirna$comp_2
LST_KM_RES_COMP$LST_PLOTS$proteomic$comp_1
LST_KM_RES_COMP$LST_PLOTS$proteomic$comp_2
# lst_cutoff <- getCutoffAutoKM.list(LST_KM_RES_COMP)
# LST_KM_TEST_COMP <- getTestKM.list(lst_models = lst_models,
# X_test = X_test, Y_test = Y_test,
# type = "COMP",
# BREAKTIME = NULL, n.breaks = 20,
# lst_cutoff = lst_cutoff)
lst_cutoff <- getCutoffAutoKM(LST_KM_RES_COMP)

LST_KM_TEST_COMP <- getTestKM(model = lst_models$`SB.sPLS-DRCOX`,
                              X_test = X_test, Y_test = Y_test,
                              type = "COMP",
                              BREAKTIME = NULL, n.breaks = 20,
                              cutoff = lst_cutoff)
LST_KM_TEST_COMP$comp_2_mirna
LST_KM_TEST_COMP$comp_1_proteomic
LST_KM_TEST_COMP$comp_2_proteomic