Chapter 17 Miscellaneous

17.1 Adaptive Randomization Probabilities and Interim Analysis

This report simulates an FDA-style adaptive clinical trial in R, combining response-adaptive randomization, a single interim sample-size re-estimation, and stopping rules to estimate the treatment effect and empirical power across 500 simulated trials.
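The core mechanic of response-adaptive randomization can be sketched in a few lines of R. The sketch below simulates a single trial in which, after an equal-allocation burn-in, the probability of assigning a new patient to treatment is updated from Beta posteriors on binary response rates (a Thompson-sampling-style rule). All settings here (`p_true`, `n_max`, `burn_in`) are illustrative, not the chapter's actual design, and the interim re-estimation and stopping rules are omitted for brevity.

```r
# One adaptive trial with response-adaptive randomization (RAR).
# Parameters are illustrative, not the chapter's actual settings.
set.seed(1)
p_true <- c(ctrl = 0.30, trt = 0.45)   # assumed true response rates
n_max  <- 200; burn_in <- 40
arm <- integer(n_max); y <- integer(n_max)
for (i in seq_len(n_max)) {
  if (i <= burn_in) {
    prob_trt <- 0.5                    # equal allocation during burn-in
  } else {
    # Thompson-style allocation: P(treatment better) under Beta(1,1) priors
    f <- factor(arm[1:(i - 1)], levels = 0:1)
    s <- tapply(y[1:(i - 1)], f, sum)  # responders per arm
    n <- table(f)                      # patients per arm
    draw_c <- rbeta(1000, 1 + s["0"], 1 + n["0"] - s["0"])
    draw_t <- rbeta(1000, 1 + s["1"], 1 + n["1"] - s["1"])
    prob_trt <- mean(draw_t > draw_c)
  }
  arm[i] <- rbinom(1, 1, prob_trt)     # 1 = treatment, 0 = control
  y[i]   <- rbinom(1, 1, p_true[arm[i] + 1])
}
# Estimated treatment effect (risk difference) at trial end
rd_hat <- mean(y[arm == 1]) - mean(y[arm == 0])
```

Wrapping this loop in `replicate(500, ...)` and adding a stopping rule yields the empirical power estimate described above.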

17.2 BE Simulation

This BE simulation script uses Monte Carlo–generated concentration–time profiles with between-subject and residual variability to estimate the probability that AUC and Cmax jointly meet the 90% CI bioequivalence limits (0.80–1.25) across scenarios of true GMR, sample size, and uncertainty in variability assumptions.
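A minimal version of this Monte Carlo idea, for a single endpoint, reduces a 2x2 crossover to per-subject log differences and checks whether the 90% CI for the geometric mean ratio (GMR) falls within 0.80-1.25. The defaults below (`true_gmr`, `cv_w`, `n`) are illustrative assumptions, not the chapter's scenarios, and the joint AUC/Cmax criterion is simplified to one endpoint.

```r
# Monte Carlo estimate of the probability of demonstrating BE
# for one endpoint, using per-subject log(test) - log(ref) differences.
set.seed(42)
be_power <- function(true_gmr = 0.95, cv_w = 0.25, n = 24, nsim = 2000) {
  sd_d <- sqrt(2 * log(cv_w^2 + 1))   # SD of within-subject log difference
  pass <- replicate(nsim, {
    d  <- rnorm(n, mean = log(true_gmr), sd = sd_d)
    ci <- mean(d) + c(-1, 1) * qt(0.95, n - 1) * sd(d) / sqrt(n)
    ci[1] > log(0.80) && ci[2] < log(1.25)   # 90% CI inside BE limits
  })
  mean(pass)
}
be_power()   # empirical probability of demonstrating BE
```

Looping `be_power()` over grids of `true_gmr`, `cv_w`, and `n` reproduces the scenario structure described above; the full workflow additionally simulates concentration-time profiles rather than log differences directly.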

17.3 Real-World Study

This real-world study workflow simulates an observational dataset, summarizes baseline covariates, applies propensity-score matching (with balance diagnostics) and an IPTW sensitivity analysis, and then fits Cox and Kaplan–Meier survival models. In the simulated data, age and comorbidity strongly increase event risk, while the exposure group shows no statistically significant association with time-to-event after adjustment.
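The IPTW sensitivity analysis in this workflow can be sketched in base R plus the `survival` package: fit a logistic propensity model, form inverse-probability weights, and fit a weighted Cox model with a robust variance. The variable names (`age`, `comorbid`, `exposed`) and generating model below are illustrative; the chapter's actual covariates differ, but by construction the outcome here does not depend on exposure, mirroring the null exposure finding.

```r
# Sketch: IPTW Cox analysis on a simulated observational dataset.
library(survival)
set.seed(7)
n <- 1000
age      <- rnorm(n, 60, 10)
comorbid <- rbinom(n, 1, 0.3)
exposed  <- rbinom(n, 1, plogis(-2 + 0.03 * age + 0.8 * comorbid))
# Outcome depends on covariates but (by construction) not on exposure
time   <- rexp(n, rate = exp(-6 + 0.05 * age + 0.7 * comorbid))
status <- as.integer(time < 5)
time   <- pmin(time, 5)                        # administrative censoring

ps <- glm(exposed ~ age + comorbid, family = binomial)$fitted.values
w  <- ifelse(exposed == 1, 1 / ps, 1 / (1 - ps))   # ATE weights

fit <- coxph(Surv(time, status) ~ exposed, weights = w, robust = TRUE)
summary(fit)$coefficients
```

The matching arm of the analysis would replace the weighting step with nearest-neighbor matching on `ps` (e.g., via the MatchIt package) followed by balance diagnostics on standardized mean differences.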

17.4 ISE/ISS pooling

This ISE/ISS simulation project generates multi-trial patient-level efficacy and safety data, computes study-specific treatment effects, and applies REML random-effects meta-analysis to estimate pooled efficacy (mean difference ≈ 1.44) and safety risk difference (≈ 2.4%) across studies with negligible heterogeneity.
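The pooling step can be illustrated with a closed-form random-effects calculation. The chapter fits REML (e.g., via `metafor::rma(yi, vi, method = "REML")`); the base-R sketch below uses the DerSimonian–Laird moment estimator instead, as a dependency-free stand-in that agrees closely when heterogeneity is negligible. The `yi`/`vi` values are made up for illustration, not the chapter's data.

```r
# Sketch: random-effects pooling of study-specific mean differences
# (DerSimonian-Laird tau^2; the chapter uses REML via metafor::rma).
yi <- c(1.3, 1.6, 1.4, 1.5)        # study mean differences
vi <- c(0.05, 0.08, 0.06, 0.07)    # their sampling variances
wf <- 1 / vi                        # fixed-effect weights
Q  <- sum(wf * (yi - sum(wf * yi) / sum(wf))^2)   # Cochran's Q
df <- length(yi) - 1
tau2 <- max(0, (Q - df) / (sum(wf) - sum(wf^2) / sum(wf)))  # DL tau^2
wr <- 1 / (vi + tau2)               # random-effects weights
mu <- sum(wr * yi) / sum(wr)        # pooled mean difference
se <- sqrt(1 / sum(wr))
c(pooled = mu, lower = mu - 1.96 * se, upper = mu + 1.96 * se, tau2 = tau2)
```

The same machinery applied to study-level risk differences gives the pooled safety estimate; with `tau2` near zero, the random-effects result collapses toward the fixed-effect pooled value.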

17.5 Diagnostic test

This diagnostic test workflow demonstrates how to compute and interpret 2×2 table metrics (sensitivity, specificity, predictive values, likelihood ratios), update post-test probability via Bayes’ theorem, and compare binary and multi-class ROC/AUC performance—including paired, unpaired, and partial AUC tests—using multiple R packages.
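The 2x2 metrics and the Bayesian post-test update reduce to a few lines of arithmetic: convert the pre-test probability to odds, multiply by the likelihood ratio, and convert back. The cell counts below are made up for illustration.

```r
# Sketch: 2x2 diagnostic metrics and post-test probability via Bayes' theorem.
tp <- 90; fn <- 10; fp <- 30; tn <- 170   # illustrative counts
sens   <- tp / (tp + fn)            # sensitivity = 0.90
spec   <- tn / (tn + fp)            # specificity = 0.85
lr_pos <- sens / (1 - spec)         # positive likelihood ratio = 6
lr_neg <- (1 - sens) / spec         # negative likelihood ratio

# Post-test odds = pre-test odds * LR
post_test_prob <- function(prev, lr) {
  pre_odds  <- prev / (1 - prev)
  post_odds <- pre_odds * lr
  post_odds / (1 + post_odds)
}
post_test_prob(0.10, lr_pos)   # disease probability after a positive test: 0.4
```

The ROC/AUC comparisons in the workflow build on the same counts, sweeping the decision threshold to trace sensitivity against 1 - specificity.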

17.6 Statistical modeling

This section surveys end-to-end statistical modeling initiatives spanning adaptive trial simulation, bioequivalence and power analysis, real-world causal inference, ISE/ISS meta-analysis, Bayesian joint modeling, and machine-learning-enhanced treatment effect estimation, delivered as reproducible R-based pipelines and regulatory-ready analytical solutions across clinical research domains.

17.7 Principles of Sample Size, Interim Data Reports, and Randomization

This excerpt outlines the clinical statistician’s end-to-end role in a trial: shaping statistical methodology and trial duration, designing randomization (with strict allocation concealment in double-blind settings), monitoring accumulating data, producing interim reports for DSMBs, and supporting final regulatory reporting (e.g., FDA/EMA).

It summarizes what a typical protocol must cover: background and rationale, prior-phase evidence, research questions and hypotheses, design features, enrollment and eligibility criteria, treatment and data collection procedures, database management, a statistical plan, and safety monitoring.

It then reviews sample size determination as a pre-specified design commitment driven by hypotheses, Type I/II error, target effect size, variability, and distributional assumptions (normal approximations for continuous/percent-change endpoints and Poisson modeling for event rates per patient-years), including formulas for mean comparisons and numerical approaches for complication-rate planning.

Finally, it introduces interim analysis approaches—classical group sequential testing and Bayesian sequential procedures—highlighting how repeated looks require error-rate control and can enable early stopping, and it reinforces that equal allocation improves power under common assumptions while concealment of assignments is essential to prevent bias.
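The normal-approximation formula for comparing two means mentioned above, n per group = 2(z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2, is a one-liner in R. The effect size and SD in the example call are illustrative, not values from the excerpt.

```r
# Sketch: per-group sample size for a two-sample mean comparison,
# n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2
n_per_group <- function(delta, sigma, alpha = 0.05, power = 0.80) {
  z <- qnorm(1 - alpha / 2) + qnorm(power)   # critical-value sum
  ceiling(2 * z^2 * sigma^2 / delta^2)       # round up to whole patients
}
n_per_group(delta = 5, sigma = 12)   # detect a 5-unit difference, SD 12: 91
```

For event-rate endpoints, the Poisson analogue replaces sigma^2 with the rate itself and the denominator with the rate difference, typically solved numerically as the excerpt notes for complication-rate planning.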