Funnily enough, mixed effects regression was the first type of regression analysis I learned (I was handed a huge, complex data set with no prior R experience as an analysis task). I compiled a collection of papers, links, and books that I used to teach myself. My goal is to provide the links along with a description of why they were useful and why I needed that information, so you can follow along and self-teach as well.

*I am also continually updating it as new sources arise or links get broken.*

To begin, the following are MUST READ books, papers, and tutorials. They really set the foundation for understanding and building multilevel models and what they capture beyond ordinary least squares regression (they aren’t necessarily in reading order, and links may need updating).

**Gelman and Hill:** Data Analysis Using Regression and Multilevel/Hierarchical Models
- Andrew Gelman is a must. Although the techniques and packages he uses aren’t necessarily what I ended up using in my analysis flow, he explains the intuition behind MLM very well.
*to download, click the “get” link at the top of the page*

**Nezlek 2008:** Multilevel modeling for social and personality psychology
- Super easy to understand introduction to MLM using the HLM package (which I don’t use), but it helped me understand the formal equations more intuitively.

**Singmann & Kellen:** An Introduction to Mixed Models for Experimental Psychology
- A much more accessible/concise introduction than some of the other sources.

**Barr 2013**: Random effects structure for confirmatory hypothesis testing: Keep it maximal
- Setting the random effects structure can be a confusing and complicated task, and an incorrect structure can lead to inflated false positives. Barr provides extremely helpful advice on how to go about this. I find myself rereading this every time I encounter a new analysis problem.
- The main paper only discusses random main effects; in this paper he adds interactions.
- Also important to read [link, pdf version]. The same authors directly compare ANOVA with mixed regression and clarify misunderstandings about both.
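To make Barr’s advice concrete, here is a sketch of what “maximal” means in lme4-style formula syntax. The variable names (`accuracy`, `condition`, `subject`, `item`) are hypothetical, and these are plain R formula objects, not fitted models:

```r
# Hypothetical design: "condition" varies within subjects and within items.

# Intercepts-only structure (what Barr 2013 argues is often anti-conservative):
f_minimal <- accuracy ~ condition + (1 | subject) + (1 | item)

# "Maximal" structure: random intercepts AND random condition slopes for
# every grouping factor the manipulation varies within:
f_maximal <- accuracy ~ condition +
  (1 + condition | subject) + (1 + condition | item)
```

The difference is that the maximal model lets the effect of `condition` itself vary across subjects and items, rather than only letting their baselines vary.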

**Baayen 2017:** The Cave of Shadows: Addressing the human factor with generalized additive mixed models
- Argues against maximal random effects structures (Barr 2013) and provides alternative practices. I was only recently told about this paper and Bates 2015 below, so maybe the answer isn’t as simple as “fit the maximal model possible.”

**Bates 2015:** Parsimonious mixed models
- Also argues against maximal models and also provides alternative approaches.

**Brauer & Curtin 2017:** Linear Mixed-Effects Models and the Analysis of Nonindependent Data: A Unified Framework to Analyze Categorical and Continuous Independent Variables that Vary Within-Subjects and/or Within-Items
- Very accessible tutorial. What I like most is that it puts the R code next to the more formal equations, so you can learn how the two notations relate.

**Matuschek 2017:** Balancing Type I error and power in linear mixed models
- This paper unpacks both viewpoints on maximal models. Here’s a Twitter thread on the matter.

**Harrison 2018:** A brief introduction to mixed effects modelling and multi-model inference in ecology
- Newer paper giving an introduction to mixed modeling in ecology; however, it also provides some good general tips for the modeling decisions that will come up when constructing your own.

**Baayen 2008:** Mixed-effects modeling with crossed random effects for subjects and items.
- Great paper on the necessity of including stimuli as random effects along with subjects.
- Judd 2012 made the same point, noting that this is typically neglected in social psychology.
- Westfall, Nichols, & Yarkoni made the same point for fMRI analysis.

**West:** Linear Mixed Models: A Practical Guide Using Statistical Software
- This book provides a good overview of using R to run these models. Their model-building methods are a source I rely on in exploratory analyses.
- Also, here is a related book by the same author on the actual R functions for mixed effects modeling.

**Winter:** A very basic tutorial for performing linear mixed effects analyses
- This is a quick tutorial, but it explains the concepts very clearly. The language is simplified, which made it excellent when I was first learning.
- Here is part 1 of the tutorial

**Knowles:** Getting Started with Multilevel modeling in R
- Another basic tutorial, but it was instrumental in helping me learn and explore the models.
- Here is the second part of the tutorial.

**Nakagawa 2012:** Nested by design: model fitting and interpretation in a mixed model era
- Great paper explaining how to deal with nested/crossed designs in mixed models and generally explains all the components of a mixed model.

**Bolker 2009:** Generalized linear mixed models: a practical guide for ecology and evolution
- Great accessible paper on the mechanics/technical side of the models.

**Schielzeth 2009:** Conclusions beyond support: overconfident estimates in mixed models
- You need random slopes, not just random intercepts, to protect against anti-conservative fixed effect estimates.

- Formulae in R: Anova and other models, mixed and fixed
- Great resource for R model formula syntax.

**Howell** – Mixed models for missing data with repeated measures
- Part 1
- Part 2
- This series really helped me understand the relationship between ANOVA analyses and mixed effects models with regard to assumptions (e.g., compound symmetry).

**Hajduk 2017:** Introduction to mixed models
- Great tutorial with R code.

- Choosing R packages for mixed effects modeling based on the car you drive
- Excellent and accessible comparison of the many different R packages for running mixed models.

**Freeman:** Visualization of hierarchical models
- I’m a visual person, and this tutorial shows an easy visual way to conceptualize mixed models (and links it to the equations, which I find harder to digest).

- Jake Westfall has an introductory reading list that may be helpful here too.

The following are extra materials that are highly relevant but that I didn’t interact with much. They may (or may not) be useful.

Finally, here are some pages that go over some of the basic questions I had during implementation. I will try to cluster them into overarching topics.

**Understanding the analysis**

Of course, the first search I did was to understand mixed regression in general. What does it do? Why do I need it? How is it different from other analyses?

**Confidence Intervals**

After running your regression, how do you get confidence intervals for your betas? Typically you use `confint(model)`, or, if you want Wald (asymptotic and fast but less precise) confidence intervals, `confint(model, method = "Wald")`. However, here are some links comparing confidence intervals across other packages, and on the difference between prediction intervals and confidence intervals.
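For intuition about what a Wald interval is: it’s just the estimate plus or minus a normal quantile times the standard error. A base-R sketch with made-up numbers standing in for a fixed effect from a model summary:

```r
# Hypothetical fixed-effect estimate and standard error from a model summary
beta <- 0.42
se   <- 0.10

# 95% Wald confidence interval: estimate +/- z * SE
z  <- qnorm(0.975)                       # ~1.96
ci <- c(beta - z * se, beta + z * se)    # roughly 0.224 to 0.616
```

Profile intervals (the `confint(model)` default for lmer) instead search the likelihood surface directly, which is slower but more accurate when the sampling distribution isn’t well approximated by a normal.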

**(Restricted) Maximum Likelihood Estimation**

An important aspect of understanding these models is how the parameters are estimated (hint: not with least squares). They use Maximum Likelihood (ML) or Restricted Maximum Likelihood (REML).

## Inference

I understand the reasons for the lack of p values in these models, but I come from traditional labs, so I had to learn how to draw p-value-based inferences from them. There are many methods for this: likelihood ratio tests (LRT) for model comparison, lmerTest for both ANOVA-style and predictor-style inference, bootstrapping, etc.
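The logic of the LRT is to compare a model with and without the predictor of interest. To keep things self-contained, here is a sketch using simulated data and plain `lm`/`anova` from base R; with lmer models you would fit both with `REML = FALSE` and compare them with `anova()` in the same way:

```r
set.seed(1)
# Simulated toy data: y depends on x
x <- rnorm(100)
y <- 0.5 * x + rnorm(100)

m0 <- lm(y ~ 1)   # null model: no predictor
m1 <- lm(y ~ x)   # full model: adds the predictor of interest

# Model comparison; with lmer models, anova(m0, m1) performs the LRT
comparison <- anova(m0, m1)
```

The key point, which recurs in several of the links below, is that the p value belongs to the *comparison* between two nested models, not to a coefficient in isolation, so what counts as the right null model matters.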

- Three ways to get parameter-specific p-values from lmer
- Getting P value with mixed effect with lme4 package
- How to obtain the p-value (check significance) of an effect in a lme4 mixed model?
- Significance Testing in Multilevel Regression
- How to get an “overall” p-value and effect size for a categorical factor in a mixed model (lme4)?
- What is the null model for a likelihood ratio test of a within-subjects factor?
- Good advice for what counts as a null model in an LRT.

- F and Wald chi-square tests in mixed-effects models
- If you’re looking for more ANOVA-like results.

- lme vs. lmer
- This link advocates using the LRT for fixed effects.

- Satterthwaite vs Kenward-Roger approximations for the df in mixed effects models
- If you use the lmerTest package to run your models so that p values are automatically included, you have the option of Satterthwaite or Kenward-Roger approximations, so I wondered what the difference was.

- How are the likelihood ratio, Wald, and Lagrange multiplier (score) tests different and/or similar?
- Depending on the analysis, you may be using Wald-based inferences (these give you z statistics instead of t because the test is asymptotic and doesn’t compute the degrees of freedom needed for a t test; typically used for models that would take a long time to fit, or for logistic regressions) or likelihood ratio tests. This provides a good comparison of these methods.

- Different p-values for fixed effects in summary() of glmer() and likelihood ratio test comparison in R
- When trying these different inference methods out, they sometimes disagreed (i.e., gave different p values). Sometimes it had to do with REML vs. ML, but there are also differences in the estimation methods that should be taken into account.

- Should I include this fixed effect? lme4 likelihood ratio test and lmerTest anova disagree
- Shows the horrors of not understanding what goes on under the hood of these functions, depending on how you process your data.

- DRAFT r-sig-mixed-models FAQ
- Great resource for a more authoritative voice on inference and issues that may come up.

- lsmeans
- I personally use this package the most. It’s flexible for obtaining multiple comparisons (of both averages and slopes) AND for estimating slopes/averages across variables, and it allows p value adjustment if needed.
- Here is another tutorial, and a question on p value adjustment.
- How to grab the estimates and plot them: Link

- Complex analyses/inferences
- Multiple Comparisons for GLMMs using glmer() & glht()
- **Gelman:** Why We (Usually) Don’t Have to Worry About Multiple Comparisons
- lmer multiple comparisons for interaction between continuous and categorical predictor
- Effect sizes in lmer
- I’ve pulled my hair out (jk) trying to figure out how to estimate effect sizes in lmer (especially for complex models). Model fits like R² work for assessing some sort of effect size for the full model, but I have found none for specific betas in the regression. If you know of one, please let me know too.
- Westfall shows a simple example of obtaining the d stat, but not clear how this works for different model types.
- Another page that provides easy-to-follow instructions for getting effect sizes and their confidence intervals.

- Some concerns to consider in standardizing variables in multilevel models
- I had a collaborator who asked about standardizing variables, but I wasn’t sure how this is done in MLM or what its consequences are; this provides some clues.

- Bootstrapping
- Cross validation
- Significance testing in Multilevel regression
- Provides an easy-to-understand description of how different tests are conducted in different software (HLM, R, etc.); helps you understand what kind of information goes into those degrees of freedom or t statistics.

**Logistic Regression**

I ended up modeling trial accuracy data, which is a binary outcome variable and thus requires logistic regression models. The implementation wasn’t difficult, but interpreting the results takes practice and care. These links are general tutorials that helped me understand implementation and coefficient interpretation.

- **UCLA:** R data examples: mixed effect logistic regression
- **UCLA:** Logit Regression
- **UCLA:** Deciphering Interactions in Logistic Regression
- This was an important link as interactions are a messy thing to interpret

- UCLA: Logistic regression with stata
- Not in R, but provides interpretation intuitions.

- Why use Odds Ratios in Logistic Regression
- One thing I was confused about was what to report from a logistic regression. Do I report log odds, probabilities, or odds ratios? It seems to vary by field, but I stick to odds ratios now.
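The conversions between these quantities are simple base-R arithmetic. A sketch with a made-up logistic coefficient:

```r
# Hypothetical log-odds coefficient from a logistic model summary
b <- 0.7

or <- exp(b)      # odds ratio: exponentiating the log-odds
p  <- plogis(b)   # probability implied by log-odds of 0.7 (inverse logit)

# An odds ratio of 1 (log-odds of 0) means no effect, which is why
# odds ratios are naturally symmetric on a log scale (see the next link).
```

With these numbers, `or` is about 2.01 (the odds roughly double) and `p` is about 0.67, which illustrates why the three scales can feel so different while encoding the same coefficient.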

- Odds Ratios NEED To Be Graphed On Log Scales
- How to create odds ratio and 95 % CI plot in R
- I used this link for the small bit of code at the bottom that I always forget (`scale_y_log10`) for plotting odds ratios.

- ggplot2: stat_smooth for logistic outcomes with facet_wrap returning ‘full’ or ‘subset’ glm models
- You can’t just plot the regression from the raw data using ggplot’s built-in smoothing, because it will miss the nuances of your model (multiple predictors or random effects). So you have to predict values from the model and plot those.

- Graphing a Probability Curve for a Logit Model With Multiple Predictors
- Output of logistic model in R
- Provides information on how to get predicted probabilities or odds ratios from the model (for plotting).
- Here is another link for this.

- Logistic Regression in R (Odds Ratio)
- Quick way to understand how to get confidence intervals. I don’t necessarily use this method anymore, but it’s still useful.

- Binomial glmm with a categorical variable with full successes
- If your SEs are absurdly large (in the 1000s or more), there might be complete separation (though you should plot your data first to figure this out).
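You can reproduce the symptom with base R’s `glm` on a made-up toy data set: when a predictor perfectly separates the outcome, the coefficient runs off toward infinity and the standard error explodes:

```r
# Perfectly separated toy data: y is 0 whenever x <= 3, 1 whenever x > 3
x <- 1:6
y <- c(0, 0, 0, 1, 1, 1)

# glm warns about fitted probabilities numerically 0 or 1 occurring;
# suppressed here since the warning is the expected symptom
m <- suppressWarnings(glm(y ~ x, family = binomial))

# The standard error on x is absurdly large: the telltale sign of separation
se_x <- summary(m)$coefficients["x", "Std. Error"]
```

The same diagnosis carries over to `glmer`: a categorical level with all successes (or all failures) behaves the same way, which is what the link above is about.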

**Sommet 2017:** Keep Calm and Learn Multilevel Logistic Modeling: A Simplified Three-Step Procedure Using Stata, R, Mplus, and SPSS
- Super casual tutorial; very useful for a step-by-step understanding of how to run a logistic MLM.

**Van den Noortgate 2003:** Cross-Classification Multilevel Logistic Models in Psychometrics

**Model Building**

I keep getting mixed advice about this approach and its varieties. I was taught by a statistician who said stepwise approaches were OK, but I have read otherwise. For exploratory work (as opposed to confirmatory) this may be fine, but do what you want. I’ll just post the materials I used to understand these methods.

**Model Complexity**

When I first started, I wondered how crazy these models can get. Can I just throw every variable in? What are the costs, benefits, and limitations of parsimony vs. complexity?

**Model Fits**

Diagnosing whether the model fits well and how to do so is important. This typically involves some form of checking unexplained variance along with examining assumptions.

**Convergence**

When the data are not robust enough for the model, or the model is too complex, it will not converge. This tends to render your estimates unreliable, so it is an important issue to either fix or investigate to see how bad it is.

**Variance Components**

I’m currently working on projects that are more interested in the variance components than the betas. The variance components tell you how much the means vary across units of your random effects; e.g., if participant is a random effect, how much their intercepts vary. Important to this topic are intraclass correlations (ICC), variance partitioning coefficients (VPC), and their interrelations.
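The ICC itself is just arithmetic on the variance components. A base-R sketch with hypothetical variances standing in for the values you would read off a model’s random effects summary:

```r
# Hypothetical variance components from a random-intercept model
var_between <- 2.0   # variance of the subject intercepts
var_within  <- 6.0   # residual (within-subject) variance

# ICC: proportion of total variance attributable to the grouping factor;
# here 2 / (2 + 6) = 0.25, i.e., 25% of the variance is between subjects
icc <- var_between / (var_between + var_within)
```

For a simple random-intercept model the ICC and the VPC coincide; they come apart in more complex structures, which is what the links in this section dig into.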

**Reliability**

Related to variance components, the within-/between-subject variance can give you a sense of the reliability of your measure. The within-subject variance corresponds to the residuals the model doesn’t capture; the between-subject variance corresponds to the random effect groupings. Not all links are necessarily mixed model related, but they may be useful. *Note: this is intimately related to variance components/ICC above, so those sources will also help.*

## Power Analysis

The hardest part (for me) about starting a study is determining power, especially when your analyses consist of complex mixed models. I haven’t fully read through all of these links, but I am aggregating them to read soon.

**Bayesian**

This is an approach I’m slowly starting to look into, how to make my multilevel models bayesian. Here are some packages that are helpful.

**Longitudinal**

I have started examining how to model longitudinal data.

**Nonlinear**

I have also started examining how to model nonlinear data (e.g., prediction accuracy, psychophysical data, etc).

**Simulation**

Turns out analyzing your own data is only half the application. Sometimes it’s worth being able to simulate data sets with specific variance components, or even specific crossed random effects. Here I’ll be adding links that help with thinking through data simulation, especially when it gets complicated in terms of correlations between participants and/or stimuli.
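As a minimal base-R sketch of the idea, here is a simulation of random-intercept data (all parameter values are made up; real simulations would add fixed effects, random slopes, and correlations):

```r
set.seed(42)
n_subj  <- 30   # number of subjects
n_trial <- 20   # trials per subject

# Draw one random intercept per subject (between-subject SD of 1.5),
# then add trial-level residual noise (within-subject SD of 2)
subj_int <- rnorm(n_subj, mean = 0, sd = 1.5)
subject  <- rep(1:n_subj, each = n_trial)
y <- 10 + subj_int[subject] + rnorm(n_subj * n_trial, sd = 2)  # grand mean 10

dat <- data.frame(subject = factor(subject), y = y)
# dat is now ready to fit with, e.g., lmer(y ~ 1 + (1 | subject), data = dat)
```

Because you chose the variance components yourself (1.5² between, 2² within), you can check whether a fitted model recovers them, which is the core use of simulation for both sanity checks and power analysis.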

**Miscellaneous**

This is just stuff I learned through the process that may not be directly related to mixed models.

So these are the links I found most useful, and I will keep updating as I continue forward. When I have more time I will make the link descriptions more informative, as they are cryptic at the moment.