class: center, middle, inverse, title-slide

.title[
# lecture 5: generalized linear model review
]
.subtitle[
## FANR 8370
]
.author[
### Spring 2025
]

---
# readings

> ### Kéry & Schaub 48-55

---
class: inverse, center, middle

# from linear models to glms

---
# linear models

<br/>
<br/>

`$$\Large response = deterministic\; part + stochastic\; part$$`

<br/>
<br/>

--

`$$\underbrace{\LARGE \mu_i = \beta_0 + \beta_1 \times x_i}_{Deterministic}$$`

<br/>
<br/>

--

`$$\underbrace{\LARGE y_i \sim Normal(\mu_i, \sigma^2)}_{Stochastic}$$`

???

Note that the deterministic portion of the model has the same form as the equation for a line: `\(y = a + b \times x\)`, which is why we call these linear models

---
class: inverse, middle, center

# linear models under the hood

## Variations on the stochastic model

---
# stochasticity in the linear model

`$$\underbrace{\LARGE \mu_i = -2 + 0.5 \times x_i}_{Deterministic}$$`

--

<img src="05_glms_files/figure-html/unnamed-chunk-1-1.png" width="288" style="display: block; margin: auto;" />

---
# stochasticity in the linear model

`$$\LARGE \mu_i = -2 + 0.5 \times x_i$$`

`$$\underbrace{\LARGE y_i \sim Normal(\mu_i, \sigma^2)}_{Stochastic}$$`

--

<img src="05_glms_files/figure-html/unnamed-chunk-2-1.png" width="288" style="display: block; margin: auto;" />

---
# stochasticity in the linear model

`$$\LARGE \mu_i = -2 + 0.5 \times x_i$$`

<img src="05_glms_files/figure-html/unnamed-chunk-3-1.png" width="288" style="display: block; margin: auto;" />

---
# stochasticity in the linear model

`$$\LARGE \mu_i = -2 + 0.5 \times x_i$$`

`$$\LARGE y_i \sim Normal(\mu_i, \sigma^2)$$`

<img src="05_glms_files/figure-html/unnamed-chunk-4-1.png" width="288" style="display: block; margin: auto;" />

---
class: inverse, center, middle

# components of the linear model

---
# components of the linear model

### 1) Distribution

`$$\LARGE y_i \sim Normal(\mu_i, \sigma^2)$$`

--

### 2) Link function (identity, for the normal model)

`$$\LARGE \mu_i = E(y_i) = linear\;\;predictor$$`

--

### 3) Linear predictor

`$$\LARGE \beta_0 + \beta_1 \times x_i$$`
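---
# simulating the linear model

To make the two components concrete, here is a minimal R sketch that simulates data from this model. The sample size, the range of the predictor, and `\(\sigma = 1\)` are assumed values for illustration:

``` r
## minimal sketch: deterministic part + stochastic part
## (n, the range of x, and sigma are assumed values)
set.seed(1)
n <- 100
x <- runif(n, 0, 10)              # predictor
mu <- -2 + 0.5 * x                # deterministic part
y <- rnorm(n, mean = mu, sd = 1)  # stochastic part

plot(x, y)                        # scatter around the line mu
```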
---
# stochasticity in the linear model

#### What happens if `\(\Large 0 \leq y_i\)` (e.g., counts)?

<br/>
<br/>

--

<img src="05_glms_files/figure-html/unnamed-chunk-5-1.png" width="288" style="display: block; margin: auto;" />

---
# components of the generalized linear model

#### 1) Distribution

`$$\LARGE y_i \sim Normal(\mu_i, \sigma^2)$$`

---
# components of the generalized linear model

#### 1) Distribution

`$$\LARGE y_i \sim Poisson(\lambda_i)$$`

--

#### 2) Link function

`$$\LARGE \lambda_i = E(y_i) = linear\;\;predictor$$`

---
# components of the generalized linear model

#### 1) Distribution

`$$\LARGE y_i \sim Poisson(\lambda_i)$$`

#### 2) Link function

`$$\LARGE log(\lambda_i) = log(E(y_i)) = linear\;\;predictor$$`

--

<img src="05_glms_files/figure-html/unnamed-chunk-6-1.png" width="360" style="display: block; margin: auto;" />

---
# components of the generalized linear model

#### 1) Distribution

`$$\LARGE y_i \sim Poisson(\lambda_i)$$`

#### 2) Link function

`$$\LARGE log(\lambda_i) = log(E(y_i)) = linear\;\;predictor$$`

---
# components of the generalized linear model

#### 1) Distribution

`$$\LARGE y_i \sim Poisson(\lambda_i)$$`

#### 2) Link function

`$$\LARGE log(\lambda_i) = log(E(y_i)) = linear\;\;predictor$$`

#### 3) Linear predictor

`$$\LARGE \beta_0 + \beta_1 \times x_i$$`

---
# components of the generalized linear model

#### 1) Distribution

`$$\LARGE y_i \sim Bernoulli(p_i)$$`

--

#### 2) Link function

`$$\LARGE logit(p_i) = log \bigg(\frac{p_i}{1-p_i}\bigg) = linear\;\;predictor$$`

--

<img src="05_glms_files/figure-html/unnamed-chunk-7-1.png" width="360" style="display: block; margin: auto;" />

---
# components of the generalized linear model

#### 1) Distribution

`$$\LARGE y_i \sim Bernoulli(p_i)$$`

#### 2) Link function

`$$\LARGE logit(p_i) = log \bigg(\frac{p_i}{1-p_i}\bigg) = linear\;\;predictor$$`

#### 3) Linear predictor

`$$\LARGE \beta_0 + \beta_1 \times x_i$$`

---
# components of the generalized linear model

#### 1) Distribution

`$$\LARGE y_i \sim Binomial(J, p_i)$$`

#### 2) Link function

`$$\LARGE logit(p_i) = log \bigg(\frac{p_i}{1-p_i}\bigg) = linear\;\;predictor$$`

#### 3) Linear predictor

`$$\LARGE \beta_0 + \beta_1 \times x_i$$`

---
# glms in nimble - normal (identity link)

`$$\mu_i = \beta_0 + \beta_1 \times x_i$$`

`$$y_i \sim Normal(\mu_i, \sigma^2)$$`

``` r
nimbleCode({
  ## Priors
  beta0 ~ dnorm(0, sd = 10) # mean, sd
  beta1 ~ dnorm(0, sd = 10)
  sigma ~ dgamma(0.001, 0.001) # residual sd

  ## Likelihood
  for(i in 1:N){
    mu[i] <- beta0 + beta1 * x[i]
    y[i] ~ dnorm(mu[i], sd = sigma)
  }
})
```

---
# glms in nimble - poisson (log link)

`$$y_i \sim Poisson(\lambda_i)$$`

`$$log(\lambda_i) = log(E(y_i)) = \beta_0 + \beta_1 \times x_i$$`

``` r
nimbleCode({
  ## Priors
  beta0 ~ dnorm(0, sd = 10) # mean, sd
  beta1 ~ dnorm(0, sd = 10)

  ## Likelihood
  for(i in 1:N){
    log(lambda[i]) <- beta0 + beta1 * x[i]
    # Equivalently:
    # lambda[i] <- exp(beta0 + beta1 * x[i])
    y[i] ~ dpois(lambda[i])
  }
})
```

---
# glms in nimble - binomial (logit link)

`$$y_i \sim Binomial(J, p_i)$$`

`$$logit(p_i) = log \bigg(\frac{p_i}{1-p_i}\bigg) = \beta_0 + \beta_1 \times x_i$$`

``` r
nimbleCode({
  ## Priors
  beta0 ~ dnorm(0, sd = 10) # mean, sd
  beta1 ~ dnorm(0, sd = 10)

  ## Likelihood
  for(i in 1:nobs){
    logit(p[i]) <- beta0 + beta1 * x[i]
    # Equivalently:
    # p[i] <- exp(beta0 + beta1 * x[i]) / (1 + exp(beta0 + beta1 * x[i]))
    y[i] ~ dbin(p[i], J) # J = number of trials
  }
})
```

---
class: inverse

# generalized linear models

<br/>
<br/>

--

- Flexible method to model observations arising from different probability distributions

<br/>
<br/>

--

- Link many classical tests under a unified framework

<br/>
<br/>

--

- Underlie nearly all common ecological models
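---
# link functions in R

The link function maps the expected value onto the scale of the linear predictor; its inverse maps back. A minimal sketch, using the coefficients from earlier (`\(-2 + 0.5x\)`) and an assumed value `\(x = 3\)`:

``` r
## value of the linear predictor (x = 3 is an assumed value)
eta <- -2 + 0.5 * 3

exp(eta)             # inverse of the log link: expected count, lambda
plogis(eta)          # inverse of the logit link: probability, p
qlogis(plogis(eta))  # qlogis() is the logit, so this returns eta
```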
---
# In-class Example

In the week 2 homework, we simulated data from a deer population declining because of CWD. Let's fit a model to our simulated data in NIMBLE!

``` r
set.seed(4) # reproducible
nocc <- 20
EN <- N <- array(NA, nocc) # create empty arrays with space for 20 years
N[1] <- 475
lambda <- .95

# simulate:
for(t in 2:nocc){
  EN[t] <- N[t-1]*lambda
  N[t] <- rpois(1, EN[t])
}
```

---
# In-class Example

First we need a model:

``` r
library(nimble)

deermod <- nimbleCode({
  N[1] <- 475 # first-year abundance is known
  for(t in 2:nocc){
    EN[t] <- N[t-1]*lambda # expected abundance
    N[t] ~ dpois(EN[t])    # realized abundance
  } # end t
  lambda ~ dgamma(1, 1)    # prior on growth rate
})
```

---
# In-class Example

Parameters of interest:

``` r
params <- c("lambda")
```

Constants:

``` r
nimconsts <- list(nocc = nocc)
```

Data:

``` r
nimdat <- list(N = N)
```

Inits:

``` r
niminits <- function(){list(lambda = rgamma(1, 1, 1))}
```

---
# In-class Example

``` r
nimfit <- nimbleMCMC(code = deermod,
                     data = nimdat,
                     constants = nimconsts,
                     inits = niminits(),
                     monitors = params,
                     thin = 1,
                     niter = 5000,
                     nburnin = 1000,
                     nchains = 3,
                     samplesAsCodaMCMC = TRUE)
```

```
## |-------------|-------------|-------------|-------------|
## |-------------------------------------------------------|
```

```
## |-------------|-------------|-------------|-------------|
## |-------------------------------------------------------|
```

```
## |-------------|-------------|-------------|-------------|
## |-------------------------------------------------------|
```

---
# In-class Example

``` r
library(MCMCvis)
MCMCtrace(nimfit, pdf = F, Rhat = T, n.eff = T)
```

<img src="05_glms_files/figure-html/unnamed-chunk-17-1.png" width="504" style="display: block; margin: auto;" />

---
# In-class Example

Unsurprisingly accurate! The true value `\(\lambda = 0.95\)` sits near the posterior median:

``` r
summary(nimfit)$quantiles
```

```
##   2.5%    25%    50%    75%  97.5%
## 0.9286 0.9444 0.9528 0.9613 0.9772
```
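---
# In-class Example

Because the samples are returned as a `coda` object, we can work with the posterior draws directly. A minimal sketch of one useful summary, the posterior probability that the population is declining (`\(\lambda < 1\)`):

``` r
## stack the three chains into one matrix of draws
samps <- as.matrix(nimfit)

## posterior probability of decline
mean(samps[, "lambda"] < 1)
```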