Standard Errors of Standardized Estimates of Regression Coefficients (Yuan and Chan, 2011)

sehatslopeshatprimedelta(X, y, adjust = FALSE)

Arguments

X

n by k numeric matrix. The data matrix \(\mathbf{X}\) (also known as the design matrix, model matrix, or regressor matrix) is an \(n \times k\) matrix of \(n\) observations on \(k\) regressors, whose first column is a constant regressor equal to 1 for every observation (the intercept term). See the construction sketch after the Arguments list.

y

Numeric vector of length n or n by 1 matrix. The vector \(\mathbf{y}\) is an \(n \times 1\) vector of observations on the regressand variable.

adjust

Logical. If TRUE, use the \(n - 3\) adjustment for small samples.
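For illustration, the following is a minimal sketch of how the arguments might be put together; the simulated data and object names below are hypothetical and not part of the package, and the package is assumed to be loaded.

# Hypothetical simulated data; any numeric regressors and regressand will do.
set.seed(42)
n <- 100
x1 <- rnorm(n)
x2 <- rnorm(n)
outcome <- 0.5 * x1 + 0.3 * x2 + rnorm(n)
# X: n by k matrix whose first column is the constant regressor (all 1s).
X <- cbind(constant = 1, x1, x2)
# y: numeric vector (or n by 1 matrix) of observations on the regressand.
y <- outcome
sehatslopeshatprimedelta(X = X, y = y)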

Details

The \(p\)th estimated standard error is calculated using $$ \mathbf{\widehat{se}}_{\boldsymbol{\hat{\beta}}_{p}^{\prime}} = \sqrt{ \frac{\hat{\sigma}_{X_{p}}^{2} \hat{c}_{p} \hat{\sigma}_{\hat{\varepsilon}}^{2}}{n \hat{\sigma}_{y}^{2}} + \frac{\hat{\beta}_{p}^{2} \left[ \hat{\sigma}_{X_{p}}^{2} \left( \boldsymbol{\hat{\beta}}^{T} \boldsymbol{\hat{\Sigma}}_{\mathbf{X}} \boldsymbol{\hat{\beta}} \right) - \hat{\sigma}_{X_{p}}^{2} \hat{\sigma}_{\hat{\varepsilon}}^{2} - \hat{\sigma}_{y, X_{p}}^{2} \right]}{n \hat{\sigma}_{y}^{4}} } $$ where (see the R sketch after this list)

  • \(p \in \left\{2, 3, \cdots, k \right\}\)

  • \(\hat{\sigma}_{\hat{\varepsilon}}^{2}\) is the estimated residual variance

  • \(\boldsymbol{\hat{\beta}} = \left\{ \hat{\beta}_{2}, \hat{\beta}_{3}, \cdots, \hat{\beta}_{k}\right\}^{T}\) is the \(\left( k - 1 \right) \times 1\) column vector of estimated regression slopes

  • \(\hat{\sigma}_{y}^{2}\) is the variance of the regressand variable \(y\)

  • \(\boldsymbol{\hat{\Sigma}}_{\mathbf{X}}\) is the \(\left( k - 1 \right) \times \left( k - 1 \right)\) estimated covariance matrix of the regressor variables \(X_2, X_3, \cdots, X_k\)

  • \(\hat{\sigma}_{X_p}^{2}\) is the variance of the \(p\)th regressor variable \(X_p\)

  • \(\hat{\sigma}_{y, X_{p}}\) is the covariance of the regressand variable \(y\) and the \(p\)th regressor variable \(X_p\)

  • \(\hat{c}_p\) is the diagonal element of \(\boldsymbol{\hat{\Sigma}}_{\mathbf{X}}^{-1}\) that corresponds to the \(p\)th regressor variable

  • \(n\) is the sample size
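The following is a minimal R sketch of the formula above, not the package implementation. It assumes maximum-likelihood (divisor \(n\)) variance and covariance estimates and a residual variance of \(\mathrm{SSE}/n\); the function name is hypothetical, and whether it reproduces the package output exactly depends on the divisor convention the package actually uses.

# Minimal sketch of the delta-method standard errors defined above.
# Assumption (not from the package source): all variances/covariances use
# the maximum-likelihood divisor n.
sehat_slopes_prime_delta_sketch <- function(X, y) {
  y <- as.vector(y)
  n <- nrow(X)
  beta <- as.vector(solve(crossprod(X), crossprod(X, y))) # OLS estimates, intercept first
  slopes <- beta[-1]                                      # beta_2, ..., beta_k
  e <- y - as.vector(X %*% beta)                          # residuals
  sigma2_e <- sum(e^2) / n                                # residual variance
  Xp <- X[, -1, drop = FALSE]                             # regressors without the constant column
  Sigma_X <- cov(Xp) * (n - 1) / n                        # covariance matrix of the regressors
  sigma2_y <- var(y) * (n - 1) / n                        # variance of the regressand
  sigma_yX <- drop(cov(Xp, y)) * (n - 1) / n              # covariance of y with each regressor
  c_p <- diag(solve(Sigma_X))                             # diagonal elements of the inverse covariance matrix
  sigma2_Xp <- diag(Sigma_X)                              # variances of the regressors
  bSb <- drop(t(slopes) %*% Sigma_X %*% slopes)           # beta^T Sigma_X beta
  sqrt(
    sigma2_Xp * c_p * sigma2_e / (n * sigma2_y) +
      slopes^2 * (sigma2_Xp * bSb - sigma2_Xp * sigma2_e - sigma_yX^2) /
        (n * sigma2_y^2)
  )
}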

References

Yuan, K.-H., & Chan, W. (2011). Biases and Standard Errors of Standardized Regression Coefficients. Psychometrika, 76, 670–690. doi:10.1007/s11336-011-9224-6

See also

Other standard errors of estimates of regression coefficients functions: .sehatbetahatbiased(), .sehatbetahat(), .sehatslopeshatprimedelta(), .sehatslopeshatprimetb(), sehatbetahatbiased(), sehatbetahat(), sehatslopeshatprimetb()

Author

Ivan Jacob Agaloos Pesigan

Examples

# Simple regression------------------------------------------------
X <- jeksterslabRdatarepo::wages.matrix[["X"]]
X <- X[, c(1, ncol(X))]
y <- jeksterslabRdatarepo::wages.matrix[["y"]]
sehatslopeshatprimedelta(X = X, y = y)
#>      sehatslopeshatprime
#> [1,]          0.02556128
# Multiple regression----------------------------------------------
X <- jeksterslabRdatarepo::wages.matrix[["X"]]
# age is removed
X <- X[, -ncol(X)]
sehatslopeshatprimedelta(X = X, y = y)
#>      sehatslopeshatprime
#> [1,]          0.02282716
#> [2,]          0.02317122
#> [3,]          0.02342286
#> [4,]          0.02113537
#> [5,]          0.02330714
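# Small-sample adjustment-------------------------------------------
# The n - 3 adjustment described under the adjust argument can be
# requested as below; output is not reproduced here.
sehatslopeshatprimedelta(X = X, y = y, adjust = TRUE)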