Calculates y-hat \(\left( \mathbf{\hat{y}} \right)\), that is, the predicted value of \(\mathbf{y}\) given \(\mathbf{X}\), using $$ \mathbf{\hat{y}} = \mathbf{X} \boldsymbol{\hat{\beta}} $$ where $$ \boldsymbol{\hat{\beta}} = \left( \mathbf{X}^{T} \mathbf{X} \right)^{-1} \left( \mathbf{X}^{T} \mathbf{y} \right) . $$
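
For reference, a minimal base-R sketch of this computation; the simulated data and variable names below are illustrative only and are not part of the package.

# Simulate a small data set; names and values here are purely illustrative.
set.seed(42)
n <- 20
X <- cbind(1, rnorm(n), rnorm(n))                # n by k design matrix, first column is the constant
y <- X %*% c(2, 0.5, -1) + rnorm(n)              # n by 1 regressand
betahat <- solve(crossprod(X), crossprod(X, y))  # (X^T X)^{-1} (X^T y)
yhat <- X %*% betahat                            # predicted values, X betahat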

.Xbetahat(X, betahat = NULL, y = NULL)

Arguments

X

n by k numeric matrix. The data matrix \(\mathbf{X}\) (also known as the design matrix, model matrix, or regressor matrix) is an \(n \times k\) matrix of \(n\) observations of \(k\) regressors, whose first column is a regressor equal to 1 for every observation (the constant term).

betahat

Numeric vector of length k or k by 1 matrix. The vector \(\boldsymbol{\hat{\beta}}\) is a \(k \times 1\) vector of estimates of \(k\) unknown regression coefficients.

y

Numeric vector of length n or n by 1 matrix. The vector \(\mathbf{y}\) is an \(n \times 1\) vector of observations on the regressand variable.

Value

Returns y-hat \(\left( \mathbf{\hat{y}} \right)\).

Details

If betahat = NULL, the coefficient vector \(\boldsymbol{\hat{\beta}}\) is computed by calling betahat() with X and y as arguments. If betahat is provided, y is not needed. A usage sketch follows.
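
A hypothetical usage sketch, assuming the package is attached; the stand-in computation below mirrors what betahat() is described as returning and is not the package's own code.

# Illustrative inputs only.
set.seed(1)
X <- cbind(1, rnorm(10))
y <- X %*% c(1, 2) + rnorm(10)
yhat1 <- .Xbetahat(X = X, y = y)              # betahat = NULL, so betahat() is called internally
b <- solve(crossprod(X), crossprod(X, y))     # stand-in for the output of betahat()
yhat2 <- .Xbetahat(X = X, betahat = b)        # betahat supplied, so y is not needed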

References

Wikipedia: Linear Regression

Wikipedia: Ordinary Least Squares

See also

Other y-hat functions: .Py(), Py(), Xbetahat(), yhat()

Author

Ivan Jacob Agaloos Pesigan